Installation Guide
The main (and recommended) way to run tt-rss is under Docker.
Docker images for https://github.com/tt-rss/tt-rss are being built (for linux/amd64 and linux/arm64) and published
(via GitHub Actions) to:
- Docker Hub (as supahgreg/tt-rss and supahgreg/tt-rss-web-nginx).
- GitHub Container Registry (as ghcr.io/tt-rss/tt-rss and ghcr.io/tt-rss/tt-rss-web-nginx).
Warning
Podman is not Docker. Please don't report issues related to running tt-rss when using Podman or Podman Compose.
This setup uses PostgreSQL and runs tt-rss using several containers as outlined below.
Consider using an external Patroni cluster instead of a single db container in "production" deployments.
Place both .env and docker-compose.yml together in a directory, edit .env as you see fit, run docker compose up -d.
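The steps above can be sketched as a short shell session (the directory name is arbitrary; the .env and docker-compose.yml contents come from the examples below):

```shell
# Create a working directory and place the two files in it.
mkdir ttrss-docker && cd ttrss-docker
# ... save .env and docker-compose.yml here (see the examples below) ...

# Start the stack in the background.
docker compose up -d

# If ADMIN_USER_PASS wasn't set, the generated admin password
# is printed in the app container logs on first startup.
docker compose logs app | grep -i password
```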
# Put any local modifications here.
# Run FPM under this UID/GID.
# OWNER_UID=1000
# OWNER_GID=1000
# FPM settings.
#PHP_WORKER_MAX_CHILDREN=5
#PHP_WORKER_MEMORY_LIMIT=256M
# ADMIN_USER_* settings are applied on every startup.
# Set admin user password to this value. If not set, random password
# will be generated on startup, look for it in the 'app' container logs.
#ADMIN_USER_PASS=
# Sets admin user access level to this value. Valid values:
# -2 - forbidden to login
# -1 - readonly
# 0 - default user
# 10 - admin
#ADMIN_USER_ACCESS_LEVEL=
# Auto create another user (in addition to built-in admin) unless it already exists.
#AUTO_CREATE_USER=
#AUTO_CREATE_USER_PASS=
#AUTO_CREATE_USER_ACCESS_LEVEL=0
# Default database credentials.
TTRSS_DB_USER=postgres
TTRSS_DB_NAME=postgres
TTRSS_DB_PASS=password
# You can customize other config.php defines by setting overrides here.
# See tt-rss/.docker/app/Dockerfile for a complete list.
# You probably shouldn't disable auth_internal unless you know what you're doing.
# TTRSS_PLUGINS=auth_internal,auth_remote
# TTRSS_SINGLE_USER_MODE=true
# TTRSS_SESSION_COOKIE_LIFETIME=2592000
# TTRSS_FORCE_ARTICLE_PURGE=30
# ...
# Bind exposed port to 127.0.0.1 to run behind reverse proxy on the same host.
# If you plan to expose the container, remove "127.0.0.1:".
HTTP_PORT=127.0.0.1:8280
#HTTP_PORT=8280

Warning
See this FAQ entry if you're upgrading between PostgreSQL major versions (e.g. 15 to 17).
Warning
Regarding PostgreSQL 18:
- The backups container image currently includes postgresql17-client, meaning it won't be able to back up your DB if you use PostgreSQL 18. Consider using an alternative backup solution if you're using PostgreSQL 18.
- The PostgreSQL 18 Docker image changed the volume from /var/lib/postgresql/data to /var/lib/postgresql. The example below includes a commented-out volume mapping that demonstrates this.
services:
db:
image: postgres:17-alpine
restart: unless-stopped
env_file:
- .env
environment:
- POSTGRES_USER=${TTRSS_DB_USER}
- POSTGRES_PASSWORD=${TTRSS_DB_PASS}
- POSTGRES_DB=${TTRSS_DB_NAME}
volumes:
- db:/var/lib/postgresql/data
# or, if 18+
# - db:/var/lib/postgresql
app:
image: supahgreg/tt-rss:latest
# or
# image: ghcr.io/tt-rss/tt-rss:latest
restart: unless-stopped
env_file:
- .env
volumes:
- app:/var/www/html
- ./config.d:/opt/tt-rss/config.d:ro
depends_on:
- db
# optional, makes weekly backups of your install
# backups:
# image: supahgreg/tt-rss:latest
# # or
# # image: ghcr.io/tt-rss/tt-rss:latest
# restart: unless-stopped
# env_file:
# - .env
# volumes:
# - backups:/backups
# - app:/var/www/html
# depends_on:
# - db
# command: /opt/tt-rss/dcron.sh -f
updater:
image: supahgreg/tt-rss:latest
# or
# image: ghcr.io/tt-rss/tt-rss:latest
restart: unless-stopped
env_file:
- .env
volumes:
- app:/var/www/html
- ./config.d:/opt/tt-rss/config.d:ro
depends_on:
- app
command: /opt/tt-rss/updater.sh
web-nginx:
image: supahgreg/tt-rss-web-nginx:latest
# or
# image: ghcr.io/tt-rss/tt-rss-web-nginx:latest
restart: unless-stopped
env_file:
- .env
ports:
- ${HTTP_PORT}:80
volumes:
- app:/var/www/html:ro
depends_on:
- app
volumes:
db:
app:
backups:

If you're using an OS or architecture that isn't currently supported, you'll likely need to
build your own Docker images by using an override and running docker compose build.
# docker-compose.override.yml
services:
app:
image: supahgreg/tt-rss:latest
# or
# image: ghcr.io/tt-rss/tt-rss:latest
build:
dockerfile: .docker/app/Dockerfile
context: https://github.com/tt-rss/tt-rss.git
args:
BUILDKIT_CONTEXT_KEEP_GIT_DIR: 1
web-nginx:
image: supahgreg/tt-rss-web-nginx:latest
# or
# image: ghcr.io/tt-rss/tt-rss-web-nginx:latest
build:
dockerfile: .docker/web-nginx/Dockerfile
context: https://github.com/tt-rss/tt-rss.git

The BUILDKIT_CONTEXT_KEEP_GIT_DIR build argument is needed to display tt-rss version info properly.
If that doesn't work for you (no BuildKit?) you'll have to resort to terrible hacks.
Warning
Self-built images are not necessarily supported (i.e. best effort and/or community support).
We'll use the following error message as an example of what you might see in the logs:
Error message: The data directory was initialized by PostgreSQL version 12, which is not compatible with this version 15.4.
Official PostgreSQL containers have no support for migrating data between major versions. Using the aforementioned example, you could do one of the following:
- Replace postgres:15-alpine with postgres:12-alpine in docker-compose.yml (or use docker-compose.override.yml, see below) and keep using PG 12
- Use this DB container which would automatically upgrade the database
- Migrate the data manually using pg_dump and pg_restore (somewhat complicated if you haven't done it before)
Alternatively, you've changed something related to /var/www/html/tt-rss in docker-compose.yml.
Your Docker setup is messed up for some reason, so tt-rss can't update itself to the persistent storage location on startup (this is just an example of one issue, there could be many others).
Consider undoing any recent changes, looking up error messages, etc.
Set the following variables in .env:
APP_WEB_ROOT=/var/www/html/tt-rss
APP_BASE=

Don't forget to remove /tt-rss/ from TTRSS_SELF_URL_PATH.
There are two sets of options you can change through the environment - options specific to tt-rss (those are prefixed with TTRSS_) and options affecting container behavior.
For example, to set tt-rss global option SELF_URL_PATH, add the following to .env:
TTRSS_SELF_URL_PATH=http://example.com/tt-rss

Don't use quotes around values. Note the prefix (TTRSS_) before the option name.
Look here for more information.
Some options, but not all, are mentioned in .env-dist. You can see all available options in the Dockerfile.
You can use docker-compose.override.yml. For example, customize db to use a different postgres image:
# docker-compose.override.yml
services:
db:
image: postgres:17-alpine

In your Docker Compose directory, run something like one of the examples below. Check https://github.com/tt-rss/tt-rss/blob/main/.docker/app/Dockerfile for the latest image's PHP version.
docker compose exec --user app app php84 /var/www/html/tt-rss/update.php --help
# ^ ^
# | |
# | +- service (container) name
# +----- run as user

or

docker compose exec app sudo -Eu app php84 /var/www/html/tt-rss/update.php --help

or

docker exec -it <container_id> sudo -Eu app php84 /var/www/html/tt-rss/update.php --help

Note: sudo -E is needed to keep environment variables.
Note
First-party plugins can be added using the plugin installer in Preferences → Plugins.
By default, tt-rss code is stored on a persistent Docker volume (app). You can find
its location like this:
docker volume inspect ttrss-docker_app | grep Mountpoint

Alternatively, you can mount any host directory as /var/www/html by updating docker-compose.yml, i.e.:
volumes:
- app:/var/www/html

Replace with:
volumes:
- /opt/tt-rss:/var/www/html

Copy and/or git clone any third party plugins into plugins.local as usual.
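With a bind mount like the one above, installing a third-party plugin might look like this (the repository URL and plugin directory name are placeholders, not a real plugin):

```shell
# Hypothetical example: clone a third-party plugin into plugins.local.
# Replace the URL and directory name with the actual plugin's.
cd /opt/tt-rss/plugins.local
git clone https://example.com/author/ttrss-example-plugin.git example_plugin
```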
First, check that all containers are running:
$ docker compose ps
Name Command State Ports
------------------------------------------------------------------------------------------------------------
ttrss-docker-demo_app_1_f49351cb24ed /bin/sh -c /startup.sh Up 9000/tcp
ttrss-docker-demo_backups_1_8d2aa404e31a /dcron.sh -f Up 9000/tcp
ttrss-docker-demo_db_1_fc1a842fe245 docker-entrypoint.sh postgres Up 5432/tcp
ttrss-docker-demo_updater_1_b7fcc8f20419 /updater.sh Up 9000/tcp
ttrss-docker-demo_web-nginx_1_fcef07eb5c55 /docker-entrypoint.sh ngin ... Up 127.0.0.1:8280->80/tcp
Then, ensure that the frontend (web-nginx or web) container is up and can contact the FPM (app) container:
$ docker compose exec web-nginx ping app
PING app (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.144 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.128 ms
64 bytes from 172.18.0.3: seq=2 ttl=64 time=0.206 ms
^C
--- app ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.128/0.159/0.206 ms
Containers communicate via DNS names assigned by Docker based on service names defined in docker-compose.yml. This means that services (specifically, app) and Docker DNS service should be functional.
Similar issues may be also caused by Docker iptables functionality either being disabled or conflicting with nftables.
You can, but you'll need to pass the APP_UPSTREAM environment variable to the web-nginx container, set to the app service's new name.
- Don't forget to pass X-Forwarded-Proto to the container if you're using HTTPS, otherwise tt-rss would generate plain HTTP URLs.
- Upstream address and port are set using HTTP_PORT in .env:

HTTP_PORT=127.0.0.1:8280

location /tt-rss/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:8280/tt-rss/;
break;
}

If you run into problems with the global PHP-to-FPM handler taking priority over the proxied location, define the tt-rss location like this so it takes higher priority:
location ^~ /tt-rss/ {
....
}

If you want to pass an entire nginx virtual host to tt-rss:
server {
server_name rss.example.com;
...
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:8280/;
break;
}
}

Note that proxy_pass in this example points to container website root.
<IfModule mod_proxy.c>
<Location /tt-rss>
ProxyPreserveHost On
ProxyPass http://localhost:8280/tt-rss
ProxyPassReverse http://localhost:8280/tt-rss
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
</Location>
</IfModule>
I have internal web services tt-rss is complaining about (URL is invalid, loopback address, disallowed ports)
Put your local services on the same Docker network with tt-rss, then access them by service (= host) names, i.e. http://rss-bridge/.
services:
rss-bridge:
....
networks:
default:
external:
name: ttrss-docker_default

If your service uses a non-standard (i.e. not 80 or 443) port, make an internal reverse proxy sidecar container for it.
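One way to build such a sidecar, sketched as a Compose fragment; the service names, the internal port 8085, and the nginx image are assumptions for illustration:

```yaml
# Hypothetical sidecar: exposes a service listening on port 8085
# under plain port 80, so tt-rss can fetch http://my-service-proxy/
services:
  my-service-proxy:
    image: nginx:alpine
    restart: unless-stopped
    volumes:
      - ./my-service-proxy.conf:/etc/nginx/conf.d/default.conf:ro
```

Here my-service-proxy.conf would contain a plain server block along the lines of `server { listen 80; location / { proxy_pass http://my-service:8085; } }`, with my-service being the actual service name on the shared network.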
If you have backups container enabled, stock configuration makes automatic backups (database, local plugins, etc.) once a week to a separate storage volume.
Note that this container is included as a safety net for people who wouldn't bother with backups otherwise. If you value your data, you should invest your time into setting up something like WAL-G instead.
To run .docker/app/backup.sh (the backup script that executes weekly):
docker compose exec backups /etc/periodic/weekly/backup
Alternatively, if you want to initiate backups from the host (or if you're using PostgreSQL 18+, currently incompatible with the backup container) you can do something like this:
source .env
docker compose exec \
-e PGPASSWORD="$TTRSS_DB_PASS" \
db \
/bin/bash \
-c "export PGPASSWORD=$TTRSS_DB_PASS \
&& pg_dump -U $TTRSS_DB_USER $TTRSS_DB_NAME" \
| gzip -9 > backup.sql.gz

The process to restore the database from a backups container backup might look like this:
- Enter the backups container shell: docker compose exec backups /bin/sh
- Inside the container, locate and choose the backup file: ls -t /backups/*.sql.gz
- Clear the database (THIS WOULD DELETE EVERYTHING IN THE DB): psql -h db -U $TTRSS_DB_USER $TTRSS_DB_NAME -e -c "drop schema public cascade; create schema public"
- Restore the backup: zcat /backups/ttrss-backup-yyyymmdd.sql.gz | psql -h db -U $TTRSS_DB_USER $TTRSS_DB_NAME
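The compress/decompress half of this pipeline can be sanity-checked without a running database by substituting stand-in commands for pg_dump and psql (the SQL line below is a placeholder):

```shell
# Stand-in for `pg_dump ... | gzip -9 > backup.sql.gz`: compress a fake dump.
printf 'CREATE TABLE demo (id int);\n' | gzip -9 > /tmp/backup.sql.gz

# Stand-in for `zcat backup.sql.gz | psql ...`: decompress and print the SQL
# that would be piped into psql.
zcat /tmp/backup.sql.gz
# -> CREATE TABLE demo (id int);
```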
You need to mount custom certificates into the app and updater containers like this:
volumes:
....
- ./ca1.crt:/usr/local/share/ca-certificates/ca1.crt:ro
- ./ca2.crt:/usr/local/share/ca-certificates/ca2.crt:ro
....

Don't forget to restart the containers.
You'll need to set several mandatory environment values to the container running the web-nginx image:
- APP_UPSTREAM should point to the fully-qualified DNS service name provided by the app (FPM) container/pod
- RESOLVER should be set to kube-dns.kube-system.svc.cluster.local
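In a Kubernetes Deployment manifest, these two values might be wired up as follows; the Service name "app" and the "ttrss" namespace are assumptions, adjust to your cluster:

```yaml
# Fragment of a Deployment spec for the web-nginx container.
containers:
  - name: web-nginx
    image: supahgreg/tt-rss-web-nginx:latest
    env:
      - name: APP_UPSTREAM
        # assumed: a Service named "app" in a "ttrss" namespace
        value: app.ttrss.svc.cluster.local
      - name: RESOLVER
        value: kube-dns.kube-system.svc.cluster.local
```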
You'll have to make your own.
We neither test against nor support Podman.