Backing up and restoring data in Rails projects is trivial enough, but with Docker and docker-compose things can become a bit cumbersome.
The project is split into services; the overall architecture looks like this:
```
      [proxy(nginx)]
            |
            +----------+-> [app1 (rails)] -+-----> [ db (postgres) ] -+
                       |-> [app2 (rails)] -|                          |
                       +-> [app3 (rails)] -+-----> ((public))         |
                                           +-----> ((uploads))        |
                                                                      |
                                           ((dbdata)) <---------------+
```
[someth(img)] means a container named “someth” based on image “img”.
((vol)) is an external volume, created with `docker volume create --name <vol>`.
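For this project that means creating all three volumes referenced in the Compose file below, before the stack is first brought up:

```
$> docker volume create --name prj-dbdata
$> docker volume create --name prj-uploads
$> docker volume create --name prj-assets
```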
Deployment is managed by Capistrano (which is out of scope here) with Docker Compose. The Compose file is as follows:
```yaml
version: '2'

volumes:
  dbdata:
    external:
      name: 'prj-dbdata'
  uploads:
    external:
      name: 'prj-uploads'
  assets:
    external:
      name: 'prj-assets'

services:
  db:
    image: 'dtheus/postgres'
    env_file: .rbenv-vars
    volumes:
      - 'dbdata:/var/lib/pgsql'

  app1: &app
    build: .
    links:
      - db
    depends_on:
      - prep
    env_file: .rbenv-vars
    command: [bundle, exec, thin, start, --config=config/thin.yml]
    volumes_from:
      - prep:rw

  app2:
    <<: *app
    command: [bundle, exec, thin, start, --config=config/thin.yml, --ssl]

  app3:
    <<: *app
    command: [bundle, exec, thin, start, --config=config/thin.yml, --ssl]

  prep:
    build: .
    links:
      - db
    env_file: .rbenv-vars
    user: root
    volumes:
      - 'assets:/home/web/app/public/assets'
      - 'uploads:/home/web/app/public/uploads'
    command: [bundle, exec, rake, setup]

  proxy:
    image: 'dtheus/nginx'
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - './config/nginx.conf:/etc/nginx/nginx.conf'
      - './config/certs:/etc/ssl/certs'
    volumes_from:
      - prep:ro
    links:
      - app1
      - app2
      - app3
```
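Note how `app2` and `app3` reuse `app1`'s definition via the YAML anchor `&app` and merge key `<<: *app`, overriding only `command`. To double-check what the merged services resolve to, docker-compose can print the effective configuration:

```
$> cd /project/dir
$> docker-compose config   # prints the resolved file, anchors and merge keys expanded
```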
## Backing up data
```
$> cd /project/dir
$> docker-compose exec --user postgres db pg_dump --create prj | tee dump.sql
#                              ^       ^             ^      ^  ^
#                              |       |             |      |  |
# to use peer authentication --+       |             |      |  |
# container is named "db" as in dc.yml-+             |      |  |
# include this for db to be created with output sql--+      |  |
# db name---------------------------------------------------+  |
# pipe docker exec output to both file and stdout--------------+
```
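If the backup runs from cron rather than a terminal, a compressed, date-stamped variant is convenient. The `-T` flag (disable pseudo-TTY allocation) matters here, since a TTY can mangle piped output; the filename convention is just my own:

```
$> docker-compose exec -T --user postgres db pg_dump --create prj \
     | gzip > "prj-$(date +%F).sql.gz"
```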
If you're wondering why I prefer a plain SQL dump to the “custom” format, the answer is:
```
$> <long command to start pg_restore within docker container>
pg_restore: out of memory
```
Apparently this approach consumes a lot of memory, which my VPS did not have to spare.
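For the record, the custom-format attempt looked roughly like this (an illustrative reconstruction, not the exact command I ran):

```
$> docker-compose exec -T --user postgres db pg_dump -Fc prj > dump.custom
$> docker exec -i --user postgres `docker-compose ps -q db` \
     pg_restore --create -d postgres < dump.custom
pg_restore: out of memory
```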
## Restoring from backup
```
$> cd /project/dir
$> cat dump.sql | docker exec -i --user postgres `docker-compose ps -q db` psql
#                             ^                  -------------------------
#                             |                              |
# interactive mode, no '-t' though                           |
# fetch db container id (more on this in a moment)-----------+
```
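One caveat: since the dump was taken with `--create`, psql will try to `CREATE DATABASE prj`, so this restore assumes the database does not exist yet. On a non-empty `dbdata` volume, drop the old database first (`dropdb` ships with the postgres image):

```
$> docker exec -i --user postgres `docker-compose ps -q db` dropdb prj
```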
To fetch the db container id, the `docker-compose ps -q db` subcommand is used. This is necessary because of docker-compose's default
behavior of creating new containers with highly dynamic names, such as