This article is a note on PostgreSQL backup; the backups described here are cold backups.
The operations below are performed as the postgres account, a Linux account that also has a same-named role inside PostgreSQL. Because of this, the subsequent operations can rely on PostgreSQL's peer authentication mechanism: simply put, as long as the Linux user is postgres, PostgreSQL authenticates it directly.
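For example, on a typical installation you can become the postgres Linux user and connect without entering a password, assuming the default peer entries in pg_hba.conf:

$ sudo -u postgres psql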
The basic backup command in PostgreSQL is pg_dump; its counterpart for restoring is pg_restore. Let's talk about backup first.
backup
Assuming there is a database mocha, the simplest backup command is as follows:
$ pg_dump --file=mocha.sql --dbname=mocha
The backed-up mocha.sql is a plain-text file full of SQL statements, excerpted as follows:
--
-- PostgreSQL database dump
--

-- Dumped from database version 15.2 (Ubuntu 15.2-1.pgdg22.04+1)
-- Dumped by pg_dump version 15.2 (Ubuntu 15.2-1.pgdg22.04+1)

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

SET default_tablespace = '';

SET default_table_access_method = heap;

--
-- Name: _prisma_migrations; Type: TABLE; Schema: public; Owner: ap
--

CREATE TABLE public._prisma_migrations (
    id character varying(36) NOT NULL,
    checksum character varying(64) NOT NULL,
    finished_at timestamp with time zone,
    migration_name character varying(255) NOT NULL,
    logs text,
    rolled_back_at timestamp with time zone,
    started_at timestamp with time zone DEFAULT now() NOT NULL,
    applied_steps_count integer DEFAULT 0 NOT NULL
);

ALTER TABLE public._prisma_migrations OWNER TO ap;
As you can see, the dumped SQL specifies an OWNER, which means that when restoring on a new machine, the owner account must exist there first; in this example it is the account ap. If you don't want this behavior, you can add --no-owner to suppress it:
$ pg_dump --file=mocha.sql --no-owner --dbname=mocha
Other parameters can be added as you like; some are listed below:

- --clean: adds commands to the backup file that drop the existing database objects first, so there is no need to tear everything down by hand before restoring.
- --create: adds the command that creates the database to the backup file, so there is no need to create the mocha database manually.
Example:

$ pg_dump --file=mocha.sql --clean --create --no-owner --dbname=mocha
Turn this command into a script:
#!/bin/bash
pg_dump \
    --file=/mocha-data/mocha_"$(date --iso-8601='date')".sql \
    --clean \
    --create \
    --no-owner \
    --dbname=mocha
The date portion of the file name is filled in automatically by Bash command substitution around date.
Save the script as /usr/local/bin/backup-postgresql-mocha.sh and remember to make it executable. The backups are written to /mocha-data/, so remember to give that folder write permission as well; a quick setup sketch follows.
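A one-time setup sketch, assuming the backups will run as the postgres user:

$ sudo chmod +x /usr/local/bin/backup-postgresql-mocha.sh   # make it executable
$ sudo mkdir -p /mocha-data                                 # create the backup folder
$ sudo chown postgres:postgres /mocha-data                  # writable by postgres (assumes backups run as postgres)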
Set this script up as a cron job to execute periodically.
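For example, a hypothetical crontab entry for the postgres user; the daily 03:00 schedule is an assumption, adjust to taste:

# run the backup script every day at 03:00 (illustrative schedule)
0 3 * * * /usr/local/bin/backup-postgresql-mocha.sh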
remote backup
After the local backup is done, it can be uploaded somewhere else: a different place, region, country, or planet, so that even if the Earth explodes, the data survives.
There is nothing in Linux that one script can't solve; if there is, use two.
Below we use a Bash script to send the backup SQL file to a remote Synology NAS over HTTP:
#!/bin/bash
curl \
    --data "api=SYNO.API.Auth" \
    --data "version=7" \
    --data "method=login" \
    --data "account=OneDogIsCute" \
    --data "passwd=TwoDogsAreSilly" \
    -c /tmp/cookies.txt \
    http://200.0.0.78:5000/webapi/entry.cgi

curl \
    --form "api=SYNO.FileStation.Upload" \
    --form "version=3" \
    --form "method=upload" \
    --form "path=/db_backup/mocha" \
    --form "create_parents=True" \
    --form "overwrite=overwrite" \
    -L \
    -b /tmp/cookies.txt \
    --form "file=@/mocha-data/mocha_$(date --iso-8601='date').sql" \
    http://200.0.0.78:5000/webapi/entry.cgi
The first command logs in to the Synology NAS and saves the cookie; the second uses an HTTP POST to upload the file.
Similarly, this script can be set up as a cron job to run regularly. Note that it should run with enough of a time gap after the backup task, to avoid uploading before the backup file has been generated; one way around this is sketched below.
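One way to sidestep the timing issue is to chain the upload after the backup in a single cron entry, so the upload only starts once the backup has succeeded. The upload script name upload-postgresql-mocha.sh and the schedule here are assumptions:

# run the upload only if the backup script exits successfully (names and schedule are illustrative)
0 3 * * * /usr/local/bin/backup-postgresql-mocha.sh && /usr/local/bin/upload-postgresql-mocha.sh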
restore
As for the restore part, you can ask ChatGPT. :p
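That said, a minimal sketch for this particular dump: because pg_dump produced a plain-text SQL file (with --clean and --create), it is restored by feeding it to psql rather than pg_restore, which is reserved for the custom, directory, and tar formats:

# connect to an existing database such as postgres; the --create statements
# in the dump will then create the mocha database and switch to it
$ psql --dbname=postgres --file=mocha.sql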