DevOps Hands-on: PostgreSQL Backup to an S3 Bucket Using an Ansible Playbook

  1. Take two Ubuntu servers
  2. Create an IAM user and access keys
  3. Create an S3 bucket
  4. Install Ansible on one server and use the other as the managed (slave) node
  5. Install PostgreSQL on the slave and load it with sample data
  6. Create an Ansible playbook and run it
  7. Check the backup in the S3 bucket

Step 1 — Installing Ansible

To begin using Ansible as a means of managing your server infrastructure, you need to install the Ansible software on the machine that will serve as the Ansible control node.

From your control node, run the following command to include the official project’s PPA (personal package archive) in your system’s list of sources:

sudo apt-add-repository ppa:ansible/ansible

Press ENTER when prompted to accept the PPA addition.

Next, refresh your system’s package index so that it is aware of the packages available in the newly included PPA:

sudo apt update

Following this update, you can install the Ansible software with:

sudo apt install ansible -y
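
You can verify the installation by checking the version:

ansible --version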

Your Ansible control node now has all of the software required to administer your hosts. Next, we will go over how to add your hosts to the control node’s inventory file so that it can control them.

On the Ansible master, generate an SSH key pair:

ssh-keygen -t rsa -b 4096

It will generate the keys at /home/ubuntu/.ssh/id_rsa*

Copy the generated public key:

cat /home/ubuntu/.ssh/id_rsa.pub

And paste it into the slave server’s authorized_keys file:

vi .ssh/authorized_keys
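
With the key in place, confirm passwordless SSH works from the master (replace <ip> with the slave’s address):

ssh ubuntu@<ip> hostname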

Step 2 — Setting Up the Inventory File

The inventory file contains information about the hosts you’ll manage with Ansible. You can include anywhere from one to several hundred servers in your inventory file, and hosts can be organized into groups and subgroups. The inventory file is also often used to set variables that will be valid only for specific hosts or groups, in order to be used within playbooks and templates. Some variables can also affect the way a playbook is run, like the ansible_python_interpreter variable that we’ll see in a moment.

To edit the contents of your default Ansible inventory, open the /etc/ansible/hosts file using your text editor of choice, on your Ansible control node:

sudo nano /etc/ansible/hosts

Note: Although Ansible typically creates a default inventory file at /etc/ansible/hosts, you are free to create inventory files in any location that better suits your needs. In this case, you’ll need to provide the path to your custom inventory file with the -i parameter when running Ansible commands and playbooks. Using per-project inventory files is a good practice to minimize the risk of running a playbook on the wrong group of servers.

The default inventory file provided by the Ansible installation contains a number of examples that you can use as references for setting up your inventory. For this guide, we define a group named [servers] containing a single host identified by the alias pgtest. Be sure to replace <ip> with the IP address of your Ansible host.

/etc/ansible/hosts

[servers]
pgtest ansible_host=<ip>

[all:vars]
ansible_python_interpreter=/usr/bin/python3

The all:vars subgroup sets the ansible_python_interpreter host parameter that will be valid for all hosts included in this inventory. This parameter makes sure the remote server uses the /usr/bin/python3 Python 3 executable instead of /usr/bin/python (Python 2.7), which is not present on recent Ubuntu versions.

When you’re finished, save and close the file by pressing CTRL+X then Y and ENTER to confirm your changes.

Whenever you want to check your inventory, you can run:

ansible-inventory --list -y

You’ll see output similar to this, but containing your own server infrastructure as defined in your inventory file:
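
For the single-host inventory defined above, it will be close to:

all:
  children:
    servers:
      hosts:
        pgtest:
          ansible_host: <ip>
          ansible_python_interpreter: /usr/bin/python3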

Now that you’ve configured your inventory file, you have everything you need to test the connection to your Ansible hosts.

Step 3 — Testing Connection

After setting up the inventory file to include your servers, it’s time to check if Ansible is able to connect to these servers and run commands via SSH.

For this guide, we’ll be using the default ubuntu user, since that’s typically the account available on newly created Ubuntu servers. If your Ansible hosts already have a different regular sudo user created, you are encouraged to use that account instead.

You can use the -u argument to specify the remote system user. When not provided, Ansible will try to connect as your current system user on the control node.

From your local machine or Ansible control node, run:

ansible all -m ping -u ubuntu

This command will use Ansible’s built-in ping module to run a connectivity test on all nodes from your default inventory, connecting as the ubuntu user. The ping module will test:

  • if hosts are accessible;
  • if you have valid SSH credentials;
  • if hosts are able to run Ansible modules using Python.

If this is the first time you’re connecting to these servers via SSH, you’ll be asked to confirm the authenticity of the hosts you’re connecting to via Ansible. When prompted, type yes and then hit ENTER to confirm.

Once you get a “pong” reply back from a host, it means you’re ready to run Ansible commands and playbooks on that server.
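
A successful check for the pgtest host looks roughly like this (newer Ansible versions may also print the discovered Python interpreter):

pgtest | SUCCESS => {
    "changed": false,
    "ping": "pong"
}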

Install PostgreSQL on the slave

sudo apt install postgresql postgresql-contrib -y

Once installed, log in to psql and set a password for the postgres user:

sudo -u postgres psql
ALTER USER postgres WITH PASSWORD 'password';

Let’s create a database to store data in and take a backup of:

CREATE DATABASE my_database;

Connect to new database

\c my_database

Create a Users table to store Data

CREATE TABLE users (
   id SERIAL PRIMARY KEY,
   name VARCHAR(100),
   email VARCHAR(100)
);

Here is some dummy data to store in the table we just created:

-- Insert dummy data entries into the users table
INSERT INTO users (name, email) VALUES
('Alice Johnson', 'alice.johnson@example.com'),
('Bob Smith', 'bob.smith@example.com'),
('Charlie Brown', 'charlie.brown@example.com'),
('Diana Prince', 'diana.prince@example.com'),
('Ethan Hunt', 'ethan.hunt@example.com'),
('Fiona Gallagher', 'fiona.gallagher@example.com'),
('George Washington', 'george.washington@example.com'),
('Hannah Montana', 'hannah.montana@example.com'),
('Ian Malcolm', 'ian.malcolm@example.com'),
('Julia Roberts', 'julia.roberts@example.com'),
('Kevin Bacon', 'kevin.bacon@example.com'),
('Laura Croft', 'laura.croft@example.com'),
('Michael Scott', 'michael.scott@example.com'),
('Nina Simone', 'nina.simone@example.com'),
('Oscar Wilde', 'oscar.wilde@example.com'),
('Paula Abdul', 'paula.abdul@example.com'),
('Quentin Tarantino', 'quentin.tarantino@example.com'),
('Rachel Green', 'rachel.green@example.com'),
('Steve Jobs', 'steve.jobs@example.com'),
('Tina Fey', 'tina.fey@example.com'),
('Uma Thurman', 'uma.thurman@example.com'),
('Victor Hugo', 'victor.hugo@example.com'),
('Wanda Maximoff', 'wanda.maximoff@example.com'),
('Xena Warrior', 'xena.warrior@example.com'),
('Yara Shahidi', 'yara.shahidi@example.com'),
('Zach Galifianakis', 'zach.galifianakis@example.com'),
('Alice Cooper', 'alice.cooper@example.com'),
('Bob Marley', 'bob.marley@example.com'),
('Cathy Freeman', 'cathy.freeman@example.com'),
('David Beckham', 'david.beckham@example.com'),
('Eva Mendes', 'eva.mendes@example.com'),
('Frank Sinatra', 'frank.sinatra@example.com'),
('Gina Rodriguez', 'gina.rodriguez@example.com'),
('Henry Cavill', 'henry.cavill@example.com'),
('Isla Fisher', 'isla.fisher@example.com'),
('Jack Sparrow', 'jack.sparrow@example.com'),
('Kylie Jenner', 'kylie.jenner@example.com'),
('Leonardo DiCaprio', 'leonardo.dicaprio@example.com'),
('Megan Fox', 'megan.fox@example.com'),
('Nicolas Cage', 'nicolas.cage@example.com'),
('Olivia Wilde', 'olivia.wilde@example.com'),
('Pablo Picasso', 'pablo.picasso@example.com'),
('Queen Latifah', 'queen.latifah@example.com'),
('Ryan Gosling', 'ryan.gosling@example.com'),
('Selena Gomez', 'selena.gomez@example.com'),
('Tom Hanks', 'tom.hanks@example.com'),
('Uma Thurman', 'uma.thurman@example.com'),
('Vin Diesel', 'vin.diesel@example.com'),
('Will Smith', 'will.smith@example.com'),
('Xander Cage', 'xander.cage@example.com'),
('Yasmine Bleeth', 'yasmine.bleeth@example.com'),
('Zoe Saldana', 'zoe.saldana@example.com');

Now check whether the data was added:

SELECT * FROM users;
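
You can also sanity-check with a quick row count:

SELECT COUNT(*) FROM users;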

If psql or pg_dump fails with a “peer authentication failed for user postgres” error, update PostgreSQL’s client authentication configuration:

cd /etc/postgresql/14/main/

sudo vi pg_hba.conf

Around line 90, change the authentication method for the postgres user from peer to md5.
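
The edit looks roughly like this (the exact line number varies by version; this assumes PostgreSQL 14 on Ubuntu):

# /etc/postgresql/14/main/pg_hba.conf
# Before:
local   all             postgres                                peer
# After:
local   all             postgres                                md5

Then restart and reload PostgreSQL: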

sudo systemctl restart postgresql
sudo systemctl reload postgresql

Next, in the AWS console:

  • Create an IAM user with programmatic access keys
  • Create an S3 bucket for the backups
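
If you prefer the command line, the same setup can be sketched with the AWS CLI (assuming an admin profile is already configured; pg-backup is a placeholder user name). Note the AccessKeyId and SecretAccessKey printed by create-access-key:

aws iam create-user --user-name pg-backup
aws iam attach-user-policy --user-name pg-backup --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam create-access-key --user-name pg-backup
aws s3 mb s3://your_s3_bucket_name

In production, scope the policy down to just the backup bucket instead of AmazonS3FullAccess.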

Create a vars.yml file with your database details and AWS credentials:

postgres_user: postgres
postgres_db: my_database
postgres_password: password
s3_bucket: your_s3_bucket_name
s3_prefix: test0409
aws_access_key: YOUR_AWS_ACCESS_KEY
aws_secret_key: YOUR_AWS_SECRET_KEY
aws_region: YOUR_AWS_REGION

Make sure vars.yml is encrypted using Ansible Vault:

ansible-vault encrypt vars.yml

Enter a vault password when prompted.
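
If you need to change the credentials later, ansible-vault can view or edit the encrypted file in place:

ansible-vault view vars.yml
ansible-vault edit vars.yml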

Create a new playbook:

vi pg_backup.yml
---
- name: PostgreSQL Backup and S3 Upload Mrcloudbook
  hosts: servers
  remote_user: ubuntu
  become: yes

  vars_files:
    - vars.yml

  tasks:
    - name: Update all packages
      apt:
        name: "*"
        state: latest
        update_cache: yes
  
    - name: Install required packages
      apt:
        name:
          - python3-pip
          - python3-boto3
          - python3-botocore
          - postgresql-client
          - unzip
        state: present
    
    - name: Install AWS CLI
      shell: |
        curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
        unzip awscliv2.zip
        ./aws/install
      args:
        creates: /usr/local/bin/aws

    - name: Configure AWS CLI
      shell: |
        aws configure set aws_access_key_id {{ aws_access_key }}
        aws configure set aws_secret_access_key {{ aws_secret_key }}
        aws configure set region {{ aws_region }}
      no_log: true

    - name: Create backup directory
      file:
        path: "/var/backups/postgresql"
        state: directory
        mode: "0700"

    - name: Perform PostgreSQL backup
      shell: |
        PGPASSWORD='{{ postgres_password }}' pg_dump -U {{ postgres_user }} -d {{ postgres_db }} > /var/backups/postgresql/{{ postgres_db }}-{{ ansible_date_time.date }}-{{ ansible_date_time.time }}.sql
      args:
        executable: "/bin/bash"
      no_log: true

    - name: Compress the backup
      shell:
        cmd: gzip -f /var/backups/postgresql/{{ postgres_db }}-{{ ansible_date_time.date }}-{{ ansible_date_time.time }}.sql
        executable: "/bin/bash"

    - name: Upload backup to S3
      aws_s3:
        bucket: "{{ s3_bucket }}"
        object: "{{ s3_prefix }}/{{ postgres_db }}-{{ ansible_date_time.date }}-{{ ansible_date_time.time }}.sql.gz"
        src: "/var/backups/postgresql/{{ postgres_db }}-{{ ansible_date_time.date }}-{{ ansible_date_time.time }}.sql.gz"
        mode: put
        permission: "private"
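
Run the playbook from the master, supplying the vault password so Ansible can decrypt vars.yml:

ansible-playbook pg_backup.yml --ask-vault-pass

When the run finishes, list the bucket prefix to confirm the compressed dump arrived (assuming the AWS CLI is also configured wherever you run this):

aws s3 ls s3://your_s3_bucket_name/test0409/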

Done


Ajay Kumar Yegireddi is a DevSecOps Engineer and System Administrator with a passion for sharing real-world DevSecOps projects and tasks. Mr. Cloud Book provides hands-on tutorials and practical insights to help others master DevSecOps tools and workflows. The content is designed to bridge the gap between development, security, and operations, making complex concepts easy to understand for both beginners and professionals.
