Deploying Django web application on AWS EC2 with Ubuntu, nginx, Gunicorn and PostgreSQL

Introduction

There are many tutorials online on how to deploy a web application, but if someone asked me to recommend one, I couldn't: I've never seen one that I could follow myself to a satisfying result. I would have to piece it together from many separate sources and then tweak it for a while until I got what I wanted. So I decided to share the steps I went through to deploy my latest Django project on AWS. This guide covers every step from creating an EC2 instance to configuring a WSGI server and reverse proxy, plus a few tips on how to make the deployment more secure.

 

AWS initial setup

  1. Go to EC2 panel and click Launch Instance
  2. Choose a name
  3. Select AMI: Ubuntu (or Debian, doesn't really matter)
  4. Select instance size: t2.micro (Free for the first year, after that about $9/month)
  5. Create a new key pair. Either key type (RSA or ED25519) will work, but make sure you don't lose the .pem file (the download should start automatically in the background).
  6. In Network settings check: Allow SSH traffic from Anywhere, Allow HTTPS traffic, Allow HTTP traffic. (For better security you can later restrict SSH access to your own IP in the security group.)
  7. In Configure Storage create three or more gp3 volumes: one for the system (8+ GiB), one for the home directory (1-2 GiB), and the rest for media files. The idea here is to separate static system files from dynamic data and media files. Then we can create a snapshot of the system volume only and store it in an S3 bucket. If you want to enjoy free storage for the first year, stay under 30 GiB total.
  8. Click Launch instance

 

Server Setup

  1. SSH into your instance. Make sure you have correct permissions set for the .pem file (chmod 400)

    $ ssh -i mykey.pem ubuntu@ec2-12-345-678-9.us-east-2.compute.amazonaws.com
  2. It's time to mount the drives. First look up the device names with lsblk, then create a partition on each with fdisk and format the partitions as ext4.

    $ sudo su -

    # lsblk

    # fdisk /dev/xvdb

    # fdisk /dev/xvdc

    # mkfs.ext4 /dev/xvdb1

    # mkfs.ext4 /dev/xvdc1

    Let's mount the home directory

    # mkdir /mnt/home_move

    # mount /dev/xvdb1 /mnt/home_move/

    # rsync -av /home/* /mnt/home_move/

    # mv /home /home_old

    # mkdir /home

    # umount /dev/xvdb1

    # mount /dev/xvdb1 /home

    If everything went OK, you can remove the /home_old directory. Now let's mount a directory to store media and backup files for our project.

    # mkdir /mnt/media

    # mount /dev/xvdc1 /mnt/media/

    To make the mounts persist across reboots, add these lines to the end of the /etc/fstab file

    /dev/xvdb1 /home ext4 defaults 0 0

    /dev/xvdc1 /mnt/media ext4 defaults 0 0

  3. Install packages necessary for our deployment.

    # sh -c 'echo "deb https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'

    # wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -

    # apt-get update

    # apt-get -y install python3-venv python3-dev libpq-dev nginx curl postgresql-16 postgresql-contrib

  4. Create service user for our project.

    # mkdir /var/opt/myproject

    # adduser --system --home=/var/opt/myproject --no-create-home --disabled-password --group --shell=/bin/bash myproject

    # chown -R myproject:myproject /var/opt/myproject

 

PostgreSQL Setup

  1. Change the location of the database files to the /home volume.

    $ sudo -u postgres psql

    postgres=# SHOW data_directory;

    /var/lib/postgresql/16/main

    # systemctl stop postgresql

    # mkdir /home/postgres

    # chown postgres:postgres /home/postgres

    # rsync -av /var/lib/postgresql /home/postgres

    # mv /var/lib/postgresql/16/main /var/lib/postgresql/16/main.bak

    # nano /etc/postgresql/16/main/postgresql.conf

    ...

    data_directory = '/home/postgres/postgresql/16/main'

    Start the service back up and check SHOW data_directory; again. Once it points at the new path, remove the backup:

    # systemctl start postgresql

    # rm -rf /var/lib/postgresql/16/main.bak

  2. Setup project database

    $ sudo -u postgres psql

    postgres=# CREATE USER root SUPERUSER;

    postgres=# CREATE DATABASE myproject;

    postgres=# CREATE USER myprojectuser WITH PASSWORD 'p@@ssw0rd';

    postgres=# ALTER ROLE myprojectuser SET client_encoding TO 'utf8';

    postgres=# ALTER ROLE myprojectuser SET default_transaction_isolation TO 'read committed';

    postgres=# ALTER ROLE myprojectuser SET timezone TO 'UTC';

    postgres=# GRANT ALL PRIVILEGES ON DATABASE myproject TO myprojectuser;

    Since PostgreSQL 15 ordinary users can no longer create objects in the public schema, so also make the new role the owner of the database (otherwise Django migrations will fail with "permission denied for schema public"):

    postgres=# ALTER DATABASE myproject OWNER TO myprojectuser;

  3. In the previous step we created a superuser role root; now we can create a backup script ~/db_backup.sh that will run as system root and back up our database to the media volume. In this example I create both a full cluster backup and a project DB backup, but you can choose to do only one if you prefer.

    #!/bin/bash

    mkdir -p /mnt/media/myproject/backup/$(date +%d)

    chown myproject:www-data /mnt/media/myproject/backup/$(date +%d)

    pg_dumpall -c | gzip > /mnt/media/myproject/backup/$(date +%d)/dumpall.gz

    pg_dump -Fc myproject | gzip > /mnt/media/myproject/backup/$(date +%d)/dump_myproject.gz

    chown -R myproject:www-data /mnt/media/myproject/backup/$(date +%d)

    chmod -R 770 /mnt/media/myproject/backup/$(date +%d)

    Allow execution and create a cron job to run it every night.

    $ chmod u+x ~/db_backup.sh

    $ sudo crontab -e

     

    0 0 * * * /home/ubuntu/db_backup.sh
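    Note that the script keys each backup directory on the day of month (date +%d), so the 31 possible slots are reused every month: each nightly run overwrites last month's dump for that day, giving a rolling ~31-day retention with no cleanup job. A sketch of the path scheme (backup_dir is a hypothetical helper, not part of the setup):

```python
from datetime import date

def backup_dir(day, base='/mnt/media/myproject/backup'):
    """Directory the backup script writes to on a given date (mirrors `date +%d`)."""
    return f"{base}/{day.strftime('%d')}"

# March 5 and April 5 map to the same slot, so the older dump is overwritten.
```

    If you want longer retention, switch the key to date +%Y-%m-%d and add an explicit cleanup step.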

 

Setup Django application

  1. Create virtual environment

    # mkdir /opt/myproject

    # chown myproject:myproject /opt/myproject

    # cd /opt/myproject

    # python3 -m venv ve

    # source ve/bin/activate

  2. Install dependencies. On your development machine you can make a requirements file for your project with pip freeze > requirements.txt, then use that file to install all packages on the server with:

    (ve)# pip install -r requirements.txt

  3. Clone the project into src (matching the WorkingDirectory in the service file later) and collect static files

    (ve)# git clone https://github.com/myaccount/myproject.git src

    (ve)# cd src

    (ve)# python manage.py collectstatic

  4. The next step is to set up a WSGI server; we will use Gunicorn for that. First create a socket file /etc/systemd/system/myproject.socket

    [Unit]

    Description=myproject gunicorn socket

     

    [Socket]

    ListenStream=/run/myproject.sock

     

    [Install]

    WantedBy=sockets.target

    And a service file /etc/systemd/system/myproject.service

    [Unit]

    Description=myproject gunicorn daemon

    Requires=myproject.socket

    After=network.target

     

    [Service]

    User=myproject

    Group=www-data

    EnvironmentFile=/var/opt/myproject/.env

    WorkingDirectory=/opt/myproject/src

    ExecStart=/opt/myproject/ve/bin/gunicorn \

            --access-logfile - \

            --log-level=warning \

            --capture-output \

            --log-file /var/log/myproject.log \

            --workers 3 \

            --timeout 300 \

            --bind unix:/run/myproject.sock \

            myproject.wsgi:application

     

    [Install]

    WantedBy=multi-user.target
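    The --workers 3 above follows the rule of thumb from the Gunicorn docs, (2 x num_cores) + 1, which gives 3 for a single-vCPU t2.micro. A small sketch (gunicorn_workers is just an illustrative helper, not part of the project):

```python
import os

def gunicorn_workers(cores=None):
    """Suggested number of sync workers: (2 x CPU cores) + 1, per the Gunicorn docs."""
    if cores is None:
        cores = os.cpu_count() or 1
    return 2 * cores + 1
```

    If you later move to a larger instance, recompute this and update the --workers value in the service file.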

  5. Create environment variables file /var/opt/myproject/.env. In your Django project you can retrieve them with os.environ.get('VARIABLE_NAME').

    DJANGO_HOSTS="myproject.com"

    MYPROJECT_SECRET_KEY="django-insecure-12345"

    MYPROJECT_DB_USER="myprojectuser"

    MYPROJECT_DB_PASSWORD="p@@ssw0rd"

    DJANGO_MEDIA_ROOT="/mnt/media/myproject/media"
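    As a sketch, myproject/settings.py might read these variables like this (the fallback defaults here are illustrative assumptions, not values from the original project):

```python
import os

# Values come from /var/opt/myproject/.env, loaded by systemd's EnvironmentFile
SECRET_KEY = os.environ.get('MYPROJECT_SECRET_KEY', 'dev-only-key')
ALLOWED_HOSTS = os.environ.get('DJANGO_HOSTS', 'myproject.com').split(',')
MEDIA_ROOT = os.environ.get('DJANGO_MEDIA_ROOT', '/mnt/media/myproject/media')

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'myproject',
        'USER': os.environ.get('MYPROJECT_DB_USER', 'myprojectuser'),
        'PASSWORD': os.environ.get('MYPROJECT_DB_PASSWORD', ''),
        'HOST': '',  # empty host = connect over the local UNIX socket
    }
}
```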

  6. Create a log file, set permissions and start the services.

    # chmod 640 /var/opt/myproject/.env

    # touch /var/log/myproject.log

    # chown myproject:www-data /var/log/myproject.log

    # chmod 660 /var/log/myproject.log

    # systemctl start myproject.socket

    # systemctl enable myproject.socket

  7. Test that the service starts ok. Poke the socket with curl, you should get an HTTP response and the service should become active.

    # curl --unix-socket /run/myproject.sock localhost

    # systemctl status myproject.service

Configure Nginx

  1. Now we need to set up a reverse proxy to handle outside traffic and serve the static and media files. First create a file /etc/nginx/sites-available/myproject

    server {

        listen 80;

        server_name myproject.com;

     

        location = /favicon.ico { access_log off; log_not_found off; }

     

        location / {

            include /etc/nginx/mime.types;

            include proxy_params;

            proxy_pass http://unix:/run/myproject.sock;

            client_max_body_size 100M; # Max request body size nginx will accept (uploads)

        }

     

        # Location of the static files

        location /static/ {

            alias /opt/myproject/static/;

        }

     

        # Location of the media files

        location /media/ {

            alias /mnt/media/myproject/;

        }

    }

    If you want to prevent unauthenticated users from accessing your media files (by fuzzing URLs, for example), you can use the following trick: in your Django myproject/views.py file create the following view.


    from django.http import HttpResponse, HttpResponseForbidden
    
    def media_access(request, path):
        if request.user.is_authenticated:
            response = HttpResponse()
            del response['Content-Type']
            response['X-Accel-Redirect'] = '/protected/media/' + path
            return response
        else:
            return HttpResponseForbidden('Not authorized to access this media.')
    

    Then add this path to your myproject.urls file.


    from django.urls import re_path
    from . import views

    urlpatterns = [
        # ... your other routes ...
        re_path(r'^media/(?P<path>.*)', views.media_access, name='media'),
    ]

    Now replace the /media/ location in your nginx config file with this:

    ...

        # Location for protected media files

        location /protected/ {

            internal;

            alias /mnt/media/myproject/;

        }

    }

    Now an unauthenticated user can't access the media files simply by knowing their URL. All that's left is to make this config file active by creating a symlink to it in the sites-enabled directory, checking the file for syntax errors and restarting the nginx daemon.

    # ln -s /etc/nginx/sites-available/myproject /etc/nginx/sites-enabled

    # nginx -t

    nginx: configuration file /etc/nginx/nginx.conf test is successful

    # systemctl restart nginx

  2. On the EC2 dashboard find the Public IPv4 address for your instance and create an A record for it in your DNS provider's configuration dashboard. Every provider has their own process for doing that, so just follow their instructions. Once that is done we can get a free certificate from Let's Encrypt so that we can use HTTPS to access our web site. We will use certbot for that. After the certificate is created, certbot automatically updates our nginx config file and installs a systemd timer that renews the certificate before it expires.

    # apt install certbot python3-certbot-nginx

    # certbot --nginx -d myproject.com

    # systemctl status certbot.timer

    # certbot renew --dry-run

  3. All that's left is to set up log rotation for our Gunicorn log file, and our web app is up and running. Just create a file /etc/logrotate.d/myproject with the following content:

    /var/log/myproject.log {

        daily

        rotate 7

        compress

        sharedscripts

        postrotate

            systemctl kill -s HUP myproject.service

        endscript

    }



