WittCode💻

Connect a Node Server to Postgres with Docker Compose

Learn how to connect a Node server to a Postgres database using Docker Compose. We will also go over Docker Compose healthchecks, initialization scripts, and the pg npm library.

Environment Variable Setup

To begin, let's define our environment variables inside a .env file. These variables will represent the location of our Node server and Postgres database, along with some database initialization values.

SERVER_HOST=server-c
SERVER_PORT=6777

POSTGRES_HOST=postgres-c
POSTGRES_PORT=6778
POSTGRES_USER=wittcepter
POSTGRES_PASSWORD=theBestChromeExtension
POSTGRES_DB=my_db

The names of the Postgres environment variables are important as the Postgres Docker image reserves these names for database initialization. For example, POSTGRES_USER and POSTGRES_PASSWORD create a user with that password and give them superuser privileges. The POSTGRES_DB environment variable sets the name of the default database.
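One detail worth remembering on the Node side: every value loaded from a .env file arrives in process.env as a string. Below is a small sketch of coercing numeric variables safely (the envInt helper is our own invention, not part of any library):

```javascript
// Read an environment variable and coerce it to an integer,
// falling back to a default when it is missing or malformed.
function envInt(name, fallback) {
  const parsed = Number.parseInt(process.env[name] ?? '', 10);
  return Number.isNaN(parsed) ? fallback : parsed;
}

process.env.POSTGRES_PORT = '6778'; // simulating a value loaded from .env
console.log(envInt('POSTGRES_PORT', 5432)); // 6778
console.log(envInt('UNSET_PORT', 5432));    // 5432
```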

Node Project Setup

Now let's initialize an empty directory as an ES6 npm project using npm init es6 -y.

npm init es6 -y

Now let's install nodemon as a development dependency so our server restarts automatically on code changes.

npm i nodemon -D

Next let's create a start script to run our application. Specifically, we will set the entry point of the program to our server.js file and then run it with nodemon.

"main": "./src/server.js",
...
"scripts": {
  "start": "nodemon ."
}

Connect Node to Postgres with pg

To connect Node to Postgres, we are going to use the pg npm library. This library is a pure JavaScript implementation of a PostgreSQL client. A PostgreSQL client is essentially a connection to a PostgreSQL server that can issue commands and database operations.

npm i pg

There are two different ways to connect to Postgres with the pg library: Client and Pool. Both of these are objects exported from the pg library. A Client is a single static connection to the Postgres server, while a Pool manages a dynamic number of Clients and has automatic reconnect functionality. For this demonstration we will use the Pool. To connect to PostgreSQL with a connection pool, we instantiate a Pool object.

import pg from 'pg';

const pool = new pg.Pool();

We then need to supply it with the environment variables that we created inside our .env file.

const pool = new pg.Pool({
  host: process.env.POSTGRES_HOST,
  port: process.env.POSTGRES_PORT,
  user: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB
});
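If you prefer to fail fast when one of these variables is missing, the option-building can be pulled into a small helper that validates the environment up front. Here is a sketch under our own naming (buildPoolConfig is not part of pg):

```javascript
// Assemble the pg Pool options from an environment object, throwing early
// when a required variable is absent instead of failing later at query time.
function buildPoolConfig(env) {
  const required = ['POSTGRES_HOST', 'POSTGRES_PORT', 'POSTGRES_USER',
                    'POSTGRES_PASSWORD', 'POSTGRES_DB'];
  for (const name of required) {
    if (!env[name]) throw new Error(`Missing environment variable: ${name}`);
  }
  return {
    host: env.POSTGRES_HOST,
    port: Number(env.POSTGRES_PORT),
    user: env.POSTGRES_USER,
    password: env.POSTGRES_PASSWORD,
    database: env.POSTGRES_DB
  };
}

// Usage might then be: const pool = new pg.Pool(buildPoolConfig(process.env));
```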

The Pool is initially empty and clients are created when they are needed. Now, to ensure that we are connected, let's query the PostgreSQL server. First, let's check out a client from the pool. This is done with the connect method.

async function main() {
  const client = await pool.connect();
}

Now that we've checked out a client (connection) from the pool, we can use that client to query the Postgres server. We issue the query with the query method and extract the returned rows from the response. We will get the rows from a table called subscriber that we will create in the Postgres database using initialization scripts.

try {
  const response = await client.query('SELECT * FROM subscriber');
  const {rows} = response;
  console.log(rows);
} catch (err) {
  console.log(err);
} finally {
  client.release();
}

We wrap the query inside a try, catch, finally block so that the client is always released back into the pool, whether the query succeeded or not. If we don't release the client, the pool will eventually be depleted and there will be no clients left to handle requests. Finally, let's run this main function.

main()
  .then(() => console.log('Connected to Postgres!'))
  .catch(err => console.error('Error connecting to Postgres!', err));
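The checkout/query/release pattern above can also be factored into a reusable helper so a release can never be forgotten. Here is a sketch with our own withClient name (for one-off queries, pg's pool.query method performs the same checkout and release internally):

```javascript
// Check out a client, run the supplied function with it, and always
// release the client back into the pool, even if the function throws.
async function withClient(pool, fn) {
  const client = await pool.connect();
  try {
    return await fn(client);
  } finally {
    client.release();
  }
}

// Demonstration with a stand-in pool, so no real database is needed here:
const fakePool = {
  released: 0,
  async connect() {
    const self = this;
    return {
      async query(text) { return { rows: [{ text }] }; },
      release() { self.released += 1; }
    };
  }
};

(async () => {
  const { rows } = await withClient(fakePool, c => c.query('SELECT * FROM subscriber'));
  console.log(rows);              // the stand-in rows
  console.log(fakePool.released); // 1 -- the client was released
})();
```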

Creating the Node Docker Image

Now lets create our Node Docker image. Here we will use Node version 20 as the base image.

FROM node:20-alpine
WORKDIR /server
COPY package*.json ./
RUN npm i
CMD [ "npm", "start" ]

  • Use Node version 20 as the base image.
  • Set the working directory to /server. This means any RUN, CMD, ENTRYPOINT, COPY, and ADD commands will be executed in this directory.
  • Copy over package.json and package-lock.json.
  • Install dependencies from npm.
  • Start the Node server.

Note how we don't copy over the source code. This is because we will bring it into the container using volumes.

Creating the Postgres Image

Now let's create our Postgres Docker image. We will use Postgres version 16 as the base image.

FROM postgres:16-alpine

Next, let's focus on filling our Postgres database with an initial table and data. To do this, we need to use initialization scripts. These are SQL or script files placed inside a specific folder in the Docker image. For example, we can use the following SQL file to spin up a Postgres Docker container containing a table called subscriber with one row of data.

CREATE TABLE subscriber (
  subscriber_id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  email VARCHAR(255) UNIQUE NOT NULL
);

INSERT INTO subscriber (name, email) VALUES
('WittCepter', 'the-best-chrome-extension@a.com');

The folder that we need to copy this SQL file into is called docker-entrypoint-initdb.d. Any scripts placed inside this folder are executed when the container first initializes an empty data directory, after the default Postgres user and database are created. So let's place this file in the docker-entrypoint-initdb.d folder using our Dockerfile.

COPY subscriber.sql /docker-entrypoint-initdb.d/

Creating Volumes with Docker Compose

Now let's start working with Docker Compose. To begin, we will create the required volumes.

volumes:
  server-v-node-modules:
    name: "server-v-node-modules"
  database-v:
    name: "database-v"

The top-level volumes declaration lets us configure named volumes. The name property sets a custom name for the volume. Running the command docker compose up for the first time will create these volumes; they are reused on subsequent runs. The server-v-node-modules volume will handle our node_modules folder and the database-v volume will persist our Postgres data.

node_modules Issues with Docker

The node_modules folder can be problematic for Docker if it contains packages with binaries specific to certain operating systems. In other words, certain packages will install different files depending on the operating system of the computer. This can cause issues when developing an application with Docker, as the Docker container doesn't always use the same OS as the host computer. This is why we create a volume called server-v-node-modules to handle our node_modules.

Creating the Node Service with Docker Compose

Now let's use Docker Compose to create our Node service.

version: "3.9"
services:

  server:
    image: server:1.0.0
    container_name: ${SERVER_HOST}
    build:
      context: ./server
      dockerfile: Dockerfile
    env_file: .env
    ports:
      - ${SERVER_PORT}:${SERVER_PORT}
    volumes:
      - ./server:/server
      - server-v-node-modules:/server/node_modules
    depends_on:
      database:
        condition: service_healthy

  • Create a service called server. This will be our Node server.
  • Name the image server:1.0.0. If both the image and build declarations are specified, the image declaration specifies the name and tag of the Docker image.
  • The container_name declaration specifies a container name as opposed to a generated default name.
  • The build declaration contains options applied at build time. The context declaration is the path to a directory containing a Dockerfile or a URL to a Git repository. The dockerfile property is the name of the Dockerfile to use.
  • The env_file declaration loads environment variables from a .env file into the container.
  • The ports declaration maps a container port to a host port.
  • The volumes declaration mounts volumes into the container. The bind mount ./server:/server brings all our server code into the container, while the named volume keeps the container's installed node_modules from being hidden by that bind mount.
  • The depends_on declaration makes the server service wait until our database service is healthy.

depends_on service_healthy

We are using the depends_on attribute because before we attempt to connect to Postgres, we want to make sure the service is healthy. In other words, we want Postgres to be ready to accept connections. By default, depends_on will wait until the service has started but just because the service has started doesn't mean that it is ready to accept connections. Therefore, we set it to service_healthy. This allows us to set a custom condition to determine if the service is ready. We will create this healthcheck inside our database service.
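Healthchecks guard startup order, but Postgres can also become unavailable after startup (e.g. during a restart). As a complementary measure, the Node side can retry its first connection instead of crashing immediately. Below is a minimal retry sketch with a helper name of our own (waitFor is not a pg API):

```javascript
// Call an async readiness probe until it succeeds or retries are exhausted.
async function waitFor(probe, { retries = 5, intervalMs = 1000 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await probe();
    } catch (err) {
      if (attempt === retries) throw err; // give up after the last attempt
      await new Promise(resolve => setTimeout(resolve, intervalMs));
    }
  }
}

// Against a real pool this might look like:
//   await waitFor(() => pool.query('SELECT 1'), { retries: 5, intervalMs: 2000 });
```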

Creating Postgres Service with Docker Compose

Now let's use Docker Compose to create our Postgres service.

database:
  image: database:1.0.0
  container_name: ${POSTGRES_HOST}
  build:
    context: ./database
    dockerfile: Dockerfile
  env_file: .env
  ports:
    - ${POSTGRES_PORT}:${POSTGRES_PORT}
  volumes:
    - database-v:/var/lib/postgresql/data
  command: "-p ${POSTGRES_PORT}"
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -p ${POSTGRES_PORT} -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
    start_period: 0s
    interval: 5s
    timeout: 5s
    retries: 5

  • Create a service called database.
  • Set the image name to database and give it the version 1.0.0.
  • Name the container using the environment variable POSTGRES_HOST.
  • Use the build property to specify the path to the Postgres Dockerfile.
  • Load environment variables from the .env file into the container.
  • Map a container port to a host port.
  • Setup the database-v named volume to persist the data stored in Postgres. To persist the data stored inside a Postgres image, we need to create a volume pointing to the /var/lib/postgresql/data directory inside the image.
  • Change the default Postgres port from 5432 to our environment variable using the command attribute.
  • Create a healthcheck that tells the Node service when Postgres is ready to accept connections.

Postgres Healthcheck

The healthcheck attribute determines whether the service is healthy or not. Here, we test the health of the Postgres container every 5 seconds, up to 5 retries, using the command supplied to test. The command we are running is the utility command pg_isready.

  • pg_isready - A utility command to check the connection status of a PostgreSQL server.
  • -p - Specifies the port number of the PostgreSQL server.
  • -U - Specifies the username to connect to the server.
  • -d - Specifies the name of the database to connect to.

Running the Program

To run the program, all we need to do is run the command docker compose up at the root of the project.

docker compose --env-file .env up

We also supply our environment variable file. This will load the environment variables into our docker-compose.yaml file. The output should be similar to the following.

postgres-c  | 2024-04-12 15:57:38.741 UTC [1] LOG:  database system is ready to accept connections
server-c    | 
server-c    | > start
server-c    | > nodemon .
server-c    | 
server-c    | 
server-c    | [nodemon] 3.1.0
server-c    | [nodemon] to restart at any time, enter `rs`
server-c    | [nodemon] watching path(s): *.*
server-c    | [nodemon] watching extensions: js,mjs,cjs,json
server-c    | [nodemon] starting `node .`
server-c    | [
server-c    |   {
server-c    |     subscriber_id: 1,
server-c    |     name: 'WittCepter',
server-c    |     email: 'the-best-chrome-extension@a.com'
server-c    |   }
server-c    | ]
server-c    | Connected to Postgres!

Notice how our Node application is run only after the Postgres container reports that it is ready to accept connections.