
PostgreSQL Security: Roles, Permissions, and Row-Level Security

A comprehensive guide to PostgreSQL security for Node.js developers, covering roles, GRANT/REVOKE permissions, row-level security for multi-tenant apps, SSL enforcement, and audit logging.


Overview

PostgreSQL has the most mature and granular security model of any open-source database, but most Node.js developers never get past CREATE USER app PASSWORD 'password123' and a single superuser connection string. That leaves your database wide open to lateral movement, privilege escalation, and data leaks between tenants. This article covers the full security stack: roles and permissions, row-level security for multi-tenant isolation, SSL enforcement, audit logging, and how to wire it all into a Node.js application without losing your mind.

Prerequisites

  • PostgreSQL 14+ installed and running (15 or 16 preferred)
  • Node.js 18+ with the pg driver (npm install pg)
  • Basic SQL knowledge (CREATE TABLE, SELECT, INSERT)
  • A terminal with psql access
  • Understanding of connection pooling (PgBouncer or pg.Pool)

The PostgreSQL Role System

PostgreSQL does not have separate concepts of "users" and "groups." Everything is a role. A role with the LOGIN attribute can connect to the database. A role without it acts as a group. This is one of the cleanest permission models in any RDBMS, and most people ignore it.

-- A role that can log in (what most people call a "user")
CREATE ROLE app_service LOGIN PASSWORD 'strong-random-password-here';

-- A role that cannot log in (a group role)
CREATE ROLE app_readonly;

-- Grant the group role to the login role
GRANT app_readonly TO app_service;

The key insight: you build a hierarchy. Group roles define what permissions exist. Login roles define who can connect. Then you map login roles to group roles with GRANT. This separation matters because you can revoke a group role from a login role instantly without changing passwords or connection strings.
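
For example, if the app_service credentials ever leak, you can strip its inherited read access immediately and then inspect what memberships remain. A quick sketch using the roles above (psql's \du shows the same information):

-- Instantly cut off a permission set without touching passwords
REVOKE app_readonly FROM app_service;

-- Inspect current role memberships
SELECT m.rolname AS member, g.rolname AS granted_role
FROM pg_auth_members am
JOIN pg_roles m ON m.oid = am.member
JOIN pg_roles g ON g.oid = am.roleid
WHERE m.rolname = 'app_service';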

Users vs. Roles vs. Groups

Here is how I structure roles in every production system:

-- 1. The superuser (already exists, never used by applications)
-- postgres - only for admin tasks, migrations, emergencies

-- 2. Application owner role (owns the schema and tables)
CREATE ROLE app_owner NOLOGIN;

-- 3. Application service role (what your Node.js app connects as)
CREATE ROLE app_service LOGIN PASSWORD 'generated-64-char-password';
GRANT app_owner TO app_service;

-- 4. Read-only role (for reporting, dashboards, analysts)
CREATE ROLE app_readonly NOLOGIN;

-- 5. Read-only login role
CREATE ROLE app_reader LOGIN PASSWORD 'another-strong-password';
GRANT app_readonly TO app_reader;

-- 6. Migration role (for schema changes only)
CREATE ROLE app_migrator LOGIN PASSWORD 'migration-password';

Never let your application connect as postgres. Ever. The postgres superuser can bypass every security control discussed in this article, including row-level security. If your ORM migration tool needs superuser access, that is a red flag about the ORM, not a reason to run your app as superuser.
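
A quick way to audit which login roles hold dangerous attributes; anything other than your admin role showing t in these columns deserves scrutiny:

-- List login roles and their privileged attributes
SELECT rolname, rolsuper, rolcreaterole, rolcreatedb, rolbypassrls
FROM pg_roles
WHERE rolcanlogin
ORDER BY rolname;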

GRANT and REVOKE Patterns

Permissions in PostgreSQL cascade through a hierarchy: cluster, database, schema, table, column, row. Most developers only think about table-level access, but schema-level permissions are the first gate.

Schema-Level Permissions

-- Create a dedicated schema (stop using 'public' for application tables)
CREATE SCHEMA app AUTHORIZATION app_owner;

-- Revoke default public access
REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA app FROM PUBLIC;

-- Grant usage to the service role
GRANT USAGE ON SCHEMA app TO app_service;
GRANT USAGE ON SCHEMA app TO app_readonly;

-- Set default search path
ALTER ROLE app_service SET search_path TO app, public;
ALTER ROLE app_readonly SET search_path TO app, public;

The REVOKE ALL ON SCHEMA public FROM PUBLIC line is critical. Before PostgreSQL 15, every role can create objects in the public schema by default (15 removed the default CREATE grant, though PUBLIC still gets USAGE). This is PostgreSQL's biggest security footgun and it has bitten me on multiple projects.
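
To confirm the revocation actually took effect, check the schema privileges. A quick sanity check using the roles defined above:

-- Should return false after REVOKE ALL ON SCHEMA public FROM PUBLIC
SELECT has_schema_privilege('app_service', 'public', 'CREATE');

-- Show the remaining access control lists on both schemas
SELECT nspname, nspacl FROM pg_namespace WHERE nspname IN ('public', 'app');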

Table-Level Permissions

-- Grant full DML to the service role on all current tables
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA app TO app_service;

-- Grant only SELECT to the readonly role
GRANT SELECT ON ALL TABLES IN SCHEMA app TO app_readonly;

-- Grant sequence usage (needed for SERIAL/BIGSERIAL columns)
GRANT USAGE ON ALL SEQUENCES IN SCHEMA app TO app_service;

-- Set defaults for future tables (this is the one people forget)
ALTER DEFAULT PRIVILEGES IN SCHEMA app
  GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_service;

ALTER DEFAULT PRIVILEGES IN SCHEMA app
  GRANT SELECT ON TABLES TO app_readonly;

ALTER DEFAULT PRIVILEGES IN SCHEMA app
  GRANT USAGE ON SEQUENCES TO app_service;

That ALTER DEFAULT PRIVILEGES block is essential. Without it, every new table created by migrations will be inaccessible to your application role until you manually grant permissions. I have seen production deployments break at 2 AM because a migration created a new table and nobody ran the grants.
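
One caveat: ALTER DEFAULT PRIVILEGES applies only to objects created by the role that ran it (or by the role named with FOR ROLE). If your migrations run as the separate app_migrator role from earlier, set the defaults for that role explicitly. A sketch of what that looks like:

-- Default privileges are per creating role: cover the migrator too
ALTER DEFAULT PRIVILEGES FOR ROLE app_migrator IN SCHEMA app
  GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_service;

ALTER DEFAULT PRIVILEGES FOR ROLE app_migrator IN SCHEMA app
  GRANT SELECT ON TABLES TO app_readonly;

ALTER DEFAULT PRIVILEGES FOR ROLE app_migrator IN SCHEMA app
  GRANT USAGE ON SEQUENCES TO app_service;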

Column-Level Permissions

Sometimes you need to hide specific columns from certain roles. PostgreSQL supports this natively:

-- Create a table with sensitive data
CREATE TABLE app.users (
  id BIGSERIAL PRIMARY KEY,
  email TEXT NOT NULL,
  password_hash TEXT NOT NULL,
  ssn_encrypted BYTEA,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Grant SELECT on only non-sensitive columns to the readonly role
GRANT SELECT (id, email, created_at) ON app.users TO app_readonly;

-- The readonly role cannot see password_hash or ssn_encrypted
-- Attempting to SELECT * will fail

Column-level permissions are an underused feature. They are particularly useful when you have reporting or analytics roles that need access to user tables but should never see password hashes, encrypted PII, or API keys.
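
You can see the restriction in action from psql. As a superuser (or any member of app_readonly), switch into the role and compare the two queries:

SET ROLE app_readonly;

SELECT id, email, created_at FROM app.users;  -- allowed
SELECT * FROM app.users;                      -- ERROR: permission denied for table users

RESET ROLE;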

Row-Level Security Fundamentals

Row-Level Security (RLS) is PostgreSQL's mechanism for filtering rows based on the identity of the connected user or session variables. When RLS is enabled on a table, every query against that table is automatically rewritten to include a filter condition. The application code never sees the filter; it is enforced at the database level.

This is the single most important security feature for multi-tenant applications. Instead of relying on every query in your Node.js codebase to include WHERE tenant_id = $1, the database enforces the filter automatically.

-- Enable RLS on a table
ALTER TABLE app.projects ENABLE ROW LEVEL SECURITY;

-- Force RLS even for table owners (important!)
ALTER TABLE app.projects FORCE ROW LEVEL SECURITY;

-- Create a policy that filters by tenant
CREATE POLICY tenant_isolation ON app.projects
  USING (tenant_id = current_setting('app.current_tenant_id')::INTEGER);

The USING clause defines which rows are visible. The current_setting('app.current_tenant_id') call reads a session-level variable that your Node.js application sets on each request. This is the bridge between your application's authentication layer and PostgreSQL's row filtering.
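
You can exercise a policy directly from psql before wiring up any application code. A minimal check, assuming you connect as app_service (superusers bypass RLS) and that tenant 42 exists:

BEGIN;
SELECT set_config('app.current_tenant_id', '42', true);
SELECT count(*) FROM app.projects;  -- counts only tenant 42's rows
COMMIT;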

Policy Types

PostgreSQL supports four policy commands: SELECT, INSERT, UPDATE, and DELETE. You can create separate policies for each, or a single policy for ALL:

-- Separate policies for read vs. write
CREATE POLICY tenant_select ON app.projects
  FOR SELECT
  USING (tenant_id = current_setting('app.current_tenant_id')::INTEGER);

CREATE POLICY tenant_insert ON app.projects
  FOR INSERT
  WITH CHECK (tenant_id = current_setting('app.current_tenant_id')::INTEGER);

CREATE POLICY tenant_update ON app.projects
  FOR UPDATE
  USING (tenant_id = current_setting('app.current_tenant_id')::INTEGER)
  WITH CHECK (tenant_id = current_setting('app.current_tenant_id')::INTEGER);

CREATE POLICY tenant_delete ON app.projects
  FOR DELETE
  USING (tenant_id = current_setting('app.current_tenant_id')::INTEGER);

The difference between USING and WITH CHECK matters:

  • USING: filters which rows you can see (SELECT) or affect (UPDATE, DELETE)
  • WITH CHECK: validates the data being written (INSERT, UPDATE). If the new row does not satisfy the check, the insert/update fails.

For UPDATE, you specify both: USING controls which rows you can target, and WITH CHECK ensures the updated row still satisfies the policy. This prevents a tenant from updating a row's tenant_id to steal it from another tenant.
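
The effect is easy to demonstrate in psql. With the policies above in place, a tenant that tries to hand one of its rows to another tenant is rejected by the WITH CHECK clause (project id 10 is hypothetical here):

BEGIN;
SELECT set_config('app.current_tenant_id', '1', true);
-- Attempt to move a project owned by tenant 1 over to tenant 2
UPDATE app.projects SET tenant_id = 2 WHERE id = 10;
-- ERROR: new row violates row-level security policy for table "projects"
ROLLBACK;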

Implementing RLS for Multi-Tenant Applications

Here is the full schema for a multi-tenant SaaS application:

-- Tenant table (no RLS needed here, managed by admins)
CREATE TABLE app.tenants (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  slug TEXT UNIQUE NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Users belong to tenants
CREATE TABLE app.users (
  id BIGSERIAL PRIMARY KEY,
  tenant_id INTEGER NOT NULL REFERENCES app.tenants(id),
  email TEXT NOT NULL,
  password_hash TEXT NOT NULL,
  role TEXT NOT NULL DEFAULT 'member',
  created_at TIMESTAMPTZ DEFAULT NOW(),
  UNIQUE(tenant_id, email)
);

-- Projects belong to tenants
CREATE TABLE app.projects (
  id BIGSERIAL PRIMARY KEY,
  tenant_id INTEGER NOT NULL REFERENCES app.tenants(id),
  name TEXT NOT NULL,
  description TEXT,
  created_by BIGINT REFERENCES app.users(id),
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Tasks belong to projects (and inherit tenant isolation)
CREATE TABLE app.tasks (
  id BIGSERIAL PRIMARY KEY,
  tenant_id INTEGER NOT NULL REFERENCES app.tenants(id),
  project_id BIGINT NOT NULL REFERENCES app.projects(id),
  title TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'open',
  assigned_to BIGINT REFERENCES app.users(id),
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Enable RLS on all tenant-scoped tables
ALTER TABLE app.users ENABLE ROW LEVEL SECURITY;
ALTER TABLE app.projects ENABLE ROW LEVEL SECURITY;
ALTER TABLE app.tasks ENABLE ROW LEVEL SECURITY;

ALTER TABLE app.users FORCE ROW LEVEL SECURITY;
ALTER TABLE app.projects FORCE ROW LEVEL SECURITY;
ALTER TABLE app.tasks FORCE ROW LEVEL SECURITY;

-- Create tenant isolation policies
CREATE POLICY tenant_isolation ON app.users FOR ALL
  USING (tenant_id = current_setting('app.current_tenant_id')::INTEGER)
  WITH CHECK (tenant_id = current_setting('app.current_tenant_id')::INTEGER);

CREATE POLICY tenant_isolation ON app.projects FOR ALL
  USING (tenant_id = current_setting('app.current_tenant_id')::INTEGER)
  WITH CHECK (tenant_id = current_setting('app.current_tenant_id')::INTEGER);

CREATE POLICY tenant_isolation ON app.tasks FOR ALL
  USING (tenant_id = current_setting('app.current_tenant_id')::INTEGER)
  WITH CHECK (tenant_id = current_setting('app.current_tenant_id')::INTEGER);

-- Add indexes for the tenant_id columns (RLS policies need these)
CREATE INDEX idx_users_tenant ON app.users(tenant_id);
CREATE INDEX idx_projects_tenant ON app.projects(tenant_id);
CREATE INDEX idx_tasks_tenant ON app.tasks(tenant_id);

The FORCE ROW LEVEL SECURITY line is important. Without it, the table owner role bypasses RLS entirely. If your application connects as the table owner (which it will if you use the same role for migrations and runtime), RLS policies are silently ignored.
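
Two other bypass paths to keep in mind: superusers always skip RLS, and so does any role with the BYPASSRLS attribute. A quick check:

-- Roles that bypass row-level security entirely
SELECT rolname FROM pg_roles WHERE rolsuper OR rolbypassrls;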

RLS Policies with Node.js

The critical integration point is setting the session variable before executing queries. Here is how to do it properly with the pg library and connection pooling:

var pg = require('pg');

var pool = new pg.Pool({
  host: process.env.DB_HOST,
  port: 5432,
  database: process.env.DB_NAME,
  user: 'app_service',
  password: process.env.DB_PASSWORD,
  max: 20,
  ssl: { rejectUnauthorized: true }
});

// Middleware: set tenant context on every database operation
function withTenant(tenantId, callback) {
  pool.connect(function(err, client, release) {
    if (err) {
      return callback(err);
    }

    // Open a transaction: a transaction-local setting only lives until
    // its transaction ends, so the queries that rely on it must run
    // inside the same BEGIN ... COMMIT block
    client.query('BEGIN', function(err) {
      if (err) {
        release();
        return callback(err);
      }

      // Set the session variable for RLS.
      // The third argument 'true' makes it transaction-local, which is
      // critical for connection pooling safety.
      client.query(
        "SELECT set_config('app.current_tenant_id', $1::TEXT, true)",
        [tenantId.toString()],
        function(err) {
          if (err) {
            return client.query('ROLLBACK', function() {
              release();
              callback(err);
            });
          }

          callback(null, client, release);
        }
      );
    });
  });
}

// Usage in an Express route
function getProjects(req, res) {
  var tenantId = req.user.tenantId; // From your auth middleware

  withTenant(tenantId, function(err, client, release) {
    if (err) {
      console.error('Failed to set tenant context:', err);
      return res.status(500).json({ error: 'Internal server error' });
    }

    // This query automatically filters by tenant_id thanks to RLS
    // No WHERE tenant_id = $1 needed!
    client.query(
      'SELECT id, name, description, created_at FROM app.projects ORDER BY created_at DESC',
      function(err, result) {
        if (err) {
          console.error('Query failed:', err);
          // Roll back so an aborted transaction never goes back to the pool
          return client.query('ROLLBACK', function() {
            release();
            res.status(500).json({ error: 'Internal server error' });
          });
        }

        // End the transaction, then release the client back to the pool
        client.query('COMMIT', function() {
          release();
          res.json({ projects: result.rows });
        });
      }
    );
  });
}

The set_config call with the third argument set to true makes the setting transaction-local, which is essential with connection pooling: a session-level setting would let one tenant's ID leak to the next request that picks up the same pooled connection. The flip side is that a transaction-local setting disappears the moment its transaction ends, so set_config and the queries that depend on it must run inside the same explicit BEGIN ... COMMIT block. That is why withTenant opens a transaction before setting the variable, and why the route handler commits (or rolls back) before releasing the client.

Transaction-Safe Pattern

Write operations follow the same rule: wrap the tenant context and every query in a single explicit transaction, and roll back on any error:

function createProject(req, res) {
  var tenantId = req.user.tenantId;
  var name = req.body.name;
  var description = req.body.description;

  pool.connect(function(err, client, release) {
    if (err) {
      return res.status(500).json({ error: 'Connection failed' });
    }

    client.query('BEGIN', function(err) {
      if (err) {
        release();
        return res.status(500).json({ error: 'Transaction start failed' });
      }

      client.query(
        "SELECT set_config('app.current_tenant_id', $1::TEXT, true)",
        [tenantId.toString()],
        function(err) {
          if (err) {
            return rollback(client, release, res, err);
          }

          client.query(
            'INSERT INTO app.projects (tenant_id, name, description, created_by) VALUES ($1, $2, $3, $4) RETURNING id, name, created_at',
            [tenantId, name, description, req.user.id],
            function(err, result) {
              if (err) {
                return rollback(client, release, res, err);
              }

              client.query('COMMIT', function(err) {
                release();

                if (err) {
                  return res.status(500).json({ error: 'Commit failed' });
                }

                res.status(201).json({ project: result.rows[0] });
              });
            }
          );
        }
      );
    });
  });
}

function rollback(client, release, res, originalErr) {
  client.query('ROLLBACK', function(err) {
    release();
    console.error('Rolled back due to:', originalErr.message);
    res.status(500).json({ error: 'Operation failed' });
  });
}

SSL/TLS Connection Enforcement

Never connect to PostgreSQL over an unencrypted connection in production. PostgreSQL supports TLS natively, and you should enforce it at both the server and client level.

Server-Side Configuration (postgresql.conf)

# Enable SSL
ssl = on
ssl_cert_file = '/etc/ssl/certs/server.crt'
ssl_key_file = '/etc/ssl/private/server.key'
ssl_ca_file = '/etc/ssl/certs/ca.crt'

# Require TLS 1.2 or higher
ssl_min_protocol_version = 'TLSv1.2'

# Strong cipher suites only
ssl_ciphers = 'HIGH:!aNULL:!MD5:!3DES'

Enforce SSL in pg_hba.conf

# Reject all non-SSL connections from remote hosts
hostssl  all  all  0.0.0.0/0  scram-sha-256
hostssl  all  all  ::/0       scram-sha-256

# Allow local socket connections without SSL (for admin tools)
local    all  postgres       peer

The hostssl keyword (instead of host) rejects connections that are not encrypted. This is a hard requirement for any deployment outside of localhost.
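
You can also verify from inside the database that live connections really are encrypted; the pg_stat_ssl view joins cleanly against pg_stat_activity:

-- One row per backend: ssl = t means the connection is encrypted
SELECT a.usename, a.client_addr, s.ssl, s.version, s.cipher
FROM pg_stat_ssl s
JOIN pg_stat_activity a USING (pid);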

Node.js Client-Side SSL

var fs = require('fs');
var pg = require('pg');

var pool = new pg.Pool({
  host: process.env.DB_HOST,
  port: 5432,
  database: process.env.DB_NAME,
  user: 'app_service',
  password: process.env.DB_PASSWORD,
  ssl: {
    rejectUnauthorized: true,
    ca: fs.readFileSync('/path/to/ca.crt').toString(),
    cert: fs.readFileSync('/path/to/client.crt').toString(),
    key: fs.readFileSync('/path/to/client.key').toString()
  }
});

For managed PostgreSQL services (DigitalOcean, AWS RDS, etc.), you typically only need the CA certificate:

var pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: {
    rejectUnauthorized: true,
    ca: fs.readFileSync('/path/to/ca-certificate.crt').toString()
  }
});

Never set rejectUnauthorized: false in production. That disables certificate verification and makes you vulnerable to man-in-the-middle attacks. I have seen this in production codebases more times than I can count, usually with a comment saying "TODO: fix SSL."

Password Authentication Best Practices

PostgreSQL supports multiple authentication methods. For password-based auth over the network, scram-sha-256 is the only acceptable choice.

-- Set the default password encryption method
ALTER SYSTEM SET password_encryption = 'scram-sha-256';
SELECT pg_reload_conf();

-- Now create roles (passwords will be hashed with SCRAM)
CREATE ROLE app_service LOGIN PASSWORD 'your-strong-password';

In pg_hba.conf, specify scram-sha-256 explicitly:

hostssl  all  app_service  10.0.0.0/8  scram-sha-256
hostssl  all  app_reader   10.0.0.0/8  scram-sha-256

Never use md5 for password authentication. The MD5 hash in PostgreSQL is md5(password + username), which means the same password produces different hashes for different usernames, but the algorithm itself is cryptographically broken. SCRAM-SHA-256 uses proper salting, iteration, and challenge-response.
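
Note that changing password_encryption only affects passwords set afterwards; existing MD5 hashes keep working until each password is re-set. A quick audit (requires superuser, since pg_authid stores the hashes):

-- Roles still carrying an MD5-format password hash
SELECT rolname FROM pg_authid WHERE rolpassword LIKE 'md5%';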

Password Rotation

Rotate application passwords regularly. Here is a zero-downtime approach:

-- 1. Create a new login role with the new password
CREATE ROLE app_service_v2 LOGIN PASSWORD 'new-strong-password';
GRANT app_owner TO app_service_v2;

-- 2. Update your application to use app_service_v2
-- 3. Verify all connections are using the new role
SELECT usename, count(*) FROM pg_stat_activity GROUP BY usename;

-- 4. Drop the old role
DROP ROLE app_service;

-- 5. Rename if desired
ALTER ROLE app_service_v2 RENAME TO app_service;
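
If step 4 fails because the old role still owns objects or holds grants, hand them off first. A sketch, assuming app_owner should absorb any leftover ownership:

REASSIGN OWNED BY app_service TO app_owner;
DROP OWNED BY app_service;  -- removes remaining grants and default privileges
DROP ROLE app_service;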

Audit Logging with pgaudit

The pgaudit extension provides detailed session and object audit logging. It logs the actual SQL statements executed, who executed them, and when.

Installation and Configuration

-- Install the extension (pgaudit must already be listed in
-- shared_preload_libraries, which requires a server restart)
CREATE EXTENSION pgaudit;

In postgresql.conf:

shared_preload_libraries = 'pgaudit'

# Log all DDL and DML by the application role
pgaudit.log = 'write, ddl'

# Log the parameters of statements
pgaudit.log_parameter = on

# Log the relation (table) affected
pgaudit.log_relation = on

# Log the full statement text with every audit entry
# (set to 'on' to log it only once per statement)
pgaudit.log_statement_once = off

Role-Based Audit Logging

You can configure pgaudit to only log statements executed by specific roles:

-- Create an auditor role
CREATE ROLE auditor NOLOGIN;

-- Tell pgaudit to log statements from this role
ALTER SYSTEM SET pgaudit.role = 'auditor';
SELECT pg_reload_conf();

-- Grant the auditor role access to specific tables
GRANT SELECT, INSERT, UPDATE, DELETE ON app.users TO auditor;
GRANT SELECT, INSERT, UPDATE, DELETE ON app.projects TO auditor;

-- Now all DML on these tables will be logged, regardless of who runs it

This produces log entries like:

AUDIT: SESSION,1,1,WRITE,INSERT,TABLE,app.projects,
  "INSERT INTO app.projects (tenant_id, name) VALUES ($1, $2)",<1,'New Project'>

Preventing SQL Injection from Node.js

The pg library's parameterized queries are your primary defense against SQL injection. Never concatenate user input into SQL strings.

// WRONG - SQL injection vulnerability
function getUser(req, res) {
  var email = req.query.email;
  pool.query("SELECT * FROM app.users WHERE email = '" + email + "'", function(err, result) {
    // An attacker can send: ' OR 1=1 --
    res.json(result.rows);
  });
}

// RIGHT - Parameterized query
function getUser(req, res) {
  var email = req.query.email;
  pool.query('SELECT id, email, created_at FROM app.users WHERE email = $1', [email], function(err, result) {
    if (err) {
      return res.status(500).json({ error: 'Query failed' });
    }
    res.json(result.rows);
  });
}

For dynamic queries (search filters, sorting), build the SQL from parameterized fragments and whitelisted identifiers rather than concatenating raw user input:

function searchProjects(req, res) {
  var conditions = [];
  var params = [];
  var paramIndex = 1;

  if (req.query.name) {
    conditions.push('name ILIKE $' + paramIndex);
    params.push('%' + req.query.name + '%');
    paramIndex++;
  }

  if (req.query.status) {
    conditions.push('status = $' + paramIndex);
    params.push(req.query.status);
    paramIndex++;
  }

  // Whitelist sortable columns to prevent injection via ORDER BY
  var allowedSorts = { name: 'name', created: 'created_at', status: 'status' };
  var sortColumn = allowedSorts[req.query.sort] || 'created_at';
  var sortDir = req.query.dir === 'asc' ? 'ASC' : 'DESC';

  var sql = 'SELECT id, name, status, created_at FROM app.projects';
  if (conditions.length > 0) {
    sql += ' WHERE ' + conditions.join(' AND ');
  }
  sql += ' ORDER BY ' + sortColumn + ' ' + sortDir;
  sql += ' LIMIT 50';

  pool.query(sql, params, function(err, result) {
    if (err) {
      return res.status(500).json({ error: 'Search failed' });
    }
    res.json({ projects: result.rows });
  });
}

Note the whitelist for sort columns. Identifiers (column names, table names, schema names) cannot be passed as query parameters, so any user-controlled value that ends up in an ORDER BY clause or other dynamic SQL must be validated against a whitelist in application code.

Connection Security: Least Privilege

The principle of least privilege means each role should have exactly the permissions it needs and nothing more. Here is a practical checklist:

-- 1. Revoke superuser-like permissions
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON DATABASE myapp FROM PUBLIC;

-- 2. Create separate roles for separate concerns
CREATE ROLE app_service LOGIN PASSWORD '...';    -- Runtime application
CREATE ROLE app_migrator LOGIN PASSWORD '...';    -- Schema migrations only
CREATE ROLE app_reader LOGIN PASSWORD '...';      -- Read-only reporting

-- 3. The migrator manages schema changes (DDL); never use it at runtime
GRANT CREATE ON SCHEMA app TO app_migrator;
GRANT app_owner TO app_migrator;
-- Do NOT grant app_migrator to app_service

-- 4. The service role can read/write data but not change schema
GRANT USAGE ON SCHEMA app TO app_service;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA app TO app_service;
-- Do NOT grant CREATE, DROP, ALTER to app_service

-- 5. The reader can only SELECT
GRANT USAGE ON SCHEMA app TO app_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA app TO app_reader;

-- 6. Limit connection counts per role
ALTER ROLE app_service CONNECTION LIMIT 30;
ALTER ROLE app_reader CONNECTION LIMIT 10;
ALTER ROLE app_migrator CONNECTION LIMIT 2;
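
To see how close each role runs to its limit, compare live connections against rolconnlimit (a value of -1 means unlimited):

-- Live connections per login role vs. the configured limit
SELECT r.rolname, r.rolconnlimit, count(a.pid) AS active_connections
FROM pg_roles r
LEFT JOIN pg_stat_activity a ON a.usename = r.rolname
WHERE r.rolcanlogin
GROUP BY r.rolname, r.rolconnlimit
ORDER BY r.rolname;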

Complete Working Example: Multi-Tenant Node.js Application

Here is a complete, working Node.js application that demonstrates all of the concepts above. This is a multi-tenant task management API with PostgreSQL RLS handling data isolation.

Database Setup Script

Save this as setup.sql and run it with psql -f setup.sql:

-- setup.sql: Full database setup for multi-tenant app

-- Create the database
CREATE DATABASE multitenant_demo;
\c multitenant_demo

-- Create roles
CREATE ROLE mt_owner NOLOGIN;
CREATE ROLE mt_service LOGIN PASSWORD 'service-password-change-me';
GRANT mt_owner TO mt_service;

-- Create schema
CREATE SCHEMA mt AUTHORIZATION mt_owner;

-- Set search path
ALTER ROLE mt_service SET search_path TO mt, public;

-- Create tables
CREATE TABLE mt.tenants (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  slug TEXT UNIQUE NOT NULL,
  api_key TEXT UNIQUE NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE mt.users (
  id BIGSERIAL PRIMARY KEY,
  tenant_id INTEGER NOT NULL REFERENCES mt.tenants(id),
  email TEXT NOT NULL,
  name TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW(),
  UNIQUE(tenant_id, email)
);

CREATE TABLE mt.tasks (
  id BIGSERIAL PRIMARY KEY,
  tenant_id INTEGER NOT NULL REFERENCES mt.tenants(id),
  title TEXT NOT NULL,
  description TEXT,
  status TEXT NOT NULL DEFAULT 'open' CHECK (status IN ('open', 'in_progress', 'done')),
  assigned_to BIGINT REFERENCES mt.users(id),
  created_at TIMESTAMPTZ DEFAULT NOW(),
  updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Indexes for RLS performance
CREATE INDEX idx_users_tenant ON mt.users(tenant_id);
CREATE INDEX idx_tasks_tenant ON mt.tasks(tenant_id);
CREATE INDEX idx_tasks_status ON mt.tasks(tenant_id, status);

-- Enable RLS
ALTER TABLE mt.users ENABLE ROW LEVEL SECURITY;
ALTER TABLE mt.tasks ENABLE ROW LEVEL SECURITY;
ALTER TABLE mt.users FORCE ROW LEVEL SECURITY;
ALTER TABLE mt.tasks FORCE ROW LEVEL SECURITY;

-- RLS Policies
CREATE POLICY tenant_isolation ON mt.users FOR ALL
  USING (tenant_id = current_setting('mt.current_tenant_id')::INTEGER)
  WITH CHECK (tenant_id = current_setting('mt.current_tenant_id')::INTEGER);

CREATE POLICY tenant_isolation ON mt.tasks FOR ALL
  USING (tenant_id = current_setting('mt.current_tenant_id')::INTEGER)
  WITH CHECK (tenant_id = current_setting('mt.current_tenant_id')::INTEGER);

-- Grant permissions to service role
GRANT USAGE ON SCHEMA mt TO mt_service;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA mt TO mt_service;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA mt TO mt_service;

ALTER DEFAULT PRIVILEGES IN SCHEMA mt
  GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO mt_service;
ALTER DEFAULT PRIVILEGES IN SCHEMA mt
  GRANT USAGE ON SEQUENCES TO mt_service;

-- Seed tenants
INSERT INTO mt.tenants (name, slug, api_key) VALUES
  ('Acme Corp', 'acme', 'ak_acme_test_key_12345'),
  ('Globex Inc', 'globex', 'ak_globex_test_key_67890');

Node.js Application

// app.js - Multi-tenant task API with RLS
var express = require('express');
var pg = require('pg');

var app = express();
app.use(express.json());

// Database pool
var pool = new pg.Pool({
  host: process.env.DB_HOST || 'localhost',
  port: process.env.DB_PORT || 5432,
  database: 'multitenant_demo',
  user: 'mt_service',
  password: process.env.DB_PASSWORD || 'service-password-change-me',
  max: 20,
  idleTimeoutMillis: 30000
});

// Tenant authentication middleware
function authenticateTenant(req, res, next) {
  var apiKey = req.headers['x-api-key'];

  if (!apiKey) {
    return res.status(401).json({ error: 'Missing X-API-Key header' });
  }

  // Look up tenant by API key (no RLS needed on tenants table)
  pool.query(
    'SELECT id, name, slug FROM mt.tenants WHERE api_key = $1',
    [apiKey],
    function(err, result) {
      if (err) {
        console.error('Tenant lookup failed:', err);
        return res.status(500).json({ error: 'Authentication failed' });
      }

      if (result.rows.length === 0) {
        return res.status(401).json({ error: 'Invalid API key' });
      }

      req.tenant = result.rows[0];
      next();
    }
  );
}

// Database helper: execute query with tenant context
function tenantQuery(tenantId, sql, params, callback) {
  pool.connect(function(err, client, release) {
    if (err) {
      return callback(err);
    }

    // Set tenant context (transaction-local)
    client.query('BEGIN', function(err) {
      if (err) {
        release();
        return callback(err);
      }

      client.query(
        "SELECT set_config('mt.current_tenant_id', $1::TEXT, true)",
        [tenantId.toString()],
        function(err) {
          if (err) {
            client.query('ROLLBACK', function() {
              release();
              callback(err);
            });
            return;
          }

          client.query(sql, params, function(err, result) {
            if (err) {
              client.query('ROLLBACK', function() {
                release();
                callback(err);
              });
              return;
            }

            client.query('COMMIT', function(commitErr) {
              release();
              if (commitErr) {
                return callback(commitErr);
              }
              callback(null, result);
            });
          });
        }
      );
    });
  });
}

// Apply tenant auth to all /api routes
app.use('/api', authenticateTenant);

// List tasks (automatically filtered by RLS)
app.get('/api/tasks', function(req, res) {
  var sql = 'SELECT t.id, t.title, t.description, t.status, u.name AS assigned_to_name, t.created_at FROM mt.tasks t LEFT JOIN mt.users u ON t.assigned_to = u.id ORDER BY t.created_at DESC';

  tenantQuery(req.tenant.id, sql, [], function(err, result) {
    if (err) {
      console.error('Failed to list tasks:', err);
      return res.status(500).json({ error: 'Failed to retrieve tasks' });
    }

    res.json({
      tenant: req.tenant.name,
      count: result.rows.length,
      tasks: result.rows
    });
  });
});

// Create a task
app.post('/api/tasks', function(req, res) {
  var title = req.body.title;
  var description = req.body.description || null;

  if (!title) {
    return res.status(400).json({ error: 'title is required' });
  }

  var sql = 'INSERT INTO mt.tasks (tenant_id, title, description) VALUES ($1, $2, $3) RETURNING id, title, status, created_at';

  tenantQuery(req.tenant.id, sql, [req.tenant.id, title, description], function(err, result) {
    if (err) {
      console.error('Failed to create task:', err);
      return res.status(500).json({ error: 'Failed to create task' });
    }

    res.status(201).json({ task: result.rows[0] });
  });
});

// Update task status
app.patch('/api/tasks/:id', function(req, res) {
  var taskId = parseInt(req.params.id, 10);
  var status = req.body.status;

  var validStatuses = ['open', 'in_progress', 'done'];
  if (validStatuses.indexOf(status) === -1) {
    return res.status(400).json({ error: 'Invalid status. Must be: ' + validStatuses.join(', ') });
  }

  var sql = 'UPDATE mt.tasks SET status = $1, updated_at = NOW() WHERE id = $2 RETURNING id, title, status, updated_at';

  tenantQuery(req.tenant.id, sql, [status, taskId], function(err, result) {
    if (err) {
      console.error('Failed to update task:', err);
      return res.status(500).json({ error: 'Failed to update task' });
    }

    if (result.rows.length === 0) {
      return res.status(404).json({ error: 'Task not found' });
    }

    res.json({ task: result.rows[0] });
  });
});

// List users for the current tenant
app.get('/api/users', function(req, res) {
  var sql = 'SELECT id, email, name, created_at FROM mt.users ORDER BY name';

  tenantQuery(req.tenant.id, sql, [], function(err, result) {
    if (err) {
      console.error('Failed to list users:', err);
      return res.status(500).json({ error: 'Failed to retrieve users' });
    }

    res.json({
      tenant: req.tenant.name,
      count: result.rows.length,
      users: result.rows
    });
  });
});

// Health check (no auth required)
app.get('/health', function(req, res) {
  pool.query('SELECT 1', function(err) {
    if (err) {
      return res.status(503).json({ status: 'unhealthy', error: err.message });
    }
    res.json({ status: 'healthy' });
  });
});

var port = process.env.PORT || 3000;
app.listen(port, function() {
  console.log('Multi-tenant API running on port ' + port);
});

Testing the Isolation

# Create a task as Acme Corp
curl -X POST http://localhost:3000/api/tasks \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ak_acme_test_key_12345" \
  -d '{"title": "Acme secret project", "description": "Top secret"}'

# Response:
# {"task":{"id":1,"title":"Acme secret project","status":"open","created_at":"2026-02-08T..."}}

# Create a task as Globex Inc
curl -X POST http://localhost:3000/api/tasks \
  -H "Content-Type: application/json" \
  -H "X-API-Key: ak_globex_test_key_67890" \
  -d '{"title": "Globex launch plan", "description": "Confidential"}'

# Response:
# {"task":{"id":2,"title":"Globex launch plan","status":"open","created_at":"2026-02-08T..."}}

# List tasks as Acme - only sees Acme's task
curl http://localhost:3000/api/tasks \
  -H "X-API-Key: ak_acme_test_key_12345"

# Response:
# {"tenant":"Acme Corp","count":1,"tasks":[{"id":1,"title":"Acme secret project",...}]}

# List tasks as Globex - only sees Globex's task
curl http://localhost:3000/api/tasks \
  -H "X-API-Key: ak_globex_test_key_67890"

# Response:
# {"tenant":"Globex Inc","count":1,"tasks":[{"id":2,"title":"Globex launch plan",...}]}

The key point: both queries use the exact same SQL (SELECT ... FROM mt.tasks). There is no WHERE tenant_id = ... in the application code. PostgreSQL's RLS automatically appends the filter, making it impossible for one tenant to see another tenant's data, even if there is a bug in the application code.

Common Issues and Troubleshooting

1. "ERROR: unrecognized configuration parameter 'app.current_tenant_id'"

error: unrecognized configuration parameter "app.current_tenant_id"

This error means current_setting() was asked for a parameter that has never been set in the current session or transaction. The usual cause: a query hit an RLS policy before your application called set_config, or the set_config call ran in a different transaction than the query (remember that with the third argument set to true, the setting disappears as soon as its transaction ends). Two fixes: make sure set_config and the dependent queries execute inside the same BEGIN ... COMMIT block, or use the two-argument form of current_setting in your policies, which returns NULL instead of raising an error when the parameter is unset:

-- With missing_ok = true, an unset tenant context simply matches no rows
-- instead of raising an error
CREATE POLICY tenant_isolation ON app.projects
  USING (tenant_id = current_setting('app.current_tenant_id', true)::INTEGER);

2. "ERROR: new row violates row-level security policy"

ERROR: new row violates row-level security policy for table "tasks"

This means the WITH CHECK clause rejected the insert or update. The most common cause: the tenant_id value in the INSERT statement does not match the current_setting('app.current_tenant_id') value. Debug it:

-- Check what the current setting is
SELECT current_setting('app.current_tenant_id');

-- Verify the value you are inserting
-- If they do not match, the WITH CHECK policy blocks the write

Another cause: you forgot to call set_config before the INSERT (which produces the "unrecognized configuration parameter" error from issue 1), or you set the value to an empty string, which cannot be cast to INTEGER and fails with a different but related error.

3. "ERROR: permission denied for schema app"

ERROR: permission denied for schema app
HINT: No schema has been selected to create in.

You granted table permissions but forgot schema-level USAGE:

-- This is NOT enough:
GRANT SELECT ON ALL TABLES IN SCHEMA app TO app_reader;

-- You also need:
GRANT USAGE ON SCHEMA app TO app_reader;

The USAGE privilege on a schema is the gatekeeper. Without it, the role cannot see any objects in the schema, regardless of table-level grants.

4. "ERROR: current transaction is aborted, commands ignored until end of transaction block"

error: current transaction is aborted, commands ignored until end of transaction block

This happens when a query fails inside a transaction but you continue sending queries without issuing ROLLBACK. In the RLS context, this commonly occurs when set_config succeeds but the subsequent query fails (e.g., due to a syntax error), and your error handling does not roll back the transaction before releasing the connection back to the pool. The next request that picks up that connection gets this error.

Fix: always wrap your tenant-scoped operations in proper BEGIN/COMMIT/ROLLBACK blocks, and always ROLLBACK on any error before releasing the connection.

// In your error handler, ALWAYS rollback before release
client.query('ROLLBACK', function() {
  release();
  callback(originalError);
});

5. RLS Policies Are Silently Bypassed

You enabled RLS but one tenant can still see all data. This happens when the connecting role is the table owner and you did not use FORCE ROW LEVEL SECURITY:

-- Check if FORCE is enabled
SELECT relname, relrowsecurity, relforcerowsecurity
FROM pg_class
WHERE relname = 'tasks';

-- If relforcerowsecurity is false, the owner bypasses policies
ALTER TABLE mt.tasks FORCE ROW LEVEL SECURITY;

Best Practices

  • Never run your application as the PostgreSQL superuser. Create dedicated roles with the minimum permissions needed. The superuser bypasses RLS, security policies, and access controls entirely. One ORM bug away from a full data breach.

  • Always use FORCE ROW LEVEL SECURITY on tables with RLS. Without it, the table owner role (which your app likely connects as) silently bypasses all policies. This is the most common RLS misconfiguration.

  • Use transaction-local session variables for RLS. When calling set_config, always pass true as the third argument to make the setting transaction-scoped. With connection pooling, a request-scoped setting could leak to the next request on the same connection.

  • Index your tenant_id columns. RLS policies add a WHERE tenant_id = ... filter to every query. Without an index on tenant_id, every query becomes a sequential scan. For large tables, add composite indexes like (tenant_id, status) or (tenant_id, created_at).

  • Use ALTER DEFAULT PRIVILEGES to future-proof grants. Without it, new tables created by migrations will not have the correct permissions, causing production outages. Run this once per schema/role combination and never worry about it again.

  • Enforce SSL for all remote connections. Use hostssl in pg_hba.conf and ssl: { rejectUnauthorized: true } in your Node.js client. Never set rejectUnauthorized: false in production. Ever.

  • Rotate database passwords on a schedule. Use the dual-role rotation pattern: create a new role, update the app, verify connections, drop the old role. Never share passwords between environments.

  • Enable pgaudit for compliance-sensitive tables. It adds overhead, so audit selectively. Use role-based auditing to target only tables containing PII, financial data, or access-control records.

  • Parameterize every query. Use $1, $2, $3 placeholders with the pg library's parameter arrays. For dynamic SQL (ORDER BY, table names), use strict whitelists, never string concatenation.

  • Separate migration roles from runtime roles. Your migration tooling needs CREATE TABLE and ALTER TABLE permissions. Your application does not. Use a dedicated migrator role for schema changes and a restricted service role for runtime queries.
