PostgreSQL LISTEN/NOTIFY: Real-Time Without the Message Broker
PostgreSQL Search Series (Part 3 of 4)

1. Full-Text Search with PostgreSQL and Prisma
2. Advanced Search Techniques with pgvector and Geospatial Search
3. PostgreSQL LISTEN/NOTIFY: Real-Time Without the Message Broker (you are here)
4. Multi-Tenant Search in PostgreSQL with Row-Level Security
In Part 1, we built powerful full-text search. In Part 2, we added semantic search. Now, it’s time to make it real-time.
A user is viewing search results on your site. As they’re reading, your team publishes a new article that perfectly matches their query. Instead of the user needing to refresh, the new article just appears.
This isn’t just for search. Imagine an admin dashboard showing live order counts, a task manager instantly assigning tasks to a teammate’s board, or a chat app where messages appear the moment they’re sent.
Most developers assume this requires complex infrastructure like Redis Pub/Sub, RabbitMQ, or a third-party service. But what if your database could handle it for you?
It can. PostgreSQL has a built-in feature called LISTEN and NOTIFY that turns your database into a simple, real-time message broker.
The key insight is this: Your database can directly notify your application when data changes. This simple, powerful pattern lets you build reactive features without adding heavy dependencies to your stack.
What We’ll Build:

- A PostgreSQL trigger that sends a `NOTIFY` payload when an article is created, updated, or deleted.
- A lightweight WebSocket server, run alongside Next.js, that `LISTEN`s for database notifications.
- A React search component that receives updates and refreshes its data in real-time.
How It Works: The Big Picture
The flow is straightforward. When data changes, a chain reaction occurs, pushing the update from the database all the way to the user’s browser.
- Trigger: A database trigger fires automatically on `INSERT`, `UPDATE`, or `DELETE`.
- Notify: The trigger function calls `pg_notify()`, sending a message to a named channel.
- Listen: A long-running process in your Node.js backend continuously `LISTEN`s on that channel.
- Push: When the Node.js process receives a notification, it relays the message to all connected browsers via WebSockets.
Here’s a visual breakdown:
[Diagram: Data flow from database change to UI update]
Let’s build each piece of this system.
Step 1: See It in Action with psql
Before writing any code, let’s prove the concept works using psql. Open two terminal windows and connect to your database in each.
In Terminal 1 (The Listener):
Subscribe to a channel named new_article. `LISTEN` returns immediately and registers this session as a listener.

```sql
LISTEN new_article;
```

In Terminal 2 (The Notifier): Send a notification to that channel with a simple text payload.

```sql
NOTIFY new_article, 'Hello from PostgreSQL!';
```

Terminal 1 will display the message (psql only checks for pending notifications when it runs a command, so if nothing appears right away, run any quick command such as `SELECT 1;`):

```
Asynchronous notification "new_article" with payload "Hello from PostgreSQL!" received from server...
```

This is the core mechanic. For our application, we’ll want to send structured data. Let’s send a JSON payload instead:

```sql
-- In Terminal 2
NOTIFY new_article, '{"id": 123, "action": "INSERT", "table": "Article"}';
```

Terminal 1 will receive the JSON string, which our application can then parse.
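One practical detail before we automate this: `NOTIFY` is a utility statement, so it can't take bind parameters. When sending notifications from application code, call the `pg_notify()` function instead, since it's an ordinary function and works with parameterized queries. A minimal sketch with node-postgres (the channel and payload are just the examples from above):

```js
import { Client } from 'pg';

const client = new Client({ connectionString: process.env.DATABASE_URL });
await client.connect();

// NOTIFY itself can't be parameterized, but pg_notify() accepts
// ordinary $1/$2 placeholders.
await client.query('SELECT pg_notify($1, $2)', [
  'new_article',
  JSON.stringify({ id: 123, action: 'INSERT', table: 'Article' }),
]);
```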
Step 2: Automate Notifications with a Trigger
Manually sending notifications isn’t practical. We need to automate this process whenever our data changes. We’ll use a PostgreSQL trigger function.
First, create the function that will be executed by the trigger.
```sql
CREATE OR REPLACE FUNCTION notify_article_change()
RETURNS TRIGGER AS $$
DECLARE
  payload JSON;
BEGIN
  -- Construct a JSON payload with the operation type and record ID
  payload = json_build_object(
    'operation', TG_OP,  -- The operation: INSERT, UPDATE, or DELETE
    'record_id', COALESCE(NEW.id, OLD.id)
  );

  -- Send the notification on the 'article_changes' channel
  PERFORM pg_notify('article_changes', payload::text);

  -- Return the appropriate record for the trigger
  RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;
```

Breaking it down:
- `TG_OP`: A special variable that holds the operation type (`INSERT`, `UPDATE`, or `DELETE`).
- `NEW` and `OLD`: Special variables representing the new and old versions of the row. `NEW.id` is available for inserts/updates, while `OLD.id` is used for deletes. `COALESCE` conveniently picks the one that isn’t null.
- `pg_notify()`: The function that sends our JSON payload to the `article_changes` channel.
Next, attach this function to your Article table with a trigger.
```sql
-- Ensure the trigger doesn't already exist
DROP TRIGGER IF EXISTS article_change_trigger ON "Article";

-- Create the trigger to run AFTER data changes
CREATE TRIGGER article_change_trigger
AFTER INSERT OR UPDATE OR DELETE ON "Article"
FOR EACH ROW EXECUTE FUNCTION notify_article_change();
```

How to Manage Triggers with Prisma
You’re probably wondering: “Where does this SQL code live in my project?” The best practice is to manage database functions and triggers using Prisma Migrate.
1. Create a new migration without applying it (the `--create-only` flag gives you an empty migration file to edit):

   ```bash
   npx prisma migrate dev --create-only --name add_article_notify_trigger
   ```

2. Prisma will generate a new migration folder with an empty `migration.sql` file.
3. Paste the `CREATE FUNCTION` and `CREATE TRIGGER` SQL from above into that `migration.sql` file, then run `npx prisma migrate dev` to apply it.

Now, your trigger is version-controlled and will be applied automatically alongside your schema changes.
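After the migration runs, you can sanity-check that the trigger actually exists by querying PostgreSQL's pg_trigger catalog. A quick sketch, assuming a `prisma` client instance:

```js
// Returns one row if our trigger is installed, an empty array otherwise.
const triggers = await prisma.$queryRaw`
  SELECT tgname FROM pg_trigger WHERE tgname = 'article_change_trigger'
`;
console.log(triggers); // e.g. [ { tgname: 'article_change_trigger' } ]
```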
Step 3: Bridge PostgreSQL to the Browser with WebSockets
Browsers can’t connect directly to PostgreSQL. We need a server-side “bridge” that LISTENs for database notifications and relays them to clients using WebSockets.
We’ll create a simple standalone Node.js script for this.
```js
import { WebSocketServer } from 'ws';
import { Client } from 'pg';

const WSS_PORT = 3001;

// 1. Create a WebSocket server
const wss = new WebSocketServer({ port: WSS_PORT });

// 2. Create and connect a PostgreSQL client
const pgClient = new Client({ connectionString: process.env.DATABASE_URL });

async function startBridge() {
  await pgClient.connect();

  // 3. Start listening for notifications on our channel
  await pgClient.query('LISTEN article_changes');
  console.log('PostgreSQL LISTENer connected.');

  // 4. When a notification is received, broadcast it to all WebSocket clients
  pgClient.on('notification', (msg) => {
    console.log('Received notification:', msg.payload);
    wss.clients.forEach((client) => {
      // Check if the client is still connected
      if (client.readyState === 1) { // 1 means WebSocket.OPEN
        client.send(msg.payload);
      }
    });
  });

  console.log(`WebSocket bridge running on ws://localhost:${WSS_PORT}`);
}

startBridge().catch(console.error);
```

To run this alongside your Next.js app:
1. Install dependencies:

   ```bash
   npm install ws pg
   ```

2. Update your `package.json` to run both processes:

   ```json
   "scripts": {
     "dev": "next dev & node scripts/websocket-bridge.js"
   }
   ```
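A note on that dev script: the `&` operator backgrounds the bridge on Unix-like shells, but it doesn't work in Windows shells and can leave orphaned processes behind. One common alternative (our suggestion, not part of the original setup) is the `concurrently` package, installed with `npm install --save-dev concurrently`, which runs both commands and tears them down together:

```json
"scripts": {
  "dev": "concurrently \"next dev\" \"node scripts/websocket-bridge.js\""
}
```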
🚀 Making WebSockets Production-Ready
This script is a minimal example perfect for getting started. Production applications require more robust WebSocket handling:
- Authentication and Authorization
- Automatic Reconnection Logic
- Error Handling and Logging
- Scaling across multiple server instances
We cover all these topics in our deep-dive WebSockets with Next.js series. We highly recommend reading it before deploying this pattern to production.
Step 4: Build the Live React Component
Finally, let’s create the React component that performs the search and listens for real-time updates.
```jsx
import { useState, useEffect, useCallback } from 'react';
import { useDebounce } from 'use-debounce';

export default function LiveSearch() {
  const [query, setQuery] = useState('');
  const [results, setResults] = useState([]);
  const [debouncedQuery] = useDebounce(query, 300);

  // Memoized function to fetch search results
  const fetchResults = useCallback(async (searchQuery) => {
    if (searchQuery.length < 2) {
      setResults([]);
      return;
    }
    const response = await fetch(`/api/search?q=${encodeURIComponent(searchQuery)}`);
    const data = await response.json();
    setResults(data);
  }, []);

  // Effect for initial search and on query change
  useEffect(() => {
    fetchResults(debouncedQuery);
  }, [debouncedQuery, fetchResults]);

  // Effect for listening to real-time updates
  useEffect(() => {
    // Only connect if there's an active query
    if (debouncedQuery.length < 2) return;

    const ws = new WebSocket('ws://localhost:3001');

    ws.onopen = () => console.log('WebSocket connected.');
    ws.onclose = () => console.log('WebSocket disconnected.');

    // When a message is received from the server, re-fetch results
    ws.onmessage = (event) => {
      console.log('DB change detected, refetching results...');
      fetchResults(debouncedQuery);
    };

    // Clean up the connection when the component unmounts or query changes
    return () => ws.close();
  }, [debouncedQuery, fetchResults]);

  return (
    <div>
      <input
        type="text"
        value={query}
        onChange={(e) => setQuery(e.target.value)}
        placeholder="Search articles..."
      />
      <ul>
        {results.map((article) => (
          <li key={article.id}>
            <h3>{article.title}</h3>
          </li>
        ))}
      </ul>
    </div>
  );
}
```

How it works:
- The user types in the search box. The query is debounced to avoid excessive API calls.
- The first `useEffect` hook fetches the initial search results.
- The second `useEffect` hook opens a WebSocket connection.
- When our WebSocket bridge sends a message (triggered by a database change), the `onmessage` handler fires, calling `fetchResults` again to get the latest data.
- The UI updates automatically, without a page refresh.
Advanced: User-Specific Channels
What about notifications for specific users? Like “John received a new message” or “Order #123 was updated for customer Alice”?
You can create dynamic channels per user. Instead of broadcasting to everyone, send to targeted channels:
```sql
-- Trigger function for user-specific notifications
CREATE OR REPLACE FUNCTION notify_user_notification()
RETURNS TRIGGER AS $$
DECLARE
  channel_name TEXT;
BEGIN
  -- Create a channel name like "user_notifications_42"
  channel_name := 'user_notifications_' || NEW.user_id;

  PERFORM pg_notify(
    channel_name,
    json_build_object(
      'id', NEW.id,
      'message', NEW.message,
      'type', NEW.notification_type
    )::text
  );

  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER user_notification_trigger
AFTER INSERT ON "Notification"
FOR EACH ROW
EXECUTE FUNCTION notify_user_notification();
```

On the server side, subscribe to user-specific channels:
```js
const userConnections = new Map(); // userId -> Set of WebSocket clients

wss.on('connection', (ws, req) => {
  // Get userId from auth token/session. It should be a string, so it
  // matches the value captured by the channel-name regex below.
  const userId = authenticateWebSocket(req);

  if (!userId) {
    ws.close();
    return;
  }

  // Track this connection
  if (!userConnections.has(userId)) {
    userConnections.set(userId, new Set());

    // Subscribe to user-specific channel in PostgreSQL.
    // Channel names can't be parameterized; userId comes from our
    // auth layer, never directly from user input.
    pgClient.query(`LISTEN user_notifications_${userId}`);
  }

  userConnections.get(userId).add(ws);

  ws.on('close', () => {
    const connections = userConnections.get(userId);
    connections.delete(ws);

    // If no more connections for this user, unsubscribe
    if (connections.size === 0) {
      pgClient.query(`UNLISTEN user_notifications_${userId}`);
      userConnections.delete(userId);
    }
  });
});

// Route notifications to specific users
pgClient.on('notification', (msg) => {
  // Extract userId from channel name: "user_notifications_42" -> "42"
  const match = msg.channel.match(/^user_notifications_(\d+)$/);
  if (match) {
    const userId = match[1];
    const connections = userConnections.get(userId);

    if (connections) {
      connections.forEach((ws) => {
        if (ws.readyState === 1) {
          ws.send(msg.payload);
        }
      });
    }
  }
});
```

Visual flow:
[Diagram: User-specific notifications using dynamic channels]
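The sketch above leaves `authenticateWebSocket` undefined, because it depends entirely on your auth setup. Here is one hypothetical version that expects a JWT in the connection URL's query string; the `token` parameter, `JWT_SECRET` variable, and `userId` claim are all illustrative assumptions:

```js
import jwt from 'jsonwebtoken';

// Hypothetical sketch: verify a JWT passed as ?token=... on the WS URL.
// Returns the userId as a string (matching the channel-name regex above),
// or null if the token is missing or invalid.
function authenticateWebSocket(req) {
  try {
    const url = new URL(req.url ?? '', `http://${req.headers.host}`);
    const token = url.searchParams.get('token');
    if (!token) return null;

    const payload = jwt.verify(token, process.env.JWT_SECRET);
    return String(payload.userId);
  } catch {
    return null; // invalid or expired token
  }
}
```

The client would then connect with something like `new WebSocket('ws://localhost:3001/?token=...')`.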
This pattern is perfect for:
- In-app notifications (“You have a new message”)
- Task assignments (“A task was assigned to you”)
- Order updates (“Your order #123 shipped”)
- Real-time collaboration (“User X edited section Y”)
When to Use Triggers vs. Application Code
You might be wondering: “Why put this logic in the database instead of my application code?”
Here’s when each approach shines:
Use Database Triggers When:
1. Multiple Writers Exist
If you have background workers, admin panels, or multiple services writing to your database, triggers ensure notifications fire regardless of which code path inserts the data.
[Diagram: Triggers ensure consistency across all entry points]
2. Database as Message Broker for Workers
Instead of polling for jobs, workers can LISTEN for notifications:
```js
import { Client } from 'pg';

const client = new Client({ connectionString: process.env.DATABASE_URL });

await client.connect();
await client.query('LISTEN new_jobs');

client.on('notification', async (msg) => {
  const { job_id } = JSON.parse(msg.payload);

  // Process the job
  await processJob(job_id);
});
```

[Diagram: Using PostgreSQL as a lightweight job queue]
Benefits:
- No polling = less database load
- Workers wake up instantly when jobs arrive
- PostgreSQL acts as both database AND message broker
- Simpler architecture (one less service)
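The worker above assumes something produces notifications on the new_jobs channel. That producer can be a trigger on your jobs table (exactly like Step 2), or application code that notifies in the same transaction as the insert. A sketch of the application-side variant, where the "Job" table and its columns are assumptions for illustration:

```js
// Enqueue a job and notify workers atomically: if the transaction rolls
// back, the queued NOTIFY is discarded along with the inserted row.
await client.query('BEGIN');
const { rows } = await client.query(
  `INSERT INTO "Job" (type) VALUES ($1) RETURNING id`,
  ['send_welcome_email']
);
await client.query('SELECT pg_notify($1, $2)', [
  'new_jobs',
  JSON.stringify({ job_id: rows[0].id }),
]);
await client.query('COMMIT');
```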
3. Data Consistency is Critical
Triggers run in the same transaction as the data change. If the transaction rolls back, the notification isn’t sent. With application code, you might send a notification for a change that never commits.
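You can observe this guarantee directly. With the trigger from Step 2 installed and a listener attached in another session, a rolled-back insert produces no notification. A small experiment to try (assuming the "Article" table has a title column):

```js
await pgClient.query('BEGIN');
await pgClient.query(`INSERT INTO "Article" (title) VALUES ('Draft')`);
// The trigger has fired, but NOTIFY is only delivered at COMMIT...
await pgClient.query('ROLLBACK');
// ...so after the ROLLBACK, no listener ever hears about this insert.
```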
Use Application Code When:
1. Business Logic is Complex
If sending notifications requires calling external APIs, checking multiple conditions, or transforming data extensively, keep it in your application layer where you have full language features.
2. You Want to Avoid Database Migrations
Triggers live in SQL files managed by Prisma migrations. Changing them requires migration files. Application code is easier to iterate on during development.
3. Testing is Important
Testing application code is easier than testing database triggers. You can mock, stub, and unit test JavaScript more readily than PL/pgSQL.
Hybrid Approach
You can do both! Use triggers for simple, guaranteed notifications, and add application logic on top:
```js
// API route
const article = await prisma.article.create({ data: newArticle });
// Trigger fires automatically: NOTIFY article_changes

// Additional app-level logic
if (article.featured) {
  await sendSlackNotification(article);
  await invalidateCDNCache(article.slug);
}
```

Bottom line: If you have a single application writing to your database and simple notification needs, application code is fine. If you have multiple writers or need guarantees, triggers are powerful.
Limitations & When to Use This Pattern
This approach is powerful but has trade-offs. It’s crucial to know its limits.
- Payload Size: Notifications are limited to 8,000 bytes. Best practice is to send only IDs and essential metadata, not entire records (see the sketch after this list).
- No Persistence: If your Node.js listener is down when a notification is sent, the message is lost forever. For guaranteed delivery, a dedicated message queue like RabbitMQ is a better choice.
- Connection Cost: A listening client holds a dedicated PostgreSQL connection open (a single connection can `LISTEN` on many channels). This pattern is excellent for applications with a few listeners (e.g., a handful of backend services). For thousands of concurrent listeners, a dedicated pub/sub system like Redis or a managed service is more scalable.
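Here is what the "send IDs, fetch the rest" practice from the payload-size point looks like on the consuming side. A minimal sketch, assuming the trigger payload from Step 2 and a Prisma client instance:

```js
pgClient.on('notification', async (msg) => {
  const { operation, record_id } = JSON.parse(msg.payload);

  if (operation === 'DELETE') return; // nothing left to fetch for deletes

  // The payload carries only the ID; fetch the full, current record.
  const article = await prisma.article.findUnique({
    where: { id: record_id },
  });
  // ...broadcast `article` to WebSocket clients, update caches, etc.
});
```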
✅ Perfect for:
- Internal dashboards and admin panels.
- Low-to-medium traffic SaaS features.
- Cache invalidation signals between servers.
- Triggering background jobs.
❌ Consider alternatives for:
- High-throughput systems requiring guaranteed message delivery (e.g., financial transactions).
- Massively scaled applications with >1,000 concurrent real-time listeners.
- When your database is already at its connection limit.
Alternatives
- Polling: The simplest method. The client asks the server for updates every few seconds. Easy to implement but inefficient.

  ```js
  useEffect(() => {
    const interval = setInterval(() => fetchResults(query), 5000);
    return () => clearInterval(interval);
  }, [query]);
  ```
- Server-Sent Events (SSE): A simpler, one-way alternative to WebSockets, perfect for read-only updates like notifications (a minimal sketch follows this list). See our SSE guide here.
- Supabase Realtime: If you’re using Supabase, it provides a managed service that exposes PostgreSQL’s LISTEN/NOTIFY functionality through a simple client-side SDK.
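For the SSE option above, here is roughly what the bridge could look like as a Next.js route handler instead of a WebSocket server. This is a sketch under assumptions: the `/api/events` path is illustrative, and a production version would share one database connection rather than opening one per subscriber:

```js
// app/api/events/route.js (hypothetical path)
import { Client } from 'pg';

// Opt out of static optimization; this route must stream per-request.
export const dynamic = 'force-dynamic';

export async function GET() {
  const pgClient = new Client({ connectionString: process.env.DATABASE_URL });
  await pgClient.connect();
  await pgClient.query('LISTEN article_changes');

  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    start(controller) {
      // Relay each database notification as an SSE "data:" event.
      pgClient.on('notification', (msg) => {
        controller.enqueue(encoder.encode(`data: ${msg.payload}\n\n`));
      });
    },
    cancel() {
      pgClient.end(); // client disconnected; release the connection
    },
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      Connection: 'keep-alive',
    },
  });
}
```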
Conclusion
You’ve now implemented a powerful, efficient real-time system using tools you already have. By combining PostgreSQL’s LISTEN/NOTIFY with a simple WebSocket bridge, you can build engaging, reactive experiences without the overhead of external message brokers.
You learned to:
- Use `LISTEN` and `NOTIFY` for inter-process communication.
- Automate notifications with database triggers managed by Prisma Migrate.
- Bridge database events to browsers with a Node.js WebSocket server.
- Create a live-updating React component.
This shows how powerful PostgreSQL is. Sometimes the best solution is already in your database.
Next up: In Part 4 - Multi-Tenant Search, we’ll tackle data isolation and security using PostgreSQL’s Row-Level Security (RLS)—a must-have for any SaaS application.