Stop Reaching for Redis and MongoDB: PostgreSQL Is More Powerful Than You Think
Database · February 18, 2026


AI Assisted Content — This article was written with the help of AI tools. It has been reviewed and curated by our team.

PraiseGod


10 min read


The reflex is understandable. You have a new use case — full-text search, flexible document storage, vector similarity, real-time notifications — and you reach for the specialized tool. MongoDB for the schema flexibility. Elasticsearch for the search. Redis for the cache and pub/sub. Pinecone for the vector index.

Each addition feels justified in isolation. The cumulative cost is not: another operational burden, another backup strategy, another failure mode, another connection pool to manage, another expertise requirement for your team. And more often than not, PostgreSQL already handles the use case better than you expect.

JSONB: Flexible Schemas Without Sacrificing Queryability

MongoDB's primary value proposition is schema flexibility. PostgreSQL's JSONB type provides the same flexibility with the added benefit of full ACID transactions, foreign key constraints, and, critically, indexability.

```sql
CREATE TABLE events (
    id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id     UUID NOT NULL REFERENCES users(id),
    event_type  TEXT NOT NULL,
    payload     JSONB NOT NULL,
    occurred_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- GIN index enables fast containment queries on the entire payload
CREATE INDEX idx_events_payload ON events USING GIN (payload);

-- Or an expression index on a specific field for high-selectivity queries
CREATE INDEX idx_events_user_action ON events ((payload->>'action'));
```

Querying is expressive and performant:

```sql
-- Find events where payload contains a specific field value
SELECT * FROM events WHERE payload @> '{"action": "purchase"}'::jsonb;

-- Extract nested fields (->> returns text, so cast before comparing numerically)
SELECT payload->>'product_id', COUNT(*) AS purchases
FROM events
WHERE event_type = 'commerce'
  AND (payload->>'amount')::numeric > 100
GROUP BY 1;
```

Backed by a GIN index, the @> containment operator resolves through an index lookup rather than a sequential scan, so containment queries stay fast as the table grows. This is not a workaround — it is a first-class feature used in production by companies at significant scale.
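On the application side, the containment filter is usually bound as a single JSON parameter rather than interpolated into the SQL string. A minimal sketch, assuming psycopg-style `%s` placeholders and the `events` table above (the function name is illustrative):

```python
import json

def containment_query(filters: dict) -> tuple[str, list[str]]:
    """Build a parameterized JSONB containment query.

    The entire filter dict is serialized once and bound as one
    parameter, so user input never reaches the SQL text itself.
    """
    sql = "SELECT * FROM events WHERE payload @> %s::jsonb"
    return sql, [json.dumps(filters)]

sql, params = containment_query({"action": "purchase"})
```

Because the whole dict travels as one parameter, adding more filter keys never changes the SQL shape, which keeps prepared-statement caches effective.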

Full-Text Search: No Elasticsearch Required

Elasticsearch is operationally expensive. Its cluster management, index management, mapping configuration, and JVM tuning require dedicated expertise. For the majority of application search use cases, PostgreSQL's built-in full-text search is sufficient and orders of magnitude simpler to operate.

```sql
-- A generated tsvector column — maintained automatically by PostgreSQL,
-- no trigger required
ALTER TABLE blog_posts ADD COLUMN search_vector tsvector
    GENERATED ALWAYS AS (
        setweight(to_tsvector('english', title), 'A') ||
        setweight(to_tsvector('english', coalesce(excerpt, '')), 'B') ||
        setweight(to_tsvector('english', content), 'C')
    ) STORED;

CREATE INDEX idx_posts_fts ON blog_posts USING GIN (search_vector);

-- Ranked search with headline extraction
SELECT
    title,
    ts_rank(search_vector, query) AS rank,
    ts_headline('english', excerpt, query, 'MaxWords=30, MinWords=15') AS snippet
FROM blog_posts, to_tsquery('english', 'fastapi & production') query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 10;
```
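One practical wrinkle: `to_tsquery()` rejects raw user input — two bare terms without an operator between them raise a syntax error. PostgreSQL 11+ ships `websearch_to_tsquery()`, which accepts free-form input directly; on older versions, a small sanitizer is a common workaround. A sketch (the function name is illustrative):

```python
import re

def to_and_tsquery(raw: str) -> str:
    """Reduce free-form user input to something to_tsquery() accepts:
    keep word characters only, then AND the remaining terms together."""
    return " & ".join(re.findall(r"\w+", raw))

to_and_tsquery("fastapi production!!")  # 'fastapi & production'
```

Prefer `websearch_to_tsquery()` where available — it also understands quoted phrases and a leading `-` for negation.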

For fuzzy matching (typo tolerance), add the pg_trgm extension:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_posts_title_trgm ON blog_posts USING GIN (title gin_trgm_ops);

-- Finds "FastAPI" even if the user types "FastAP" or "FstAPI".
-- The % operator uses the trigram index; a bare similarity() call would not.
SELECT *
FROM blog_posts
WHERE title % 'FastAP'
ORDER BY similarity(title, 'FastAP') DESC;
```

Elasticsearch is appropriate when you need complex relevance tuning, faceted search across billions of documents, or multi-language analysis pipelines with custom analyzers. For a product search, blog search, or knowledge base — PostgreSQL handles it.
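To build intuition for pg_trgm's default 0.3 match threshold, here is a small Python approximation of its scoring: similarity is the Jaccard overlap of the two strings' trigram sets. This is a sketch of the single-word case — pg_trgm lowercases and pads each word before extracting trigrams, and handles multi-word strings and non-alphanumerics with additional rules:

```python
def trigrams(s: str) -> set[str]:
    # pg_trgm pads a word with two leading spaces and one trailing space
    padded = "  " + s.lower() + " "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of trigram sets — shared trigrams / total trigrams."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)
```

For "FastAPI" vs "FastAP", six of the nine distinct trigrams are shared, comfortably above the 0.3 threshold — which is why a single dropped character still matches.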

pgvector: The Case Against a Dedicated Vector Database

The pgvector extension turns PostgreSQL into a capable vector similarity search engine supporting both exact nearest-neighbor and approximate nearest-neighbor (HNSW and IVFFlat indexes).

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
    id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    content     TEXT,
    embedding   vector(1536),  -- OpenAI text-embedding-3-small dimensions
    metadata    JSONB
);

-- HNSW index — better query performance, more memory
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops)
    WITH (m = 16, ef_construction = 64);

-- Combined vector search + SQL filter in a single query
SELECT id, content, 1 - (embedding <=> $1::vector) AS similarity
FROM documents
WHERE metadata->>'user_id' = $2
  AND metadata->>'doc_type' = 'contract'
ORDER BY embedding <=> $1::vector
LIMIT 5;
```

The critical advantage: your vector search and your structured queries live in the same transaction. A RAG system that needs to retrieve documents by user, filter by access control, and rank by semantic similarity can do all of that in a single SQL query — without a round-trip to a separate vector store.

At what scale does pgvector stop being sufficient? Approximately 5–10 million vectors with 1536 dimensions is the practical limit for HNSW on a single node before query latency degrades meaningfully. Below that threshold, pgvector is production-grade and operationally far simpler than Pinecone or Weaviate.
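Two details matter when calling this from application code: pgvector's text input format is `'[0.1,0.2,...]'`, and the `<=>` operator returns cosine *distance*, which the query above converts back to similarity. A couple of small helpers as a sketch — most drivers can also bind embeddings natively through the pgvector client packages, so the literal formatter is only needed when passing plain strings:

```python
def to_vector_literal(embedding: list[float]) -> str:
    """Format an embedding in pgvector's text input syntax: '[0.1,0.2,...]'."""
    return "[" + ",".join(map(str, embedding)) + "]"

def similarity_from_distance(distance: float) -> float:
    """The <=> operator returns cosine distance; similarity = 1 - distance."""
    return 1.0 - distance
```

Keeping the conversion in one place avoids the classic bug of ranking by `1 - distance` in SQL while thresholding on raw distance in Python.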

Table Partitioning: Time-Series at Scale

Time-series and event data grows without bound. Native table partitioning makes managing this data (archival, retention, query performance) tractable without TimescaleDB or ClickHouse.

```sql
CREATE TABLE api_requests (
    id          BIGINT GENERATED ALWAYS AS IDENTITY,
    endpoint    TEXT NOT NULL,
    user_id     UUID,
    status_code SMALLINT,
    duration_ms INT,
    occurred_at TIMESTAMPTZ NOT NULL
) PARTITION BY RANGE (occurred_at);

-- Create partitions — automate this with pg_partman in production
CREATE TABLE api_requests_2025_q1 PARTITION OF api_requests
    FOR VALUES FROM ('2025-01-01') TO ('2025-04-01');
CREATE TABLE api_requests_2025_q2 PARTITION OF api_requests
    FOR VALUES FROM ('2025-04-01') TO ('2025-07-01');

-- Partition pruning happens automatically on range queries:
-- only the partitions covering the last 7 days are scanned, not the full table
SELECT endpoint, AVG(duration_ms) FROM api_requests
WHERE occurred_at >= NOW() - INTERVAL '7 days'
GROUP BY endpoint;
```

Pair this with pg_partman for automated partition creation and pg_cron for scheduled archival of old partitions to cold storage.
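pg_partman handles partition creation for you in production, but the DDL generation itself is mechanical. A sketch of what that automation does for quarterly ranges (the function name is illustrative):

```python
from datetime import date

def quarterly_partition_ddl(parent: str, year: int, quarter: int) -> str:
    """Generate the DDL for one quarterly range partition, mirroring
    the manual CREATE TABLE ... PARTITION OF statements above."""
    start_month = 3 * (quarter - 1) + 1
    start = date(year, start_month, 1)
    # Range upper bounds are exclusive, so Q4 ends at Jan 1 of next year
    end = date(year + 1, 1, 1) if quarter == 4 else date(year, start_month + 3, 1)
    return (
        f"CREATE TABLE {parent}_{year}_q{quarter} PARTITION OF {parent}\n"
        f"    FOR VALUES FROM ('{start}') TO ('{end}');"
    )
```

Note the exclusive upper bound: each partition's `TO` date is the next partition's `FROM` date, so there are no gaps and no overlaps.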

LISTEN/NOTIFY: Real-Time Without Redis Pub/Sub

For moderate-volume real-time notification (dashboards, collaborative tools, job status updates), LISTEN/NOTIFY eliminates the need for Redis as a pub/sub broker.

```sql
-- In a trigger on the jobs table
CREATE FUNCTION notify_job_update() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify(
        'job_updates',
        json_build_object('job_id', NEW.id, 'status', NEW.status)::text
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER job_status_notify
AFTER UPDATE OF status ON jobs
FOR EACH ROW EXECUTE FUNCTION notify_job_update();
```

In Python, with asyncpg:

```python
import asyncio
import json

import asyncpg

async def listen_for_updates(ws_manager: ConnectionManager):
    # Dedicated, long-lived connection — notifications arrive on it,
    # so it must not be returned to a pool
    conn = await asyncpg.connect(settings.DATABASE_URL)
    await conn.add_listener('job_updates', lambda conn, pid, channel, payload:
        asyncio.create_task(ws_manager.broadcast('jobs', json.loads(payload)))
    )
```

The appropriate scale threshold: NOTIFY payload is limited to 8000 bytes, and high-throughput scenarios (thousands of notifications per second) should use a dedicated broker. For most application use cases, LISTEN/NOTIFY is entirely sufficient.
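Since the 8000-byte cap is easy to hit with verbose JSON, a common pattern is to notify with a compact reference and let listeners re-fetch the row by id. A sketch of the guard on the application side (the function and constant names are illustrative):

```python
import json

MAX_NOTIFY_BYTES = 8000  # default NOTIFY payload limit

def notify_payload(job_id: str, status: str, detail: dict) -> str:
    """Build a pg_notify payload, falling back to a bare reference when
    the full detail would exceed the payload limit — listeners then
    re-fetch the row by job_id instead of reading it from the message."""
    full = json.dumps({"job_id": job_id, "status": status, "detail": detail})
    if len(full.encode()) <= MAX_NOTIFY_BYTES:
        return full
    return json.dumps({"job_id": job_id, "status": status, "truncated": True})
```

The limit applies to the byte length of the payload, not the character count, hence the `encode()` before measuring.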

The Operational Case for Single-Database Architecture

Every additional data store in your stack multiplies operational complexity non-linearly. You need backup procedures, restore testing, monitoring, connection pooling, and expertise for each system. Failures in any one system can cascade.

A single PostgreSQL database with PgBouncer for connection pooling is:

  • Backed up with pg_basebackup or managed snapshots in a single operation
  • Monitored with a single set of metrics (pg_stat_* views, pgBadger for query analysis)
  • Scaled with read replicas for read-heavy workloads and Citus for horizontal sharding when you genuinely need it

The engineering principle is not "never use Redis" — Redis is excellent for distributed rate limiting, atomic counters, and workloads that require sub-millisecond latency on hundreds of thousands of operations per second. The principle is: exhaust what PostgreSQL can do before accepting the operational burden of a second system. For most organizations below hyperscale, that means PostgreSQL can handle far more than it is currently being used for.
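For reference, a minimal PgBouncer setup is a single small config file. A sketch — database names, paths, and pool sizes here are illustrative, not recommendations:

```ini
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 500
```

One caveat: transaction pooling does not support session-level features, so the LISTEN/NOTIFY connection from the previous section should connect to PostgreSQL directly or through a session-mode pool.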

