How OpenAI is scaling the PostgreSQL database to 800 million users

Metro Loud



While vector databases still have many legitimate use cases, organizations including OpenAI are leaning on PostgreSQL to get things done.

In a blog post on Thursday, OpenAI disclosed how it's using the open-source PostgreSQL database.

OpenAI runs ChatGPT and its API platform for 800 million users on a single-primary PostgreSQL instance: not a distributed database, not a sharded cluster. One Azure PostgreSQL Flexible Server handles all writes. Nearly 50 read replicas spread across multiple regions serve reads. The system processes millions of queries per second while maintaining low double-digit-millisecond p99 latency and five-nines availability.
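
A minimal sketch of what that read/write split can look like at the application layer (the DSNs, replica count as a routing table, and helper name here are illustrative, not OpenAI's actual code):

```python
import random

# Hypothetical connection strings; the real topology is one Azure primary
# plus nearly 50 cross-region read replicas.
PRIMARY_DSN = "postgresql://primary.example.internal/app"
REPLICA_DSNS = [
    f"postgresql://replica-{i}.example.internal/app" for i in range(50)
]

def pick_dsn(is_write: bool) -> str:
    """Route all writes to the single primary; spread reads over replicas."""
    if is_write:
        return PRIMARY_DSN
    return random.choice(REPLICA_DSNS)

print(pick_dsn(is_write=True))   # always the primary
print(pick_dsn(is_write=False))  # one of the replicas
```

In practice this routing usually lives in a connection pooler or driver layer rather than ad hoc application code, but the principle is the same: one writer, many readers.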

The setup challenges conventional scaling wisdom and gives enterprise architects insight into what actually works at massive scale.

The lesson here isn't to copy OpenAI's stack. It's that architectural choices should be driven by workload patterns and operational constraints, not by scale panic or fashionable infrastructure trends. OpenAI's PostgreSQL setup shows how far proven systems can stretch when teams optimize deliberately instead of re-architecting prematurely.

"For years, PostgreSQL has been one of the most critical, under-the-hood data systems powering core products like ChatGPT and OpenAI's API," OpenAI engineer Bohan Zhang wrote in a technical disclosure. "Over the past year, our PostgreSQL load has grown by more than 10x, and it continues to rise quickly."

The company achieved this scale through targeted optimizations, including connection pooling that cut connection time from 50 milliseconds to 5 milliseconds, and cache locking to prevent 'thundering herd' problems where cache misses trigger database overload.

Why PostgreSQL matters for enterprises

PostgreSQL handles operational data for ChatGPT and OpenAI's API platform. The workload is heavily read-oriented, which makes PostgreSQL a good fit. However, PostgreSQL's multiversion concurrency control (MVCC) creates challenges under heavy write loads.

When updating data, PostgreSQL copies entire rows to create new versions, causing write amplification and forcing queries to scan through multiple versions to find current data.
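
A toy model (deliberately simplified, not actual PostgreSQL internals) makes both costs concrete: every update appends a full row copy, and readers have to skip the dead versions left behind:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RowVersion:
    xmin: int            # transaction id that created this version
    xmax: Optional[int]  # transaction id that superseded it (None = live)
    data: dict

@dataclass
class Heap:
    versions: list = field(default_factory=list)  # dead versions stay until vacuumed

    def update(self, txid: int, new_data: dict) -> None:
        # MVCC-style update: mark the live version dead, then append a
        # full copy of the row -- the copy is the write amplification.
        for v in self.versions:
            if v.xmax is None:
                v.xmax = txid
        self.versions.append(RowVersion(xmin=txid, xmax=None, data=new_data))

    def read_current(self) -> dict:
        # Readers must scan past dead versions to find the live one.
        return next(v.data for v in self.versions if v.xmax is None)

row = Heap()
row.update(1, {"plan": "free"})
row.update(2, {"plan": "pro"})
row.update(3, {"plan": "team"})
print(len(row.versions))    # 3 stored versions for one logical row
print(row.read_current())   # {'plan': 'team'}
```

In real PostgreSQL, autovacuum eventually reclaims the dead versions, but under sustained heavy writes the bloat and scan overhead are exactly the pressure described above.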



Rather than fighting this limitation, OpenAI built its strategy around it. At OpenAI's scale, these tradeoffs aren't theoretical: they determine which workloads stay on PostgreSQL and which must move elsewhere.

How OpenAI is optimizing PostgreSQL

At massive scale, conventional database wisdom points to one of two paths: shard PostgreSQL across multiple primary instances so writes can be distributed, or migrate to a distributed SQL database like CockroachDB or YugabyteDB designed to handle massive scale from the start. Most organizations would have taken one of these paths years ago, well before reaching 800 million users.

Sharding or moving to a distributed SQL database eliminates the single-writer bottleneck. A distributed SQL database handles this coordination automatically, but both approaches introduce significant complexity: application code must route queries to the correct shard, distributed transactions become harder to manage, and operational overhead increases considerably.
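
To illustrate the routing burden sharding imposes, the lookup every query path would need can be sketched as a stable hash over the partition key (the shard names and eight-shard layout are hypothetical):

```python
import hashlib

# Hypothetical eight-shard layout.
SHARDS = [f"shard-{i}" for i in range(8)]

def shard_for(user_id: str) -> str:
    """Hash-based routing: application code must know which shard owns a key.
    A stable hash (not Python's per-process randomized hash()) keeps routing
    consistent across processes and restarts."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return SHARDS[int.from_bytes(digest[:8], "big") % len(SHARDS)]

# Every read and write now needs this lookup first, and any transaction
# touching keys on two different shards needs separate coordination --
# the complexity the article describes.
print(shard_for("user-123") == shard_for("user-123"))  # True: routing is stable
```

Resharding is the hidden second cost: changing `len(SHARDS)` remaps most keys unless a scheme like consistent hashing is used.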

Instead of sharding PostgreSQL, OpenAI adopted a hybrid strategy: no new tables in PostgreSQL. New workloads default to sharded systems like Azure Cosmos DB. Existing write-heavy workloads that can be horizontally partitioned get migrated out. Everything else stays in PostgreSQL with aggressive optimization.

This approach gives enterprises a practical alternative to wholesale re-architecture. Rather than spending years rewriting hundreds of endpoints, teams can identify specific bottlenecks and move only those workloads to purpose-built systems.



Why this matters

OpenAI's experience scaling PostgreSQL reveals several practices that enterprises can adopt regardless of their scale.

Build operational defenses at multiple layers. OpenAI's approach combines cache locking to prevent "thundering herd" problems, connection pooling (which dropped connection time from 50 ms to 5 ms), and rate limiting at the application, proxy and query levels. Workload isolation routes low-priority and high-priority traffic to separate instances, ensuring a poorly optimized new feature can't degrade core services.
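
Of those layers, rate limiting is commonly implemented as a token bucket, deployable at the application, proxy or query level. A minimal sketch (the rates are illustrative, and a shared store would replace the in-process state in a multi-node deployment):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows short bursts up to
    `burst`, then caps sustained throughput at `rate` per second."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = burst     # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # shed this request instead of overloading the database

bucket = TokenBucket(rate=100.0, burst=10)
allowed = sum(bucket.allow() for _ in range(50))
print(allowed <= 50)
```

Requests that return `False` get rejected or queued upstream, which is what keeps a traffic spike from reaching the primary.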

Review and monitor ORM-generated SQL in production. Object-relational mapping (ORM) frameworks like Django, SQLAlchemy and Hibernate automatically generate database queries from application code, which is convenient for developers. However, OpenAI found one ORM-generated query joining 12 tables that caused several high-severity incidents when traffic spiked. The convenience of letting frameworks generate SQL creates hidden scaling risks that only surface under production load. Make reviewing these queries a standard practice.
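
One way to catch such queries early is a simple audit on the generated SQL text; this sketch uses an assumed JOIN threshold, and a real setup would feed it from the ORM's query logging (for example SQLAlchemy's event hooks) or from `pg_stat_statements` rather than a hard-coded string:

```python
import re

JOIN_LIMIT = 6  # hypothetical threshold; tune to your workload

def audit_sql(sql: str, limit: int = JOIN_LIMIT):
    """Count JOIN clauses in a query and flag it if it exceeds the limit."""
    joins = len(re.findall(r"\bJOIN\b", sql, flags=re.IGNORECASE))
    return joins, joins > limit

# A few innocent-looking lines of ORM code can silently emit something like:
generated = "SELECT ... FROM a " + " ".join(
    f"JOIN t{i} ON t{i}.a_id = a.id" for i in range(12)
)
joins, too_heavy = audit_sql(generated)
print(joins, too_heavy)  # 12 True
```

Flagged queries then get a human review before they ship, which is the practice the article recommends.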

Enforce strict operational discipline. OpenAI allows only lightweight schema changes; anything triggering a full table rewrite is prohibited. Schema changes have a 5-second timeout. Long-running queries get automatically terminated to prevent blocking database maintenance operations. When backfilling data, they enforce rate limits so aggressive that operations can take over a week.
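
The backfill discipline amounts to small batches plus a hard throttle. A sketch under assumed numbers (`write_batch` is a hypothetical helper, and the rates are illustrative: at 2 batches/second of 1,000 rows, a billion rows takes roughly six days, which matches the over-a-week timescale described above):

```python
import time

def backfill(rows, batch_size=1000, max_batches_per_sec=2.0):
    """Process a backfill in capped batches so the migration never
    competes with production traffic for database capacity."""
    interval = 1.0 / max_batches_per_sec
    done = 0
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        # write_batch(batch) would apply this chunk (hypothetical helper)
        done += len(batch)
        time.sleep(interval)  # deliberate throttle between batches
    return done

# Demo with a tiny dataset and a fast rate so it finishes instantly:
print(backfill(list(range(2500)), batch_size=1000, max_batches_per_sec=1000.0))  # 2500
```

The design choice is to treat slowness as a feature: a backfill that takes a week and never pages anyone beats one that finishes in an hour and takes the primary down.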

Read-heavy workloads with burst writes can run on single-primary PostgreSQL longer than commonly assumed. The decision to shard should depend on workload patterns rather than user counts.

This approach is particularly relevant for AI applications, which often have heavily read-oriented workloads with unpredictable traffic spikes. These characteristics align with the pattern where single-primary PostgreSQL scales effectively.

The lesson is straightforward: identify actual bottlenecks, optimize proven infrastructure where possible, and migrate selectively when necessary. Wholesale re-architecture isn't always the answer to scaling challenges.

