
Databricks Slashes App Dev Time with Lakebase DB

Databricks DB cuts app build to days; Lakebase runs PostgreSQL on lakehouse


Databricks’ new serverless database promises to shrink the typical months‑long cycle of building data‑driven applications down to a matter of days, a claim that’s catching the eye of firms racing to ready their stacks for the next wave of agentic AI. The pitch is simple: eliminate the friction between where data lives and how it’s processed, so developers can focus on logic rather than plumbing. Yet the real test lies in how the platform handles the underlying storage model while still speaking the language that data teams already trust.

That’s where Lakebase enters the conversation, positioned as a bridge between the lakehouse’s immutable storage and the familiar PostgreSQL query engine. By keeping every write anchored in lakehouse formats while exposing a vanilla Postgres interface, Lakebase aims to give companies the speed of a cloud‑native service without forcing them to abandon the ecosystem of tools built around PostgreSQL.


Lakebase takes the separation of storage and compute to its logical conclusion by putting storage directly in the data lakehouse. The compute layer runs essentially vanilla PostgreSQL, maintaining full compatibility with the Postgres ecosystem, but every write goes to lakehouse storage in formats that Spark, Databricks SQL and other analytics engines can immediately query without ETL. "The unique technical insight was that data lakes decouple storage from compute, which was great, but we need to introduce data management capabilities like governance and transaction management into the data lake," Databricks co-founder Reynold Xin explained.
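The pattern described here, writes through a standard Postgres interface with analytics engines reading the same data immediately, can be sketched roughly as follows. The table, catalog, and schema names are hypothetical, and the exact syntax on the analytics side may differ from what Lakebase actually exposes:

```sql
-- Hypothetical OLTP side: an application connected to Lakebase over the
-- ordinary Postgres wire protocol inserts a row with standard SQL. Per the
-- architecture described above, the write lands in lakehouse storage rather
-- than in Postgres-managed heap files.
INSERT INTO orders (order_id, customer_id, amount, created_at)
VALUES (1001, 42, 99.50, now());

-- Hypothetical analytics side: Spark or Databricks SQL querying the same
-- data with no ETL pipeline in between. The three-part name is illustrative.
SELECT customer_id, SUM(amount) AS total_spend
FROM lakehouse_catalog.app_schema.orders
GROUP BY customer_id;
```

The point of the sketch is the absence of any intermediate step: in a conventional stack, the second query would only see the row after an export-and-load job ran.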

"We're actually not that different from the lakehouse concept, but we're building lightweight, ephemeral compute for OLTP databases on top." Databricks built Lakebase with the technology it gained from the acquisition of Neon. But Xin emphasized that Databricks significantly expanded Neon's original capabilities to create something fundamentally different. "They didn't have the enterprise experience, and they didn't have the cloud scale," Xin said.

"We brought the Neon team's novel architectural idea together with the robustness of the Databricks infrastructure and combined them. So now we've created a super scalable platform."

From hundreds of databases to millions, built for agentic AI

Xin outlined a vision directly tied to the economics of AI coding tools that explains why the Lakebase construct matters beyond current use cases.

Lakebase arrives as Databricks’ newest service, now generally available, and it promises to shift OLTP workloads onto the same lakehouse storage that has powered analytics for years. By anchoring vanilla PostgreSQL compute to lakehouse‑resident files, the offering claims full compatibility with the existing Postgres ecosystem while keeping writes in lakehouse formats. The company argues that this “logical conclusion” of storage‑compute separation could compress application development cycles from months to days, a claim that aligns with its earlier push to shorten data‑centric projects as firms ready themselves for agentic AI.

Yet the practical impact of moving transactional writes into a data lake remains uncertain; performance characteristics under heavy OLTP loads have not been disclosed. Likewise, how well the service integrates with legacy systems beyond the Postgres interface is still unclear. For organizations already invested in Databricks’ lakehouse model, Lakebase may present a convenient extension, but broader adoption will likely depend on real‑world testing and cost assessments that have yet to be published.


Common Questions Answered

How does Lakebase change the traditional approach to operational databases?

Lakebase introduces a new architecture for OLTP databases by separating storage and compute, enabling independent scaling and reducing vendor lock-in. It stores data in modern data lakes using open formats while exposing a standard Postgres interface, allowing for elastic scaling, lower total cost of ownership, and seamless integration with analytical and AI systems.

What makes Lakebase unique for AI-driven application development?

Lakebase is specifically designed to support AI agents operating at machine speed, with advanced branching and checkpointing capabilities that allow for rapid experimentation and rewinding. It eliminates complex ETL pipelines by deeply integrating operational data with the lakehouse, enabling developers to build intelligent applications more efficiently.

What are the key benefits of using Lakebase for database management?

Lakebase offers several key benefits, including serverless architecture with instant elastic scaling, openness through open-source standards like Postgres, and modern development workflows that make database branching as easy as code repository branching. Additionally, it provides a fully managed Postgres database that simplifies application development by reducing database management overhead.