2025 reveals vectors as a data type, not a database, integrated into multimodel systems
Why does the way we store vectors matter to an enterprise today? AI workloads have grown, but the underlying storage choices have remained fragmented. Companies once faced a binary decision: adopt a purpose-built vector database or shoehorn embeddings into a relational system that wasn't designed for similarity search.
That tension showed up in every RFP, every architecture diagram, and every budget line. But the market didn't stay static. By the middle of 2025, vendors began bundling vector capabilities into their multimodel offerings, treating embeddings the same way they treat JSON, graphs or time-series.
The shift meant that a single platform could now host structured tables, document collections and high‑dimensional vectors side by side, without a separate stack. It also implied a change in how data engineers think about schema, latency and scaling. The practical upshot?
Organizations no longer need to spin up a dedicated system just to handle vectors. Instead, they can lean on an existing multimodel database and keep everything under one roof.
In 2025, what became painfully obvious was that vectors were no longer a specific database type but rather a specific data type that could be integrated into an existing multimodel database. So instead of being required to use a purpose-built system, an organization could just use an existing database that supports vectors. For example, Oracle supports vectors, and so does every database offered by Google.
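To make that concrete, here is a minimal sketch of the idea using PostgreSQL with the pgvector extension, a representative choice given PostgreSQL's role later in this piece. The connection string, table, and three-dimensional embeddings are illustrative placeholders, and Oracle's and Google's products expose vector types through their own syntax.

```python
# Minimal sketch: an embedding stored as just another column type,
# queried with ordinary SQL. Assumes a local PostgreSQL instance with
# the pgvector extension available; schema and values are placeholders.
import psycopg

with psycopg.connect("dbname=demo user=postgres") as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS docs (
            id        bigserial PRIMARY KEY,
            title     text,
            embedding vector(3)  -- toy dimension; real models emit hundreds
        )
        """
    )
    conn.execute(
        "INSERT INTO docs (title, embedding) VALUES (%s, %s::vector)",
        ("hello world", "[0.1, 0.2, 0.3]"),
    )
    # Nearest-neighbor search in the same SQL dialect as everything else;
    # <-> is pgvector's Euclidean-distance operator.
    rows = conn.execute(
        "SELECT title FROM docs ORDER BY embedding <-> %s::vector LIMIT 5",
        ("[0.1, 0.2, 0.25]",),
    ).fetchall()
    print(rows)
```

The point of the sketch is that the embedding lives alongside ordinary relational columns and is inserted, indexed and queried with the database's existing machinery rather than a separate stack.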
Amazon S3, long the de facto leader in cloud-based object storage, now allows users to store vectors, further negating the need for a dedicated vector database. Object storage won't replace vector search engines outright, since performance, indexing, and filtering still matter, but it does narrow the set of use cases where specialized systems are required. None of this means purpose-built vector databases are dead.
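For a sense of why performance and indexing still matter, here is a small, self-contained sketch, with illustrative sizes and plain numpy, of what exact similarity search costs when there is no index: every query scans every stored vector, which is the linear work that approximate-nearest-neighbor indexes in purpose-built engines are designed to avoid.

```python
# Exact (brute-force) similarity search: the baseline that ANN indexes
# such as HNSW or IVF exist to beat. Sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# 100k stored embeddings of dimension 384, unit-normalized so that a
# dot product equals cosine similarity.
corpus = rng.normal(size=(100_000, 384)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

query = rng.normal(size=384).astype(np.float32)
query /= np.linalg.norm(query)

# One matrix-vector product: O(n * d) work, touching all 100k rows
# for every single query. Fine at this scale, painful at billions.
scores = corpus @ query
top5 = np.argsort(scores)[-5:][::-1]
print(top5, scores[top5])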
Much like with RAG, there will continue to be use cases for purpose-built vector databases in 2026. What will change is that those use cases will likely narrow to organizations that need the highest levels of performance or a specific optimization that a general-purpose solution doesn't support.

PostgreSQL ascendant

As 2026 starts, what's old is new again.
The open-source PostgreSQL database will turn 40 in 2026, yet it is more relevant than ever. Over the course of 2025, PostgreSQL's supremacy as the go-to database for building any type of GenAI solution became apparent. Snowflake spent $250 million to acquire PostgreSQL vendor Crunchy Data; Databricks spent $1 billion on Neon; and Supabase raised a $100 million Series E, giving it a $5 billion valuation.
Vectors are now a data type, not a database category. Instead of standing up a purpose-built system, firms can plug vectors into their existing multimodel databases, a shift that could simplify pipelines while also raising questions about performance, consistency, and tooling support.
The change follows years of movement from relational tables to document stores and graph engines, each wave reshaping how enterprises store and query information. Now, in the era of agentic AI, data infrastructure is again in flux. Whether integrating vectors into multimodel platforms will deliver the expected scalability remains uncertain, and early adopters are watching latency and cost metrics closely.
Some organizations appreciate the convenience of a single system, but even they can't ignore the need for new monitoring tools; others worry about hidden complexity in query planning and index maintenance. Is the convenience worth the risk? Companies will also need to revisit governance policies, because vector embeddings carry semantic nuance that traditional columns never captured. The shift underscores a broader lesson: data matters, and how it is represented can influence AI outcomes. Only further experience will clarify the trade-offs, and only time will test how durable the approach proves.
Further Reading
- SQL Server 2025 Now GA: Enterprise AI without the Learning Curve - Pure Storage Blog
- Vector Data Type - SQL Server | Microsoft Learn
- What's New in SQL Server 2025 - Microsoft Learn
- How to Make a Vector Database Work for Your Enterprise - Sombra
Common Questions Answered
How did the classification of vectors change in 2025 according to the article?
In 2025 vectors were redefined from being a dedicated database type to a generic data type that can be embedded within multimodel databases. This shift allows enterprises to store and query embeddings using existing database platforms rather than deploying purpose‑built vector databases.
Which major database providers now support vectors as a native data type?
The article notes that Oracle has added native vector support, and every database offering from Google also supports vectors. Additionally, Amazon S3, traditionally an object storage service, now includes vector capabilities, expanding the ecosystem for similarity search.
What benefits does integrating vectors into multimodel databases provide to enterprises?
Integrating vectors into multimodel databases simplifies data pipelines by eliminating the need for separate purpose‑built vector stores. It also reduces architectural complexity and can lower costs, though it introduces new considerations around performance, consistency, and tooling support.
What new challenges might arise from using vectors as a data type in existing databases?
The article highlights concerns about performance optimization for similarity search, ensuring consistency of vector data across distributed systems, and the availability of mature tooling for indexing and querying vectors. Enterprises will need to evaluate whether their current databases can meet these new requirements.