What Is a Vector Database?
A database optimised for storing high-dimensional embedding vectors and serving approximate nearest-neighbour search with metadata filtering.
A vector database is a database optimised for storing high-dimensional embedding vectors and serving approximate nearest-neighbour (ANN) search. Unlike a traditional row-store, it indexes vectors using structures like HNSW (hierarchical navigable small worlds) or IVF (inverted file index), trading exact match for sub-linear search at billion-scale. Modern vector DBs combine vector similarity with metadata filters (hybrid search), support real-time inserts, and scale horizontally. In RAG systems they store embedded document chunks; at query time the user’s query is embedded and searched against the index, returning top-k matches in tens of milliseconds.
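A toy sketch of the core operation, assuming nothing beyond NumPy: brute-force cosine similarity with a tenant filter over a random corpus. A real vector DB replaces the exhaustive scan with an ANN structure like HNSW to get sub-linear search, but the inputs and outputs are the same:

import numpy as np

# Toy corpus: each chunk has an embedding and metadata (real DBs hold millions).
embeddings = np.random.rand(1000, 384).astype(np.float32)
metadata = [{"tenant": f"t{i % 3}"} for i in range(1000)]

def top_k(query_vec, k=5, tenant="t0"):
    # Hybrid search: apply the metadata filter, then rank by cosine similarity.
    mask = np.array([m["tenant"] == tenant for m in metadata])
    candidates = embeddings[mask]
    sims = candidates @ query_vec / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(query_vec)
    )
    order = np.argsort(-sims)[:k]
    ids = np.flatnonzero(mask)[order]  # map back to global chunk ids
    return list(zip(ids.tolist(), sims[order].tolist()))

query = np.random.rand(384).astype(np.float32)  # would be embed(user_query)
print(top_k(query))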
Why It Matters in Production LLM and Agent Systems
The vector database is the backbone of every RAG system, and its operational properties — latency, recall, freshness, cost — set hard ceilings on the rest of the stack. A vector DB without metadata filtering cannot enforce multi-tenant isolation; one that lacks streaming inserts cannot serve a knowledge base that updates daily. Choosing the wrong index type at the wrong scale leads to either p99 latency blowing the response budget or recall ceilings that cap RAG quality.
The pain shows up across roles. Retrieval engineers see recall drop as the corpus grows because the index parameters were tuned for a smaller dataset. Platform engineers see cost overruns from over-provisioned dedicated indexes when serverless would have worked. SREs see incidents when index rebuilds bring down search. Compliance leads need vector DBs that support tenant isolation and audit logs — not all of them do.
In 2026, the vector-DB landscape is broader than it was: Pinecone (managed serverless), Weaviate (open-source with hybrid search), Qdrant (Rust-based, fast), ChromaDB (developer-friendly), Milvus (cloud-native scale), pgvector (Postgres extension for teams that want one DB), LanceDB (columnar, multimodal). MongoDB Atlas, Redis, Elasticsearch, and Azure AI Search now ship first-class vector capabilities too. The differences matter for RAG quality and ops cost, but choice matters less than instrumentation: a fast vector DB that you cannot trace is still a black box.
How FutureAGI Handles Vector Databases
FutureAGI’s approach is explicit: we do not ship a vector store. We instrument the ones you use. The traceAI library ships first-class integration packages — traceAI-pinecone, traceAI-weaviate, traceAI-qdrant, and traceAI-chromadb — plus traceAI-milvus, traceAI-pgvector, traceAI-lancedb, traceAI-mongodb, traceAI-redis, traceAI-elasticsearch, and traceAI-azure-search. Each integration auto-instruments the client SDK, emitting OpenTelemetry spans for every query with attributes like retrieval.documents, retrieval.score, vector.collection, and vector.metric.
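The integrations emit these spans automatically; as a sketch of the span shape, here is roughly equivalent manual instrumentation with the plain OpenTelemetry SDK. The client.search call and result fields are hypothetical stand-ins for your vector-DB SDK, and a configured TracerProvider is assumed:

from opentelemetry import trace

tracer = trace.get_tracer("retrieval")

def traced_search(client, collection, query_vec, k=5):
    # traceAI wraps every vector-DB query in a span shaped roughly like this.
    with tracer.start_as_current_span("vector.search") as span:
        span.set_attribute("vector.collection", collection)
        span.set_attribute("vector.metric", "cosine")
        results = client.search(collection, query_vec, top_k=k)  # hypothetical SDK call
        span.set_attribute("retrieval.documents", [str(r.id) for r in results])
        span.set_attribute("retrieval.score", [float(r.score) for r in results])
        return results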
That instrumentation feeds the eval and observability layers. fi.evals.ContextRelevance scores whether the retrieved vectors actually matched the query intent. fi.evals.ChunkAttribution confirms the LLM downstream actually used what the vector DB returned. The trace dashboard shows per-collection p99 latency, per-tenant query distribution, and recall outliers — operational signals that vector DB providers’ own dashboards rarely combine with downstream RAG quality.
A typical FutureAGI workflow: a team running Qdrant for one tenant cluster and Pinecone for another instruments both with traceAI. They run a unified golden dataset through each, see Qdrant has 12% better ContextRelevance p10 but Pinecone has 40% lower p99 query latency, and decide based on the latency-vs-quality tradeoff for that tenant. The decision is data, not a vendor pitch. We treat vector DBs as a substrate FutureAGI runs on, not a layer FutureAGI replaces.
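A sketch of that head-to-head loop, assuming hypothetical search_qdrant and search_pinecone wrappers that return retrieved chunk texts for an embedded query:

import time
import numpy as np
from fi.evals import ContextRelevance

def benchmark(search_fn, golden):
    # golden: list of (query_text, embedded_query) pairs from the shared dataset.
    latencies, relevance = [], []
    for query, qvec in golden:
        t0 = time.perf_counter()
        chunks = search_fn(qvec)  # hypothetical wrapper returning chunk texts
        latencies.append(time.perf_counter() - t0)
        result = ContextRelevance().evaluate(input=query, context=" ".join(chunks))
        relevance.append(result.score)
    return {
        "p99_latency_s": float(np.percentile(latencies, 99)),
        "p10_relevance": float(np.percentile(relevance, 10)),
    }

# print(benchmark(search_qdrant, golden))
# print(benchmark(search_pinecone, golden))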
How to Measure or Detect It
Vector-DB quality is measured at the storage and retrieval layer:
- Query latency: p50/p99 on retrieve spans — captured automatically by traceAI-pinecone, traceAI-weaviate, traceAI-qdrant, and traceAI-chromadb.
- Recall@k on a labelled set: percentage of queries where the gold doc appears in top-k — the canonical retrieval-quality benchmark (see the sketch after this list).
- fi.evals.ContextRelevance: 0–1 score on whether the returned vectors actually match query intent.
- fi.evals.ChunkAttribution: pass/fail on whether the answer used what the vector DB returned.
- Index freshness lag: time from corpus insert to query-visibility — surfaces stale-context failures.
- OTel attributes: vector.collection, vector.metric, retrieval.documents, retrieval.score.
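Recall@k needs no library support once you have gold labels; a minimal sketch (the results and gold dicts are illustrative placeholders):

def recall_at_k(results, gold, k=5):
    # results: {query_id: ranked doc ids}; gold: {query_id: the one relevant doc id}
    hits = sum(1 for qid, docs in results.items() if gold[qid] in docs[:k])
    return hits / len(results)

results = {"q1": ["d3", "d7", "d1"], "q2": ["d9", "d2", "d5"]}
gold = {"q1": "d7", "q2": "d4"}
print(recall_at_k(results, gold, k=3))  # 0.5: the gold doc for q2 never surfaced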
A single ContextRelevance check against one retrieved context looks like this:

from fi.evals import ContextRelevance

# Score how well a retrieved chunk matches the query intent.
result = ContextRelevance().evaluate(
    input="What's the cancellation policy?",
    context="Cancel any time within the first 14 days for a full refund."
)
# score is the 0-1 relevance rating; reason is the evaluator's explanation.
print(result.score, result.reason)
Common Mistakes
- Picking a vector DB before knowing the access pattern. Read-heavy FAQ workloads have different needs than write-heavy chat-history workloads. Define the access pattern first.
- Over-tuning HNSW ef parameters without measuring recall. A bigger ef is always slower; whether it improves recall is empirical and dataset-specific (measure it; see the sketch after this list).
- Ignoring metadata-filter performance. Hybrid search on a poorly-indexed metadata column degrades latency by 10x. Index filter columns explicitly.
- Single-index for multi-tenant data. Tenant data leaks happen at the metadata-filter layer. Use namespacing or per-tenant collections; do not rely solely on WHERE tenant_id = ....
- Skipping index rebuild plans. Embedding-model upgrades require a full re-embed; a vector DB without a clean rebuild path traps you on an old embedding version.
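A sketch of that ef-vs-recall measurement using hnswlib against an exact brute-force baseline (the corpus, dimensions, and ef values here are illustrative; sweep on your own data and queries):

import hnswlib
import numpy as np

dim, n, k = 128, 10_000, 10
data = np.random.rand(n, dim).astype(np.float32)
queries = np.random.rand(100, dim).astype(np.float32)

# Exact baseline: brute-force top-k by cosine similarity on normalised vectors.
data_n = data / np.linalg.norm(data, axis=1, keepdims=True)
q_n = queries / np.linalg.norm(queries, axis=1, keepdims=True)
exact = np.argsort(-(q_n @ data_n.T), axis=1)[:, :k]

index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=n, ef_construction=200, M=16)
index.add_items(data)

for ef in (16, 64, 256):
    index.set_ef(ef)  # bigger ef = slower queries; recall gain is what we measure
    labels, _ = index.knn_query(queries, k=k)
    recall = np.mean([len(set(l) & set(e)) / k for l, e in zip(labels, exact)])
    print(f"ef={ef}: recall@{k}={recall:.3f}")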
Frequently Asked Questions
What is a vector database?
A vector database stores high-dimensional embedding vectors and serves approximate nearest-neighbour search at low latency. In RAG systems it holds embedded document chunks and returns top-k matches when the user query is embedded and searched.
How is a vector database different from a traditional database?
Traditional databases index scalar values for exact-match or range queries. Vector databases index high-dimensional vectors for similarity search using ANN algorithms like HNSW or IVF. Most modern vector DBs also support hybrid search combining metadata filters with vector similarity.
Does FutureAGI ship a vector database?
No. FutureAGI does not ship a vector store. We instrument the leading vector databases — Pinecone, Weaviate, Qdrant, ChromaDB, pgvector, Milvus, LanceDB — through traceAI integrations, so retrieval calls are observable and evaluable wherever you store your embeddings.