Vecstore Achieves 60% Latency Reduction by Consolidating to Neon Serverless Postgres with pgvector

SAN FRANCISCO, CA – Vecstore, an AI search platform built on Rust, has significantly reduced its data processing latency by replacing its dual-database architecture of Pinecone and Amazon RDS with a unified Neon serverless PostgreSQL solution leveraging the pgvector extension. The company reported a substantial drop in latency from 200 milliseconds to 80 milliseconds, alongside a much simpler operational setup.

The migration was announced by Neon, a serverless Postgres provider, which quoted Giorgi Kenchadze, Founder & CEO at Vecstore: "We replaced both Pinecone and RDS with Neon, and latency dropped from 200ms to 80ms with a much simpler setup. Neon also gave us a smoother developer experience across multiple regions. It just works." Vecstore's platform must handle both structured relational data and unstructured vector embeddings to power its AI-driven search.

Previously, Vecstore managed separate AWS RDS instances for relational data and Pinecone for vector search. This setup led to increased complexity, higher costs, and performance bottlenecks, with each search requiring two distinct calls. Kenchadze noted that "Having two separate databases for one product was inefficient in every way: cost, performance, and developer experience." The operational overhead included managing separate clients, deployments, and integration paths, compounded by an incomplete Rust SDK for Pinecone.

The transition to Neon's serverless Postgres, which natively supports the pgvector extension, allowed Vecstore to consolidate both data types into a single database. This unified approach not only simplified the architecture but also delivered faster queries, a better developer experience through features like branching and autoscaling, and reduced infrastructure costs. Kenchadze also addressed a common "misconception people have about pgvector," stating that "it’s just as fast as Pinecone, if not faster."
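
To make the consolidation concrete, here is a minimal sketch in Rust (Vecstore's implementation language) using the tokio-postgres driver and pgvector's cosine-distance operator. The table name `documents`, its columns, the three-dimensional embeddings, and the connection string are hypothetical illustrations, not Vecstore's actual schema or code; the point is that the relational filter and the similarity ranking run in one query against one database, rather than one RDS call plus one Pinecone call.

```rust
// Sketch only: illustrates the single-database pattern described above with
// hypothetical table and column names, not Vecstore's actual schema or code.
use tokio_postgres::NoTls;

#[tokio::main]
async fn main() -> Result<(), tokio_postgres::Error> {
    // Placeholder connection string.
    let (client, connection) =
        tokio_postgres::connect("postgres://app:secret@localhost/appdb", NoTls).await?;
    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {e}");
        }
    });

    // One-time setup: relational columns and the embedding live in the same table,
    // with an HNSW index for approximate nearest-neighbor search.
    client
        .batch_execute(
            "CREATE EXTENSION IF NOT EXISTS vector;
             CREATE TABLE IF NOT EXISTS documents (
                 id        bigserial PRIMARY KEY,
                 tenant_id bigint     NOT NULL,
                 title     text       NOT NULL,
                 embedding vector(3)  NOT NULL
             );
             CREATE INDEX IF NOT EXISTS documents_embedding_idx
                 ON documents USING hnsw (embedding vector_cosine_ops);",
        )
        .await?;

    // Query embedding rendered in pgvector's text format ('[x,y,z]');
    // three dimensions only to keep the sketch short.
    let query_embedding = vec![0.12_f32, 0.85, 0.33];
    let literal = format!(
        "[{}]",
        query_embedding
            .iter()
            .map(|x| x.to_string())
            .collect::<Vec<_>>()
            .join(",")
    );

    // The consolidation payoff: a relational filter and a cosine-distance
    // ranking in a single round trip to a single database.
    let sql = format!(
        "SELECT id, title
         FROM documents
         WHERE tenant_id = $1
         ORDER BY embedding <=> '{literal}'::vector
         LIMIT 10"
    );
    for row in client.query(sql.as_str(), &[&42_i64]).await? {
        let id: i64 = row.get("id");
        let title: String = row.get("title");
        println!("{id}: {title}");
    }
    Ok(())
}
```

In production code the embedding would typically be bound as a query parameter rather than interpolated as text; the community pgvector crate, for example, provides a vector parameter type for Rust Postgres drivers.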

This case study underscores a growing trend where PostgreSQL with pgvector is increasingly seen as a viable, and often superior, alternative to specialized vector databases like Pinecone. Benchmarks from various sources, including Timescale and Supabase, have frequently shown pgvector outperforming Pinecone in terms of latency, throughput, and cost-efficiency for many workloads. The success of Vecstore's migration further validates the "Postgres for everything" paradigm, demonstrating that a well-optimized relational database can effectively handle complex AI-driven vector search requirements.