Vector search in LanceDB
To get started, install LanceDB into your project's NPM dependencies:
- npm: `npm install --save @lancedb/lancedb apache-arrow`
- pnpm: `pnpm add @lancedb/lancedb apache-arrow`
- yarn: `yarn add @lancedb/lancedb apache-arrow`
Then import LanceDB into your project:
import * as lancedb from "@lancedb/lancedb";
import * as arrow from "apache-arrow";
const bucketName = process.env.BUCKET_NAME || "tigris-example";
const db = await lancedb.connect(`s3://${bucketName}/docs`, {
storageOptions: {
endpoint: "https://t3.storage.dev",
region: "auto",
},
});
Then register the embedding function you plan to use, such as OpenAI's `text-embedding-3-small` model:
import "@lancedb/lancedb/embedding/openai";
import { LanceSchema, getRegistry, EmbeddingFunction } from "@lancedb/lancedb/embedding";
const func = getRegistry()
.get("openai")
?.create({ model: "text-embedding-3-small" }) as EmbeddingFunction;
And create the schema for the data you want to ingest:
const contentSchema = LanceSchema({
text: func.sourceField(new arrow.Utf8()),
vector: func.vectorField(),
//title: new arrow.Utf8(),
url: new arrow.Utf8(),
heading: new arrow.Utf8(),
});
This creates a schema that has a few fields:
- The source `text` that you are searching against
- The high-dimensional generated `vector` used to search for similar embeddings
- Additional metadata such as the `heading` and `url` of the document you're embedding, so that the model can link users back to a source

Strictly speaking, only the `text` and `vector` fields are required. The rest are optional, but they can make the user experience better: users tend to trust responses that have citations a lot more than responses that don't.
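Because `vector` is declared with `func.vectorField()`, you never compute embeddings yourself: when rows are added, LanceDB runs the registered embedding function over the `text` source field and fills in `vector` automatically. As a minimal sketch, a row you insert only needs the source text and metadata (the values here are made up for illustration):

```ts
// You supply the source text plus metadata; the registered OpenAI embedding
// function generates `vector` when the row is added to the table created below.
const exampleRow = {
  text: "Tigris is a globally distributed, S3-compatible object storage service.",
  url: "https://www.tigrisdata.com/docs/overview/", // illustrative URL
  heading: "About Tigris",
};
// await tbl.add([exampleRow]); // once the table exists (next step)
```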
Next, create a table that uses that schema:
const tbl = await db.createEmptyTable("content", contentSchema, {
// if both of these are set, LanceDB uses the semantics of
// `CREATE TABLE IF NOT EXISTS content` in your favorite relational
// database.
mode: "create",
existOk: true,
});
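On later runs the table will already exist. With `mode: "create"` and `existOk: true` the call above simply returns the existing table, but you can also check for it and open it explicitly. A small sketch using the connection's table-listing and open methods:

```ts
// Reuse an existing table instead of recreating it on subsequent runs.
const tableNames = await db.tableNames();
const contentTable = tableNames.includes("content")
  ? await db.openTable("content")
  : await db.createEmptyTable("content", contentSchema);
```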
Ingesting files
The exact details of how you ingest files will vary based on what you are ingesting, but at a high level you can make a few cheap assumptions about the data that will help. The biggest barrier to getting data into an embedding model is a combination of two factors:
- The context window of the model (8191 tokens for OpenAI's embedding models), which you can check against with a tokenizer as sketched after this list.
- Figuring out where to chunk files such that they fit into the context window of the model.
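To check the first constraint, you can count tokens before you embed. Here's a minimal sketch using the same `@dqbd/tiktoken` package that the chunking code below relies on; `fitsInContext` is an illustrative helper, not part of LanceDB:

```ts
import { encoding_for_model } from "@dqbd/tiktoken";

// Returns true if `text` fits within the embedding model's token limit.
// 8191 matches OpenAI's embedding models; adjust for your model.
export function fitsInContext(text: string, maxTokens = 8191): boolean {
  const encoding = encoding_for_model("text-embedding-3-small");
  try {
    return encoding.encode(text).length <= maxTokens;
  } finally {
    // tiktoken's WASM encoder holds memory that must be freed explicitly.
    encoding.free();
  }
}
```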
For the sake of argument, let's say that we're dealing with a folder full of Markdown documents. Markdown is a loosely structured but versatile format (this document is written in a variant of Markdown), and we can take advantage of how people organize their writing to make chunking easier. People generally break Markdown documents into sections, where each section starts with a line beginning with one or more hashes:
# Title of the document
Ah yes, the venerable introduction paragraph—the sacred scroll...
## Insights
What began as an unrelated string of metaphors...
You can break this into two chunks:
[
{
"heading": "Title of the document",
"content": "Ah yes, the venerable introduction paragraph—the sacred scroll..."
},
{
"heading": "Insights",
"content": "What began as an unrelated string of metaphors..."
}
]
Each of these should be indexed separately and the heading metadata should be attached to each record in the database. You can break it up into sections of up to 8191 tokens (or however big your model's context window is) with logic like this:
Long code block with example document chunking code
import { encoding_for_model, type TiktokenModel } from "@dqbd/tiktoken";
export type MarkdownSection = {
heading: string;
content: string;
};
// Exercise for the reader: handle front matter with the gray-matter package.
export async function chunkify(
markdown: string,
maxTokens = 8191,
  model: TiktokenModel = "text-embedding-3-small",
): Promise<MarkdownSection[]> {
const encoding = await encoding_for_model(model);
const sections: MarkdownSection[] = [];
const lines = markdown.split("\n");
let currentHeading: string | null = null;
let currentContent: string[] = [];
const pushSection = (heading: string, content: string) => {
const tokens = encoding.encode(content);
if (tokens.length <= maxTokens) {
sections.push({ heading, content });
} else {
// If section is too long, split by paragraphs
const paragraphs = content.split(/\n{2,}/);
let chunkTokens: number[] = [];
let chunkText: string = "";
for (const para of paragraphs) {
const paraTokens = encoding.encode(para + "\n\n");
if (chunkTokens.length + paraTokens.length > maxTokens) {
sections.push({
heading,
content: chunkText.trim(),
});
chunkTokens = [...paraTokens];
chunkText = para + "\n\n";
} else {
chunkTokens.push(...paraTokens);
chunkText += para + "\n\n";
}
}
if (chunkTokens.length > 0) {
sections.push({
heading,
content: chunkText.trim(),
});
}
}
};
for (const line of lines) {
const headingMatch = line.match(/^#{1,6} (.+)/);
if (headingMatch) {
if (currentHeading !== null) {
const sectionText = currentContent.join("\n").trim();
if (sectionText) {
pushSection(currentHeading, sectionText);
}
}
currentHeading = headingMatch[1].trim();
currentContent = [];
} else {
currentContent.push(line);
}
}
// Push the final section
if (currentHeading !== null) {
const sectionText = currentContent.join("\n").trim();
if (sectionText) {
pushSection(currentHeading, sectionText);
}
}
encoding.free();
return sections;
}
Then when you're reading your files, use a loop like this to break all of the files into chunks:
import { glob } from "glob";
import { readFile } from "node:fs/promises";
import { chunkify } from "./markdownChunk";
const markdownFiles = await glob("./docs/**/*.md");
const files = [...markdownFiles].filter(
(fname) => !fname.endsWith("README.md"),
);
files.sort();
const fnameToURL = (fname) => {
// Implement me!
};
const utterances = [];
for (const fname of files) {
const data = await readFile(fname, "utf-8");
const chunks = await chunkify(data);
chunks.forEach(({ heading, content }) => {
utterances.push({
fname,
heading,
content,
url: fnameToURL(fname),
});
});
}
And finally ingest all the files into LanceDB:
let docs: Record<string, string>[] = []; // temporary buffer so we don't block all the time
const MAX_BUFFER_SIZE = 100;
for (const utterance of utterances) {
const { heading, content, url } = utterance;
  docs.push({
    heading,
    text: content,
    url,
  });
if (docs.length >= MAX_BUFFER_SIZE) {
console.log(`adding ${docs.length} documents`);
await tbl.add(docs);
docs = [];
}
}
if (docs.length !== 0) {
console.log(`adding ${docs.length} documents`);
await tbl.add(docs);
}
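Before building the index, it's worth a quick sanity check that every chunk actually landed. For example:

```ts
// The row count should match the number of chunks you generated.
console.log(`table now has ${await tbl.countRows()} rows`);
```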
Finally, create an index on the `vector` field so the LanceDB client can search faster:
await tbl.createIndex("vector");
And then run an example search for the term "Tigris":
const query = "Tigris";
const actual = await tbl.search(query).limit(10).toArray();
console.log(
actual.map(({ url, heading, text }) => {
return { url, heading, text };
}),
);
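Vector search also composes with SQL-style filters and column selection, which helps when you only want matches from part of your docs or only need the citation fields. A sketch using the query builder's `where` and `select` methods (the URL pattern here is purely illustrative):

```ts
// Restrict matches to one section of the docs and fetch only the fields
// needed to render citations.
const filtered = await tbl
  .search("how do I configure CORS?")
  .where("url LIKE 'https://tigrisdata.com/docs/buckets/%'")
  .select(["url", "heading", "text"])
  .limit(5)
  .toArray();
console.log(filtered);
```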
The entire example in one big file
import * as lancedb from "@lancedb/lancedb";
import * as arrow from "apache-arrow";
import "@lancedb/lancedb/embedding/openai";
import { LanceSchema, getRegistry, EmbeddingFunction } from "@lancedb/lancedb/embedding";
import { glob } from "glob";
import { readFile } from "node:fs/promises";
import { chunkify } from "./markdownChunk";
const bucketName = process.env.BUCKET_NAME || "tigris-example";
interface Utterance {
fname: string;
heading: string;
content: string;
url: string;
}
const func = getRegistry()
.get("openai")
?.create({ model: "text-embedding-3-small" }) as EmbeddingFunction;
const contentSchema = LanceSchema({
text: func.sourceField(new arrow.Utf8()),
vector: func.vectorField(),
url: new arrow.Utf8(),
heading: new arrow.Utf8(),
});
const fnameToURL = (fname: string) => {
let ref = /\.\.\/\.\.\/(.*)\.md/.exec(fname)![1];
if (ref.endsWith("/index")) {
ref = ref.slice(0, -"index".length);
}
return `https://tigrisdata.com/docs/${ref}`;
};
(async () => {
const markdownFiles = glob.sync("../../**/*.md");
const files = [...markdownFiles].filter(
(fname) => !fname.endsWith("README.md"),
);
files.sort();
const utterances: Utterance[] = [];
for (const fname of files) {
const data = await readFile(fname, "utf-8");
const chunks = await chunkify(data);
chunks.forEach(({ heading, content }) => {
utterances.push({
fname,
heading,
content,
url: fnameToURL(fname),
});
});
}
const db = await lancedb.connect(`s3://${bucketName}/docs-test`, {
storageOptions: {
endpoint: "https://t3.storage.dev",
region: "auto",
},
});
const tbl = await db.createEmptyTable("content", contentSchema, {
mode: "create",
existOk: true,
});
let docs: Record<string, string>[] = []; // temporary buffer so we don't block all the time
const MAX_BUFFER_SIZE = 100;
for (const utterance of utterances) {
const { heading, content, url } = utterance;
docs.push({
heading,
text: content,
url,
});
if (docs.length >= MAX_BUFFER_SIZE) {
console.log(`adding ${docs.length} documents`);
await tbl.add(docs);
docs = [];
}
}
if (docs.length !== 0) {
console.log(`adding ${docs.length} documents`);
await tbl.add(docs);
}
await tbl.createIndex("vector");
const query = "Tigris";
const actual = await tbl.search(query).limit(10).toArray();
console.log(
actual.map(({ url, heading, text }) => {
return { url, heading, text };
}),
);
})();
And now you can search the Tigris docs!