Hi! Like the title says, I’m wondering if I can use the GPT API to search a product database I’m building and make product recommendations based on user inputs in my web app.
Essentially a user would describe attributes about themselves and I’m hoping GPT could search my product database to find and present the most relevant recommendations.
I’ve seen similar questions asked here before, but none matched this exact scenario, and the answers weren’t totally clear.
If your database is too big to fit in the prompt, then you want to hit the Embeddings API with the text content from your database. Store the vector that is returned in your database. Then, when a user makes a query, you pass that query to the Embeddings API and compare the returned vector to each vector in your database.
My implementation stores the vector as a CSV string in a TEXT field; the vector is used atomically, so there’s no need to break it down. I then compare the query vector to each vector in the database and store the results in a temporary table that can easily be searched in order.
The vector comparison is done with Cosine Similarity.
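For what it’s worth, here’s a minimal sketch of that flow in Python, assuming the openai client; the embedding model, product fields, and CSV serialization are placeholders for whatever you actually use:

```python
# Minimal sketch: embed product rows once, store vectors as CSV strings,
# then rank products against a query vector with cosine similarity.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def to_csv_string(vec: list[float]) -> str:
    # Serialize for storage in a TEXT column.
    return ",".join(str(x) for x in vec)

def from_csv_string(s: str) -> list[float]:
    return [float(x) for x in s.split(",")]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# One-time pass over the product table (placeholder rows shown here).
products = [{"id": 1, "description": "Lightweight waterproof hiking boots"}]
for p in products:
    p["embedding_csv"] = to_csv_string(embed(p["description"]))

# At query time: embed the user's text and sort products by similarity.
query_vec = embed("I spend my weekends on muddy mountain trails")
ranked = sorted(
    products,
    key=lambda p: cosine_similarity(query_vec, from_csv_string(p["embedding_csv"])),
    reverse=True,
)
```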
Thanks for the response, Ian! The database is fairly static, so it sounds like this might be a viable route. I’m not very familiar with the embeddings API. Are you saying that each row in the database would be assigned a unique vector?
It seems like this might only work if the query were very similar to the product descriptions. Meaning, if the user is telling us “I’m looking for ____ type of product”, we could use the embeddings to find the product in our database that’s the closest match.
Instead we’re asking the user to describe themselves and are hoping GPT can extrapolate to recommend products based on the attributes of the user. Does that make sense? Do you still think the embedding approach would work here?
That’s a great idea! Using tags to filter might actually reduce the database to a size where the entire filtered subset could fit in the prompt.
Of course, that’d still be a super token-heavy approach versus GPT just “knowing” our entire database to begin with. But I really like this idea as a start, so I’m gonna mess around with it.
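Roughly what I have in mind, as a sketch (the tags, model name, and prompt are all just placeholders):

```python
# Sketch of the tag-filter idea: narrow the catalog by tags first,
# then put only the small subset into the prompt.
from openai import OpenAI

client = OpenAI()

products = [
    {"name": "Trail Runner X", "tags": {"outdoor", "running"}, "description": "lightweight trail shoe"},
    {"name": "City Loafer", "tags": {"casual", "office"}, "description": "leather office shoe"},
]

def recommend(user_description: str, user_tags: set[str]) -> str:
    # Filter first so only matching products have to fit in the prompt.
    subset = [p for p in products if p["tags"] & user_tags]
    catalog = "\n".join(f"- {p['name']}: {p['description']}" for p in subset)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Recommend the best-matching products from this catalog:\n" + catalog},
            {"role": "user", "content": user_description},
        ],
    )
    return resp.choices[0].message.content

print(recommend("I run trails most mornings", {"outdoor", "running"}))
```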
I’d be interested in hearing more about your specific use case and what you’re working on!
I’m not sure this will work, but it might be cheaper to operate:

1. Ask ChatGPT to read your product descriptions and have it pretend to be a user describing attributes of themselves. You can ask it to generate as many (100s?) of these user self-descriptions as you want, and you can even write one or two per product manually as examples for ChatGPT to iterate on.
2. Create embeddings for each generated self-description, associate each with the product in question, and push the embeddings into something like Pinecone.
3. When a new user comes to your site and provides their self-description, create an embedding for it and query Pinecone with that.

Creating and querying with embeddings is cheaper than invoking ChatGPT for each user; this way you only have to use ChatGPT for each new product or product update.
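As a sketch of the indexing side (assuming the openai and pinecone Python clients; the model names, index name, and prompt are placeholders, not a tested pipeline):

```python
# Sketch: generate synthetic user personas per product, embed them,
# and upsert each vector into a Pinecone index keyed to the product.
from openai import OpenAI
from pinecone import Pinecone

client = OpenAI()
pc = Pinecone(api_key="YOUR_PINECONE_KEY")
index = pc.Index("product-personas")  # assumes a pre-created index

def synthetic_personas(product_description: str, n: int = 5) -> list[str]:
    """Ask the model to role-play users who would want this product."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Here is a product description:\n{product_description}\n\n"
                f"Write {n} short first-person self-descriptions of users "
                "who would want this product, one per line."
            ),
        }],
    )
    return [line for line in resp.choices[0].message.content.splitlines() if line.strip()]

def embed(text: str) -> list[float]:
    return client.embeddings.create(model="text-embedding-3-small", input=text).data[0].embedding

# Index each persona, tagged with the product it belongs to.
for product_id, description in [("sku-42", "Lightweight waterproof hiking boots")]:
    for i, persona in enumerate(synthetic_personas(description)):
        index.upsert(vectors=[{
            "id": f"{product_id}-{i}",
            "values": embed(persona),
            "metadata": {"product_id": product_id},
        }])
```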
That’s interesting. So you’re saying the embeddings would let us calculate the semantic distance between a new user’s description and the generated ones in our dataset, and the closest matches would point to good recommendations?
Sounds like your database is very static, and assuming it can be exported to a CSV file, you can turn the CSV into embeddings and run GPT on top of it.
Here is a thread on how to use it on top of CSV files…
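The core of it looks something like this; a sketch assuming pandas and the openai client, with placeholder file and column names:

```python
# Sketch: read a CSV export of the product table and attach one
# embedding per row, built from the text columns you care about.
import pandas as pd
from openai import OpenAI

client = OpenAI()

df = pd.read_csv("products.csv")  # e.g. columns: name, description

def embed(text: str) -> list[float]:
    return client.embeddings.create(model="text-embedding-3-small", input=text).data[0].embedding

df["embedding"] = (df["name"] + ": " + df["description"]).map(embed)
df.to_pickle("products_with_embeddings.pkl")  # persist vectors for reuse
```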
Hi Nelson! Thanks for the insight. That sounds like a good approach if my use case were to build a chatbot that could answer questions about our product list, but I’m looking for GPT to make a selection from the product list based on attributes of our user, so it’s slightly different, and I’m not sure it would work in that sense.
Hi Jon,
Do you mind sharing a sample CSV file of your products?
I assume you’d like GPT to answer product questions and make product recommendations, right?
Thanks @outdone-jon
You can build a plugin instead of using embeddings, which also gives you more opportunities to reach customers, among other options. Then let the model call your API.
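The endpoint itself can be tiny; you describe it in the plugin’s OpenAPI spec and ai-plugin.json manifest, and the model decides when to call it. A rough sketch (FastAPI, with placeholder route, data, and matching logic):

```python
# Sketch of a search endpoint a plugin could expose. The model reads
# your OpenAPI spec and calls this itself when it needs product data.
from fastapi import FastAPI

app = FastAPI()

PRODUCTS = [
    {"name": "Trail Runner X", "description": "lightweight trail running shoe"},
    {"name": "City Loafer", "description": "casual leather office shoe"},
]

@app.get("/search")
def search(query: str) -> list[dict]:
    """Naive keyword match; swap in real search or a vector lookup."""
    q = query.lower()
    return [p for p in PRODUCTS if q in p["description"].lower()]
```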
Once the user provides their description, you just call the OpenAI embeddings endpoint and then use that vector to query Pinecone (or any vector DB). The query calculates the semantic distances and can return all the “hits” along with their distances. A Pinecone query is dirt cheap compared to ChatGPT’s API (they even have a free tier, and you can host your own vector DB very easily). Of course, you can mix in as much “chat” as you want (e.g., asking the user to provide their description).
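The query side is only a few lines. A sketch, reusing the hypothetical index and placeholder names from my earlier post:

```python
# Sketch: embed the user's self-description and fetch the nearest
# stored personas (and hence products) from Pinecone.
from openai import OpenAI
from pinecone import Pinecone

client = OpenAI()
index = Pinecone(api_key="YOUR_PINECONE_KEY").Index("product-personas")

user_text = "I'm an avid hiker who gets outside every weekend, rain or shine"
vec = client.embeddings.create(model="text-embedding-3-small", input=user_text).data[0].embedding

# Returns the nearest stored personas with their similarity scores.
hits = index.query(vector=vec, top_k=5, include_metadata=True)
for match in hits.matches:
    print(match.metadata["product_id"], match.score)
```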