Pinecone/Devpost Hackathon June 2023
- Try it out: Commercial Consensus (hosted on AWS)
- Execution flow diagrams
Traditional implementations of collaborative filtering, content-based filtering, and graph-based recommendation methods rely heavily on structured, tabular data. However, this approach is severely limited by the missing and inconsistent data endemic to third-party seller platforms:
Example of inconsistent data availability for two products in the same category:
Missing data across our full dataset:
Even when data is available, it is often heterogeneous:
These data quality issues hamper the effectiveness of recommendation systems, reducing platform revenue and degrading the user experience.
Commercial Consensus approaches this problem by harnessing the latent information within customer reviews. By performing vector similarity search on an embedding space narrowed by traditional tabular filters, the system presents a basic approach to mitigating the longstanding problem of data quality in e-commerce platforms. Using Pinecone's vector search engine over indexed OpenAI embeddings, in coordination with Cohere's reranking endpoint, the platform provides hybrid (tabular + semantic) search and a conversational interface that tap into the previously inaccessible body of knowledge available in customer reviews.
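A minimal sketch of the hybrid retrieval step described above. The metadata field names (`category`, `avg_rating`, `review_text`) and default parameters are illustrative assumptions, not the project's exact configuration; the embedding, index, and reranker clients are injected so the flow can be exercised without live API keys.

```python
# Sketch of the hybrid (tabular + semantic) retrieval step:
# tabular constraints become a Pinecone metadata filter, the query is
# embedded, and the vector search results are re-ranked.

def build_filter(category=None, min_rating=None):
    """Build a Pinecone metadata filter from tabular constraints."""
    clauses = {}
    if category is not None:
        clauses["category"] = {"$eq": category}
    if min_rating is not None:
        clauses["avg_rating"] = {"$gte": min_rating}
    return clauses or None


def hybrid_search(embed, index, reranker, query, top_k=25, top_n=5, **tabular):
    """Embed the query, filter + search in Pinecone, then re-rank.

    `embed` maps text -> vector (e.g. text-embedding-ada-002),
    `index` exposes a Pinecone-style query(), and `reranker` follows
    the shape of Cohere's rerank call.
    """
    vector = embed(query)
    result = index.query(
        vector=vector,
        top_k=top_k,
        filter=build_filter(**tabular),
        include_metadata=True,
    )
    # Assumed metadata layout: each match carries its review text.
    reviews = [m["metadata"]["review_text"] for m in result["matches"]]
    return reranker(query=query, documents=reviews, top_n=top_n)
```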
Enhanced Search
Personalized search results using metadata & namespace filters + co.rerank()
Hover over the '?' icon to see the most similar review to your query.
'View' page contains detailed product specs and relevant reviews
Access aspect-based sentiments from reviews
Custom pinecone.query() + cohere.rerank() + openai.ChatCompletion.create() chain.
Generation using both aggregated reviews and product specs.
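The generation step above can be sketched as a prompt builder that folds the product title, specs, and re-ranked reviews into a single context block. The template wording and field layout here are assumptions for illustration, not the project's actual prompt.

```python
# Illustrative RAG prompt assembly for the 'Chat' tab: product specs
# and aggregated reviews are combined into one grounded prompt.

def build_chat_prompt(question, product_title, specs, reviews):
    """Combine product specs and re-ranked reviews into an LLM prompt."""
    spec_lines = "\n".join(f"- {k}: {v}" for k, v in specs.items())
    review_lines = "\n".join(f'- "{r}"' for r in reviews)
    return (
        f"Product: {product_title}\n\n"
        f"Specifications:\n{spec_lines}\n\n"
        f"Customer reviews:\n{review_lines}\n\n"
        f"Answer the shopper's question using only the context above.\n"
        f"Question: {question}"
    )

# The resulting string would be sent as the user message in an
# openai.ChatCompletion.create(...) call (openai-python 0.x API).
```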
- User enters a query and presses 'Search':
- User clicks 'View' on a product:
- User enters a question in the 'Chat' tab:
This is a product of e-commerce sellers optimizing their product titles for lexical search in the presence of variably populated data fields. We exploit this practice by including the title in the LLM prompt.
As demonstrated in the diagrams above, the output of each cosine similarity search on the stored text-embedding-ada-002-embedded dataset (i.e., each call to pinecone.query()) is followed by a re-rank.
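That re-rank step might look like the sketch below, which maps Cohere's relevance-ordered results back onto the original Pinecone matches. The client is passed in so the logic runs offline; the model name reflects Cohere's 2023 release, and the `review_text` metadata field is an assumption.

```python
# Sketch of re-ranking Pinecone matches with a Cohere-style client.
# co.rerank() returns results carrying the index of each input
# document plus a relevance_score, in descending relevance order.

def rerank_matches(co, query, matches, top_n=3, text_field="review_text"):
    """Re-order Pinecone matches by relevance of their review text."""
    docs = [m["metadata"][text_field] for m in matches]
    response = co.rerank(
        model="rerank-english-v2.0",
        query=query,
        documents=docs,
        top_n=top_n,
    )
    # Map each re-ranked result back onto the full match object.
    return [matches[r.index] for r in response.results]
```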
Re-ranking is a widely used step in modern search engines. It is generally run on the results of a lighter-weight lexical search (such as TF-IDF or BM25) to refine them. Re-ranking with BERT variants has achieved state-of-the-art search performance in recent years:
Cohere recently introduced their rerank endpoint:
While pinecone.query() without re-ranking was often sufficient for simple, well-formed queries, certain query formulations (such as specific negation expressions) led to undesirable results. Adding re-ranking also generally produced better matches on longer reviews; however, in some cases this was not desirable (i.e., re-ranking prioritized longer reviews when a more succinct match would be preferred for display on the home page). In other cases (specifically during RAG chaining), the longer reviews led to significantly better output. More testing is needed here.
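One way to probe the length bias noted above would be to discount relevance scores by review length before choosing what to display. This is a hypothetical experiment, not part of the project's pipeline:

```python
import math

def length_adjusted(results, alpha=0.1):
    """Hypothetical: penalize relevance scores by review length so that
    succinct matches surface for home-page display.

    `results` is a list of (relevance_score, review_text) pairs.
    """
    adjusted = [
        (score / (1.0 + alpha * math.log1p(len(text.split()))), text)
        for score, text in results
    ]
    return sorted(adjusted, key=lambda pair: pair[0], reverse=True)
```

Varying `alpha` per surface (home page vs. RAG chaining) would let each consumer trade off brevity against the richer context of long reviews.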
A few examples of using pinecone.query() alone vs. pinecone.query() + cohere.rerank():
In the above, notice that both reviews mentioning BSOD in the re-ranked results go on to say that they resolved it.