GitHub - stanford-futuredata/ColBERT: Stanford ColBERT: state-of-the-art neural search (SIGIR'20, TACL'21, NeurIPS'21, NAACL'22, CIKM'22)
| Pipeline | RobustQA Avg. score | Avg. response time (secs) |
|---|---|---|
| Azure Cognitive Search Retriever + GPT4 + Ada | 72.36 | >1.0s |
| Canopy (Pinecone) | 59.61 | >1.0s |
| Langchain + Pinecone + OpenAI | 61.42 | <0.8s |
| Langchain + Pinecone + Cohere | 69.02 | <0.6s |
| LlamaIndex + Weaviate Vector Store - Hybrid Search | 75.89 | <1.0s |
| RAG Google Cloud VertexAI… | | |
arXiv:2405.02048v1 [cs.IR] 3 May 2024
Nicolay Gerold added
Text embeddings are a critical piece of many pipelines, from search, to RAG, to vector databases and more. Most embedding models are BERT/Transformer-based and typically have short context lengths (e.g., 512 tokens). That’s only about two pages of text, but documents can be very long – books, legal cases, TV screenplays, code repositories, etc. can be tens…
Long-Context Retrieval Models with Monarch Mixer
Nicolay Gerold added
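The excerpt above describes the usual consequence of a 512-token context window: long documents must be split before embedding. A minimal sketch of that workaround, using overlapping windows so no sentence is cut off without context on either side (the function name and parameter defaults are illustrative, not from the source):

```python
def chunk_tokens(tokens, size=512, overlap=64):
    """Split a token sequence into windows of at most `size` tokens,
    with consecutive windows sharing `overlap` tokens of context."""
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break  # last window already covers the tail
    return chunks
```

Each chunk is then embedded separately; the overlap reduces the chance that a relevant passage is split across a chunk boundary with no shared context. Long-context models like the Monarch Mixer work above aim to make this step unnecessary.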
🥤 Cola [NeurIPS 2023]
Large Language Models are Visual Reasoning Coordinators
Liangyu Chen*,†,♥ Bo Li*,♥ Sheng Shen♣ Jingkang Yang♥
Chunyuan Li♠ Kurt Keutzer♣ Trevor Darrell♣ Ziwei Liu✉,♥
♥S-Lab, Nanyang Technological University
♣University of California, Berkeley ♠Microsoft Research, Redmond
*Equal Contribution †Project Lead ✉Corresponding Author
cliangyu • GitHub - cliangyu/Cola: [NeurIPS2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators"
Nicolay Gerold added
If all of these research ambitions were to come to fruition, the resulting system would be a very early version of the system that we envisioned in the introduction. That is, the resulting system would be able to provide domain expert answers to a wide range of information needs in a way that neither modern IR systems, question answering systems, o…
Donald Metzler • Rethinking Search: Making Domain Experts out of Dilettantes
Benjamin Searle added
- Cohere introduced Embed v3, an advanced model for generating document embeddings, boasting top performance on a few benchmarks. It excels in matching document topics to queries and content quality, improving search applications and retrieval-augmented generation (RAG) systems. The new version offers models with 1024 or 384 dimensions, supports o
FOD#27: "Now And Then"
Nicolay Gerold added
- Multiple indices. Split the document corpus into multiple indices and route each query to one of them based on some criterion. The search then runs over a much smaller set of documents rather than the entire dataset. It is not always useful, but it can help for certain datasets. The same approach works with the LLMs themselves.
- Cu
Matt Rickard • Improving RAG: Strategies
Nicolay Gerold added
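The multiple-indices strategy above can be sketched with a toy router. Here the routing criterion is a hard-coded keyword match and the indices are in-memory lists; a real system might route with a classifier or embedding similarity, and the index names and rules below are purely illustrative:

```python
# Toy corpora, one per index (illustrative data, not from the source).
INDICES = {
    "legal": ["contract law basics", "statute of limitations overview"],
    "code": ["python asyncio guide", "rust borrow checker notes"],
    "general": ["company holiday schedule", "office seating chart"],
}

# Hypothetical routing rules: first matching keyword set wins.
ROUTING_KEYWORDS = {
    "legal": ("contract", "statute", "liability"),
    "code": ("python", "rust", "bug", "function"),
}

def route(query):
    """Pick the index whose keywords appear in the query, else 'general'."""
    q = query.lower()
    for index_name, keywords in ROUTING_KEYWORDS.items():
        if any(k in q for k in keywords):
            return index_name
    return "general"

def search(query):
    """Search only the routed index, never the whole corpus."""
    q_terms = set(query.lower().split())
    return [doc for doc in INDICES[route(query)]
            if q_terms & set(doc.split())]
```

The point is that `search` only ever scans one index's documents, so cost scales with the routed subset rather than the full dataset.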
Although there are already many methods available for keyword generation (e.g., Rake, YAKE!, TF-IDF, etc.), I wanted to create a very basic but powerful method for extracting keywords and keyphrases. This is where KeyBERT comes in, which uses BERT embeddings and simple cosine similarity to find the sub-phrases in a document that are the most simila…
MaartenGr • GitHub - MaartenGr/KeyBERT: Minimal keyword extraction with BERT
Nicolay Gerold added
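The idea in the KeyBERT excerpt, ranking candidate phrases by cosine similarity between their embedding and the whole document's embedding, can be sketched without the library itself. Below, a character-trigram count vector stands in for a real BERT embedding so the example is self-contained; KeyBERT's actual API and model choices differ:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy stand-in for a BERT embedding: character-trigram counts."""
    return Counter(text[i:i + 3] for i in range(max(len(text) - 2, 1)))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * b[gram] for gram, count in a.items() if gram in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def extract_keywords(doc, top_n=3):
    """Rank candidate words by similarity to the whole-document vector."""
    doc_vec = embed(doc.lower())
    candidates = set(re.findall(r"[a-z]{3,}", doc.lower()))
    ranked = sorted(candidates,
                    key=lambda w: cosine(embed(w), doc_vec),
                    reverse=True)
    return ranked[:top_n]
```

KeyBERT applies the same scoring to n-gram candidate phrases rather than single words, and scores them with genuine transformer embeddings, which is what makes the ranking semantically meaningful.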
We currently don't understand how to make sense of the neural activity within language models. Today, we are sharing improved methods for finding a large number of "features"—patterns of activity that we hope are human interpretable.
Hagen Peters added