Generative AI with LangChain: Build large language model (LLM) apps with Python, ChatGPT, and other LLMs
Ben Auffarth
A knowledge graph is a structured knowledge representation model that organizes information in the form of entities, attributes, and relationships.
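To make this structure concrete, a knowledge graph can be represented as a set of (subject, relation, object) triples. The following minimal sketch in plain Python is purely illustrative (the entities and relations are made up) and is not code from the book:

```python
# Minimal illustration: a knowledge graph as (subject, relation, object) triples.
triples = [
    ("LangChain", "is_a", "framework"),
    ("LangChain", "written_in", "Python"),
    ("ConversationKGMemory", "part_of", "LangChain"),
]

# Entities are the nodes; relations label the edges between them.
entities = {s for s, _, o in triples} | {o for _, _, o in triples}

# Query: everything we know about the entity "LangChain".
facts = [(r, o) for s, r, o in triples if s == "LangChain"]
print(facts)  # [('is_a', 'framework'), ('written_in', 'Python')]
```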
We use DocArray as our in-memory vector storage. DocArray provides features such as advanced indexing, comprehensive serialization protocols, a unified Pythonic interface, and more. It also offers efficient and intuitive handling of multimodal data for tasks such as natural language processing, computer vision, and audio processing.
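A minimal sketch of what this might look like with LangChain's DocArray-backed in-memory store. It assumes the docarray and langchain-community packages are installed and an OpenAI API key is configured for the embeddings; the example texts are invented, and import paths can vary between LangChain versions:

```python
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import DocArrayInMemorySearch

# Build an in-memory vector store from a few example texts.
store = DocArrayInMemorySearch.from_texts(
    ["LangChain integrates with many vector stores.",
     "DocArray keeps documents and embeddings in memory."],
    OpenAIEmbeddings(),
)

# Retrieve the single closest document to the query.
print(store.similarity_search("Which library holds documents in memory?", k=1))
```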
ConversationSummaryMemory is a type of memory in LangChain that generates a summary of the conversation as it progresses. Instead of storing all messages verbatim, it condenses the information, providing a summarized version of the conversation.
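A brief sketch of how this memory might be used, assuming the langchain-openai package and an OpenAI API key are available; the conversation content is invented:

```python
from langchain.memory import ConversationSummaryMemory
from langchain_openai import ChatOpenAI

# The memory uses an LLM to keep a running summary instead of raw messages.
memory = ConversationSummaryMemory(llm=ChatOpenAI(temperature=0))
memory.save_context({"input": "Hi, I'm planning a trip to Kyoto in April."},
                    {"output": "April is cherry blossom season; book hotels early."})

# Returns a condensed summary of the exchange rather than the verbatim messages.
print(memory.load_memory_variables({}))
```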
Each retriever has its own strengths and weaknesses, and the choice of retriever depends on the specific use case and requirements. For example, an Arxiv retriever retrieves scientific articles from the arXiv.org archive.
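A hedged sketch of the Arxiv retriever in LangChain (it requires the arxiv package; the query string and metadata key shown are assumptions for illustration):

```python
from langchain_community.retrievers import ArxivRetriever

# Fetch up to two matching papers from arXiv for the query.
retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents("tree of thoughts prompting")

for doc in docs:
    print(doc.metadata.get("Title"))
```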
Document loaders have a load() method that loads data from the configured source and returns it as documents. They may also have a lazy_load() method that loads data lazily, yielding documents only as they are needed rather than reading everything into memory at once.
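A minimal sketch of both methods with a simple text loader; the file name is hypothetical:

```python
from langchain_community.document_loaders import TextLoader

loader = TextLoader("notes.txt")   # hypothetical local file

docs = loader.load()               # eagerly returns a list of Document objects
for doc in loader.lazy_load():     # yields documents one at a time instead
    print(doc.metadata, len(doc.page_content))
```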
LangChain provides functionality for using knowledge graphs in retrieval; in addition, it provides memory components that automatically build a knowledge graph from our conversation messages.
In LangChain, we can also extract information from the conversation as facts and store them by using a knowledge graph as the memory.
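A minimal sketch using LangChain's knowledge graph memory. It assumes the langchain-openai package and an API key; the conversation content is invented, and the import path may vary across versions:

```python
from langchain.memory import ConversationKGMemory
from langchain_openai import ChatOpenAI

# The memory extracts (subject, relation, object) facts from each exchange.
memory = ConversationKGMemory(llm=ChatOpenAI(temperature=0))
memory.save_context({"input": "Sam is my colleague and she works on retrieval."},
                    {"output": "Good to know, I'll keep that in mind."})

# Facts relevant to the new input are looked up in the graph.
print(memory.load_memory_variables({"input": "Who is Sam?"}))
```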
MMR mitigates retrieval redundancy and reduces the bias inherent in the document collection. We've set the k parameter to 2, which means we will get 2 documents back from retrieval.
Maximum Marginal Relevance (MMR): We can apply diversity-based re-ranking of documents during retrieval to get results that cover different perspectives or points of view from the documents retrieved so far.
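A hedged sketch of MMR retrieval with k set to 2. FAISS is used here purely for illustration because it supports MMR search; this is not the book's exact setup and assumes faiss-cpu, langchain-community, and an OpenAI API key, with invented example texts:

```python
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

store = FAISS.from_texts(
    ["LangChain retrievers fetch documents for a query.",
     "Retrievers can wrap vector stores.",
     "MMR re-ranks results for diversity."],
    OpenAIEmbeddings(),
)

# search_type="mmr" turns on diversity-based re-ranking;
# k=2 means two documents come back, chosen from fetch_k candidates.
retriever = store.as_retriever(search_type="mmr",
                               search_kwargs={"k": 2, "fetch_k": 3})
print(retriever.get_relevant_documents("How do retrievers work?"))
```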
Zep is one such example: it provides a persistent backend to store, summarize, and search chat histories using vector embeddings and automatic token counting.
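A minimal sketch of wiring Zep in as a chat message history. It assumes the zep-python package and a running Zep server; the URL, session ID, and messages are hypothetical:

```python
from langchain_community.chat_message_histories import ZepChatMessageHistory

# Hypothetical self-hosted Zep server; the session_id identifies one conversation.
history = ZepChatMessageHistory(session_id="user-123",
                                url="http://localhost:8000")

history.add_user_message("Remind me what we said about retrievers.")
history.add_ai_message("We compared MMR with plain similarity search.")
# Zep persists, summarizes, and embeds these messages server-side for later search.
```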