Generative AI with LangChain: Build large language model (LLM) apps with Python, ChatGPT, and other LLMs
by Ben Auffarth
updated 10h ago
ConversationSummaryMemory is a type of memory in LangChain that generates a summary of the conversation as it progresses. Instead of storing all messages verbatim, it condenses the information, providing a summarized version of the conversation.
Johann Van Tonder added 6mo ago
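To make the pattern concrete, here is a minimal pure-Python sketch of the idea behind ConversationSummaryMemory, not LangChain's actual implementation: the `summarize` function below is a stand-in for the LLM call that LangChain would make to condense the running summary with the new messages.

```python
def summarize(previous_summary, new_messages):
    # Stand-in for an LLM call: LangChain would prompt the model to
    # merge the previous summary with the new turns into fresh prose.
    joined = "; ".join(f"{role}: {text}" for role, text in new_messages)
    return (previous_summary + " | " + joined).strip(" |")

class SummaryMemory:
    """Keeps a single condensed summary instead of the full transcript."""

    def __init__(self):
        self.summary = ""

    def add_messages(self, messages):
        # Fold the new turns into the running summary.
        self.summary = summarize(self.summary, messages)

    def load(self):
        # What gets injected back into the prompt on the next turn.
        return self.summary
```

The point of the design is that the prompt grows with the summary, not with the conversation length, which keeps token usage roughly constant over long dialogues.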
Milvus is immensely popular; however, other libraries such as Qdrant, Weaviate, and Chroma have been catching up.
Vector databases are widely used in NLP tasks such as sentiment analysis, text classification, and semantic search. Representing text as vector embeddings makes it easier to compare and analyze textual data.
LangChainHub is a central repository for sharing artifacts like prompts, chains, and agents used in LangChain. Inspired by the Hugging Face Hub, it aims to be a one-stop resource for discovering high-quality building blocks to compose complex LLM apps.
In addition to functionality for retrieving from knowledge graphs, LangChain provides memory components that automatically build a knowledge graph from our conversation messages.
Embeddings can be created using different methods. For texts, one simple method is the bag-of-words approach, where each word is represented by a count of how many times it appears in a text.
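A minimal bag-of-words sketch using only the standard library: given a fixed vocabulary (chosen here for illustration), each text becomes a vector of word counts.

```python
from collections import Counter

def bag_of_words(text, vocabulary):
    # Count each word, then read counts out in vocabulary order,
    # so every text maps to a vector of the same fixed length.
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

vocab = ["the", "cat", "sat", "mat"]
vec = bag_of_words("The cat sat on the mat", vocab)
# vec == [2, 1, 1, 1]: "the" appears twice, the rest once;
# words outside the vocabulary ("on") are ignored.
```

Real embedding models produce dense vectors that capture meaning rather than raw counts, but the fixed-length-vector idea is the same.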
By calculating distances between embeddings, we can perform tasks like search and similarity scoring, or classify objects, for example by topic or category.
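A small sketch of similarity-based search using cosine similarity; the two-dimensional vectors and document names are toy placeholders for real embeddings.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Rank candidate documents by similarity to a query embedding.
query = [1.0, 0.0]
docs = {"doc_a": [0.9, 0.1], "doc_b": [0.1, 0.9]}
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
# best == "doc_a", the vector pointing closest to the query
```

Vector databases do essentially this at scale, with approximate-nearest-neighbor indexes replacing the brute-force `max`.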
LangChain Decorators provides a more Pythonic interface for defining and executing prompts compared to base LangChain, making it easier to leverage the power of LLMs.
Once we start making many calls, especially in the map step, token usage increases, and if we use a cloud provider, so do costs.
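A back-of-the-envelope cost estimate for a map step makes this concrete. All numbers below (document counts, token sizes, per-1k-token prices) are illustrative assumptions, not real prices; check your provider's current pricing.

```python
def estimate_map_cost(num_docs, tokens_per_doc, prompt_overhead,
                      summary_tokens, price_per_1k_input, price_per_1k_output):
    # Each map call sends one document plus the prompt template,
    # and receives one per-document summary back.
    input_tokens = num_docs * (tokens_per_doc + prompt_overhead)
    output_tokens = num_docs * summary_tokens
    return (input_tokens / 1000) * price_per_1k_input \
        + (output_tokens / 1000) * price_per_1k_output

# Hypothetical scenario: 200 documents of ~1,500 tokens each.
cost = estimate_map_cost(num_docs=200, tokens_per_doc=1500,
                         prompt_overhead=100, summary_tokens=150,
                         price_per_1k_input=0.0005,
                         price_per_1k_output=0.0015)
# 320,000 input tokens + 30,000 output tokens -> $0.205 at these rates
```

Because cost scales linearly with the number of map calls, caching results or filtering documents before the map step cuts the bill directly.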