Generative AI with LangChain: Build large language model (LLM) apps with Python, ChatGPT, and other LLMs

Vector databases can be used to store and serve machine learning models and their corresponding embeddings. The primary application is similarity search (also called semantic search), where the goal is to find the stored items whose embeddings are most similar to a query embedding.
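As a concrete illustration, here is a minimal sketch of semantic search over a vector store via LangChain; Chroma and OpenAIEmbeddings are illustrative choices (any supported store and embedding model would do), and it assumes the classic LangChain API, a local `chromadb` install, and an `OPENAI_API_KEY` in the environment:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Embed a few texts and index them in a local Chroma collection.
db = Chroma.from_texts(
    [
        "LangChain chains LLM calls together.",
        "Faiss performs fast nearest-neighbor search.",
        "Paris is the capital of France.",
    ],
    embedding=OpenAIEmbeddings(),
)

# Retrieve the stored texts most semantically similar to the query.
for doc in db.similarity_search("How do I search embeddings quickly?", k=2):
    print(doc.page_content)
```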
ConversationSummaryMemory is a type of memory in LangChain that generates a summary of the conversation as it progresses. Instead of storing all messages verbatim, it condenses the information, providing a summarized version of the conversation.
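To make this concrete, here is a minimal sketch of ConversationSummaryMemory wired into a ConversationChain, assuming the classic LangChain API and an OpenAI API key:

```python
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryMemory

llm = ChatOpenAI(temperature=0)

# The memory uses the LLM itself to condense the dialogue into a running
# summary instead of storing every message verbatim.
memory = ConversationSummaryMemory(llm=llm)
conversation = ConversationChain(llm=llm, memory=memory)

conversation.predict(input="Hi, I'm building a retrieval app with LangChain.")
conversation.predict(input="Which memory type keeps token usage low in long chats?")

# Inspect the condensed history that will be injected into the next prompt.
print(memory.buffer)
```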
The true power of LLMs lies not in using them in isolation but in combining them with other sources of knowledge and computation. The LangChain framework aims to enable precisely this kind of integration, facilitating the development of context-aware, reasoning-based applications.
An embedding is a numerical representation of content in a way that machines can process and understand.
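For example, a sentence can be turned into a fixed-length vector of floats; this minimal sketch uses LangChain's OpenAIEmbeddings (an illustrative choice, requiring an OpenAI API key):

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# The text becomes a fixed-length list of floats capturing its meaning.
vector = embeddings.embed_query("LangChain connects LLMs to external data.")
print(len(vector))  # 1536 dimensions for the default OpenAI embedding model
```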
Vector libraries, like Facebook (Meta) Faiss or Spotify Annoy, provide functionality for working with vector data. In the context of vector search, a vector library is specifically designed to store and perform similarity search on vector embeddings.
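Here is a minimal sketch of similarity search with Faiss itself (the vectors are random and purely illustrative):

```python
import faiss
import numpy as np

dim = 384                          # dimensionality of the embeddings
index = faiss.IndexFlatL2(dim)     # exact L2-distance index

# Store 1,000 (random, illustrative) embedding vectors in the index.
vectors = np.random.rand(1000, dim).astype("float32")
index.add(vectors)

# Find the five stored vectors closest to a query vector.
query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)
print(ids[0])
```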
"Stochastic parrots" refers to LLMs that can produce convincing language but lack any true comprehension of the meaning behind the words.
In the most generic terms, a chain is a sequence of calls to components, which can include other chains.
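A minimal sketch of that idea, assuming the classic LLMChain / SimpleSequentialChain API and an OpenAI API key (the prompts are illustrative):

```python
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

llm = ChatOpenAI(temperature=0)

# Two single-input, single-output chains...
outline = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write a one-line outline about {topic}."),
)
expand = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Expand this outline into a paragraph: {outline}"),
)

# ...composed into a chain of chains: the output of `outline` feeds `expand`.
pipeline = SimpleSequentialChain(chains=[outline, expand])
print(pipeline.run("vector databases"))
```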
Once we start making a lot of calls, especially in the map step, and we're using a cloud provider, we'll see token counts, and therefore costs, increase.
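One way to keep an eye on this is LangChain's OpenAI callback, which tallies tokens and estimated cost across calls; a minimal sketch, assuming the classic API (the chunks are illustrative stand-ins for the map step's inputs):

```python
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)

with get_openai_callback() as cb:
    # Each "map" call over a document chunk consumes (and costs) tokens.
    for chunk in ["first document chunk...", "second document chunk..."]:
        llm.predict(f"Summarize: {chunk}")
    print(f"Total tokens: {cb.total_tokens}, estimated cost: ${cb.total_cost:.4f}")
```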
LangFlow and Flowise are UIs for chaining LangChain components into an executable flowchart: you drag components from a sidebar onto the canvas and connect them to build your pipeline.