Generative AI with LangChain: Build large language model (LLM) apps with Python, ChatGPT, and other LLMs
Ben Auffarth (amazon.com)
Streamlit makes it simple to wrap our agent in an interactive web application.
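A minimal sketch of such a wrapper, assuming an OpenAI API key is configured and the DuckDuckGo search tool is installed; the tool choice and agent type here are illustrative rather than taken from the book:

```python
import streamlit as st
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import StreamlitCallbackHandler
from langchain.llms import OpenAI

# Illustrative setup: a simple ReAct-style agent with a web search tool.
llm = OpenAI(temperature=0, streaming=True)
tools = load_tools(["ddg-search"])
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

if prompt := st.chat_input("Ask me anything"):
    st.chat_message("user").write(prompt)
    with st.chat_message("assistant"):
        # Stream the agent's intermediate steps into the Streamlit UI.
        st_callback = StreamlitCallbackHandler(st.container())
        response = agent.run(prompt, callbacks=[st_callback])
        st.write(response)
```

Running `streamlit run app.py` then serves the agent as a chat-style web page.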
While chains define reusable logic by sequencing components, agents leverage chains to take goal-driven actions. Agents combine and orchestrate chains. The agent observes the environment, decides which chain to execute based on that observation, takes the chain’s specified action, and repeats.
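The observe-decide-act loop can be made concrete with a toy, hand-rolled example; the two chains and the routing rule below are hypothetical and only illustrate the idea of an agent choosing which chain to run:

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)

# Two reusable chains the "agent" can pick between.
summarize_chain = LLMChain(
    llm=llm, prompt=PromptTemplate.from_template("Summarize: {input}")
)
translate_chain = LLMChain(
    llm=llm, prompt=PromptTemplate.from_template("Translate to French: {input}")
)

def tiny_agent(observation: str) -> str:
    # Observe the input, decide which chain to execute, act, and return the result.
    chain = translate_chain if "translate" in observation.lower() else summarize_chain
    return chain.run(input=observation)

print(tiny_agent("Please translate: good morning"))
```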
Vector libraries, like Facebook (Meta) Faiss or Spotify Annoy, provide functionality for working with vector data. In the context of vector search, a vector library is specifically designed to store and perform similarity search on vector embeddings.
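A small, self-contained Faiss sketch shows the typical store-then-search workflow; the random vectors below are stand-ins for real embeddings:

```python
import numpy as np
import faiss  # Facebook (Meta) Faiss vector library

# Index 1,000 random 64-dimensional vectors as placeholder embeddings.
dim = 64
vectors = np.random.random((1000, dim)).astype("float32")

index = faiss.IndexFlatL2(dim)  # exact L2 (Euclidean) similarity search
index.add(vectors)

# Find the 5 stored vectors closest to a query embedding.
query = np.random.random((1, dim)).astype("float32")
distances, indices = index.search(query, 5)
print(indices, distances)
```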
Plan-and-execute agents first create a complete plan and then gather evidence to execute it. The Planner LLM produces a list of plans (P).
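LangChain ships an experimental implementation of this pattern; a hedged sketch, assuming langchain_experimental and the named tools are installed and an OpenAI key is configured:

```python
from langchain.agents import load_tools
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

llm = ChatOpenAI(temperature=0)
tools = load_tools(["ddg-search", "llm-math"], llm=llm)

planner = load_chat_planner(llm)            # the planner LLM produces the list of steps (P)
executor = load_agent_executor(llm, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor)

agent.run("How many letters are in the name of the current UK prime minister?")
```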
Stochastic parrots refers to LLMs that can produce convincing language but lack any true comprehension of the meaning behind words.
LangChain is an open-source Python framework for building LLM-powered applications. It provides developers with modular, easy-to-use components for connecting language models with external data sources and services.
Fine-tuning involves modifying a pre-trained language model by training it on a specific task using supervised learning.
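As a hedged illustration (not the book's own code), a supervised fine-tuning run with Hugging Face transformers might look like this; the model, dataset slice, and hyperparameters are placeholders:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Placeholder task: sentiment classification on a small slice of IMDB.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb", split="train[:1000]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-imdb", num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()  # supervised training on the labeled task data
```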
We can also do arithmetic between these embeddings; for example, we can calculate distances between them:
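A sketch using OpenAIEmbeddings and Euclidean distance; the example words are placeholders, since the original snippet is not shown here:

```python
import numpy as np
from langchain.embeddings import OpenAIEmbeddings

embeddings_model = OpenAIEmbeddings()
words = ["cat", "dog", "computer"]
vectors = np.array(embeddings_model.embed_documents(words))

# Euclidean distance between each pair of word embeddings.
for i, w1 in enumerate(words):
    for j, w2 in enumerate(words):
        if i < j:
            dist = np.linalg.norm(vectors[i] - vectors[j])
            print(f"{w1} <-> {w2}: {dist:.3f}")
```

Semantically related words (such as "cat" and "dog") should end up closer to each other than to unrelated ones.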
ConversationSummaryMemory is a type of memory in LangChain that generates a summary of the conversation as it progresses. Instead of storing all messages verbatim, it condenses the information, providing a summarized version of the conversation.
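A short sketch of how it might be plugged into a ConversationChain; the prompts are illustrative and an OpenAI key is assumed:

```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationSummaryMemory

llm = OpenAI(temperature=0)
conversation = ConversationChain(
    llm=llm,
    # The memory uses an LLM to condense the dialogue into a running summary
    # instead of storing every message verbatim.
    memory=ConversationSummaryMemory(llm=llm),
    verbose=True,
)

conversation.predict(input="Hi, I'm planning a trip to Japan in April.")
conversation.predict(input="What should I pack?")  # the prompt now carries the summary, not the full history
```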