LLMs Are Not All You Need | Pinecone
Darren LI added
LLMs combine what they “learned” in training with any new context you give them. There are many ways to give the AI additional context; the most common are the prompt you provide (“You should act like a marketer and help me respond to a request for proposal”) and any documents you upload to the AI.
Ethan Mollick • Which AI should I use? Superpowers and the State of Play
Johann Van Tonder added
AutoGen's design offers multiple advantages: a) it gracefully navigates the strong but imperfect generation and reasoning abilities of these LLMs; b) it leverages human understanding and intelligence, while providing valuable automation through conversations between agents; c) it simplifies and unifies the implementation of complex LLM workflows as... See more
r/singularity - Reddit
Nicolay Gerold added
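For reference, the two-agent pattern AutoGen describes looks roughly like this. A minimal sketch assuming the `pyautogen` package and an OpenAI key in the environment; the model name and working directory are placeholders, and exact signatures may vary across versions:

```python
import os

from autogen import AssistantAgent, UserProxyAgent

# Placeholder config: model name and key source are assumptions.
llm_config = {"config_list": [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]}

# The assistant generates plans and code.
assistant = AssistantAgent("assistant", llm_config=llm_config)

# The user proxy executes the code the assistant writes, closing the loop
# between generation and feedback without a human in the middle.
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

user_proxy.initiate_chat(
    assistant,
    message="Write and run Python that prints the 10th Fibonacci number.",
)
```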
Chaining LLM Agents instead of LLM calls. Seems like a pretty heavy prompt engineering effort.
They are pushing for agents that are specialized in certain tasks through RAG / finetuning, where CAMEL and other frameworks failed.
One interesting area for exploration might be finetuning LLMs for collaboration before finetuning them for tasks.
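A minimal sketch of what chaining agents looks like: each “agent” is just an LLM call with a specialized system prompt, and each output feeds the next call. This assumes the `openai` v1 client and an `OPENAI_API_KEY` in the environment; the three roles are made up for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def call_llm(system_prompt: str, user_message: str) -> str:
    """One 'agent' = one chat completion with a specialized system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

def run_pipeline(task: str) -> str:
    # The output of each specialized agent becomes the input of the next.
    notes = call_llm("You are a research agent. Gather relevant facts for the task.", task)
    draft = call_llm("You are a writing agent. Draft an answer from these notes.", notes)
    final = call_llm("You are a critic agent. Point out errors and return a fixed version.", draft)
    return final

print(run_pipeline("Summarize the trade-offs of RAG vs. finetuning."))
```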
On their own, large language models (LLMs) are, to a significant extent, Babel-like. Their latent space can output every possible combination of words. They are capable of creating genius-level sentences—and also false gibberish. And at this point in the lifecycle of this technology, the quality of the results you’re going to get is far higher when
... See more
Dan Shipper • GPT-4: A Copilot for the Mind
sari added
memary: Open-Source Longterm Memory for Autonomous Agents
memary demo
Why use memary?
Agents use LLMs that are currently constrained to finite context windows. memary overcomes this limitation by allowing your agents to store a large corpus of information in knowledge graphs, infer user knowledge through our memory modules, and only retrieve relevan... See more
GitHub - kingjulio8238/memary: Longterm Memory for Autonomous Agents.
Nicolay Gerold added
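The core idea, separate from memary's actual API: store facts as edges in a knowledge graph and retrieve only the neighborhood relevant to a query, instead of the whole corpus. A toy sketch using `networkx` (all helper names here are illustrative, not memary's interface):

```python
import networkx as nx

graph = nx.DiGraph()

def store(subject: str, relation: str, obj: str) -> None:
    """Persist a fact as a (subject) -[relation]-> (object) edge."""
    graph.add_edge(subject, obj, relation=relation)

def retrieve(entity: str, hops: int = 2) -> list[tuple[str, str, str]]:
    """Pull back only the facts within `hops` of the queried entity,
    rather than stuffing the whole corpus into the context window."""
    nearby = nx.ego_graph(graph, entity, radius=hops, undirected=True)
    return [(u, d["relation"], v) for u, v, d in nearby.edges(data=True)]

store("user", "works_at", "Acme")
store("Acme", "builds", "robot vacuums")
store("Mars", "is_a", "planet")  # unrelated fact: filtered out at query time

print(retrieve("user"))  # only the two facts connected to "user"
```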
Large Language Model State Machine (llmstatemachine)
Introduction
llmstatemachine is a library for creating agents with GPT-based language models and state machine logic.
- Chat History as Memory: Leverages large context window models, making chat history the primary source of agent memory.
- Custom Python Functions with JSON Generation: Allows the c...
GitHub - robocorp/llmstatemachine: A Python library for building GPT-powered agents with state machine logic and chat history memory.
Nicolay Gerold added
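A toy sketch of the combination the README describes, independent of llmstatemachine's real API: a state machine gates which events the agent may act on, and the running chat history is its only memory (all names below are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class StateMachineAgent:
    """States constrain what the agent may do next; the chat history
    (not an external store) is the agent's only memory."""
    state: str = "SEARCHING"
    history: list[str] = field(default_factory=list)

    # Which events are legal in each state, and where they lead.
    transitions = {
        "SEARCHING": {"found_answer": "ANSWERING"},
        "ANSWERING": {"answered": "DONE"},
    }

    def step(self, event: str, message: str) -> None:
        self.history.append(f"[{self.state}] {message}")  # chat history as memory
        next_state = self.transitions[self.state].get(event)
        if next_state is None:
            raise ValueError(f"event {event!r} not allowed in state {self.state}")
        self.state = next_state

agent = StateMachineAgent()
agent.step("found_answer", "search results: ...")
agent.step("answered", "final answer: ...")
print(agent.state, agent.history)  # DONE, with the full transcript retained
```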
LLM agents rely on their Long-Term Memory (LTM) to store and retrieve crucial memories. Conversations, thoughts, plans, actions, observations, skills, and behaviors are all stored within a vector database. The LTM enables pre-processing and post-processing of memories, ensuring optimal retrieval. By considering factors such as context, recency, imp... See more
Introducing our work on general-purpose LLM Agents | GoodAI
Nicolay Gerold added
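The weighting described here reads like the retrieval scoring used in generative-agent systems: rank each stored memory by semantic relevance, recency, and importance. A minimal sketch of such a score; the weights, decay constant, and plain-Python cosine are assumptions, not GoodAI's actual values:

```python
import math
import time

# Assumed weights; the post does not specify actual values.
W_RELEVANCE, W_RECENCY, W_IMPORTANCE = 1.0, 1.0, 1.0

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def score(memory: dict, query_embedding: list[float], now: float) -> float:
    """Rank a memory by relevance to the query, exponentially decayed
    recency, and its stored importance."""
    relevance = cosine(memory["embedding"], query_embedding)
    recency = math.exp(-(now - memory["timestamp"]) / 3600)  # assumed 1-hour decay scale
    return W_RELEVANCE * relevance + W_RECENCY * recency + W_IMPORTANCE * memory["importance"]

memories = [
    {"embedding": [0.9, 0.1], "timestamp": time.time() - 60, "importance": 0.2},
    {"embedding": [0.1, 0.9], "timestamp": time.time() - 7200, "importance": 0.9},
]
best = max(memories, key=lambda m: score(m, [1.0, 0.0], time.time()))
```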
Autonomous Agents & Agent Simulations
blog.langchain.dev
Darren LI added