Agents
memary: Open-Source Longterm Memory for Autonomous Agents
memary demo
Why use memary?
Agents use LLMs that are currently constrained to finite context windows. memary overcomes this limitation by allowing your agents to store a large corpus of information in knowledge graphs, infer user knowledge through our memory modules, and only retrieve...
GitHub - kingjulio8238/memary: Longterm Memory for Autonomous Agents.
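memary's own classes aren't shown in this excerpt, so the following is only a rough, framework-agnostic sketch of the idea the blurb describes: keep memories as a knowledge graph of triples and retrieve just the facts tied to entities mentioned in the current query. Names like GraphMemory are hypothetical and not memary's API.

```python
from collections import defaultdict

class GraphMemory:
    """Toy knowledge-graph memory: facts are (subject, relation, object) triples."""

    def __init__(self):
        self.by_entity = defaultdict(list)  # entity name -> triples touching it

    def add(self, subject: str, relation: str, obj: str) -> None:
        triple = (subject, relation, obj)
        self.by_entity[subject.lower()].append(triple)
        self.by_entity[obj.lower()].append(triple)

    def retrieve(self, query: str):
        # Naive entity matching: return only triples whose entities appear in the query,
        # so the agent's prompt carries the relevant facts instead of the whole corpus.
        words = set(query.lower().replace("?", "").split())
        hits = [t for entity, triples in self.by_entity.items() if entity in words for t in triples]
        return list(dict.fromkeys(hits))  # dedupe while preserving order

memory = GraphMemory()
memory.add("Alice", "works_at", "Acme")
memory.add("Acme", "located_in", "Berlin")
print(memory.retrieve("Where does Alice work?"))  # -> [('Alice', 'works_at', 'Acme')]
```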
Start your project with conversation history, support for any LLM, agentic workflows, integrations & more.
Why Julep?
We've built a lot of AI apps and...
GitHub - julep-ai/julep: Open-source alternative to Assistant's API with a managed backend for memory, RAG, tools and tasks. ~Supabase for building AI agents.
AgentTuning: Enabling Generalized Agent Abilities For LLMs
🤗 Model (AgentLM-70B) • 🤗 Dataset (AgentInstruct) • 📃 Paper • 🌐 Project Page
AgentTuning represents the very first attempt to instruction-tune LLMs using interaction trajectories across multiple agent tasks. Evaluation results indicate that AgentTuning enables the agent...
THUDM • GitHub - THUDM/AgentTuning: AgentTuning: Enabling Generalized Agent Abilities for LLMs
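The released AgentLM checkpoints are ordinary causal language models on the Hugging Face Hub, so they load with transformers in the usual way. A minimal sketch, assuming the Hub id THUDM/agentlm-7b (the 13B and 70B variants follow the same pattern) and that, since AgentLM is tuned from Llama-2-chat, the Llama-2 [INST] prompt format is a reasonable default:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/agentlm-7b"  # assumed Hub id; 13B/70B variants are linked from the repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# AgentLM is tuned from Llama-2-chat, so a Llama-2-style [INST] prompt is assumed here.
prompt = "[INST] You are a web agent. Outline the steps to find today's top story on Hacker News. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```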
Adala is an Autonomous DAta (Labeling) Agent framework.
Adala offers a robust framework for implementing agents specialized in data processing, with an emphasis on diverse data labeling tasks. These agents are autonomous, meaning they can independently acquire one or more skills through iterative learning. This learning process is influenced by...
HumanSignal • GitHub - HumanSignal/Adala: Adala: Autonomous DAta (Labeling) Agent framework
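The excerpt cuts off before it says what drives the learning, but the underlying loop (predict on labeled examples, compare against ground truth, let the model rewrite its own labeling instructions, repeat) can be sketched without Adala's classes. This is an illustration of the pattern only, not Adala's API; call_llm is a hypothetical stand-in for whatever runtime does the prompting.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (OpenAI, Ollama, etc.); hypothetical."""
    raise NotImplementedError

def iterative_labeling(examples, instructions, iterations=3):
    """examples: list of (text, ground_truth_label) pairs; instructions: the current 'skill'."""
    for _ in range(iterations):
        errors = []
        for text, truth in examples:
            predicted = call_llm(f"{instructions}\nText: {text}\nLabel:").strip()
            if predicted != truth:
                errors.append((text, predicted, truth))
        if not errors:
            break  # skill acquired: every ground-truth example is labeled correctly
        # Let the model refine its own instructions from the mistakes it made.
        feedback = "\n".join(f"Text: {t}\nPredicted: {p}\nCorrect: {g}" for t, p, g in errors)
        instructions = call_llm(
            "Improve these labeling instructions so the mistakes below are avoided.\n"
            f"Instructions: {instructions}\nMistakes:\n{feedback}\nImproved instructions:"
        )
    return instructions
```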
📖 Introduction
XAgent is an open-source experimental Large Language Model (LLM) driven autonomous agent that can automatically solve various tasks. It is designed to be a general-purpose agent that can be applied to a wide range of tasks. XAgent is still in its early stages, and we are working hard to improve it.
🏆 Our goal is to create a...
OpenBMB • GitHub - OpenBMB/XAgent: An Autonomous LLM Agent for Complex Task Solving
🔎 GPT Researcher
GPT Researcher is an autonomous agent designed for comprehensive online research on a variety of tasks.
The agent can produce detailed, factual and unbiased research reports, with customization options for focusing on relevant resources, outlines, and lessons. Inspired by the recent Plan-and-Solve and RAG (Retrieval Augmented...
assafelovic • GitHub - assafelovic/gpt-researcher: GPT based autonomous agent that does online comprehensive research on any given topic
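GPT Researcher also ships as a pip-installable package; the sketch below follows its documented async usage as best I recall it (it expects LLM and search API keys such as OPENAI_API_KEY and TAVILY_API_KEY in the environment), so treat the exact names as assumptions rather than a guaranteed interface:

```python
import asyncio
from gpt_researcher import GPTResearcher  # pip install gpt-researcher

async def main() -> None:
    researcher = GPTResearcher(
        query="What are the main open-source frameworks for LLM agents?",
        report_type="research_report",
    )
    await researcher.conduct_research()  # plan sub-queries, gather and curate sources
    report = await researcher.write_report()
    print(report)

asyncio.run(main())
```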
The LLM doesn’t call the tool directly (yet), but it does pass back to the application what functions should be called — and with which parameters. And, now, OpenAI lets multiple function calls be “invoked” at once.
But, this idea is not just about GPT. The open source world is moving towards this model as well.
This Is The Future…It's Just Not Here...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
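To make that concrete, here is a short sketch against the OpenAI Python client: the application declares tool schemas, and the model replies not with results but with the calls it wants made, possibly several in one response. The model name and the get_weather tool are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool the application implements
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What's the weather in Paris and in Tokyo?"}],
    tools=tools,
)

# The model doesn't execute anything; it hands back the calls it wants made,
# potentially more than one in a single response (parallel tool calls).
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```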
In the open-source community, there are huge numbers of people leveraging AutoGen in creative ways, and solving surprising problems. One pattern that we see as fundamental is the "generator+critic" pattern, where one agent generates content (writing, code, etc.) and another agent critiques it (finds bugs, etc.). They can iterate until the solution...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
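One simple way to wire up the generator+critic pattern is AutoGen's two-agent chat. The sketch below assumes the pyautogen 0.2-era API (AssistantAgent, initiate_chat); exact parameters may differ between versions, and the llm_config values are placeholders.

```python
from autogen import AssistantAgent  # pip install pyautogen (0.2-era API assumed)

llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": "sk-..."}]}  # placeholder config

# Generator: drafts code and revises it whenever the critic objects.
writer = AssistantAgent(
    name="writer",
    system_message="You write Python functions. Revise your code whenever the critic finds problems.",
    llm_config=llm_config,
    # The writer checks incoming messages, so the chat ends once the critic replies TERMINATE.
    is_termination_msg=lambda msg: "TERMINATE" in (msg.get("content") or ""),
)

# Critic: finds bugs and edge cases, approves by saying TERMINATE.
critic = AssistantAgent(
    name="critic",
    system_message="You review code for bugs and edge cases. Reply TERMINATE when the code looks correct.",
    llm_config=llm_config,
)

# The critic hands the task to the writer; the two then alternate turns,
# generating and critiquing, until the critic approves or max_turns is reached.
critic.initiate_chat(
    writer,
    message="Write a function that parses ISO-8601 dates and handles missing timezones.",
    max_turns=6,
)
```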
Introduction
VT.ai is a multi-modal AI Chatbot Assistant, offering a chat interface to interact with Large Language Models (LLMs) from various providers, either via remote APIs or running locally with Ollama.
The application supports multi-modal conversations, seamlessly integrating text, images, and vision processing with LLMs.
[Beta] Multi-modal AI...
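VT.ai's internals aren't shown here, but the "running locally with Ollama" path it mentions reduces to chatting with a local Ollama server. A minimal sketch using the ollama Python client, assuming Ollama is running and a llama3 model has been pulled:

```python
import ollama  # pip install ollama; assumes a local server started with `ollama serve`

# Assumes the model was fetched beforehand with `ollama pull llama3`.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "In two sentences, what can a multi-modal chatbot do?"}],
)
print(response["message"]["content"])
```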