LLMs
For the deployment side of things, we found that the performance of our training process was quite slow, especially when it gets into these large language models and when you train from scratch. MosaicML offers what's called programmatic optimization, which is not so much on the hardware side of things, but rather on the algorithmic side. Can you...
CB Insights • 2024 Tech Trends
- Mistral AI is a promising alternative to GPT-3.5 when combined with prompt engineering.
- Mistral AI suits use cases that need high volume and fast processing at very low cost.
- Mistral AI can serve as a pre-filter for GPT-4 to reduce cost, e.g. to filter down search results (see the sketch below).
Mistral 7B is 187x cheaper compared to GPT-4
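A minimal sketch of the pre-filtering pattern from the bullets above, assuming both models sit behind OpenAI-compatible chat endpoints; the endpoint URL, model names, and the yes/no relevance prompt are illustrative assumptions, not details from the report:

```python
from openai import OpenAI

# Hypothetical setup: Mistral 7B behind a local OpenAI-compatible
# server (e.g. vLLM) and the OpenAI API for GPT-4.
cheap = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
strong = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_relevant(query: str, doc: str) -> bool:
    """Cheap yes/no relevance check on the small model."""
    resp = cheap.chat.completions.create(
        model="mistral-7b-instruct",  # assumed served model name
        messages=[{
            "role": "user",
            "content": (f"Query: {query}\nDocument: {doc}\n"
                        "Is this document relevant to the query? Answer yes or no."),
        }],
        max_tokens=1,
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower().startswith("y")

def answer(query: str, candidates: list[str]) -> str:
    # High-volume pass: Mistral filters the candidate set cheaply...
    kept = [d for d in candidates if is_relevant(query, d)]
    # ...low-volume pass: GPT-4 only sees the survivors.
    resp = strong.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Answer using these documents:\n\n"
                       + "\n\n".join(kept)
                       + f"\n\nQuestion: {query}",
        }],
    )
    return resp.choices[0].message.content
```

The cheap model absorbs the high-volume relevance checks, so the expensive model is only paid for on the documents that survive the filter.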
First of all, I'd say you have a bigger problem where your company is trying to find nails with a hammer. That is where your sentiment comes from, and could be an obstacle for both you and the company. It's the same deal when I see people keep on talking about RAG, and nowadays "modular RAG", when really, you could treat everything as a software...
r/MachineLearning - Reddit
- Multiple indices. Splitting the document corpus up into multiple indices and then routing queries based on some criteria. This means that the search is over a much smaller set of documents rather than the entire dataset. Again, it is not always useful, but it can be helpful for certain datasets. The same approach works with the LLMs themselves.
Matt Rickard • Improving RAG: Strategies
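A minimal sketch of the routing idea in the excerpt above, using a toy in-memory index; in practice the routing criterion might be a classifier or an LLM call, and the indices would be real vector stores:

```python
import re
from dataclasses import dataclass

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

@dataclass
class Index:
    """Toy in-memory index standing in for a real vector store."""
    docs: list[str]

    def search(self, query: str, k: int = 3) -> list[str]:
        # Naive token-overlap ranking, just enough to make the sketch run.
        q = tokens(query)
        return sorted(self.docs, key=lambda d: -len(q & tokens(d)))[:k]

# One smaller index per document category (names are made up).
INDICES = {
    "billing": Index(["Refunds are issued within 5 days.",
                      "Invoices are emailed monthly."]),
    "product": Index(["Install the client with pip install acme.",
                      "Set ACME_API_KEY to configure access."]),
    "general": Index(["Acme Corp was founded in 2015."]),
}

ROUTE_KEYWORDS = {
    "billing": {"invoice", "refund", "payment", "charge"},
    "product": {"install", "configure", "api", "error"},
}

def route(query: str) -> str:
    q = tokens(query)
    for name, keywords in ROUTE_KEYWORDS.items():
        if q & keywords:
            return name
    return "general"  # catch-all index when no rule matches

def search(query: str) -> list[str]:
    # Search only the routed index, a much smaller set than the full corpus.
    return INDICES[route(query)].search(query)

print(search("How do I get a refund?"))  # routed to the billing index
```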
memary: Open-Source Longterm Memory for Autonomous Agents
memary demo
Why use memary?
Agents use LLMs that are currently constrained to finite context windows. memary overcomes this limitation by allowing your agents to store a large corpus of information in knowledge graphs, infer user knowledge through our memory modules, and only retrieve...
GitHub - kingjulio8238/memary: Longterm Memory for Autonomous Agents.
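A toy sketch of the pattern the README describes: keep facts in a graph and retrieve only the small neighborhood relevant to the current query, instead of stuffing everything into the context window. This is an illustration with networkx, not memary's actual API:

```python
import networkx as nx

class GraphMemory:
    """Toy long-term memory: facts live in a graph, and retrieval
    pulls only the subgraph around entities mentioned in the query,
    keeping the prompt small regardless of how much is stored."""

    def __init__(self):
        self.g = nx.DiGraph()

    def store(self, subject: str, relation: str, obj: str) -> None:
        # Each fact is a (subject, relation, object) triple.
        self.g.add_edge(subject, obj, relation=relation)

    def retrieve(self, query: str) -> list[str]:
        mentioned = [n for n in self.g.nodes if n.lower() in query.lower()]
        facts = set()
        for node in mentioned:
            # One-hop neighborhood: the node itself plus direct successors.
            for src in {node} | set(self.g.successors(node)):
                for _, tgt, data in self.g.out_edges(src, data=True):
                    facts.add(f"{src} {data['relation']} {tgt}")
        return sorted(facts)

memory = GraphMemory()
memory.store("Alice", "works at", "Acme")
memory.store("Acme", "is located in", "Berlin")
print(memory.retrieve("Where does Alice work?"))
```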
Data
Today, we’re releasing the Assistants API, our first step towards helping developers build agent-like experiences within their own applications. An assistant is a purpose-built AI that has specific instructions, leverages extra knowledge, and can call models and tools to perform tasks. The new Assistants API provides new capabilities such as Code...
New models and developer products announced at DevDay
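A minimal sketch of the flow the announcement describes, following the beta Assistants surface of the openai-python SDK as it looked at launch; method names and the polling step may differ in later SDK versions:

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A purpose-built assistant with specific instructions and a tool.
assistant = client.beta.assistants.create(
    name="Math Tutor",
    instructions="You are a math tutor. Write and run code to answer questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# Conversations live in threads; add a user message and run the assistant.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Solve 3x + 11 = 14.",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)

# Runs are asynchronous: poll until the assistant finishes, then read replies.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
```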
The exact metrics we use depend on the application — our main goal is to understand how users use the feature and quickly make improvements to better meet their needs. For internal applications, this might mean measuring efficiency and sentiment. For consumer-facing applications, we similarly focus on measures of user satisfaction - direct user...
Developing Rapidly with Generative AI
Features
- 🤖 Multiple model integrations: OpenAI, transformers, llama.cpp, exllama2, mamba
- 🖍️ Simple and powerful prompting primitives based on the Jinja templating engine
- 🚄 Multiple choices, type constraints and dynamic stopping
- ⚡ Fast regex-structured generation
- 🔥 Fast JSON generation following a JSON schema
outlines-dev • GitHub - outlines-dev/outlines: Neuro Symbolic Text Generation
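A short sketch of what the feature list above looks like in use, based on the outlines API around the time of this README (`outlines.generate.choice`, `outlines.generate.regex`, `outlines.generate.json`); the model name is an arbitrary choice and later versions of the library may have renamed these entry points:

```python
import outlines
from pydantic import BaseModel

model = outlines.models.transformers("mistralai/Mistral-7B-Instruct-v0.2")

# Multiple choices: the model can only return one of the listed options.
sentiment = outlines.generate.choice(model, ["Positive", "Negative"])(
    "Review: the product worked perfectly. Sentiment: "
)

# Regex-structured generation: output is guaranteed to match the pattern.
phone_gen = outlines.generate.regex(model, r"[0-9]{3}-[0-9]{3}-[0-9]{4}")
phone = phone_gen("The support line for Acme Corp is ")

# JSON generation following a schema, here defined as a Pydantic model.
class Character(BaseModel):
    name: str
    age: int

char_gen = outlines.generate.json(model, Character)
character = char_gen("Invent a fantasy character: ")  # returns a Character
```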
In the simplest form, we can use the model’s detection confidence to determine a score, but even here there are quite a few options to choose from (a scoring sketch follows the excerpt):
- Lowest confidence - the score is the lowest confidence among all detected objects
- Average confidence - the average of the confidences of all detected objects
- Minimizing confidence delta - the difference between...
Active Learning with Domain Experts, a Case Study in Machine Learning
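A small sketch of the first two scoring options from the excerpt above; the sample data and function names are illustrative, not from the case study:

```python
def image_score(confidences: list[float], strategy: str = "lowest") -> float:
    """Collapse per-object detection confidences into one image score;
    lower scores mean the image is a better labeling candidate."""
    if not confidences:
        return 0.0  # no detections at all: treat as maximally uncertain
    if strategy == "lowest":
        return min(confidences)  # the weakest detection drives the score
    if strategy == "average":
        return sum(confidences) / len(confidences)
    raise ValueError(f"unknown strategy: {strategy}")

# Rank unlabeled images and send the lowest-scoring ones for labeling first.
predictions = {
    "img_001.jpg": [0.98, 0.51, 0.87],
    "img_002.jpg": [0.92],
    "img_003.jpg": [0.33, 0.41],
}
to_label = sorted(predictions, key=lambda k: image_score(predictions[k]))
print(to_label)  # img_003 first: its weakest detection is least confident
```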
What data to label?