LLMs
The Gemini API context caching feature is designed to reduce the cost of requests that contain repeat content with high input token counts.
When to use context caching
Context caching is particularly well suited to scenarios where a substantial initial context is referenced repeatedly by shorter requests. Consider using context caching for use cases... See more
Context caching guide | Google AI for Developers | Google for Developers
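A minimal sketch of the workflow, assuming the google-generativeai Python SDK's caching module; the model name, document, and TTL below are illustrative:

```python
# Sketch of Gemini context caching: pay for the large shared prefix once,
# then reuse it across many short requests. Model name, file, and TTL are
# illustrative assumptions.
import datetime
import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")

# Cache the big repeated context (e.g., a long reference document) once.
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    system_instruction="Answer questions using only the cached document.",
    contents=[open("big_document.txt").read()],
    ttl=datetime.timedelta(minutes=30),
)

# Subsequent short requests reference the cache instead of resending it.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content("Summarize section 2.")
print(response.text)
```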
Mem0: The Memory Layer for Personalized AI
Mem0 provides a smart, self-improving memory layer for Large Language Models, enabling personalized AI experiences across applications.
Note: The Mem0 repository now also includes the Embedchain project. We continue to maintain and support Embedchain ❤️. You can find the Embedchain codebase in the embedchai... See more
GitHub - mem0ai/mem0: The memory layer for Personalized AI
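A short sketch of the advertised workflow, assuming the mem0ai Python package; the user and stored facts are illustrative, and the return shape of search varies by version:

```python
# Sketch of Mem0's memory layer: store user-specific facts once, then
# retrieve the relevant ones to personalize later prompts. User and
# facts are invented for illustration.
from mem0 import Memory

m = Memory()

# Store a fact tied to a user; Mem0 extracts and indexes it.
m.add("I prefer vegetarian restaurants and live in Berlin.", user_id="alice")

# Retrieve memories relevant to a new query.
related = m.search("Where should I eat tonight?", user_id="alice")
hits = related["results"] if isinstance(related, dict) else related
for hit in hits:
    print(hit["memory"])
```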
Deploying a Generative AI model requires more than a VM with a GPU. It normally includes:
- Container Service : Most often Kubernetes to run LLM Serving solutions like Hugging Face Text Generation Inference or vLLM.
- Compute Resources : GPUs for running models, CPUs for management services
- Networking and DNS : Routing traffic to the appropriate
Understanding the Cost of Generative AI Models in Production
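A hypothetical back-of-the-envelope cost model makes the "more than a VM with a GPU" point concrete; every price and utilization figure below is an invented assumption:

```python
# Hypothetical monthly cost model for a self-hosted LLM service.
# Every number here is an invented assumption, not a real price.
GPU_HOURLY = 2.50        # assumed $/hr for a GPU serving instance
CPU_HOURLY = 0.20        # assumed $/hr for a management/routing node
HOURS_PER_MONTH = 730

def monthly_cost(gpu_nodes: int, cpu_nodes: int, utilization: float) -> None:
    """Print raw and utilization-adjusted monthly spend."""
    hourly = gpu_nodes * GPU_HOURLY + cpu_nodes * CPU_HOURLY
    total = hourly * HOURS_PER_MONTH
    # Idle instances bill the same as busy ones, so the effective cost of
    # traffic actually served scales with 1 / utilization.
    print(f"raw: ${total:,.0f}/mo, effective: ${total / utilization:,.0f}/mo "
          f"at {utilization:.0%} utilization")

monthly_cost(gpu_nodes=4, cpu_nodes=2, utilization=0.35)
```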
When we deliver a model we make sure we don't reach X seconds of latency in our API. Before even going into the performance of LLMs for classification, I can tell you that with the currently available tech they are just infeasible.
LinuxSpinach • 5h ago
^ this. And especially classification as a task, because businesses don’t want to pay llm... See more
r/MachineLearning - Reddit
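The cheap alternative the thread points to is a classical supervised classifier; a minimal scikit-learn sketch with toy data and invented labels:

```python
# Minimal supervised text classifier: milliseconds per prediction on CPU,
# versus seconds of latency and per-token fees for an LLM call.
# Training data and labels are toy examples for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["refund my order", "app crashes on login", "love the new update"]
labels = ["billing", "bug", "praise"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["the app keeps crashing"]))  # expected: ['bug']
```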
We're doing NER on hundreds of millions of documents in a specialised niche. LLMs are terrible for this. Slow, expensive and horrifyingly inaccurate. Even with agents, pydantic parsing and the like. Supervised methods are the way to go. Hell, I'd take an old school rule based approach over LLMs for this.
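The supervised route recommended here can be as simple as a pretrained pipeline; a minimal sketch with spaCy (the sentence and entities are invented, and en_core_web_sm must be installed first):

```python
# Supervised NER with a pretrained pipeline: fast, cheap, and deterministic
# compared to prompting an LLM per document. Assumes the small English model
# is installed: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple acquired the Berlin startup DataWorks for $50 million.")

for ent in doc.ents:
    print(ent.text, ent.label_)
```

For hundreds of millions of documents, nlp.pipe(texts) batches inference far more efficiently than calling nlp once per document.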
TorchMultimodal (Beta Release)
Introduction
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. It provides:
- A repository of modular and composable building blocks (models, fusion layers, loss functions, datasets and utilities).
- A repository of examples that show how to combine these building
facebookresearch • GitHub - facebookresearch/multimodal at a33a8b888a542a4578b16972aecd072eff02c1a6
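The building-block idea generalizes beyond the library itself; here is the composition pattern in plain PyTorch (this is not TorchMultimodal's actual API, just an illustration of a swappable fusion layer between unimodal encoders):

```python
# Illustration of the composable-building-block pattern (plain PyTorch,
# not TorchMultimodal's classes): two unimodal encoders joined by a
# swappable fusion layer.
import torch
from torch import nn

class ConcatFusion(nn.Module):
    """Fusion block: concatenate modality embeddings, project down."""
    def __init__(self, dims, out_dim):
        super().__init__()
        self.proj = nn.Linear(sum(dims), out_dim)

    def forward(self, *embeddings):
        return self.proj(torch.cat(embeddings, dim=-1))

image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))
text_encoder = nn.EmbeddingBag(10_000, 128)  # mean-pools token embeddings
fusion = ConcatFusion(dims=[256, 128], out_dim=64)

img = torch.randn(8, 3, 32, 32)
tokens = torch.randint(0, 10_000, (8, 16))
fused = fusion(image_encoder(img), text_encoder(tokens))
print(fused.shape)  # torch.Size([8, 64])
```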
Ensuring availability during peak traffic by maintaining all GPU instance types could lead to prohibitively high costs. To avoid the financial strain of idle instances, we implemented a “standby instances” mechanism. Rather than preparing for the maximum potential load, we maintained a calculated number of standby instances that match the... See more
Sean Sheng • Scaling AI Models Like You Mean It
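The sizing arithmetic behind such a standby scheme might look like this hypothetical sketch; the traffic numbers, per-instance capacity, and headroom factor are invented:

```python
# Hypothetical standby-instance sizing: keep enough warm capacity for a
# high traffic percentile instead of the absolute peak. Numbers invented.
import math

def standby_count(requests_per_sec_p95: float,
                  capacity_per_instance: float,
                  active_instances: int,
                  headroom: float = 1.2) -> int:
    """Instances needed to absorb p95 load with headroom, minus active ones."""
    needed = math.ceil(requests_per_sec_p95 * headroom / capacity_per_instance)
    return max(needed - active_instances, 0)

print(standby_count(requests_per_sec_p95=140, capacity_per_instance=25,
                    active_instances=4))  # -> 3
```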
One interesting thing about LLMs is that they can actually recover (and without error loops). You can have a step that doesn't work right, and a later step can use its common-sense knowledge to ignore some of the missing results, conflicting information, etc. One of the problems with developing with LLMs is that the machine will often cover up... See more
Ask HN: What are some actual use cases of AI Agents right now? | Hacker News
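In code, that recovery often amounts to letting a downstream step run on partial inputs instead of failing hard. A hypothetical sketch, where call_llm stands in for any chat-completion client:

```python
# Hypothetical agent step that tolerates missing upstream results instead
# of raising: the model is explicitly told to work with what is available.
# call_llm is a stand-in for any chat-completion client.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

def summarize_findings(search_results: list[str] | None,
                       db_rows: list[dict] | None) -> str:
    parts = []
    if search_results:
        parts.append("Web results:\n" + "\n".join(search_results))
    if db_rows:
        parts.append(f"Database rows: {db_rows}")
    context = "\n\n".join(parts) or "(no upstream data was retrieved)"
    # Making the recovery behavior explicit in the prompt, rather than
    # relying on the model to silently paper over gaps.
    return call_llm(
        "Some inputs may be missing or contradictory; note any gaps and "
        "proceed with your own knowledge where needed.\n\n" + context
    )
```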
My $0.02 is that a lot of the future research/work there will be figuring out how to identify effective sub-graphs to provide additional context, to avoid having to pass in the entire graph. As well as trying to identify ontology-less structures in real-time, which includes NER and RE, as well as named entity/relationship... See more
r/MachineLearning - Reddit
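One way to prototype the sub-graph idea with networkx: extract a k-hop neighborhood around the entities a query mentions instead of serializing the whole graph (the graph contents below are invented):

```python
# Sketch of sub-graph context selection with networkx: take the k-hop
# neighborhood around query entities rather than the entire graph.
# Graph contents are invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Alice", "Acme Corp"), ("Acme Corp", "Berlin"),
    ("Acme Corp", "Widget v2"), ("Widget v2", "Recall 2023"),
    ("Bob", "Other Co"),
])

def context_subgraph(graph, query_entities, hops=2):
    nodes = set()
    for entity in query_entities:
        if entity in graph:
            nodes |= set(nx.ego_graph(graph, entity, radius=hops).nodes)
    return graph.subgraph(nodes)

sub = context_subgraph(G, ["Alice"])
print(list(sub.edges()))  # neighborhood around Alice only; Bob excluded
```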
The quality of the dataset is 95% of everything. The remaining 5% is about not ruining it with bad parameters.