LLMs
Overview
Loki is our open-source solution designed to automate the process of verifying factuality. It provides a comprehensive pipeline for dissecting long texts into individual claims, assessing their worthiness for verification, generating queries for evidence search, crawling for evidence, and ultimately verifying the claims. This tool is...
Libr-AI • GitHub - Libr-AI/OpenFactVerification: Open-source solution designed to automate the process of verifying factuality
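The five stages Loki describes map naturally onto a simple pipeline. Below is a hypothetical sketch; every function name is an illustrative stand-in, not Loki's actual API, and the stubs stand in for the LLM- and search-backed components a real system would use.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    checkworthy: bool = False
    evidence: list[str] = field(default_factory=list)
    verdict: str = "not checked"

def decompose(text: str) -> list[str]:
    # Stage 1: dissect long text into atomic claims (naive sentence split here).
    return [s.strip() for s in text.split(".") if s.strip()]

def is_checkworthy(claim: str) -> bool:
    # Stage 2: filter out opinions and non-factual statements.
    return not claim.lower().startswith(("i think", "maybe"))

def generate_queries(claim: str) -> list[str]:
    # Stage 3: turn the claim into evidence-search queries.
    return [claim]

def crawl(queries: list[str]) -> list[str]:
    # Stage 4: retrieve candidate evidence for each query (stubbed out).
    return [f"evidence for: {q}" for q in queries]

def judge(claim: str, evidence: list[str]) -> str:
    # Stage 5: supported / refuted / not enough info.
    return "supported" if evidence else "not enough info"

def verify(text: str) -> list[Claim]:
    claims = [Claim(t) for t in decompose(text)]
    for c in claims:
        c.checkworthy = is_checkworthy(c.text)
        if c.checkworthy:
            c.evidence = crawl(generate_queries(c.text))
            c.verdict = judge(c.text, c.evidence)
    return claims

print(verify("The Eiffel Tower is in Paris. I think it is pretty."))
```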
Deploying a Generative AI model requires more than a VM with a GPU. It normally includes:
- Container Service: Most often Kubernetes, to run LLM serving solutions like Hugging Face Text Generation Inference or vLLM (a minimal serving sketch follows this list).
- Compute Resources: GPUs for running models, CPUs for management services.
- Networking and DNS: Routing traffic to the appropriate services.
Understanding the Cost of Generative AI Models in Production
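As a concrete example of the serving layer from the list above, here is a minimal sketch using vLLM's offline Python API (the model name and sampling settings are assumptions). In production this would more typically run as vLLM's OpenAI-compatible HTTP server inside a Kubernetes pod, with the networking layer routing traffic to it.

```python
from vllm import LLM, SamplingParams

# Load the model onto the available GPU(s); the model name is an assumption.
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["What does it cost to serve an LLM in production?"], params)
print(outputs[0].outputs[0].text)
```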
Overview
MaxText is a high-performance, highly scalable, open-source LLM written in pure Python/Jax and targeting Google Cloud TPUs and GPUs for training and inference. MaxText achieves high MFUs and scales from single host to very large clusters while staying simple and "optimization-free" thanks to the power of Jax and the XLA compiler.
MaxText...
google • GitHub - google/maxtext: A simple, performant and scalable Jax LLM!
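MaxText itself is far larger, but the "optimization-free" claim comes down to a simple JAX property: you write plain numeric code, and jax.jit hands the whole function to the XLA compiler for fusion and optimization. A toy illustration, not MaxText code:

```python
import jax
import jax.numpy as jnp

@jax.jit  # one decorator hands the whole function to the XLA compiler
def ffn_block(x, w1, w2):
    # Plain, "optimization-free" code: XLA fuses the matmuls and activation.
    return jax.nn.relu(x @ w1) @ w2

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (8, 512))
w1 = jax.random.normal(key, (512, 2048))
w2 = jax.random.normal(key, (2048, 512))
print(ffn_block(x, w1, w2).shape)  # (8, 512)
```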
- Traditional AI - The most secure, understandable, and performant option. However, good implementations of traditional AI require that we define the rules behind the system, which makes it unfeasible for many of the use cases where the other two techniques thrive.
- Supervised Machine Learning - The middle of the road between traditional AI and Deep Learning. Good when we have...
Devansh • How to Pick between Traditional AI, Supervised Machine Learning, and Deep Learning [Thoughts]
Where would I place generative AI? It has the apparent accessibility of traditional AI, in that people assume it is understandable, but it offers no such transparency itself. It also has the opaque and costly nature of Deep Learning. Many companies are currently rushing to build with generative AI without any prior foundation in AI and without any processes set up to manage it: data ops, DevOps, ...
Traditional AI forces you to think about how something works, understand the system, and then define the rules for it. ML lets you use features and feature importance to shortcut some of that work. Deep Learning lets you brute-force the problem. Generative AI lets you brute-force it without any background in DL.
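To make the contrast concrete, here is a toy, hypothetical spam example: with traditional AI you author the decision rule yourself, while with supervised ML you pick the features and let the model learn the weights.

```python
from sklearn.linear_model import LogisticRegression

def rule_based_is_spam(subject: str) -> bool:
    # Traditional AI: you must understand the system well enough to author the rule.
    return "free money" in subject.lower() or subject.isupper()

# Supervised ML: you hand-craft the features, but the weighting is learned.
X = [[1, 0], [0, 1], [1, 1], [0, 0]]  # features: [has_spam_phrase, all_caps]
y = [1, 0, 1, 0]                      # labels from historical data
clf = LogisticRegression().fit(X, y)

print(rule_based_is_spam("FREE MONEY INSIDE"))  # decided by an authored rule
print(clf.predict([[1, 0]]))                    # decided by learned weights
```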
LLMTuner
LLMTuner: Fine-Tune Llama, Whisper, and other LLMs with best practices like LoRA, QLoRA, through a sleek, scikit-learn-inspired interface.
promptslab • GitHub - promptslab/LLMtuner: Tune LLM in few lines of code
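For reference, the main "best practice" LLMTuner wraps is LoRA. This is not LLMTuner's own API; it is a sketch of the underlying setup using Hugging Face's peft library, with the model name and hyperparameters as assumptions.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling applied to the update
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights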
The Gemini API context caching feature is designed to reduce the cost of requests that contain repeat content with high input token counts.
When to use context caching
Context caching is particularly well suited to scenarios where a substantial initial context is referenced repeatedly by shorter requests. Consider using context caching for use cases...
Context caching guide | Google AI for Developers
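The workflow in the guide follows this shape: create a cache from the large shared context, then point short requests at it. A sketch using the google-generativeai Python SDK's caching module (model name, source file, and TTL are assumptions):

```python
import datetime
import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")

# Pay the input-token cost of the large shared context once...
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    system_instruction="Answer questions using the attached manual.",
    contents=[open("manual.txt").read()],
    ttl=datetime.timedelta(minutes=30),  # how long the cache stays alive
)

# ...then issue many short, cheaper requests against it.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
print(model.generate_content("What does chapter 2 cover?").text)
```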
Matei Zaharia, Omar Khattab, Lingjiao Chen, et al. • The Shift From Models to Compound AI Systems
You can think your way into solving a deterministic system, but you cannot think your way into solving a probabilistic system.
The first thing that I want to call out is that deterministic software has edge cases, while probabilistic software has long tails.
I find that a lot of junior folks try to really think hard about edge cases around...
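A toy illustration of the difference: an edge case in deterministic code can be enumerated and pinned with a single assertion, while a probabilistic component can only be characterized by its failure rate over many samples (the "model" below is a random stub).

```python
import random

def parse_price(s: str) -> float:
    return float(s.replace("$", "").replace(",", ""))

# Deterministic software: think hard, find the edge case, pin it with a test.
assert parse_price("$1,299.00") == 1299.0

# Probabilistic software (stubbed "model"): no single test settles it; you
# estimate a failure rate over many samples and keep an eye on the tail.
def flaky_model(x: str) -> str:
    return x.upper() if random.random() > 0.02 else "garbled output"

samples = ["hello world"] * 10_000
failures = sum(flaky_model(s) == "garbled output" for s in samples)
print(f"observed failure rate: {failures / len(samples):.2%}")
```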