LLMs
What’s the best way for an end user to organize and explore millions of latent space features?
I’ve found tens of thousands of interpretable features in my experiments, and frontier labs have demonstrated results with a thousand times more features in production-scale models. No doubt, as interpretability techniques advance, we’ll see feature maps... See more
Shortwave [Gmail alternative]
- Query the RAG anyway and let the LLM itself choose whether to use the RAG context or its built-in knowledge
- Query the RAG but only provide the result to the LLM if it meets some level of relevancy (i.e. embedding distance) to the question (see the sketch below)
- Run the LLM both on its own and with the RAG response, then use a heuristic (or another LLM) to pick the best answer
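A minimal sketch of the second option, assuming a cosine-similarity threshold over embeddings; `embed`, `vector_store.nearest`, and `llm` are hypothetical stand-ins for whatever stack you use:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.75  # tune on a held-out set of questions

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(question: str, embed, vector_store, llm) -> str:
    query_vec = embed(question)
    # `nearest` is a hypothetical API returning the top chunk and its vector.
    chunk, chunk_vec = vector_store.nearest(query_vec)
    if cosine_similarity(query_vec, chunk_vec) >= SIMILARITY_THRESHOLD:
        prompt = f"Context:\n{chunk}\n\nQuestion: {question}"
    else:
        # Retrieval wasn't relevant enough; fall back to built-in knowledge.
        prompt = question
    return llm(prompt)
```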
r/LocalLLaMA - Reddit
Here's my read on the situation:
* The TAM is massive; there are still so many businesses trying to figure out AI
* If you do deployments you’ll need to spend a lot of time hand-holding clients through scoping projects (not unlike other dev work), since the material is so new
* Lots of opportunity in education
* The hard part isn’t the expertise, it’s distribution... See more
Greg Kamradt • Tweet
Matei Zaharia, Omar Khattab, Lingjiao Chen, et al. • The Shift From Models to Compound AI Systems
Humans are bad at coming up with search queries. Humans are good at incrementally narrowing down options with a series of filters, and pointing where they want to go next. This seems obvious, but we keep building interfaces for finding information that look more like Google Search and less like a map.
All information tools have to give users some... See more
thesephist.com • Navigate, don't search
Fine-Tuning for LLM Research by AI Hero
This repo contains the code that will be run inside the container. Alternatively, this code can also be run natively. The container is built and pushed to the repo using GitHub Actions (see below). You can launch the fine-tuning job using the examples in https://github.com/ai-hero/llm-research-examples... See more
GitHub - ai-hero/llm-research-fine-tuning
What is Substrate?
Substrate is an AI inference platform. In particular, it excels at enabling complex multi-model workloads. At its core, Substrate is 1) a collection of cutting-edge AI models, tuned for optimum performance, and 2) a set of composable APIs for relating these models to each other. We believe having both of these components in one... See more
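The excerpt doesn't show Substrate's actual API, so as a purely illustrative sketch of what "composable APIs for relating models to each other" can look like, here's a hypothetical pipeline where one model's output feeds the next; `Node`, `summarize`, and `caption_to_image` are all invented for illustration:

```python
# Purely illustrative; not Substrate's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Node:
    run: Callable[[str], str]

    def then(self, nxt: "Node") -> "Node":
        # Compose two model calls into a single pipeline node.
        return Node(run=lambda x: nxt.run(self.run(x)))

summarize = Node(run=lambda text: text[:100])                   # stand-in model call
caption_to_image = Node(run=lambda cap: f"<image for: {cap}>")  # stand-in model call

pipeline = summarize.then(caption_to_image)
print(pipeline.run("A long article about compound AI systems..."))
```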
Nextra: the next docs builder
In general, I see LLMs used in two broad categories: data processing, which is more of a worker use case, where quality matters more than latency; and user interactions, where latency is a big factor. For the latency-sensitive case, I think a faster fallback is necessary. Or you escalate upwards: you first rely on a smaller, more... See more
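A minimal sketch of that escalation pattern, assuming you can score the small model's draft (e.g. via logprobs or a cheap verifier); `small_llm`, `large_llm`, and `confidence` are hypothetical:

```python
def cascade(prompt: str, small_llm, large_llm, confidence,
            threshold: float = 0.8) -> str:
    draft = small_llm(prompt)              # cheap, low-latency first pass
    if confidence(prompt, draft) >= threshold:
        return draft                       # good enough; keep the latency win
    return large_llm(prompt)               # escalate only the hard cases
```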
Discord - A New Way to Chat with Friends & Communities
When we deliver a model, we make sure we don't exceed X seconds of latency in our API. Before even going into the performance of LLMs for classification, I can tell you that with the currently available tech they are just infeasible.
LinuxSpinach: ^ this. And especially classification as a task, because businesses don’t want to pay LLM... See more
r/MachineLearning - Reddit
We're doing NER on hundreds of millions of documents in a specialised niche. LLMs are terrible for this. Slow, expensive and horrifyingly inaccurate. Even with agents, pydantic parsing and the like. Supervised methods are the way to go. Hell, I'd take an old school rule based approach over LLMs for this.
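A minimal sketch of the supervised route using spaCy's pretrained pipeline (installed via `python -m spacy download en_core_web_sm`); for a specialised niche you'd fine-tune on domain annotations rather than rely on the stock model:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small pretrained English pipeline

def extract_entities(doc_text: str) -> list[tuple[str, str]]:
    doc = nlp(doc_text)
    return [(ent.text, ent.label_) for ent in doc.ents]

# nlp.pipe streams documents in batches, which is what keeps this tractable
# at the scale of hundreds of millions of documents.
for doc in nlp.pipe(["Apple acquired a startup in London."], batch_size=256):
    print([(ent.text, ent.label_) for ent in doc.ents])
```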