LLMs
So right now, LLMs (Large Language Models) are all the rage. But in the future, it’s possible that the way we get things done is by composing a combination of LLMs, SMMs (Small, Mighty Models), agents, and tools.
It’s what I call Cognitive Composition (because it sounds cool and I have a longtime love affair with alliteration).
This is how we...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
Most RLHF done to date assigns a single score to the entire response from a language model. To anyone with an RL background, this is disappointing, because it limits the ability of RL methods to make connections about the value of each sub-component of the text. Futures have been pointed to where this multi-step optimization comes at...
Nathan Lambert • The Q* hypothesis: Tree-of-thoughts reasoning, process reward models, and supercharging synthetic data
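A toy sketch of the distinction Lambert draws, with both scorers stubbed out as trivial heuristics (a real process reward model would be a learned model, not these stand-ins):

```python
# Outcome reward (standard RLHF): one scalar for the whole response.
# Process reward (a PRM): one scalar per reasoning step.
# Both scoring functions below are invented stand-ins for illustration.

def score_response(response: str) -> float:
    """Outcome reward: a single score for the entire response."""
    return 1.0 if response.strip().endswith("42.") else 0.0

def score_step(step: str) -> float:
    """Process reward: a score for each individual step."""
    return 1.0 if "=" in step or "answer" in step else 0.0

response = "Step 1: 6 * 7 = 42.\nStep 2: So the answer is 42."
steps = response.split("\n")

print(score_response(response))        # one score for everything: 1.0
print([score_step(s) for s in steps])  # credit assigned per step: [1.0, 1.0]
```

The per-step scores are what let an RL method assign credit to individual sub-components of the text instead of the whole response at once.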
Today, we’re releasing the Assistants API, our first step towards helping developers build agent-like experiences within their own applications. An assistant is a purpose-built AI that has specific instructions, leverages extra knowledge, and can call models and tools to perform tasks. The new Assistants API provides new capabilities such as Code...
New models and developer products announced at DevDay
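For reference, a minimal sketch of what using the Assistants API looked like at launch with the official `openai` Python SDK; the API shipped in beta, so names and signatures may have changed since:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Create a purpose-built assistant with instructions and a tool.
assistant = client.beta.assistants.create(
    name="Data Helper",
    instructions="You help users analyze small datasets.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# Conversations live in threads; runs execute the assistant on a thread.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What is 12 factorial?"
)
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id
)
```

Runs are asynchronous: clients poll `client.beta.threads.runs.retrieve` until the run completes, then read the reply from `client.beta.threads.messages.list`.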
When we deliver a model, we make sure we don't exceed X seconds of latency in our API. Before even getting into the performance of LLMs for classification, I can tell you that with the currently available tech they are just infeasible.
LinuxSpinach • 5h ago
^ this. And especially classification as a task, because businesses don’t want to pay llm...
r/MachineLearning - Reddit
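The kind of alternative the thread is pointing at: a small supervised classifier that predicts in well under a millisecond on CPU, with no API call. A toy sketch with scikit-learn and invented labels:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; a real one would have thousands of rows.
texts = [
    "refund my order please", "I was double charged",
    "app crashes on login", "the page won't load",
]
labels = ["billing", "billing", "bug", "bug"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Inference is a sparse matrix multiply: no network call, no token cost.
print(clf.predict(["charged twice for one order"]))
```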
We're doing NER on hundreds of millions of documents in a specialised niche. LLMs are terrible for this. Slow, expensive and horrifyingly inaccurate. Even with agents, pydantic parsing and the like. Supervised methods are the way to go. Hell, I'd take an old-school rule-based approach over LLMs for this.
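In the same spirit, a sketch of supervised NER with a pretrained spaCy pipeline, one plausible "supervised method" (the commenter's actual stack isn't specified):

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# nlp.pipe streams documents in batches, which is what makes this
# workable over hundreds of millions of documents.
for doc in nlp.pipe(["Apple hired Jane Doe in London in 2021."]):
    for ent in doc.ents:
        print(ent.text, ent.label_)  # e.g. Apple ORG, London GPE
```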
We went to OpenAI's office in San Francisco yesterday to ask them all the questions we had on Quivr (YC W24), here is what we learned:
1. Their office is super nice & you can eat damn good croissants in SF!
2. We can expect GPT-3.5 & 4 prices to keep going down
3. A lot of people are using the Assistants API to build their use cases
4. It costs $2M to...
Paul Venuto • feed updates
The need for better AI or LLM-specific infrastructure, along with the host of problems that come with the non-determinism of LLMs, means that there’s more software work ahead of us, not less. Abstraction layers like LLMs create more possibilities and thus more work.
Is this a good thing or a bad thing? I’m not sure.
A great example of this is frontend...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
Matei Zaharia, Omar Khattab, Lingjiao Chen, et al. • The Shift From Models to Compound AI Systems
Google DeepMind used a similar idea to make LLMs faster in Accelerating Large Language Model Decoding with Speculative Sampling. Their algorithm uses a smaller draft model to make initial guesses and a larger primary model to validate them. If the draft often guesses right, decoding becomes faster, reducing latency.
There are some people speculating...
muhtasham • Machine Learners Guide to Real World - 2️⃣ Concepts from Operating Systems That Found Their Way in LLMs
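A toy, greedy rendering of the accept/reject loop described above. Real speculative sampling verifies all k draft tokens in a single forward pass of the target model and uses a probabilistic acceptance rule to preserve its output distribution; both models are stubs here.

```python
# Stubs standing in for a small draft model and a large target model.
def draft_model(prefix: list[str], k: int) -> list[str]:
    return ["the"] * k

def target_model_next(prefix: list[str]) -> str:
    return "the" if len(prefix) % 3 else "cat"

def speculative_decode(prompt: list[str], n_new: int, k: int = 4) -> list[str]:
    tokens = list(prompt)
    while len(tokens) < len(prompt) + n_new:
        guesses = draft_model(tokens, k)  # cheap: k tokens at once
        accepted: list[str] = []
        # A real implementation scores all k guesses in ONE target
        # forward pass; the loop here is only for clarity.
        for g in guesses:
            target = target_model_next(tokens + accepted)
            if g == target:
                accepted.append(g)       # draft was right: keep it free
            else:
                accepted.append(target)  # first miss: take target's token
                break
        tokens.extend(accepted)
    return tokens[: len(prompt) + n_new]

print(speculative_decode(["a"], n_new=6))
```

The speedup comes from the accepted prefix: every draft token the target confirms is a token the large model did not have to generate step by step.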
- Multiple indices. Splitting the document corpus up into multiple indices and then routing queries based on some criteria. This means that the search is over a much smaller set of documents rather than the entire dataset. Again, it is not always useful, but it can be helpful for certain datasets. The same approach works with the LLMs themselves.
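A minimal sketch of the routing idea, with invented index names and a keyword rule standing in for whatever routing criteria a real system would use (a classifier or an LLM call are common choices):

```python
# Invented corpus split across two indices; each query searches only one.
INDICES = {
    "legal": ["contract clause on liability", "termination terms"],
    "engineering": ["deployment guide", "api rate limits"],
}

def route(query: str) -> str:
    """Pick an index; a toy keyword rule stands in for a real router."""
    q = query.lower()
    return "legal" if "contract" in q or "terms" in q else "engineering"

def search(query: str) -> list[str]:
    corpus = INDICES[route(query)]  # only the routed index is searched
    words = query.lower().split()
    return [doc for doc in corpus if any(w in doc for w in words)]

print(search("contract liability"))  # hits only the "legal" index
print(search("api rate limits"))     # hits only "engineering"
```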