LLMs
The new seed parameter enables reproducible outputs by making the model return consistent completions most of the time. This beta feature is useful for use cases such as replaying requests for debugging, writing more comprehensive unit tests, and generally having a higher degree of control over the model behavior. We at OpenAI have been using this...
New models and developer products announced at DevDay
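A minimal sketch of how the seed parameter might be used with the OpenAI Python SDK (the model name and prompt are placeholders): sending the same request twice with the same seed should usually return the same completion, and system_fingerprint helps spot backend changes that can still alter results.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Two identical requests with the same seed should usually return the same text.
for attempt in range(2):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",   # placeholder model
        seed=12345,                   # the reproducibility knob described above
        temperature=0,
        messages=[{"role": "user", "content": "Name three prime numbers."}],
    )
    # system_fingerprint identifies the backend configuration; if it changes
    # between calls, outputs may differ even with the same seed.
    print(response.system_fingerprint, response.choices[0].message.content)
```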
Here's my read on the situation:
* The TAM is massive, still so many businesses trying to figure out AI
* If you do deployments you’ll need to spend a lot of time hand-holding clients through scoping projects (not unlike other dev work) since the material is so new
* Lots of opportunity in education
* The hard part isn’t the expertise, it’s distribution...
Greg Kamradt • Tweet
OpenAI is treating its new marketplace seriously now: The brand new GPT store will come with REVENUE SHARING.... (missing in the Plugins launch)
and launching a Stateful Assistants API:
- Persistent Threads (/api/openai/threads)
- Built in Retrieval (chunking etc done for you)
- Code Interpreter (RIP Adv Data Analysis?)
- Speech to Text and Text to...
swyx • Tweet
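To make the "persistent threads" idea above concrete, here is a rough sketch using the beta namespace of the OpenAI Python SDK (the assistant ID and message are placeholders): the thread keeps conversation state server-side, so you don't resend history on every turn.

```python
import time
from openai import OpenAI

client = OpenAI()

# Create a persistent thread once; its ID can be stored and reused across sessions.
thread = client.beta.threads.create()

client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarize the key points of our last conversation.",  # placeholder message
)

# Kick off a run against an existing assistant (ID is a placeholder).
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id="asst_abc123",
)

# Poll until the run finishes, then read the assistant's reply from the thread.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for message in client.beta.threads.messages.list(thread_id=thread.id):
    print(message.role, message.content[0].text.value)  # text content blocks
```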
The need for better AI or LLM-specific infrastructure, along with the host of problems that come with the non-deterministic nature of LLMs, means that there’s more software work ahead of us, not less. Abstraction layers like LLMs create more possibilities and thus more work.
Is this a good thing or a bad thing? I’m not sure.
A great example of this is frontend...
Matei Zaharia, Omar Khattab, Lingjiao Chen, et al. • The Shift From Models to Compound AI Systems
Why is Discord such a good GTM for AI applications?
Text interface. Most users are just generating images, videos, and audio in these Discord servers. Prompts are easily expressible in simple text commands. It’s why we’ve seen image generation strategies like Midjourney (all-in-one) flourish in Discord while more raw diffusion models haven’t grown...
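To illustrate how little interface a Discord distribution needs, here is a hedged discord.py sketch of a slash command that hands a text prompt to an image model; generate_image is a hypothetical stand-in for whatever generation backend you would actually call, and the bot token is a placeholder.

```python
import discord
from discord import app_commands

intents = discord.Intents.default()
client = discord.Client(intents=intents)
tree = app_commands.CommandTree(client)

async def generate_image(prompt: str) -> str:
    # Hypothetical: call a diffusion / Midjourney-style backend and return an image URL.
    return "https://example.com/generated.png"

@tree.command(name="imagine", description="Generate an image from a text prompt")
async def imagine(interaction: discord.Interaction, prompt: str):
    await interaction.response.defer()        # generation can take a while
    image_url = await generate_image(prompt)  # hand the raw text prompt to the model
    await interaction.followup.send(f"{prompt}\n{image_url}")

@client.event
async def on_ready():
    await tree.sync()  # register the slash command with Discord

client.run("YOUR_BOT_TOKEN")  # placeholder token
```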
Announcing Together Inference Engine – the fastest inference available
November 13, 2023・By Together
The Together Inference Engine is multiple times faster than any other inference service, with 117 tokens per second on Llama-2-70B-Chat and 171 tokens per second on Llama-2-13B-Chat
Today we are announcing Together Inference Engine, the world’s...
Announcing Together Inference Engine – the fastest inference available
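As a rough way to sanity-check a tokens-per-second figure like the one above, here is a sketch that streams a completion and times it. It assumes Together's OpenAI-compatible endpoint and the Llama-2-70B chat model name shown; the base URL, model identifier, and the one-token-per-streamed-chunk approximation are all assumptions to adjust.

```python
import os
import time
from openai import OpenAI

# Assumption: Together exposes an OpenAI-compatible API at this base URL.
client = OpenAI(
    api_key=os.environ["TOGETHER_API_KEY"],
    base_url="https://api.together.xyz/v1",
)

start = time.time()
tokens = 0
stream = client.chat.completions.create(
    model="meta-llama/Llama-2-70b-chat-hf",  # assumed model identifier
    messages=[{"role": "user", "content": "Explain speculative decoding in two sentences."}],
    max_tokens=256,
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        tokens += 1  # rough: treat each streamed content chunk as one token
elapsed = time.time() - start
print(f"~{tokens / elapsed:.0f} tokens/sec over {tokens} tokens")
```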
Today, we’re releasing the Assistants API, our first step towards helping developers build agent-like experiences within their own applications. An assistant is a purpose-built AI that has specific instructions, leverages extra knowledge, and can call models and tools to perform tasks. The new Assistants API provides new capabilities such as Code...
New models and developer products announced at DevDay
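A hedged sketch of the "purpose-built AI" idea above: creating an assistant with its own instructions and the Code Interpreter tool through the SDK's beta endpoint (the name, instructions, and model are placeholders).

```python
from openai import OpenAI

client = OpenAI()

# An assistant bundles instructions, a model, and tools into a reusable object.
assistant = client.beta.assistants.create(
    name="Data Analyst",                      # placeholder name
    instructions="You analyze CSV files and answer questions with charts.",
    model="gpt-4-1106-preview",               # placeholder model
    tools=[{"type": "code_interpreter"}],     # lets the assistant write and run code
)
print(assistant.id)  # store this ID and reuse it when creating runs on threads
```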
Jailbroken & Offline Appliances: It’s becoming increasingly clear that we’ll be able to interact with everyday appliances and devices with natural language. As locally run LLMs become more efficient and powerful, the prospects of having a conversation with your coffee machine in the morning aren’t unreasonable. After all, who wants to tinker with...
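As a toy illustration of the locally run LLM angle, here is a sketch that asks a local model (served by Ollama on its default port) to turn a natural-language request into a structured appliance command; the coffee-machine "action" schema is purely hypothetical, and the model name is a placeholder.

```python
import json
import urllib.request

# Assumption: an Ollama server is running locally with a small instruction-tuned model pulled.
PROMPT = (
    "You control a coffee machine. Reply only with JSON like "
    '{"action": "brew", "size": "small|medium|large"}.\n'
    "User: make me a large coffee"
)

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({"model": "llama2", "prompt": PROMPT, "stream": False}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    reply = json.loads(response.read())["response"]

# e.g. {"action": "brew", "size": "large"} -> dispatch to the (hypothetical) machine API
print(reply)
```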