LLMs
How can we make interacting with conversational models feel more natural?
Every conversational interface to a language model adopts the same pattern:
A chat history sidebar, with each conversation lasting just a few turns
New sessions always begin in a brand-new thread
Every user query must elicit exactly one response
None of these assumptions...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
- Self-play is the idea that an agent can improve its gameplay by playing against slightly different versions of itself because it’ll progressively encounter more challenging situations. In the space of LLMs, it is almost certain that the largest portion of self-play will look like AI Feedback rather than competitive processes.
Nathan Lambert • The Q* hypothesis: Tree-of-thoughts reasoning, process reward models, and supercharging synthetic data
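One way to picture self-play-as-AI-feedback for LLMs: the current model samples candidate responses, a judge model scores them, and the winners become synthetic training data for the next round. The sketch below is hypothetical; the function names and scoring scheme are illustrative stand-ins, not a method from the linked post.

```python
# Hypothetical AI-feedback loop: sample candidate responses, have a
# judge model score them, and keep the winners as synthetic training
# data. `generate` and `judge_score` stand in for real model calls.
from typing import Callable

def ai_feedback_round(
    prompts: list[str],
    generate: Callable[[str], list[str]],
    judge_score: Callable[[str, str], float],
) -> list[tuple[str, str]]:
    dataset = []
    for prompt in prompts:
        candidates = generate(prompt)  # n samples from the current model
        best = max(candidates, key=lambda c: judge_score(prompt, c))
        dataset.append((prompt, best))  # winner becomes a training pair
    return dataset  # fine-tune on this, then repeat with the improved model
```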
Menlo Ventures released a report on ‘The State of Generative AI in the Enterprise’ and found that adoption is trailing the hype. Details below:
Generative AI still represents less than 1% of cloud spend by surveyed enterprises, including just an 8% increase in 2023.
Safety and ROI continue to be prime concerns, and the tangible advantages of being...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
Deploying a Generative AI model requires more than a VM with a GPU. It normally includes:
- Container Service: Most often Kubernetes, to run LLM serving solutions like Hugging Face Text Generation Inference or vLLM.
- Compute Resources: GPUs for running models, CPUs for management services.
- Networking and DNS: Routing traffic to the appropriate...
Understanding the Cost of Generative AI Models in Production
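To make that serving stack concrete, here is a minimal client sketch against a model deployed behind Hugging Face Text Generation Inference; the payload shape follows TGI's `/generate` API, while the endpoint URL is a placeholder for whatever your cluster's ingress and DNS resolve to.

```python
# Minimal client sketch for a model served via Hugging Face Text
# Generation Inference (TGI). The host below is a placeholder for
# whatever your Kubernetes ingress / DNS routes to.
import requests

TGI_URL = "http://llm.internal.example.com/generate"  # hypothetical endpoint

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens, "temperature": 0.7},
    }
    resp = requests.post(TGI_URL, json=payload, timeout=30)
    resp.raise_for_status()
    # TGI returns {"generated_text": "..."} for a single request
    return resp.json()["generated_text"]

if __name__ == "__main__":
    print(generate("Explain KV caching in one sentence."))
```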
TorchMultimodal (Beta Release)
Introduction
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. It provides:
- A repository of modular and composable building blocks (models, fusion layers, loss functions, datasets and utilities).
- A repository of examples that show how to combine these building blocks...
facebookresearch • GitHub - facebookresearch/multimodal at a33a8b888a542a4578b16972aecd072eff02c1a6
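To see what "modular and composable building blocks" means in practice, here is a generic late-fusion sketch in plain PyTorch; the class and its wiring illustrate the composition pattern, and the names are invented, not TorchMultimodal's actual API.

```python
# Illustrative late-fusion module in plain PyTorch, sketching the
# composition pattern TorchMultimodal is built around. Names here
# are invented; they are not the library's actual API.
import torch
from torch import nn

class LateFusionClassifier(nn.Module):
    """Swap-in encoders feeding a simple fusion layer and a task head."""

    def __init__(self, image_encoder: nn.Module, text_encoder: nn.Module,
                 embed_dim: int, num_classes: int):
        super().__init__()
        self.image_encoder = image_encoder
        self.text_encoder = text_encoder
        # Concatenation fusion; richer fusion layers can be dropped in here
        self.head = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, image: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.image_encoder(image), self.text_encoder(text)], dim=-1
        )
        return self.head(fused)

# Stand-in encoders that already emit embed_dim-sized embeddings
model = LateFusionClassifier(nn.Identity(), nn.Identity(),
                             embed_dim=16, num_classes=2)
logits = model(torch.randn(4, 16), torch.randn(4, 16))  # shape: (4, 2)
```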
When it comes to identifying where generative AI can make an impact, we dig into challenges that commonly:
- Involve analysis, interpretation, or review of unstructured content (e.g. text) at scale
- Require massive scaling that may be otherwise prohibitive due to limited resources
- Would be challenging for rules-based or traditional ML approaches
Developing Rapidly with Generative AI
We consider these aspects of our problem:
- Latency: How fast does the system need to respond to user input?
- Task Complexity: What level of understanding is required from the LLM? Is the input context and prompt highly domain-specific?
- Prompt Length: How much context needs to be provided for the LLM to do its task?
- Quality: What is the acceptable...
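A toy sketch of how these axes might feed a model-routing decision; the thresholds and model names below are invented purely for illustration, not a recommendation from the source.

```python
# Toy router over the axes above: latency, task complexity, prompt
# length, and quality bar. Thresholds and model names are invented
# purely for illustration.
from dataclasses import dataclass

@dataclass
class TaskProfile:
    max_latency_ms: int    # latency budget for a response
    domain_specific: bool  # task complexity: niche context and prompting?
    prompt_tokens: int     # how much context the task needs
    min_quality: float     # acceptable quality on our eval set, 0..1

def pick_model(task: TaskProfile) -> str:
    if task.max_latency_ms < 500 and task.min_quality < 0.8:
        return "small-fast-model"         # cheap, low latency
    if task.prompt_tokens > 8_000:
        return "long-context-model"       # needs a large context window
    if task.domain_specific:
        return "fine-tuned-domain-model"  # specialized understanding
    return "large-general-model"          # default: maximize quality

print(pick_model(TaskProfile(300, False, 1_200, 0.7)))  # -> small-fast-model
```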