LLMs
- Query the RAG store anyway and let the LLM itself choose whether to use the RAG context or its built-in knowledge
- Query the RAG store but only provide the result to the LLM if it meets some relevance threshold (i.e., embedding distance) to the question
- Run the LLM both on its own and with the RAG response, then use a heuristic (or another LLM) to pick the better answer
r/LocalLLaMA - Reddit
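The relevance-gating option above is straightforward to prototype: embed the question and the retrieved chunks, and only include chunks whose similarity clears a threshold. A minimal sketch, assuming sentence-transformers for embeddings; retrieve_chunks() and call_llm() are hypothetical placeholders for the retriever and LLM client:

```python
# Only pass retrieved context to the LLM when it clears a relevance threshold;
# otherwise fall back to the model's built-in knowledge.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")
RELEVANCE_THRESHOLD = 0.45  # cosine-similarity cutoff, tuned on a held-out set

def answer(question: str, retrieve_chunks, call_llm) -> str:
    chunks = retrieve_chunks(question)  # top-k results from the vector store
    q_emb = embedder.encode(question, convert_to_tensor=True)
    c_embs = embedder.encode(chunks, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, c_embs)[0]  # one similarity score per chunk

    relevant = [c for c, s in zip(chunks, scores) if float(s) >= RELEVANCE_THRESHOLD]
    if relevant:
        prompt = ("Answer the question using the context below.\n\nContext:\n"
                  + "\n---\n".join(relevant)
                  + f"\n\nQuestion: {question}")
    else:
        prompt = question  # nothing relevant retrieved: rely on built-in knowledge
    return call_llm(prompt)
```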
Since we launched ChatGPT Enterprise a few months ago, early customers have expressed the desire for even more customization that aligns with their business. GPTs answer this call by allowing you to create versions of ChatGPT for specific use cases, departments, or proprietary datasets. Early customers like Amgen, Bain, and Square are already...
Introducing GPTs
This could be a business opportunity: building GPTs for companies.
Menlo Ventures released a report on ‘The State of Generative AI in the Enterprise’ and found that adoption is trailing the hype. Details below:
Generative AI still represents less than 1% of cloud spend by surveyed enterprises, including just an 8% increase in 2023.
Safety and ROI continue to be prime concerns, and the tangible advantages of being...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
Amplify Partners ran a survey of 800+ AI engineers to bring transparency to the AI engineering space. The report is concise, yet it offers a wealth of insight into the technologies and methods companies use to build AI products.
Highlights
👉 Top AI use cases are code intelligence, data extraction and workflow...
Paul Venuto • feed updates
We consider these aspects of our problem:
- Latency: How fast does the system need to respond to user input?
- Task Complexity: What level of understanding is required from the LLM? Are the input context and prompt highly domain-specific?
- Prompt Length: How much context needs to be provided for the LLM to do its task?
- Quality: What is the acceptable...
Developing Rapidly with Generative AI
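Those four aspects often translate directly into a model-routing heuristic: send short, simple, latency-bound requests to a small model and everything else to a larger one. A hypothetical sketch; the model names, thresholds, and Request fields are illustrative, not from the source:

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    domain_specific: bool   # task-complexity proxy
    max_latency_ms: int     # latency budget from the product requirement
    min_quality: float      # acceptable quality, e.g. an offline eval score in [0, 1]

def route(req: Request) -> str:
    long_prompt = len(req.prompt) > 8_000      # crude prompt-length proxy
    tight_latency = req.max_latency_ms < 1_000
    if req.domain_specific or req.min_quality > 0.9 or long_prompt:
        return "large-model"    # slower and costlier, but higher quality
    if tight_latency:
        return "small-model"    # fast and cheap for simple, latency-bound tasks
    return "medium-model"
```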
- Mistral AI shows a promising alternative to the GPT-3.5 model using prompt engineering.
- Mistral AI can be used where high volume and faster processing times are required at very little cost.
- Mistral AI can be used as a pre-filter before GPT-4 to reduce cost, e.g. to filter down search results.
Mistral 7B is 187x cheaper compared to GPT-4
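The pre-filtering point is essentially a model cascade: the cheap model discards obviously irrelevant results so that only a small set reaches GPT-4. A sketch under those assumptions; cheap_llm() and gpt4() are hypothetical wrappers around whichever clients you use:

```python
def cascade(question: str, candidates: list[str], cheap_llm, gpt4) -> str:
    # Stage 1: a cheap model (e.g. Mistral 7B) filters the candidate documents.
    kept = []
    for doc in candidates:
        verdict = cheap_llm(
            f"Question: {question}\nDocument: {doc}\n"
            "Is this document relevant to the question? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            kept.append(doc)

    # Stage 2: only the filtered (much smaller) set reaches the expensive model.
    context = "\n---\n".join(kept) if kept else "No relevant documents were found."
    return gpt4(f"Answer the question using these documents:\n{context}\n\nQuestion: {question}")
```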
Why is Discord such a good GTM for AI applications?
Text interface. Most users are just generating images, videos, and audio in these Discord servers. Prompts are easily expressible in simple text commands. It’s why we’ve seen image generation strategies like Midjourney (all-in-one) flourish in Discord while more raw diffusion models haven’t grown...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
The Gemini API context caching feature is designed to reduce the cost of requests that contain repeat content with high input token counts.
When to use context caching
Context caching is particularly well suited to scenarios where a substantial initial context is referenced repeatedly by shorter requests. Consider using context caching for use cases...
Context caching guide | Google AI for Developers | Google for Developers
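The pattern looks roughly like this with the google-generativeai Python SDK as documented at the time of writing; the model name, TTL, and manual_text variable are illustrative, and exact module and method names may differ between SDK versions:

```python
import datetime
import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")

# Cache the large, repeatedly referenced context once (it must exceed the
# model's minimum token count for caching).
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    display_name="product-manual",
    system_instruction="Answer questions using the attached manual.",
    contents=[manual_text],                 # the substantial initial context
    ttl=datetime.timedelta(minutes=30),
)

# Subsequent short requests reuse the cached tokens instead of resending them.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content("What does the manual say about rate limits?")
print(response.text)
```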
When we deliver a model, we make sure we don't exceed X seconds of latency in our API. Before even getting into the performance of LLMs for classification, I can tell you that with currently available tech they are just infeasible.
LinuxSpinach:
^ This. And especially classification as a task, because businesses don't want to pay LLM...
r/MachineLearning - Reddit
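For comparison, this is the kind of classical supervised classifier the thread is arguing for: once trained, it classifies in well under a millisecond per example on CPU. An illustrative scikit-learn sketch with toy data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; in practice this would be thousands of labelled examples.
texts = ["refund my order", "how do I reset my password", "cancel my subscription"]
labels = ["billing", "account", "billing"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["please reset my password"]))  # ['account']
```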
We're doing NER on hundreds of millions of documents in a specialised niche. LLMs are terrible for this: slow, expensive, and horrifyingly inaccurate, even with agents, Pydantic parsing and the like. Supervised methods are the way to go. Hell, I'd take an old-school rule-based approach over LLMs for this.
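A pretrained supervised NER pipeline such as spaCy's runs locally and processes documents far faster and more cheaply than LLM calls; for a specialised niche you would fine-tune the NER component on labelled data. A minimal sketch (the small English model and example text are illustrative):

```python
import spacy

# Load a pretrained pipeline; swap in a fine-tuned, domain-specific model
# for niche entity types.
nlp = spacy.load("en_core_web_sm")

doc = nlp("Acme Corp acquired the startup for $2 billion in March 2023.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Acme Corp" ORG, "$2 billion" MONEY, "March 2023" DATE
```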