LLMs
How enterprises are using open source LLMs: 16 examples.
Many use Llama-2: Brave, Wells Fargo, IBM, The Grammy Awards, Perplexity, Shopify, LyRise, Niantic....
Quote: “A lot of customers are asking themselves: Wait a second, why am I paying for a super large model that knows very little about my business? Couldn’t I just use one of these open-source...”
Paul Venuto • feed updates
Clean & curate your data with LLMs
databonsai is a Python library that uses LLMs to perform data cleaning tasks.
Features
- Suite of tools for data processing using LLMs including categorization, transformation, and extraction
- Validation of LLM outputs
- Batch processing for token savings
- Retry logic with exponential backoff for handling rate limits and...
databonsai • GitHub - databonsai/databonsai: clean & curate your data with LLMs.
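The retry-with-exponential-backoff bullet above can be sketched as follows. This is an illustrative pattern, not databonsai's actual API; the function name, `TimeoutError` stand-in for a rate-limit error, and delay constants are all assumptions:

```python
import random
import time


def with_backoff(call, max_retries=5, base_delay=1.0, retryable=(TimeoutError,)):
    """Retry `call` with exponential backoff plus jitter.

    `call` stands in for any LLM API request that may raise a
    rate-limit or transient error (modeled here as TimeoutError).
    """
    for attempt in range(max_retries):
        try:
            return call()
        except retryable:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Sleep 1s, 2s, 4s, ... plus jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

Jitter matters in batch pipelines: without it, every worker that hit the same rate limit retries at the same instant and trips it again.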
- With prompt engineering, Mistral AI is a promising alternative to GPT-3.5.
- Mistral AI suits high-volume use cases that need faster processing at very little cost.
- Mistral AI can serve as a pre-filter ahead of GPT-4 to reduce cost, e.g. to narrow down search results.
Mistral 7B is 187x cheaper compared to GPT-4
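The pre-filtering idea above can be sketched as a two-stage cascade. Both model calls are hypothetical placeholders (the cheap scorer is stubbed with keyword overlap so the sketch runs without any API), not a real Mistral or GPT-4 client:

```python
def cheap_relevance_score(query, doc):
    """Placeholder for a Mistral 7B call that scores relevance in [0, 1].

    Stubbed with keyword overlap so the sketch runs offline.
    """
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / max(len(q), 1)


def expensive_answer(query, docs):
    """Placeholder for a GPT-4 call over the filtered candidates."""
    return {"query": query, "context": docs}


def cascade_search(query, docs, threshold=0.5):
    # Stage 1: the cheap model prunes the candidate set.
    kept = [d for d in docs if cheap_relevance_score(query, d) >= threshold]
    # Stage 2: the expensive model only sees survivors, cutting token spend.
    return expensive_answer(query, kept)
```

Since GPT-4 now sees only the surviving documents, the expensive stage's token cost scales with the filtered set rather than the full corpus, which is where the savings come from.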
Two ways for an AI company to protect itself from competition: (a) depend not just on AI but also deep domain knowledge about a particular field, (b) have a very close relationship with the end users.
Paul Graham • Tweet
We went to OpenAI's office in San Francisco yesterday to ask them all the questions we had on Quivr (YC W24), here is what we learned:
1. Their office is super nice & you can eat damn good croissant in SF!
2. We can expect GPT-3.5 & GPT-4 prices to keep going down
3. A lot of people are using the Assistants API to build their use cases
4. It costs $2M to...
Paul Venuto • feed updates
We generally lean towards picking more advanced commercial LLMs to quickly validate our ideas and obtain early feedback from users. Although they may be expensive, the general idea is that if problems can't be adequately solved with state-of-the-art foundational models like GPT-4, then more often than not, those problems may not be addressable...
Developing Rapidly with Generative AI
What’s the best way for an end user to organize and explore millions of latent space features?
I’ve found tens of thousands of interpretable features in my experiments, and frontier labs have demonstrated results with a thousand times more features in production-scale models. No doubt, as interpretability techniques advance, we’ll see feature maps...