LLMs
To train LLMs, you need data that is:
Large — Sufficiently large LMs require trillions of tokens.
Clean — Noisy data reduces performance.
Diverse — Data should come from different sources and different knowledge bases.
What does clean data look like?
You can de-duplicate data with simple heuristics. The most basic would be removing any exact duplicates…
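A minimal sketch of that most basic heuristic, assuming documents are plain strings: hash each document and keep only the first occurrence of each exact duplicate.

```python
import hashlib

def deduplicate_exact(documents):
    """Keep the first occurrence of each exact-duplicate document."""
    seen = set()
    unique = []
    for doc in documents:
        # Hash the whitespace-trimmed text so identical documents collide.
        digest = hashlib.sha256(doc.strip().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["The cat sat.", "The cat sat.", "A dog ran."]
print(deduplicate_exact(docs))  # ['The cat sat.', 'A dog ran.']
```

Real pipelines go further (near-duplicate detection with MinHash or n-gram overlap), but exact hashing is the cheapest first pass over trillions of tokens.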
The OpenAI Assistants API offers more than a simple prompt-sharing interface; it provides a sophisticated framework for AI interactions. It allows for persistent conversation sessions with automatic context management (Threads), structured interactions (Messages and Runs), integration with various tools for enhanced capabilities, customization…
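To illustrate the Thread → Message → Run lifecycle described above, here is a toy in-memory model — not the real OpenAI SDK; every class and function name below is a hypothetical stand-in for the concepts, nothing more.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str

@dataclass
class Thread:
    """A persistent conversation session: context accumulates across runs."""
    messages: list = field(default_factory=list)

    def add(self, role, content):
        self.messages.append(Message(role, content))

def run(thread, model_fn):
    """A 'Run' applies the model to the full thread and appends the reply."""
    reply = model_fn(thread.messages)
    thread.add("assistant", reply)
    return reply

# Stand-in for a real model call: reports how much context it was given.
echo_model = lambda msgs: f"seen {len(msgs)} message(s)"

t = Thread()
t.add("user", "Hello")
print(run(t, echo_model))  # seen 1 message(s)
t.add("user", "Follow-up")
print(run(t, echo_model))  # seen 3 message(s)
```

The point of the toy: the caller never re-sends prior context — the thread carries it, which is the "automatic context management" the snippet refers to.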
However, a key risk with several of these startups is the potential lack of a long-term moat. It is difficult to read too much into this given the stage of these startups and the limited public information available, but it is not difficult to poke holes in their long-term defensibility. For example:
- If a startup is built on the premise of taking base
AI Startup Trends: Insights from Y Combinator’s Latest Batch
- You have access to a proprietary asset (like data) that others don’t have easy access to. In our “write job postings” example, perhaps you have a corpus of thousands of job postings, including outcome scores indicating how well each one performed. You could use this data to create better job postings. Others don’t have ready access to this data. Note: The
Dharmesh Shah • How To Build a Defensible A.I. Startup
Protecting LLM products:
(1) Is hard to bootstrap. This already hints at existing customers, or you need to get a bunch of your customers to co-develop (the insurance model → companies pooling their data to solve a problem they all share). This runs into a number of issues: the competitive drive of the companies, and data privacy and security.
(2) Reserved for existing companies. This is the co-pilot model.
(3) This might be the most sustainable one, but it is also the hardest. I have not seen anything in that direction yet besides OpenAI.
- Multiple indices. Splitting the document corpus up into multiple indices and then routing queries based on some criteria. This means that the search is over a much smaller set of documents rather than the entire dataset. Again, it is not always useful, but it can be helpful for certain datasets. The same approach works with the LLMs themselves.
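A minimal sketch of the routing idea — the keyword-based router and the index names are illustrative assumptions, and real systems would route with a classifier or the LLM itself. Each query is dispatched to one small index instead of searching the whole corpus:

```python
# Toy indices: topic -> documents. In practice each would be a separate
# vector or keyword index over a slice of the corpus.
indices = {
    "billing": ["Invoice FAQ", "Refund policy"],
    "tech":    ["API reference", "Error codes"],
}

def route(query):
    """Pick an index via crude keyword criteria; None means search all."""
    q = query.lower()
    if "refund" in q or "invoice" in q:
        return "billing"
    if "api" in q or "error" in q:
        return "tech"
    return None

def search(query):
    key = route(query)
    # Routed queries scan one small index; unrouted ones fall back to all.
    pool = indices[key] if key else [d for docs in indices.values() for d in docs]
    return [d for d in pool if any(w in d.lower() for w in query.lower().split())]

print(search("refund policy"))  # ['Refund policy']
```

The win is that search cost scales with the routed index, not the full corpus; the same dispatch pattern applies to routing between specialized LLMs.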
Matt Rickard • Improving RAG: Strategies
The next-generation command line.
The source of truth for your team’s secrets, scripts, and SSH credentials.