LLMs
- You have access to a proprietary asset (like data) that others don't have easy access to. In our "write job postings" example, perhaps you have a corpus of thousands of job postings including some outcome scores (how well they performed). You could use this data to create better job postings. Others don't have ready access to this data. Note: The a...
Dharmesh Shah • How To Build a Defensible A.I. Startup
Protecting LLM products:
(1) Is hard to bootstrap. This already points to existing customers, or to getting a group of your customers to co-develop (insurance model → companies pooling their data to solve a problem they all share). This runs into several issues: the companies' competitive instincts, and data privacy and security concerns.
(2) Reserved for existing companies. This is the co-pilot model.
(3) This might be the most sustainable one, but it is also the hardest one. I have not seen anything in that direction yet besides OpenAI.
One interesting thing about LLMs is that they can actually recover (and without error loops). You can have a step that doesn't work right, and a later step can use common-sense knowledge to ignore some of the missing results, conflicting information, etc. One of the problems with developing with LLMs is that the machine will often cover up bugs...
Ask HN: What are some actual use cases of AI Agents right now? | Hacker News
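A concrete shape of that recovery property, as a sketch: an extraction step fails, and the downstream prompt simply flags the gap rather than aborting the pipeline. Here call_llm is a hypothetical stand-in for whatever client you use:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned reply for the demo."""
    return f"[model reply to: {prompt[:40]}...]"

def extract_salary(posting: str) -> str | None:
    try:
        return call_llm(f"Extract the salary range from:\n{posting}")
    except Exception:
        return None  # step failed; don't abort the pipeline

def summarize(posting: str) -> str:
    salary = extract_salary(posting)
    # The later step tolerates the gap: the model's common sense works
    # around the missing field instead of propagating the error.
    note = salary or "unknown (extraction failed; note this in the summary)"
    return call_llm(f"Summarize this posting. Salary: {note}\n{posting}")

print(summarize("Senior Rust engineer, remote, equity."))
```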
Memory Considerations
Since co-occurrence matrices are square, they grow quadratically with the number of entities being embedded. For 50k entities and a 32-bit data format, a dense matrix is already at 10GB; 100k entities puts it at 40GB.
If you are trying to embed even more entities than that, or have limited RAM available, you may need to use a...
What I've Learned Building Interactive Embedding Visualizations
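A back-of-the-envelope check on those numbers, plus the standard way out when most pairs never co-occur (presumably where the truncated sentence was headed): a sparse matrix. A minimal NumPy/SciPy sketch:

```python
import numpy as np
from scipy import sparse

# Dense co-occurrence matrix: memory grows quadratically with entity count.
for n in (50_000, 100_000):
    gb = n * n * 4 / 1e9  # float32 = 4 bytes per cell
    print(f"{n:>7} entities -> {gb:.0f} GB dense")

# If most entity pairs never co-occur, a sparse matrix stores only the
# nonzero entries (plus index overhead). 1M observed pairs at n=100k:
n = 100_000
rows = np.random.randint(0, n, size=1_000_000)
cols = np.random.randint(0, n, size=1_000_000)
vals = np.ones(len(rows), dtype=np.float32)
cooc = sparse.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
mb = (cooc.data.nbytes + cooc.indices.nbytes + cooc.indptr.nbytes) / 1e6
print(f"sparse: ~{mb:.0f} MB instead of 40 GB")
```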
OpenAI is treating its new marketplace seriously now: The brand new GPT store will come with REVENUE SHARING.... (missing in the Plugins launch)
and launching a Stateful Assistants API:
- Persistent Threads (/api/openai/threads)
- Built in Retrieval (chunking etc done for you)
- Code Interpreter (RIP Adv Data Analysis?)
- Speech to Text and Text to Speech...
swyx • Tweet
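A minimal sketch of what that Assistants API looked like at launch in the Python SDK. These are beta endpoints and the surface has since evolved (the retrieval tool was later renamed, for instance), so treat the names as launch-era:

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assistant with the built-in retrieval tool (chunking/indexing done server-side).
assistant = client.beta.assistants.create(
    name="docs-helper",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
)

# Threads are persistent: store thread.id and reuse it instead of resending history.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Summarize what you know so far."
)

# A run executes the assistant against the thread; poll until it completes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

print(client.beta.threads.messages.list(thread_id=thread.id).data[0].content)
```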
Disruptive innovation comes in two flavors: (1) New-market disruption, where the company creates and claims a new segment in an existing market by catering to an underserved customer base, or (2) Low-end disruption, in which a company uses a low-cost business model to enter at the bottom of an existing market and claim a segment.
Copilots don't create...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
Principles for growable tools
There are three critical pieces to building a tool that can grow around its users over time.
- Design around play. Sometimes I call this design around experimentation. Using the tool for day-to-day work should involve playing and experimenting with what's possible with the tool. Whether that's writing small programs to...
Beyond customization: build tools that grow with us | thesephist.com
a couple off the top of my head:
- LLM in the loop with preference optimization
- synthetic data generation
- cross modality "distillation" / dictionary remapping
- constrained decoding
r/MachineLearning - Reddit
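Of those, constrained decoding is the easiest to make concrete: at each decoding step you mask the logits so only tokens permitted by your grammar or whitelist can be sampled. A toy sketch, with the vocabulary and digit-only constraint invented for illustration:

```python
import numpy as np

# Toy vocabulary and a constraint: the model may only emit digits.
vocab = ["yes", "no", "0", "1", "2", "maybe"]
allowed = {i for i, tok in enumerate(vocab) if tok.isdigit()}

def constrained_sample(logits: np.ndarray) -> int:
    """Mask disallowed tokens to -inf, then sample from what's left."""
    mask = np.array([i in allowed for i in range(len(logits))])
    masked = np.where(mask, logits, -np.inf)
    probs = np.exp(masked - masked.max())
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

logits = np.array([3.0, 2.5, 1.0, 0.5, 0.2, 2.8])  # raw model scores
print(vocab[constrained_sample(logits)])  # always a digit, whatever the scores say
```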
Additional LLM paradigms beyond RAG
Matei Zaharia, Omar Khattab, Lingjiao Chen, et al. • The Shift From Models to Compound AI Systems
Deploying a Generative AI model requires more than a VM with a GPU. It normally includes:
- Container Service: Most often Kubernetes, to run LLM serving solutions like Hugging Face Text Generation Inference or vLLM.
- Compute Resources: GPUs for running models, CPUs for management services.
- Networking and DNS: Routing traffic to the appropriate service...
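Once a stack like that is running, the serving layer is usually exposed as a plain HTTP endpoint; vLLM, for instance, serves an OpenAI-compatible API. A minimal smoke test, with the in-cluster hostname and model name as assumptions:

```python
import requests

# Hypothetical in-cluster DNS name routed by the networking layer above.
BASE_URL = "http://llm-serving.internal:8000/v1"

resp = requests.post(
    f"{BASE_URL}/completions",
    json={
        "model": "mistralai/Mistral-7B-Instruct-v0.2",
        "prompt": "Say hello.",
        "max_tokens": 32,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```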