LLMs
For the deployment side of things, we found that the performance of our training process was quite slow, especially when you get into these large language models and train from scratch. MosaicML offers what's called programmatic optimization, which is not so much on the hardware side of things, but rather on the algorithmic side. Can you...
CB Insights • 2024 Tech Trends
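The quote gestures at algorithmic rather than hardware speedups. As a rough, generic illustration (plain PyTorch, not MosaicML's product or API), the sketch below applies two such techniques, mixed precision and gradient accumulation; the model, data, and hyperparameters are placeholders.

```python
# Generic PyTorch sketch of algorithmic training speedups (not MosaicML's API):
# mixed-precision forward passes plus gradient accumulation across micro-batches.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))  # no-op on CPU
accum_steps = 4  # one optimizer update per 4 micro-batches

batches = [(torch.randn(8, 512), torch.randn(8, 512)) for _ in range(16)]  # toy data

for step, (x, y) in enumerate(batches):
    x, y = x.to(device), y.to(device)
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = nn.functional.mse_loss(model(x), y) / accum_steps
    scaler.scale(loss).backward()              # accumulate scaled gradients
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)                 # unscale and apply the update
        scaler.update()
        optimizer.zero_grad(set_to_none=True)
```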
Right now, GPTs are the easiest way of sharing structured prompts, which are programs, written in plain English (or another language), that can get the AI to do useful things. I discussed creating structured prompts last week, and all the same techniques apply, but the GPT system makes structured prompts more powerful and much easier to create...
Ethan Mollick • Almost an Agent: What GPTs can do
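For concreteness, here is a hedged sketch of what a "program written in plain English" can look like when wired into an API call; the prompt text, roles, and model name are illustrative assumptions, not Mollick's actual GPT configuration.

```python
# Minimal sketch: a structured prompt sent as a system message via the openai
# client. Requires `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

STRUCTURED_PROMPT = """You are a patient writing tutor.
1. Ask the student for a draft paragraph.
2. Point out the single biggest weakness, with one concrete example.
3. Offer one revised sentence, then ask the student to attempt the next fix themselves.
Never rewrite the whole draft for them."""

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever model you have access to
    messages=[
        {"role": "system", "content": STRUCTURED_PROMPT},
        {"role": "user", "content": "Here's my draft: ..."},
    ],
)
print(response.choices[0].message.content)
```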
Document search and synthesis
Scores of organizations want to harness generative AI so employees can easily find the most relevant documents through improved search results and summaries. For example, your organization can reduce the time it takes employees to find answers to common HR- and process-related questions. Internal manuals and sites are...
Donna Schut • The Prompt: Takeaways from hundreds of conversations about generative AI - part 1 | Google Cloud Blog
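A toy sketch of the retrieve-then-summarize pattern described above, using embeddings and cosine similarity to pick the most relevant policy snippet before asking a model to answer from it. The documents, prompt, and model names are placeholders, not Google Cloud's implementation.

```python
# Embed internal policy snippets, retrieve the closest match to an employee
# question, and have a model answer from that context. Requires `openai`,
# `numpy`, and an OPENAI_API_KEY in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "Employees accrue 1.5 vacation days per month, capped at 30 days.",
    "Expense reports must be filed within 60 days of purchase.",
    "Parental leave is 16 weeks, paid, for all full-time employees.",
]

def embed(texts):
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in out.data])

doc_vecs = embed(docs)
question = "How long do I have to submit an expense report?"
q_vec = embed([question])[0]

# Cosine similarity picks the most relevant snippet.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
best = docs[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"Answer using only this policy text:\n{best}\n\nQuestion: {question}",
    }],
)
print(answer.choices[0].message.content)
```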
Here's my read on the situation:
* The TAM is massive, still so many businesses trying to figure out AI
* If you do deployments you’ll need to spend a lot of time hand-holding clients through scoping projects (not unlike other dev work) since the material is so new
* Lots of opportunity in education
* The hard part isn’t the expertise, it’s distribution...
Greg Kamradt • Tweet
Ensuring availability during peak traffic by maintaining all GPU instance types could lead to prohibitively high costs. To avoid the financial strain of idle instances, we implemented a “standby instances” mechanism. Rather than preparing for the maximum potential load, we maintained a calculated number of standby instances that match the...
Sean Sheng • Scaling AI Models Like You Mean It
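As a back-of-the-envelope illustration of the standby idea (not BentoML's actual algorithm), the sketch below sizes a warm pool from the gap between baseline and observed peak traffic; the formula, headroom factor, and numbers are assumptions.

```python
# Size a standby pool of warm GPU instances to absorb the surge above baseline,
# instead of provisioning for the absolute peak. Illustrative only.
import math

def standby_count(baseline_rps: float, p95_peak_rps: float,
                  rps_per_instance: float, headroom: float = 1.2) -> int:
    """Warm instances needed to cover the baseline-to-peak surge, with headroom."""
    surge = max(p95_peak_rps - baseline_rps, 0.0)
    return math.ceil(surge * headroom / rps_per_instance)

# e.g. baseline 40 req/s, observed p95 peak 100 req/s, 8 req/s per GPU instance
print(standby_count(baseline_rps=40, p95_peak_rps=100, rps_per_instance=8))  # -> 9
```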
Easily chunk complex documents the same way a human would.
Chunking documents is a challenging task that underpins any RAG system. High-quality results are critical to a successful AI application, yet most open-source libraries are limited in their ability to handle complex documents.
Open Parse is designed to fill this gap by providing a flexible,...
Filimoa • GitHub - Filimoa/open-parse: Improved file parsing for LLM’s
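A minimal usage sketch in the spirit of the project's quick-start; the `DocumentParser` class, `.parse()` call, and `.nodes` attribute are assumed from the README and may differ between versions, so check the repo before relying on them.

```python
# Parse a PDF into semantically coherent chunks ready for embedding in a RAG
# pipeline. Install with `pip install openparse`. The file path is hypothetical.
import openparse

parser = openparse.DocumentParser()
parsed = parser.parse("hr-handbook.pdf")

# Each node is a chunk (heading, paragraph, table, ...) as a human might split it.
for node in parsed.nodes:
    print(node)
```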
Two ways for an AI company to protect itself from competition: (a) depend not just on AI but also deep domain knowledge about a particular field, (b) have a very close relationship with the end users.
Paul Graham • Tweet
To do this, we employ a technique known as AI-assisted evaluation, alongside traditional metrics for measuring performance. This helps us pick the prompts that lead to better quality outputs, making the end product more appealing to users. AI-assisted evaluation uses best-in-class LLMs (like GPT-4) to automatically critique how well the AI's...
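A stripped-down sketch of that LLM-as-judge idea: a strong model scores a candidate answer against a rubric so competing prompts can be compared automatically. The rubric, score scale, and model name are assumptions for illustration, not the exact setup described above.

```python
# Ask a strong model to grade a candidate answer and return structured scores.
# Requires `pip install openai` and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

def judge(question: str, candidate_answer: str) -> dict:
    rubric = (
        "Score the answer from 1-5 for accuracy and 1-5 for helpfulness. "
        'Reply with JSON only: {"accuracy": n, "helpfulness": n, "critique": "..."}'
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": rubric},
            {"role": "user", "content": f"Question: {question}\nAnswer: {candidate_answer}"},
        ],
        response_format={"type": "json_object"},  # request parseable JSON
    )
    return json.loads(resp.choices[0].message.content)

print(judge("What is our refund window?", "Refunds are accepted within 30 days."))
```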