LLMs
Since we launched ChatGPT Enterprise a few months ago, early customers have expressed the desire for even more customization that aligns with their business. GPTs answer this call by allowing you to create versions of ChatGPT for specific use cases, departments, or proprietary datasets. Early customers like Amgen, Bain, and Square are already...
Introducing GPTs
This could be a business opportunity: building GPTs for companies.
Clean & curate your data with LLMs
databonsai is a Python library that uses LLMs to perform data cleaning tasks.
Features
- Suite of tools for data processing using LLMs including categorization, transformation, and extraction
- Validation of LLM outputs
- Batch processing for token savings
- Retry logic with exponential backoff for handling rate limits and...
databonsai • GitHub - databonsai/databonsai: clean & curate your data with LLMs.
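The batching and retry behavior described above is easy to picture in code. The sketch below is an illustrative stand-in, not databonsai's actual API: the categories, prompt, and model name are assumptions, and only the general pattern (batch one LLM call per chunk, validate the output, back off exponentially on rate limits) reflects the feature list.

```python
# Illustrative sketch only -- not databonsai's API. Assumes an OpenAI API key
# is configured and that the "gpt-4o-mini" model name is available.
import time
from openai import OpenAI, RateLimitError

client = OpenAI()
CATEGORIES = ["billing", "bug report", "feature request", "other"]

def categorize_batch(rows: list[str], max_retries: int = 5) -> list[str]:
    """Categorize a batch of rows in a single call, retrying with exponential backoff."""
    prompt = (
        f"Assign each line to one of {CATEGORIES}. "
        "Return one category per line, in order:\n" + "\n".join(rows)
    )
    for attempt in range(max_retries):
        try:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            labels = resp.choices[0].message.content.strip().splitlines()
            # Validate the LLM output before accepting it.
            if len(labels) == len(rows) and all(l in CATEGORIES for l in labels):
                return labels
            raise ValueError("LLM output failed validation")
        except (RateLimitError, ValueError):
            time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s, ...
    raise RuntimeError("categorization failed after retries")
```

Batching many rows into one prompt is what saves tokens: the fixed instruction overhead is paid once per chunk instead of once per row.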
Zerox OCR
A dead simple way of OCR-ing a document for AI ingestion. Documents are meant to be a visual representation after all. With weird layouts, tables, charts, etc. The vision models just make sense!
The general logic:
- Pass in a PDF (URL or file buffer)
- Turn the PDF into a series of images
- Pass each image to GPT and ask nicely for Markdown
- Aggregate the Markdown responses...
Tyler Maran • GitHub - getomni-ai/zerox: Zero shot pdf OCR with gpt-4o-mini
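The general logic above fits in a few lines. This is a minimal Python sketch of the described flow, not zerox's implementation (zerox ships its own package); it assumes pdf2image (with poppler installed), an OpenAI key, and the "gpt-4o-mini" model name.

```python
# Sketch of the flow: PDF -> page images -> vision model -> aggregated Markdown.
import base64, io
from pdf2image import convert_from_path
from openai import OpenAI

client = OpenAI()

def pdf_to_markdown(pdf_path: str) -> str:
    pages = convert_from_path(pdf_path)          # 1. turn the PDF into images
    chunks = []
    for page in pages:
        buf = io.BytesIO()
        page.save(buf, format="PNG")
        b64 = base64.b64encode(buf.getvalue()).decode()
        resp = client.chat.completions.create(   # 2. pass each image to GPT
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Convert this page to Markdown."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        chunks.append(resp.choices[0].message.content)
    return "\n\n".join(chunks)                   # 3. aggregate per-page Markdown
```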
To do this, we employ a technique known as AI-assisted evaluation, alongside traditional metrics for measuring performance. This helps us pick the prompts that lead to better quality outputs, making the end product more appealing to users. AI-assisted evaluation uses best-in-class LLMs (like GPT-4) to automatically critique how well the AI's...
Developing Rapidly with Generative AI
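In practice, AI-assisted evaluation usually means an LLM-as-judge loop: a strong model scores candidate outputs against a rubric, and the averaged scores are compared across prompts. The sketch below is a hedged illustration of that pattern, not the article's actual setup; the rubric wording and model name are assumptions.

```python
# LLM-as-judge sketch: score answers 1-5, then average scores per candidate prompt.
from openai import OpenAI

client = OpenAI()

def judge(question: str, answer: str) -> int:
    """Ask a strong model to rate an answer from 1 (poor) to 5 (excellent)."""
    rubric = (
        "Rate the answer to the question on a 1-5 scale for correctness, "
        "relevance, and clarity. Reply with the number only.\n\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": rubric}],
    )
    return int(resp.choices[0].message.content.strip())

def score_prompt(outputs: list[tuple[str, str]]) -> float:
    """Average judge scores over a small (question, answer) eval set."""
    return sum(judge(q, a) for q, a in outputs) / len(outputs)
```

Judge scores are noisy, which is why they sit alongside traditional metrics rather than replacing them.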
“I think a lot of people obviously want to talk about the sexy kind of new consumer applications. I would tell you that I think that the earliest and most significant effect that AI is going to have on our company is actually going to be as it relates to our developer productivity. Some of the tools that we’re seeing are going to allow our devs to...
Adam Huda • The Transformative Power of Generative AI in Software Development: Lessons from Uber's Tech-Wide Hackathon
Jailbroken & Offline Appliances: It’s becoming increasingly clear that we’ll be able to interact with everyday appliances and devices with natural language. As locally run LLMs become more efficient and powerful, the prospect of having a conversation with your coffee machine in the morning isn’t unreasonable. After all, who wants to tinker with...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
The quality of the dataset is 95% of everything. The remaining 5% is not ruining it with bad parameters.
After 500+ LoRAs made, here is the secret
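The "5%" is the handful of training knobs you can still get wrong. As a hedged illustration (not the post's recipe), the values below are common LoRA starting points expressed with Hugging Face peft; the base model name is a placeholder.

```python
# Typical LoRA hyperparameters via peft -- common defaults, not a prescription.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor, often 2x the rank
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # sanity-check how few weights are trained
```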
Amplify Partners ran a survey of 800+ AI engineers to bring transparency to the AI Engineering space. The report is concise, yet it offers a wealth of insight into the technologies and methods companies use to ship AI products.
Highlights
👉 Top AI use cases are code intelligence, data extraction and workflow...
Paul Venuto • feed updates
The multiple cantilevered AI overhangs:
Compute overhang. We have much more compute than we are using. Scale can go much further.
Idea overhang. There are many obvious research ideas and combinations of ideas that haven’t been tried in earnest yet.
Capability overhang. Even if we stopped all research now, it would take ten years to digest the new...