LLMs
When it comes to identifying where generative AI can make an impact, we dig into challenges that commonly:
- Involve analysis, interpretation, or review of unstructured content (e.g., text) at scale, as sketched below
- Require scaling that would otherwise be prohibitive due to limited resources
- Would be challenging for rules-based or traditional ML approaches
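To make the first criterion concrete, here is a minimal sketch of reviewing unstructured text at scale with an LLM, using OpenAI's Python SDK. The model name, prompt, and review_document() helper are illustrative assumptions, not prescriptions from the source.

```python
# Minimal sketch: LLM-based review of unstructured documents at scale.
# Model name, prompt, and review_document() are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_document(text: str) -> str:
    """Summarize one document and flag anything that needs human review."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever you use
        messages=[
            {"role": "system",
             "content": "Summarize this document and flag items needing review."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

documents = ["...unstructured text #1...", "...unstructured text #2..."]
reviews = [review_document(d) for d in documents]
```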
Developing Rapidly with Generative AI
Humans are bad at coming up with search queries. Humans are good at incrementally narrowing down options with a series of filters, and pointing where they want to go next. This seems obvious, but we keep building interfaces for finding information that look more like Google Search and less like a map.
All information tools have to give users some...
thesephist.com • Navigate, don't search
We’re in a capability overhang: the AI tech that already exists has huge potential impact, whether you engage or not, so get ahead by exploring.
The appropriate approach is pathfinding, which uses experiments to learn and, critically, artefacts to tell the organisation what to do next.
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
Easily chunk complex documents the same way a human would.
Chunking documents is a challenging task that underpins any RAG system. High-quality results are critical to a successful AI application, yet most open-source libraries are limited in their ability to handle complex documents.
Open Parse is designed to fill this gap by providing a flexible,...
Filimoa • GitHub - Filimoa/open-parse: Improved file parsing for LLM’s
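A minimal usage sketch, following the parsing pattern in the project's README (DocumentParser and .nodes are the names documented there; verify against the current repo):

```python
# Sketch: parsing a PDF into human-like chunks with Open Parse.
# The file path is illustrative; API names follow the project's README.
import openparse

parser = openparse.DocumentParser()
parsed = parser.parse("sample-docs/mobile-home-manual.pdf")

# Each node is a chunk suitable for embedding in a RAG pipeline.
for node in parsed.nodes:
    print(node)
```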
Mem0: The Memory Layer for Personalized AI
Mem0 provides a smart, self-improving memory layer for Large Language Models, enabling personalized AI experiences across applications.
Note: The Mem0 repository now also includes the Embedchain project. We continue to maintain and support Embedchain ❤️. You can find the Embedchain codebase in the embedchai...
GitHub - mem0ai/mem0: The memory layer for Personalized AI
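A short sketch of the pattern Mem0 enables, using the Memory class and add/search calls from the project's README (check the repo for the current signatures):

```python
# Sketch: storing and retrieving per-user memories with Mem0.
# API names follow the project's README; verify before relying on them.
from mem0 import Memory

m = Memory()

# Store a fact about a user; memories are scoped by user_id.
m.add("Alice prefers vegetarian recipes", user_id="alice")

# Later, retrieve relevant memories to personalize a response.
related = m.search("What should I cook for Alice?", user_id="alice")
print(related)
```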
How enterprises are using open source LLMs: 16 examples.
Many use Llama-2: Brave, Wells Fargo, IBM, The Grammy Awards, Perplexity, Shopify, LyRise, Niantic....
Quote: “A lot of customers are asking themselves: Wait a second, why am I paying for a super large model that knows very little about my business? Couldn’t I just use one of these open-source...
Paul Venuto • feed updates
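To ground the quote, a sketch of "just using one of these open-source models" locally via Hugging Face transformers. The checkpoint is Meta's gated Llama-2 model; the prompt and generation settings are illustrative assumptions.

```python
# Sketch: running an open-source Llama-2 chat model with transformers.
# The checkpoint is gated (requires Meta's license approval on the Hub);
# prompt and settings are illustrative.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",
    device_map="auto",
)

out = pipe("Summarize our Q3 support tickets in three bullets:", max_new_tokens=128)
print(out[0]["generated_text"])
```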
Self-play and look-ahead planning are the core components of deep RL that enabled successes like AlphaGo.
Self-play is the idea that an agent can improve its gameplay by playing against slightly different versions of itself, because it will progressively encounter more challenging situations. In the space of LLMs, it is almost certain that the largest portion...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
These two components might be some of the most important ideas to improve all of AI.
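A toy sketch of the self-play loop, under loud assumptions: Policy, play_match, and the skill update are hypothetical placeholders, not a real environment or RL algorithm. The point is the structure: the agent trains against a frozen snapshot of itself that is periodically refreshed.

```python
import copy
import random

# Toy self-play loop. Policy, play_match, and the skill update are
# hypothetical stand-ins for a real environment and RL update.

class Policy:
    def __init__(self):
        self.skill = 0.0

    def act(self) -> float:
        # Higher skill wins more often in this toy match-up.
        return self.skill + random.random()

def play_match(a: Policy, b: Policy) -> int:
    """Return +1 if a beats b, else -1."""
    return 1 if a.act() > b.act() else -1

agent = Policy()
opponent = copy.deepcopy(agent)
for step in range(1000):
    # Refresh the frozen snapshot: the "slightly different version of itself".
    if step % 100 == 0:
        opponent = copy.deepcopy(agent)
    result = play_match(agent, opponent)
    agent.skill += 0.01 * result  # crude stand-in for a gradient step
```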
- performance: it will improve your LLM's performance on given use cases (e.g., coding, extracting text, etc.). Mainly, the LLM will specialize in a given task (a specialist will always beat a generalist in its domain)
- control: you can refine how your model should behave on specific inputs and outputs, resulting in a more robust product
- modularization:...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
Motivation for finetuning
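As a concrete illustration of the specialization argument, a minimal LoRA fine-tuning setup with Hugging Face peft; the base model, rank, and target modules here are illustrative choices, not recommendations from the source.

```python
# Sketch: attaching LoRA adapters to a base model for task-specific
# fine-tuning. All hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # stand-in base model; use your preferred open model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                        # adapter rank
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the small adapters train
```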