LLMs
Overview
Loki is our open-source solution designed to automate the process of verifying factuality. It provides a comprehensive pipeline for dissecting long texts into individual claims, assessing their worthiness for verification, generating queries for evidence search, crawling for evidence, and ultimately verifying the claims. This tool is...
Libr-AI • GitHub - Libr-AI/OpenFactVerification: Open-source solution designed to automate the process of verifying factuality
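The pipeline Loki describes (decompose into claims, filter for check-worthiness, generate queries, retrieve evidence, verify) maps onto a small program. Below is a minimal sketch of that flow only; every function is a hypothetical placeholder, not Loki's actual API, and the GitHub repo documents the real interfaces.

```python
# Minimal sketch of a claim-verification pipeline; all stages are placeholders.
from dataclasses import dataclass, field

@dataclass
class ClaimResult:
    claim: str
    checkworthy: bool
    evidence: list = field(default_factory=list)
    verdict: str = "skipped"

def decompose(text: str) -> list:
    # Placeholder: treat each sentence as one claim; Loki uses an LLM for this step.
    return [s.strip() for s in text.split(".") if s.strip()]

def is_checkworthy(claim: str) -> bool:
    # Placeholder heuristic: skip opinions; Loki uses an LLM classifier here.
    return not claim.lower().startswith(("i think", "in my opinion"))

def make_queries(claim: str) -> list:
    # Placeholder: use the claim itself as the search query.
    return [claim]

def retrieve(query: str) -> list:
    # Placeholder for the evidence-crawling step (web search plus scraping).
    return [f"<search results for: {query}>"]

def judge(claim: str, evidence: list) -> str:
    # Placeholder for the LLM verification step.
    return "supported" if evidence else "not enough evidence"

def verify_text(text: str) -> list:
    results = []
    for claim in decompose(text):
        if not is_checkworthy(claim):
            results.append(ClaimResult(claim, False))
            continue
        evidence = [doc for q in make_queries(claim) for doc in retrieve(q)]
        results.append(ClaimResult(claim, True, evidence, judge(claim, evidence)))
    return results

print(verify_text("The Eiffel Tower is in Berlin. I think it is beautiful."))
```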
The context size of the input is too small when you want to analyse CSVs with thousands of rows, and embedding doesn't really work because it loses context.
r/LLMDevs - Reddit
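One common workaround for the problem described above is to stop pasting the whole file into a single prompt and instead run a map-reduce pass over row chunks. The sketch below assumes a hypothetical ask_llm helper standing in for whatever chat-completion client you use; chunking still loses cross-chunk context, so for truly global questions it often works better to have the model write pandas code against the full dataframe instead.

```python
# Map-reduce over row chunks instead of one giant prompt.
# `ask_llm` is a hypothetical stand-in for a real chat-completion call.
import pandas as pd

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in an actual API call (hosted or local model).
    return f"<answer based on {len(prompt)} prompt characters>"

def analyse_csv(path: str, question: str, rows_per_chunk: int = 200) -> str:
    df = pd.read_csv(path)
    partial_answers = []
    # Map step: ask the question about each manageable slice of rows.
    for start in range(0, len(df), rows_per_chunk):
        chunk = df.iloc[start:start + rows_per_chunk]
        partial_answers.append(ask_llm(
            f"{question}\n\nRows {start}-{start + len(chunk) - 1}:\n"
            f"{chunk.to_csv(index=False)}"
        ))
    # Reduce step: combine the per-chunk answers into one final answer.
    return ask_llm(f"{question}\n\nPartial answers:\n" + "\n".join(partial_answers))
```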
How enterprises are using open source LLMs: 16 examples.
Many use Llama-2: Brave, Wells Fargo, IBM, The Grammy Awards, Perplexity, Shopify, LyRise, Niantic....
Quote: “A lot of customers are asking themselves: Wait a second, why am I paying for a super large model that knows very little about my business? Couldn’t I just use one of these open-source...
Paul Venuto • feed updates
Disruptive innovation comes in two flavors: (1) New-market disruption, where the company creates and claims a new segment in an existing market by catering to an underserved customer base, or (2) Low-end disruption, in which a company uses a low-cost business model to enter at the bottom of an existing market and claim a segment.
Copilots don’t...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
So right now, LLMs (Large Language Models) are all the rage. But in the future, it’s possible that the way we get things done is by composing a mix of LLMs, SMMs (Small, Mighty Models), agents, and tools.
It’s what I call Cognitive Composition (because it sounds cool and I have a longtime love affair with alliteration).
This is how we...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
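As a toy illustration of that composition idea, here is a hedged sketch of a router that sends each request to the cheapest component that can handle it: a deterministic tool, a small model, or a large model. All of the components and the routing rules are hypothetical placeholders; in practice the router might itself be a model or an agent loop.

```python
# Toy "cognitive composition": route each request to the cheapest capable component.
def calculator(expr: str) -> str:
    return str(eval(expr, {"__builtins__": {}}))    # toy deterministic "tool"

def small_model(prompt: str) -> str:
    return f"<SMM answer to: {prompt}>"             # e.g. a local few-billion-parameter model

def large_model(prompt: str) -> str:
    return f"<LLM answer to: {prompt}>"             # e.g. a hosted frontier model

def looks_like_arithmetic(task: str) -> bool:
    return bool(task.strip()) and all(c in "0123456789+-*/(). " for c in task)

def compose(task: str) -> str:
    # Naive routing rules stand in for a learned router or a full agent loop.
    if looks_like_arithmetic(task):
        return calculator(task)          # a deterministic tool beats any model here
    if len(task.split()) < 12:
        return small_model(task)         # short, simple request goes to the small model
    return large_model(task)             # everything else goes to the big model

print(compose("2 + 2 * 10"))
print(compose("What's the capital of France?"))
print(compose("Summarize the attached quarterly report and draft an email to the leadership team highlighting the three biggest risks."))
```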
Menlo Ventures released a report on ‘The State of Generative AI in the Enterprise’ and found that adoption is trailing the hype. Details below:
Generative AI still represents less than 1% of cloud spend by surveyed enterprises, including just an 8% increase in 2023.
Safety and ROI continue to be prime concerns, and the tangible advantages of being...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
Study finds RLHF reduces LLM creativity and output variety: A new research paper posted in /r/LocalLLaMA shows that while alignment techniques like RLHF reduce toxic and biased content, they also limit the creativity of large language models, even in contexts unrelated to safety.
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
pair-preference-model-LLaMA3-8B by RLHFlow: Really strong reward model, trained to take in two inputs at once; it is the top open reward model on RewardBench (beating one of Cohere’s). See the usage sketch below.
DeepSeek-V2 by deepseek-ai (21B active, 236B total param.): Another strong MoE base model from the DeepSeek team. Some people are questioning the very high MMLU...
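Returning to the pair-preference reward model above: because it scores two candidate responses in a single pass, a typical way to query it is to put both candidates in one prompt and compare the logits of the "A" and "B" answer tokens. The sketch below assumes a generic transformers causal-LM interface and a made-up comparison template; the actual prompt format for RLHFlow/pair-preference-model-LLaMA3-8B is specified on its model card.

```python
# Hedged sketch: score two candidate responses with a pairwise preference model.
# The comparison template below is a stand-in, not RLHFlow's documented format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RLHFlow/pair-preference-model-LLaMA3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "What is the capital of France?"
response_a = "The capital of France is Paris."
response_b = "France's capital is Lyon."

comparison = (
    f"[Prompt]\n{prompt}\n\n[Response A]\n{response_a}\n\n"
    f"[Response B]\n{response_b}\n\nWhich response is better? Answer A or B."
)
inputs = tokenizer(comparison, return_tensors="pt")

with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]

# The preference score is the relative probability of answering "A" vs "B".
id_a = tokenizer.convert_tokens_to_ids("A")
id_b = tokenizer.convert_tokens_to_ids("B")
prob_a = torch.softmax(next_token_logits[[id_a, id_b]], dim=-1)[0].item()
print(f"P(response A preferred) ~= {prob_a:.3f}")
```

The contrast with a standard scalar reward model is that the latter scores each response separately; the pairwise setup compares the two candidates directly in one forward pass, which is what "two inputs at once" refers to.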
