LLMs
there's no reason to build any kind of software product these days that doesn't have a significant UX/domain-knowledge component
Discord - A New Way to Chat with Friends & Communities
"I think a lot of people obviously want to talk about the sexy kind of new consumer applications. I would tell you that I think that the earliest and most significant effect that AI is going to have on our company is actually going to be as it relates to our developer productivity. Some of the tools that we're seeing are going to allow our devs to ..."
Adam Huda • The Transformative Power of Generative AI in Software Development: Lessons from Uber's Tech-Wide Hackathon
Amplify Partners ran a survey of 800+ AI engineers to bring transparency to the AI-engineering space. The report is concise, yet it provides a wealth of insight into the technologies and methods companies use to implement AI products.
Highlights
- Top AI use cases are code intelligence, data extraction and workflow a...
Feed | LinkedIn
we're in a capability overhang - the AI tech that already exists has huge potential impact, whether you engage or not, so get ahead by exploring
the appropriate approach is pathfinding, which uses experiments to learn and, critically, artefacts to tell the organisation what to do next.
Shortwave - rajhesh.panchanadhan@gmail.com [Gmail alternative]
The quality of the dataset is 95% of everything. The remaining 5% is not ruining it with bad parameters.
After 500+ LoRAs made, here is the secret
Announcing Together Inference Engine - the fastest inference available
November 13, 2023 • By Together
The Together Inference Engine is multiple times faster than any other inference service, with 117 tokens per second on Llama-2-70B-Chat and 171 tokens per second on Llama-2-13B-Chat

Today we are announcing Together Inference Engine, the world's fast...
Announcing Together Inference Engine - the fastest inference available
The context size of the input is too small when you want to analyse CSVs with thousands of rows, and embeddings don't really work because they lose context.
r/LLMDevs - Reddit
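A common workaround for the context-window limit, not something prescribed in the thread itself, is to split the CSV into slices that each fit the window, repeating the header row in every slice so each one stands alone. A minimal sketch using only the standard library (the function name and chunk size are assumptions for illustration):

```python
import csv
import io

def chunk_csv(text: str, rows_per_chunk: int = 100) -> list[str]:
    """Split a CSV string into chunks of at most rows_per_chunk data rows,
    repeating the header row in every chunk so each slice is self-describing."""
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    chunks = []
    for i in range(0, len(data), rows_per_chunk):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(header)                       # keep column names in every slice
        writer.writerows(data[i:i + rows_per_chunk])  # one window-sized batch of rows
        chunks.append(buf.getvalue())
    return chunks
```

Each chunk can then be sent as a separate prompt and the per-chunk answers aggregated afterwards, map-reduce style; this trades one long prompt for several short ones rather than solving the lost-context problem outright.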
- Service Deployment - Ray Serve (https://lnkd.in/eAV-Y6RN)
- Data Transformation - Ray Data (https://lnkd.in/e7wYmenc)
- LLM Integration - AIConfig (https://lnkd.in/esvH5NQa)
- Vector Database - Weaviate (https://weaviate.io/)
- Supervised LLM Fine-Tuning - Hugging Face TRL (https://lnkd.in/e8_QYF-P)
- LLM Observability - Weights & Biases Tra...