LLMs
We generally lean towards picking more advanced commercial LLMs to quickly validate our ideas and obtain early feedback from users. Although they may be expensive, the general idea is that if problems can't be adequately solved with state-of-the-art foundational models like GPT-4, then more often than not, those problems may not be addressable... See more
Developing Rapidly with Generative AI
Here's my read on the situation:
* The TAM is massive, still so many businesses trying to figure out AI
* If you do deployments you’ll need to spend a lot of time hand-holding clients through scoping projects (not unlike other dev work) since the material is so new
* Lots of opportunity in education
* The hard part isn’t the expertise, it’s distribution... See more
Greg Kamradt • Tweet
Generative AI can automate simple tasks
By automating simpler, tedious tasks (generating boilerplate code, fixing linter errors, generating unit tests, etc.), generative AI can help engineers focus on more complex tasks.
Generative AI can improve quality & reliability
Since generative AI models are trained on large codebases, they have the potential... See more
Adam Huda • The Transformative Power of Generative AI in Software Development: Lessons from Uber's Tech-Wide Hackathon
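To make the "automate simpler tasks" point above a bit more concrete, here is a minimal sketch of asking an LLM to draft unit tests for a small function. The `openai` client usage, the model name, and the `slugify` example are illustrative assumptions, not something taken from the excerpt.

```python
# Minimal sketch: asking an LLM to draft pytest tests for a small function.
# Assumes the `openai` Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SOURCE = '''
def slugify(title: str) -> str:
    return "-".join(title.lower().split())
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You write concise pytest unit tests."},
        {"role": "user", "content": f"Write pytest tests for this function:\n{SOURCE}"},
    ],
)

# Review the generated tests before committing them; treat the output as a draft.
print(response.choices[0].message.content)
```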
Matei Zaharia, Omar Khattab, Lingjiao Chen, et al. • The Shift From Models to Compound AI Systems
First of all, I'd say you have a bigger problem where your company is trying to find nails with a hammer. That is where your sentiment comes from, and could be an obstacle for both you and the company. It's the same deal when I see people keep on talking about RAG, and nowadays "modular RAG", when really, you could treat everything as a software... See more
r/MachineLearning - Reddit
We consider these aspects of our problem:
- Latency: How fast does the system need to respond to user input?
- Task Complexity: What level of understanding is required from the LLM? Is the input context and prompt super domain-specific?
- Prompt Length: How much context needs to be provided for the LLM to do its task?
- Quality: What is the acceptable... See more
Developing Rapidly with Generative AI
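One way to make these criteria concrete is a small routing sketch: profile a task on the four axes above and pick a model tier. The thresholds, tier names, and the `pick_model` helper below are illustrative assumptions, not part of the original article.

```python
# Illustrative sketch: route a request to a model tier based on the criteria above.
# Model names and thresholds are made-up placeholders.
from dataclasses import dataclass

@dataclass
class TaskProfile:
    max_latency_ms: int   # how fast the system must respond
    complexity: str       # "low" | "medium" | "high"
    prompt_tokens: int    # how much context the task needs
    min_quality: float    # acceptable quality bar, 0..1

def pick_model(task: TaskProfile) -> str:
    if task.complexity == "high" or task.min_quality > 0.9:
        return "frontier-model"    # most capable, slowest, most expensive
    if task.max_latency_ms < 500 and task.prompt_tokens < 2_000:
        return "small-fast-model"  # cheap and quick for simple tasks
    return "mid-tier-model"        # default balance of cost and quality

print(pick_model(TaskProfile(max_latency_ms=300, complexity="low",
                             prompt_tokens=800, min_quality=0.7)))
```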
My $0.02 is that a lot of the future research/work there will be figuring out how to identify effective sub-graphs to provide additional context, to avoid having to pass in the entire graph. As well as trying to identify ontology-less structures in real-time, which includes NER and RE, as well as named entity/relationship... See more
r/MachineLearning - Reddit
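A rough sketch of the "effective sub-graph" idea from the comment above: detect entities in the user query, then pass only their k-hop neighborhood to the LLM instead of the whole graph. The spaCy/NetworkX choices and the `query_subgraph` helper are assumptions made purely for illustration.

```python
# Sketch: select a small sub-graph around query entities instead of passing
# the entire knowledge graph as context. spaCy NER + NetworkX assumed here.
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")

def query_subgraph(graph: nx.Graph, query: str, hops: int = 1) -> nx.Graph:
    doc = nlp(query)
    nodes = {ent.text for ent in doc.ents if ent.text in graph}  # seed entities
    for _ in range(hops):
        for node in list(nodes):
            nodes.update(graph.neighbors(node))
    return graph.subgraph(nodes)

# Usage idea: serialize the sub-graph's edges as triples for the LLM prompt.
# context = "\n".join(f"{u} -[{d.get('rel', 'related_to')}]-> {v}"
#                     for u, v, d in query_subgraph(kg, user_question).edges(data=True))
```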
A couple off the top of my head:
- LLM in the loop with preference optimization
- synthetic data generation
- cross-modality "distillation" / dictionary remapping
- constrained decoding
r/MachineLearning - Reddit
Additional LLM paradigms beyond RAG
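Of the paradigms listed above, constrained decoding is the easiest to show in a few lines: mask the logits so the model can only emit tokens from an allowed set. The `prefix_allowed_tokens_fn` hook is a real `generate()` argument in Hugging Face Transformers, but the model choice and label set below are illustrative assumptions.

```python
# Sketch of constrained decoding: restrict generation to tokens that can spell
# one of a fixed set of labels, via transformers' prefix_allowed_tokens_fn.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

labels = ["positive", "negative", "neutral"]
allowed_ids = sorted({tid for label in labels
                      for tid in tokenizer(" " + label)["input_ids"]})

def allowed_tokens(batch_id, input_ids):
    # Only tokens from the allowed set (plus EOS) may be generated next.
    return allowed_ids + [tokenizer.eos_token_id]

prompt = "Sentiment of 'great product, would buy again':"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=3,
                     prefix_allowed_tokens_fn=allowed_tokens)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:]))
```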
📦 Service Deployment - Ray Serve (https://lnkd.in/eAV-Y6RN)
🧰 Data Transformation - Ray Data (https://lnkd.in/e7wYmenc)
🔌 LLM Integration - AIConfig (https://lnkd.in/esvH5NQa)
🗄 Vector Database - Weaviate (https://weaviate.io/)
📚 Supervised LLM Fine-Tuning - HuggingFace TRL (https://lnkd.in/e8_QYF-P)
📈 LLM Observability - Weights & Biases Traces (https... See more
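To give the "Service Deployment" layer of the stack above some shape, here is a minimal Ray Serve sketch wrapping a stubbed LLM call. The deployment name and the fake `answer()` method are placeholders, not taken from the original post; swap in a real model client.

```python
# Minimal Ray Serve sketch for the service-deployment layer above.
# The LLM call is stubbed out; names are placeholders.
from ray import serve
from starlette.requests import Request

@serve.deployment(num_replicas=1)
class LLMEndpoint:
    def answer(self, prompt: str) -> str:
        # Placeholder for a real model / API call.
        return f"echo: {prompt}"

    async def __call__(self, request: Request) -> dict:
        body = await request.json()
        return {"completion": self.answer(body["prompt"])}

app = LLMEndpoint.bind()
# Run with:  serve run my_module:app
# then POST {"prompt": "..."} to http://localhost:8000/
```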