LLMs
🤖 Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
- Why CrewAI
- Getting Started
- Key Features
- Examples
- Local Open Source Models
- CrewAI x AutoGen x ChatDev
- Contribution
- 💬 CrewAI Discord Community
- Hire Consulting
- License
joaomdmoura • GitHub - joaomdmoura/crewAI: Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
Google DeepMind used a similar idea to make LLMs faster in Accelerating Large Language Model Decoding with Speculative Sampling. Their algorithm uses a smaller draft model to make initial guesses and a larger primary model to validate them. When the draft guesses right often, fewer sequential passes of the large model are needed, reducing latency.
muhtasham • Machine Learners Guide to Real World - 2️⃣ Concepts from Operating Systems That Found Their Way in LLMs
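The draft-then-verify loop can be sketched with toy stand-in "models" (hypothetical functions, not DeepMind's exact rejection-sampling scheme): the cheap draft model proposes a block of tokens, a single target pass verifies them, and the longest agreeing prefix is accepted.

```python
def draft_model(prefix):
    # Hypothetical cheap model: always guesses "last token + 1 (mod 10)".
    return (prefix[-1] + 1) % 10 if prefix else 0

def target_model(prefix):
    # Hypothetical expensive model: mostly agrees with the draft, but at every
    # fifth position it emits 7 instead, forcing a correction.
    if len(prefix) % 5 == 0:
        return 7
    return (prefix[-1] + 1) % 10

def speculative_decode(prompt, n_tokens, k=4):
    tokens = list(prompt)
    target_passes = 0
    while len(tokens) < len(prompt) + n_tokens:
        # 1. The draft model cheaply proposes a block of k tokens.
        proposed, ctx = [], list(tokens)
        for _ in range(k):
            t = draft_model(ctx)
            proposed.append(t)
            ctx.append(t)
        # 2. One target pass verifies the block (in the real algorithm all k
        #    positions are scored in a single batched forward pass).
        target_passes += 1
        ctx = list(tokens)
        for t in proposed:
            correct = target_model(ctx)
            if t == correct:
                ctx.append(t)        # draft guessed right: this token is "free"
            else:
                ctx.append(correct)  # mismatch: target's token wins, stop here
                break
        tokens = ctx
    return tokens[:len(prompt) + n_tokens], target_passes
```

With these toy models, generating 8 tokens takes 3 target passes instead of the 8 sequential passes plain autoregressive decoding would need; the speedup grows with the draft model's hit rate.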
- You have access to a proprietary asset (like data) that others don't have easy access to. In our "write job postings" example, perhaps you have a corpus of thousands of job postings including some outcome scores (as to how well they did). You could use this data to create better job postings. Others don't have ready access to this data. Note: The
Dharmesh Shah • How To Build a Defensible A.I. Startup
Protecting LLM products:
(1) Is hard to bootstrap. This already hints at existing customers, or you need to get a bunch of your customers to co-develop (insurance model: companies pooling their data to solve a problem they all have). This runs into a bunch of issues: the companies' competitive drive, data privacy, and security.
(2) Reserved for existing companies. This is the co-pilot model.
(3) This might be the most sustainable one, but it is also the hardest one. I have not seen anything in that direction yet besides OpenAI.
GPT-4 Turbo performs better than our previous models on tasks that require the careful following of instructions, such as generating specific formats (e.g., "always respond in XML"). It also supports our new JSON mode, which ensures the model will respond with valid JSON. The new API parameter response_format enables the model to constrain its...
New models and developer products announced at DevDay
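A minimal sketch of what a JSON-mode request might look like; the model name and message contents here are illustrative assumptions, and exact parameter names should be checked against the current OpenAI Chat Completions docs.

```python
import json

# Hypothetical request payload for the JSON mode described above.
payload = {
    "model": "gpt-4-1106-preview",
    "response_format": {"type": "json_object"},  # the new constraint parameter
    "messages": [
        # JSON mode also requires the word "JSON" to appear in the prompt.
        {"role": "system", "content": "You extract fields and reply in JSON."},
        {"role": "user", "content": "Title and tags for: Senior Backend Engineer, remote, Python."},
    ],
}

# Because the output is constrained to valid JSON, the reply can be parsed
# directly instead of being scraped out of free-form text:
example_reply = '{"title": "Senior Backend Engineer", "tags": ["python", "remote"]}'
data = json.loads(example_reply)
```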
Two ways for an AI company to protect itself from competition: (a) depend not just on AI but also deep domain knowledge about a particular field, (b) have a very close relationship with the end users.
Paul Graham • Tweet
We consider these aspects of our problem:
- Latency: How fast does the system need to respond to user input?
- Task Complexity: What level of understanding is required from the LLM? Is the input context and prompt super domain-specific?
- Prompt Length: How much context needs to be provided for the LLM to do its task?
- Quality: What is the acceptable
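The axes above can be turned into a simple model-routing heuristic. This is a hypothetical sketch: the thresholds and model names are illustrative assumptions, not recommendations.

```python
def choose_model(latency_budget_ms, task_complexity, prompt_tokens, min_quality):
    """Pick a model tier. task_complexity and min_quality are rough 0-1 scores."""
    if prompt_tokens > 16_000:
        return "large-context-model"   # prompt length dominates the choice
    if latency_budget_ms < 500 and task_complexity < 0.3:
        return "small-fast-model"      # a cheap model is good enough, and fast
    if min_quality > 0.8 or task_complexity > 0.7:
        return "frontier-model"        # hard or high-stakes tasks get the best model
    return "mid-tier-model"
```

For example, a low-complexity autocomplete request with a tight latency budget routes to the small model, while a long-document task routes to the large-context one.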