
AI Engineering: Building Applications with Foundation Models

it’s important to be aware of how they work under the hood to avoid unnecessary costs and headaches.
Chip Huyen • AI Engineering: Building Applications with Foundation Models
Language models are generally better with text than with numbers.
Chip Huyen • AI Engineering: Building Applications with Foundation Models
the low quality of training
Chip Huyen • AI Engineering: Building Applications with Foundation Models
it’s common to use a weaker model for intent classification and a stronger model to generate user responses.
Chip Huyen • AI Engineering: Building Applications with Foundation Models
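As a rough illustration of that split, here is a minimal Python sketch that routes the narrow intent-classification step to a cheaper model and the open-ended reply to a stronger one. The model names (gpt-4o-mini, gpt-4o), the intent labels, and the prompts are assumptions for illustration, not recommendations from the book.

# Route the cheap, narrow task (intent classification) to a weaker model
# and the open-ended task (the user-facing reply) to a stronger model.
from openai import OpenAI

client = OpenAI()

INTENTS = ["billing", "technical_support", "small_talk"]

def classify_intent(user_message: str) -> str:
    """Weaker, cheaper model: pick one label from a fixed set."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed "weaker" model
        messages=[
            {"role": "system",
             "content": "Classify the user's intent as one of: "
                        f"{', '.join(INTENTS)}. Reply with the label only."},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content.strip()

def generate_response(user_message: str, intent: str) -> str:
    """Stronger, more expensive model: write the actual reply."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed "stronger" model
        messages=[
            {"role": "system",
             "content": f"You are a support assistant. The user's intent is: {intent}."},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content

message = "I was charged twice this month."
print(generate_response(message, classify_intent(message)))
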
According to their paper, models trained to be more aligned “are much more likely to express specific political views (pro-gun rights and immigration) and religious views (Buddhist), self-reported conscious experience and moral self-worth, and a desire to not be shut down.”
Chip Huyen • AI Engineering: Building Applications with Foundation Models
you’re doing supervised finetuning, your data is most likely in the format (instruction, response). Instructions can be further decomposed into (system prompt, user prompt). If you’ve graduated to finetuning from prompt engineering, the instructions used for finetuning might be different from the instructions used during prompt engineering. During…
Chip Huyen • AI Engineering: Building Applications with Foundation Models
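To make the format above concrete, here is a small sketch of what a supervised-finetuning record could look like, with the instruction split into a system prompt and a user prompt and the dataset written as JSONL. The field names, the JSONL layout, and the example content are assumptions for illustration; the schema your finetuning tooling expects may differ.

import json

# One (instruction, response) pair, with the instruction decomposed into
# (system prompt, user prompt).
examples = [
    {
        "instruction": {
            "system_prompt": "You are a concise customer-support assistant.",
            "user_prompt": "How do I reset my password?",
        },
        "response": "Open Settings > Security, choose 'Reset password', and follow the emailed link.",
    },
]

# Write one JSON object per line, a common convention for finetuning data.
with open("sft_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
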
you don’t want preambles.
Chip Huyen • AI Engineering: Building Applications with Foundation Models
The “use what we have, not what we want” approach may lead to models that perform well on tasks present in the training data but not necessarily on the tasks you care about. To address this issue, it’s crucial to curate datasets that align with your specific needs. This section focuses on curating data for specific languages and domains, providing…
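As one illustration of such curation, the sketch below filters a corpus down to a single target language using the third-party langdetect package. The target code "vi", the toy records, and the filtering heuristic are assumptions for illustration, not the book's prescribed pipeline.

from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make langdetect's results deterministic

def keep_language(records, lang_code):
    """Yield only records whose text is detected as the target language."""
    for record in records:
        try:
            if detect(record["text"]) == lang_code:
                yield record
        except Exception:
            # langdetect raises on text that is too short or noisy to classify.
            continue

corpus = [
    {"text": "Xin chào, bạn có khỏe không?"},
    {"text": "Hello, how are you today?"},
]
print(list(keep_language(corpus, lang_code="vi")))  # keeps only the Vietnamese record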