Self-Supervised Prompt Optimization

New Anthropic research: We elicit capabilities from pretrained models using no external supervision, often matching or beating results obtained with human supervision.
Using this approach, we are able to train a Claude 3.5-based assistant that beats its human-supervised counterpart. https://t.co/p0wKBtRo7q
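One label-free signal that self-supervised prompt optimization can use is self-consistency: score each candidate prompt by how much the model's sampled answers agree with one another, with no human labels involved. Below is a minimal sketch of that idea; `toy_model`, the candidate prompts, and the scoring loop are all illustrative assumptions, not the method from the Anthropic paper.

```python
import random
from collections import Counter

def score_prompt(prompt, inputs, model, samples=5):
    """Label-free score: mean majority-vote agreement across sampled
    completions per input. Higher = more internally consistent."""
    total = 0.0
    for x in inputs:
        outs = [model(prompt, x) for _ in range(samples)]
        top_count = Counter(outs).most_common(1)[0][1]
        total += top_count / samples
    return total / len(inputs)

def select_prompt(candidates, inputs, model):
    """Pick the candidate prompt whose answers agree with themselves most."""
    return max(candidates, key=lambda p: score_prompt(p, inputs, model))

# Hypothetical stand-in for an LLM call: a noisy parity "model" whose
# answers are more stable when the prompt asks for step-by-step reasoning.
def toy_model(prompt, x):
    noise = 0.05 if "step by step" in prompt else 0.45
    answer = sum(x) % 2
    return answer if random.random() > noise else 1 - answer

random.seed(0)
inputs = [(1, 2), (3, 5), (2, 2), (7, 1)]
candidates = ["Answer directly.", "Think step by step, then answer."]
best = select_prompt(candidates, inputs, toy_model)
print(best)
```

With a real model, `toy_model` would be an API call and the candidates would come from a generator prompt; the selection criterion stays the same, which is what makes the loop label-free.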

This guy literally built a prompt that can make any prompt 10x better https://t.co/mGb670D2f2

🔥 YC outlines how top AI startups prompt LLMs: prompts exceeding six pages, XML tags, meta-prompts and evaluations as their core IP.
They found meta-prompting and role assignment drive consistent, agent-like behavior.
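The pattern the YC post describes can be made concrete as a meta-prompt: a prompt whose job is to rewrite another prompt, with XML tags separating the instructions, the critique, and the prompt under revision. The sketch below is a plausible assembly function; the tag names and wording are illustrative assumptions, not any startup's actual template.

```python
def build_meta_prompt(task_prompt, critique_points):
    """Assemble a meta-prompt that asks a model to rewrite another prompt.
    XML tags keep the role, instructions, critique, and target prompt
    cleanly separated for the model."""
    points = "\n".join(f"- {p}" for p in critique_points)
    return (
        "<role>You are an expert prompt engineer.</role>\n"
        "<instructions>Rewrite the prompt below. Address every critique "
        "point, keep the original intent, and return only the revised "
        "prompt.</instructions>\n"
        f"<critique>\n{points}\n</critique>\n"
        f"<prompt>\n{task_prompt}\n</prompt>"
    )

meta = build_meta_prompt(
    "Summarize this article.",
    ["Specify the target length.", "Define the audience.",
     "Ask for a neutral tone."],
)
print(meta)
```

Feeding `meta` to a model and looping its output back through the same function is the simplest form of the meta-prompting-plus-evaluation cycle the post attributes to these teams.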