They look like software, but they act like people. And just like people, you can't just hire someone and pop them in a seat; you have to train them. And create systems around them to make the outputs verifiable.
Which means there's a Pareto frontier between the number of LLM calls you'll need to make for verification and the error rate each LLM introduces...
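A rough way to see that frontier, purely as an illustration with made-up numbers (base_error, catch_rate, and false_positive_rate here are assumptions, not measurements): each extra verification pass catches some real errors but also injects its own noise, so after a point more calls stop buying you lower error.

```python
# Sketch only: sweep the number of verification passes and watch the
# cost/error tradeoff. All rates below are invented for illustration.

def residual_error(passes: int,
                   base_error: float = 0.20,
                   catch_rate: float = 0.70,
                   false_positive_rate: float = 0.05) -> float:
    """Estimated error left after `passes` independent verification calls."""
    missed = base_error * (1 - catch_rate) ** passes      # real errors that slip through
    injected = 1 - (1 - false_positive_rate) ** passes    # noise the verifiers themselves add
    return missed + injected

if __name__ == "__main__":
    for passes in range(6):
        calls = 1 + passes  # one generation call + N verification calls
        print(f"{calls} calls -> residual error ~ {residual_error(passes):.3f}")
```

Under these toy numbers the curve bottoms out after a few passes: that knee is the Pareto frontier the post is gesturing at.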