Generative AI
Meta AI released LLaMA, and they included a paper describing exactly what it was trained on: about 5TB of data.
Two thirds of it came from Common Crawl. It also included content from GitHub, Wikipedia, ArXiv, StackExchange, and something called “Books”.
What’s Books? 4.5% of the training data was books. Part of this was Project Gutenberg, which is public domain.
Four questions for organizations [about using AI]:
- What did you do that was valuable that's no longer valuable?
- What impossible things can you now do that you could not before?
- What can you democratize and bring downmarket?
- What can you do upmarket so you have new ways of competing?
— Ethan Mollick, https://www.forbes.com/sites/jenamcgregor/2024/
Younger workers benefit more from labor-augmenting tech, like AI.