TabLib
Meta AI released LLaMA ... and included a paper that described exactly what it was trained on: roughly 5TB of data.
Two-thirds of it came from Common Crawl; the rest included content from GitHub, Wikipedia, ArXiv, StackExchange, and something called “Books”.
What’s “Books”? 4.5% of the training data was books. Part of this was Project Gutenberg, which is public domain.