Generative AI
by Johann Van Tonder · updated 3mo ago
Writing code from scratch is gone, and that was the most exciting part of the profession. What remains now is debugging and fixing. The same goes for graphic design (original character design in games is gone), writing music (new melodies can be generated by clicking "refresh" and then refined), and literature. You never start from scratch anymore. Which i…
Johann Van Tonder added 3mo ago
“From this point on, the intelligence of LLMs… will only continue to improve. Human intelligence will not.”
OpenAI’s paper on using AI to debug AI code
Johann Van Tonder added 3mo ago
Four questions for organizations [about using AI]:
- What did you do that was valuable that's no longer valuable?
- What impossible things can you now do that you could not before?
- What can you democratize and bring downmarket?
- What can you do upmarket so you have new ways of competing?
Ethan Mollick, https://www.forbes.com/sites/jenamcgregor/2024/
Johann Van Tonder added 6mo ago
Everyone freaks out, and they interpret the statement, ‘AI will affect my job’ as ‘AI will do my job for me.’
Those two things are not the same, because AI can be a tool or substitute. Just because the job may change doesn’t mean that the job will be eliminated.
Source: Kellogg Insight
Johann Van Tonder added 6mo ago
Younger workers benefit more from labor-augmenting tech, like AI.
“In other words, workers who were used to doing things a certain way struggled to adapt when complementary technology arrived, while less-experienced workers could harness the power of these new tools.”
Source: Kellogg Insight
Johann Van Tonder added 6mo ago
Weird GPT token for Reddit user davidjl123, “a keen member of the /r/counting subreddit. He’s posted incremented numbers there well over 163,000 times. Presumably that subreddit ended up in the training data used to create the tokenizer used by GPT-2, and since that particular username showed up hundreds of thousands of times it ended up getting it…
Johann Van Tonder added 6mo ago
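A toy sketch of why this happens (not the actual GPT-2 tokenizer code): byte-pair encoding repeatedly fuses the most frequent adjacent pair of tokens, so a string that dominates the tokenizer's training data, like a username posted hundreds of thousands of times, ends up merged into a single token. The miniature "corpus" below is an assumption for illustration.

```python
from collections import Counter

def most_frequent_pair(tokens):
    # Count adjacent pairs; ties resolve to the first pair encountered.
    return Counter(zip(tokens, tokens[1:])).most_common(1)[0][0]

def merge(tokens, pair):
    # Replace every occurrence of `pair` with a single fused token.
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# Tiny stand-in for /r/counting: one username repeated over and over.
tokens = list("davidjl 1 davidjl 2 davidjl 3")
for _ in range(6):  # six BPE merge rounds
    tokens = merge(tokens, most_frequent_pair(tokens))

print(tokens)  # "davidjl" has been fused into a single token
```

Because every character pair inside the repeated username is among the most frequent pairs, the merges stack up until the whole name is one token — exactly how a rare-in-general string can earn its own entry in a real vocabulary.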
Meta AI released LLaMA ... and they included a paper which described exactly what it was trained on. It was 5TB of data.
2/3 of it was from Common Crawl. It had content from GitHub, Wikipedia, ArXiv, StackExchange and something called “Books”.
What’s Books? 4.5% of the training data was books. Part of this was Project Gutenberg, which is public domain…
Johann Van Tonder added 6mo ago
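A rough back-of-envelope on the figures quoted above (my arithmetic, not numbers from the LLaMA paper itself, which has the exact breakdown):

```python
total_gb = 5 * 1000  # "5TB of data" from the note, in decimal GB

# Shares as quoted in the note above.
shares = {"Common Crawl": 2 / 3, "Books": 0.045}

for source, share in shares.items():
    print(f"{source}: ~{total_gb * share:,.0f} GB")
```

So "4.5% books" works out to roughly 225 GB of book text — a striking amount next to the ~3,300 GB of web crawl.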
"There’s really no replacement for spending time with these things, working towards a deeper mental model of the things they are good at and the things they are likely to mess up. Combining with domain knowledge of the thing you are working on is key too, especially as that can help protect you against them making things up!"
- Simon Willison, attempting to explain LLM
Johann Van Tonder added 6mo ago
"A key challenge of (LLMs) is that they do not come with a manual! They come with a “Twitter influencer manual” instead, where lots of people online loudly boast about the things they can do with a very low accuracy rate, which is really frustrating..."
Simon Willison, attempting to explain LLM
Johann Van Tonder added 6mo ago
“A more practical answer is that it’s a file. This right here is a large language model, called Vicuna 7B. It’s a 4.2 gigabyte file on my computer. If you open the file, it’s just numbers. These things are giant binary blobs of numbers…”
Simon Willison, attempting to explain LLM
Johann Van Tonder added 6mo ago
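Willison's "blobs of numbers" point can be sketched literally. This is a toy illustration, not the real Vicuna file format: a few float32 "weights" packed into raw bytes, the same way a real weights file is just numbers laid out on disk.

```python
import struct

# Pretend "model file": pack a few float32 weights into raw bytes.
weights = [0.25, -1.5, 3.0]
blob = struct.pack(f"<{len(weights)}f", *weights)

# Opening the file shows nothing but bytes...
print(blob.hex())

# ...which decode straight back into the numbers (4 bytes per float32).
decoded = struct.unpack(f"<{len(blob) // 4}f", blob)
print(decoded)  # (0.25, -1.5, 3.0)
```

Scale this up from 3 numbers to a few billion and you get the 4.2 GB file he describes.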