Large Language Models
The question of whether LLMs can reason is, in many ways, the wrong question. The more interesting question is whether they are limited to memorization / interpolative retrieval, or whether they can adapt to novelty beyond what they know. (They can't, at least until you start doing active inference, or using them in a search loop,...
François Chollet • x.com
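As a rough illustration of the "search loop" idea in the Chollet note above, here is a minimal sketch: sample candidate solutions from a model and let an external verifier, not the model, decide when to stop. The `call_llm` and `verify` helpers are hypothetical stand-ins, not any particular API.

```python
from typing import Callable, Optional

def call_llm(prompt: str, temperature: float) -> str:
    """Hypothetical LLM call; swap in a real client here."""
    raise NotImplementedError

def search_loop(
    task: str,
    verify: Callable[[str], bool],
    max_iters: int = 32,
) -> Optional[str]:
    """Sample candidates and return the first one the verifier accepts."""
    for i in range(max_iters):
        # Widen the search as attempts fail by raising the sampling temperature.
        temperature = 0.2 + 0.8 * i / max_iters
        candidate = call_llm(f"Solve the task:\n{task}", temperature=temperature)
        if verify(candidate):
            return candidate
    return None
```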
SymbolicAI: A neuro-symbolic perspective on LLMs | Hacker News
news.ycombinator.com
Why Elixir/OTP doesn't need an Agent framework: Part 2 | Goto Code - Elixir & LLM Development
Mislav Stipetic • goto-code.com
But despite these concerns and constraints, we still see room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst.
Alberto Gandolfi • Gen AI: Too Much Spend, Too Little Benefit?

Home
aider.chat

Getting good results from Claude Code | Hacker News
news.ycombinator.com
Prompt Engineering for LLMs
oreilly.com