The choice of programming language is almost always irrelevant to an organization's commercial success. End-users almost certainly don't care which language you chose. If you're trying to differentiate your product or service on the underlying programming language or tech stack, you need to sharpen your business sense.
In the digital world, self-nudging aims to empower people to be citizen ‘choice architects’ by designing their informational environments in ways that work best for them and that constrain their activities in beneficial ways. We can, for instance, remove distracting and irresistible notifications. We may set specific times in which messages can be …
Alex Albert shared 5 architectures for using LLMs in an agentic context:
Delegation: Use cheaper, faster models for cost and speed gains.
For example, Opus can delegate to Haiku to read a book and return relevant passages. This works well if the task description & result are more compact than the full context.
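Here's a minimal sketch of that delegation pattern, assuming the Anthropic Python SDK; the helper names, prompts, and model IDs are illustrative, not from Alex's post:

```python
# Delegation sketch: Opus orchestrates, Haiku does the bulk reading.
# Helper names, prompts, and model IDs are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def extract_relevant_passages(book_text: str, question: str) -> str:
    """Delegate the long, cheap reading task to Haiku."""
    response = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"Quote only the passages relevant to: {question}\n\n{book_text}",
        }],
    )
    return response.content[0].text

def answer_from_book(book_text: str, question: str) -> str:
    """The expensive model reasons over the compact excerpts, never the full book."""
    passages = extract_relevant_passages(book_text, question)
    response = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"Using these excerpts:\n{passages}\n\nAnswer: {question}",
        }],
    )
    return response.content[0].text
```

The payoff is that the expensive model only ever sees the compact result, which is exactly the condition above: the task description and result are smaller than the full context.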
So right now, LLMs (Large Language Models) are all the rage. But in the future, it’s possible that the way we get things done is by composing solutions from a combination of LLMs, SMMs (Small, Mighty Models), agents, and tools.
It’s what I call Cognitive Composition (because it sounds cool and I have a longtime love affair with alliteration).
We can detect factually inconsistent summaries via the natural language inference (NLI) task. The NLI task works like this: Given a premise sentence and a hypothesis sentence, the goal is to predict if the hypothesis is entailed by, neutral, or contradicts the premise.
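A minimal sketch of that check using an off-the-shelf MNLI model from Hugging Face; the model choice and example sentences are my assumptions, with a source sentence as the premise and a summary sentence as the hypothesis:

```python
# NLI-based consistency check: does the summary sentence (hypothesis)
# contradict the source sentence (premise)? Model choice is illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "The company reported a 10% increase in Q3 revenue."  # from the source document
hypothesis = "Revenue fell in the third quarter."                # a summary sentence to verify

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# roberta-large-mnli label order: contradiction, neutral, entailment
labels = ["contradiction", "neutral", "entailment"]
print(labels[probs.argmax().item()])  # "contradiction" flags a likely inconsistency
```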
The need for better AI or LLM-specific infrastructure, along with the host of problems that come with the non-determinism of LLMs, means that there’s more software work ahead of us, not less. Abstraction layers like LLMs create more possibilities and thus more work.
Is this a good thing or a bad thing? I’m not sure.
“Even though I originally created the three-step approach that everybody now does, my view is it's actually wrong and we shouldn't use it… the right way to fine-tune language models is to actually throw away the idea of fine-tuning. There's no such thing. There's only continued pre-training.
And pre-training is something where from the ve…
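Read literally, the claim is that "fine-tuning" a causal LM is just more of the same next-token-prediction training, run on new text. A minimal sketch of what that looks like with Hugging Face transformers; the model, corpus file, and hyperparameters are illustrative assumptions:

```python
# Continued pre-training sketch: the same causal-LM objective as pre-training,
# applied to your own text. Model, data file, and settings are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Any raw-text corpus works; a local text file stands in for "your data" here.
dataset = load_dataset("text", data_files={"train": "my_domain_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="continued-pretrain",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # plain next-token prediction
)
trainer.train()
```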
It’s underdigitized. According to the McKinsey Industry Digitalization Index, only the agriculture sector is less digitized than construction. The typical IT spend for construction companies is 1-2% of revenue, compared with the 3-5% average across industries. Moreover, there are many barriers to digital technology adoption, including skill mana…
The right way to use ChatGPT going forward might be to follow the programmer’s maxim that if you do it three times, you should automate it, except now the threshold might be two and if something is nontrivial it also might be one. You can use others’ versions, but there is a lot to be said for rolling one’s own if the process is easy. If it works w…
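A minimal sketch of what "rolling your own" can look like once a prompt has earned a second or third use, assuming the OpenAI Python SDK; the task, function name, and model are illustrative assumptions:

```python
# Roll-your-own sketch: a repeated prompt wrapped in a one-call helper.
# Task, prompt, function name, and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_meeting_notes(notes: str) -> str:
    """A task you kept pasting into a chat window, now automated."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize meeting notes as action items, one per line."},
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content

print(summarize_meeting_notes("Discussed Q3 roadmap; Dana to draft the spec by Friday."))
```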