Adam Gamwell
- focus less on AI as something separate from humans, and more on tools that enhance human cognition rather than replacing it… If we want a future that is both superintelligent and "human", one where human beings are not just pets, but actually retain meaningful agency over the world, then it feels like something like this is the most natural option.
Experience. Don’t observe.
Inhale. Don’t read.
Transfigure. Don’t shift.
Advocate. Don’t ponder.
Prove. Don’t promise.
Encourage. Don’t cut.
Imagine. Don’t worry.
Do. Don’t analyze.
Hear. Don’t listen.
Show. Don’t tell.
Give. Don’t take.
from 101 Design Rules by wearecollins.com
- In investing, higher volatility usually equates to higher possible returns. In today’s world of online expression, we settle for lower expected value, market-level outcomes so as to not ruffle any feathers and not take any outsized risk. We’re basically hoping to allow people to know us enough so that they include us in their passive index of human…
from Being Known is Being Loved
Thought-provoking, and a reflection on VC culture
An interesting contrast: venture capital culture chases outsized gains, while in our own self-expression we settle for mediocre, market-level outcomes, and that mediocrity shapes the way we think about being human.
- He who knows only his own side of the case knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side, if he does not so much as know what they are, he has no ground for preferring either opinion... Nor is it enough that he should hear the opinions…
from On Liberty by John Stuart Mill
Read today, this almost feels like a secret backdoor to empathy for a culture that prefers not to hear what those with different perspectives have to say.
When we introduce new machines, we also disrupt social structures, games, and the rules of those games. With paradigm shifts come lots of progress, yes. But agents of progress are agents of chaos too. They create agitation and anxiety.
I see many shades of this, but I’ll narrow it down to three major types:
Anxiety of incompetence: my skills are devalued and I have to learn new ones
Anxiety of irrelevance: people might not relate to, value, or think about me
Anxiety of uncertainty: I don’t know what the future holds and it feels fickle
As we build systems whose capabilities more and more resemble those of humans, despite the fact that those systems work in ways that are fundamentally different from the way humans work, it becomes increasingly tempting to anthropomorphise them. Humans have evolved to co-exist over many millions of years, and human culture has evolved over thousands…
from Talking about large language models by Murray Shanahan
I think the most worrisome aspect of AI systems in the short term is that we will give them too much autonomy without being fully aware of their limitations and vulnerabilities. We tend to anthropomorphize AI systems: we impute human qualities to them and end up overestimating the extent to which these systems can actually be fully trusted.
from Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
- In one way, it is easier to be inexperienced: you don’t have to learn what is no longer relevant. Experience, on the other hand, creates two distinct struggles: the first is to identify and unlearn what is no longer necessary (that’s work, too). The second is to remain open-minded, patient, and willing to engage with what’s new, even if it resembles…
from Everything Easy is Hard Again by Frank Chimero
- People worried about AI taking their jobs and taking control are competing with a myth. Instead, people should train themselves to be better humans even as they develop better AI. People are still in control, but they need to use that control wisely, ethically and carefully.
from Don’t Fuss About Training AIs. Train Our Kids by Esther Dyson