thoughts
classic aristocratic morality that imputes virtue to victimhood, levels down human excellence, and strives to protect the innocent above all.
Was Nietzsche a Techno-Optimist?
One of my ongoing interests is coming up with better mechanisms to fund public goods: projects that are valuable to very large groups of people, but that do not have a naturally accessible business model. My past work on this includes my contributions to quadratic funding and its use in Gitcoin Grants, retro PGF, and more recently deep funding.
eth.limo • D/Acc: One Year Later
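As a rough sketch of the quadratic funding idea mentioned above (the idealized formula only, not Gitcoin's production implementation; the donation amounts below are made up), each project's matching subsidy is the square of the sum of the square roots of its contributions, minus the raw total:

```python
import math

def quadratic_match(contributions):
    """Idealized QF match for one project: (sum of sqrt(c_i))^2 - sum(c_i).
    Real deployments (e.g. Gitcoin Grants) scale this to fit a fixed
    matching pool and add anti-collusion measures."""
    total = sum(contributions)
    return sum(math.sqrt(c) for c in contributions) ** 2 - total

# Hypothetical example: many small donors beat one large donor,
# even though both projects raised the same raw total.
broad_support = [1.0] * 100      # 100 donors giving $1 each
narrow_support = [100.0]         # 1 donor giving $100
print(quadratic_match(broad_support))   # 9900.0
print(quadratic_match(narrow_support))  # 0.0
```

The point the sketch illustrates is that breadth of support, not total dollars, drives the match, which is what orients the mechanism toward public goods.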
- It hacks our computers → cyber-defense
- It creates a super-plague → bio-defense
- It convinces us (either to trust it, or to distrust each other) → info-defense
eth.limo • D/Acc: One Year Later
To me, this approach seems risky, and could combine the flaws of multipolar races and centralization. If we have to limit people, it seems better to limit everyone on an equal footing, and do the hard work of actually trying to cooperate to organize that instead of one party seeking to dominate everyone else.
eth.limo • D/Acc: One Year Later
- It's a useful capability to have: if we get warning signs that near-superintelligent AI is starting to do things that risk catastrophic damage, we will want to take the transition more slowly.
- Until such a critical moment happens, merely having the capability to soft-pause would cause little harm to developers.
- Focusing on industrial-scale hardware, ...
eth.limo • D/Acc: One Year Later
A more advanced approach is to use clever cryptographic trickery: for example, industrial-scale (but not consumer) AI hardware that gets produced could be equipped with a trusted hardware chip that only allows it to continue running if it gets 3/3 signatures once a week from major international bodies, including at least one non-military-affiliated...
eth.limo • D/Acc: One Year Later
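A minimal sketch of the weekly 3/3 check such a chip might perform, assuming Ed25519 signatures and treating the signer names, the message format, and the week-number encoding as made-up details the essay does not specify:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical signer set: three international bodies, at least one
# non-military-affiliated. The names are placeholders.
SIGNERS = ["body_a", "body_b", "body_c_civilian"]

def allowed_to_run(week_number, pubkeys, signatures):
    """Return True only if all three bodies signed this week's
    authorization message (a 3-of-3 policy, checked once a week)."""
    message = f"allow-week-{week_number}".encode()
    for name in SIGNERS:
        try:
            pubkeys[name].verify(signatures[name], message)
        except (KeyError, InvalidSignature):
            return False  # any missing or invalid signature halts the hardware
    return True

# Toy usage with freshly generated keys standing in for the bodies' keys.
privs = {name: Ed25519PrivateKey.generate() for name in SIGNERS}
pubs = {name: k.public_key() for name, k in privs.items()}
sigs = {name: k.sign(b"allow-week-2860") for name, k in privs.items()}
print(allowed_to_run(2860, pubs, sigs))  # True
print(allowed_to_run(2861, pubs, sigs))  # False: last week's signatures are stale
```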
The focus on training cost is already proving fragile in the face of new technology: the recent state-of-the-art DeepSeek v3 model was trained at a cost of only $6 million, and in new models like o1, costs are shifting more generally from training to inference.
eth.limo • D/Acc: One Year Later
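A back-of-the-envelope illustration of why a training-cost threshold becomes fragile once spending shifts to inference; the $6 million figure is the reported DeepSeek v3 training cost, while the per-query cost and query volume below are purely hypothetical:

```python
# All numbers except TRAINING_COST are made-up assumptions for illustration.
TRAINING_COST = 6_000_000      # dollars, one-time (reported for DeepSeek v3)
COST_PER_QUERY = 0.01          # dollars per o1-style "thinking" query (hypothetical)
QUERIES_PER_DAY = 2_000_000    # deployment scale (hypothetical)

days_to_match_training = TRAINING_COST / (COST_PER_QUERY * QUERIES_PER_DAY)
print(f"Inference spend passes training spend after {days_to_match_training:.0f} days")
# -> 300 days: most lifetime compute ends up outside the training run
# that a training-based regulatory threshold would measure.
```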
Instrumental convergence is the hypothetical tendency of most sufficiently intelligent, goal-directed beings (human and nonhuman) to pursue similar sub-goals (such as survival or resource acquisition), even if their ultimate goals are quite different.[1] More precisely, beings with agency may pursue similar instrumental goals—goals which are made i...
wikipedia.org • Instrumental Convergence
Compared to this, the common e/acc message of "you're already a hero just the way you are" is understandably extremely appealing. A d/acc message, one that says "you should build, and build profitable things, but be much more selective and intentional in making sure you are building things that help you and humanity thrive", may be a winner.