D/Acc: One Year Later
One of my ongoing interests is coming up with better mechanisms to fund public goods: projects that are valuable to very large groups of people, but that do not have a naturally accessible business model. My past work on this includes my contributions to quadratic funding and its use in Gitcoin Grants, retroactive public goods funding (retro PGF), and more recently deep funding.
eth.limo • D/Acc: One Year Later
- It hacks our computers → cyber-defense
- It creates a super-plague → bio-defense
- It convinces us (either to trust it, or to distrust each other) → info-defense
To me, this approach seems risky, and could combine the flaws of multipolar races and centralization. If we have to limit people, it seems better to limit everyone on an equal footing, and do the hard work of actually trying to cooperate to organize that instead of one party seeking to dominate everyone else.
- It's a useful capability to have: if we get warning signs that near-superintelligent AI is starting to do things that risk catastrophic damage, we will want to take the transition more slowly.
- Until such a critical moment happens, merely having the capability to soft-pause would cause little harm to developers.
- Focusing on industrial-scale hardware,
A more advanced approach is to use clever cryptographic trickery: for example, industrial-scale (but not consumer) AI hardware that gets produced could be equipped with a trusted hardware chip that only allows it to continue running if it gets 3/3 signatures once a week from major international bodies, including at least one non-military-affiliated…
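The control flow of such a chip can be sketched roughly as follows. This is purely illustrative: a real design would verify asymmetric signatures inside tamper-resistant hardware, whereas this sketch stands in HMAC tags with pre-shared keys, and the signer names and weekly epoch scheme are assumptions, not part of any actual proposal.

```python
import hashlib
import hmac
import time

WEEK_SECONDS = 7 * 24 * 3600

# Hypothetical pre-shared keys for three signing bodies (stand-ins for
# real asymmetric key pairs held by international institutions).
SIGNER_KEYS = {
    "body_a": b"key-a",
    "body_b": b"key-b",
    "body_c": b"key-c",  # e.g. a non-military-affiliated body
}

def sign(key: bytes, epoch: int) -> bytes:
    """A signer's weekly attestation over the current week number."""
    return hmac.new(key, str(epoch).encode(), hashlib.sha256).digest()

def may_continue_running(signatures: dict, now: float) -> bool:
    """Chip-side check: require a valid, fresh signature from every signer.

    Missing or stale signatures cause the check to fail, which in the
    essay's scenario would trigger a soft-pause of the hardware.
    """
    epoch = int(now // WEEK_SECONDS)
    for name, key in SIGNER_KEYS.items():
        expected = sign(key, epoch)
        if not hmac.compare_digest(signatures.get(name, b""), expected):
            return False
    return True

now = time.time()
epoch = int(now // WEEK_SECONDS)
sigs = {name: sign(key, epoch) for name, key in SIGNER_KEYS.items()}
print(may_continue_running(sigs, now))   # all three fresh -> True
del sigs["body_c"]
print(may_continue_running(sigs, now))   # any missing -> False
```

Note that the 3/3 requirement means any single signer can halt the hardware by withholding its signature, which is exactly the "capability to soft-pause" discussed above.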
The focus on training cost is already proving fragile in the face of new technology: the recent state-of-the-art-quality DeepSeek v3 model was trained at a cost of only $6 million, and in newer models like o1, costs are shifting from training to inference more generally.