Followup on Terminator
I posted my last post to the Effective Altruism Forum, where it received considerably more attention than I’d anticipated. On the one hand, this was nice, because I’d put a decent amount of work into it and it felt good to have it appreciated. On the other hand, it was terrifying, because I really have no idea whether my thesis was correct—I might’ve just convinced a bunch of people to adopt a worse communication strategy, thereby marginally decreasing the amount of resources going into AI alignment, thereby marginally increasing the likelihood of an AI apocalypse.