why i don't have a p(doom)
APR 06, 2026
inkhaven
there is some probability that i will die today.
or tomorrow.
or in a decade or a century from now.
knowing the specific probabilities would likely change the way that i think about life. i'd probably call my mom more.
but trying to figure out exactly how or when this is going to happen is like gripping the hell out of a glass of water because you're too scared of dropping it. your muscles will cramp up, your hand will get tired, and you're going to end up dropping it anyway, without ever getting a taste of the water.
thinking about doom feels the same.
maybe that's why i'm not in ai safety.
but there was a time when i touched upon ai safety land, and there were definitely interesting things there—interesting technical things like "how can we tell if an ai is lying?" and some more philosophical things like "how can we tell if an ai is conscious?". there were cool people, too.
except that everyone seemed to care about these things because of the big important alignment problem and the big bad doom that was coming if we couldn't figure it out, and i couldn't relate at all. as time went on, it became increasingly obvious that i didn't care about "ai safety" so much as "the kinds of things ai safety people study," the same way i liked sorting out recyclables regardless of whether it would help solve climate change.
but it still took me an embarrassingly long time to realize that i didn't really care about saving the world.
of course, i would prefer for the world not to end, but i find no meaning in trying to stop it. this is true in an extremely visceral sense; maybe that's why it took me so long to realize that it was true at all, what with all the trolley problems and expected-value calculations floating around. i couldn't see past the fact that i should find meaning in preventing doom.
but meaning, for me, is something soft and bright found between the smell of the ocean and the taste of food eaten with friends. it lives in the sound of car karaoke and in the toasty heat trapped under the blanket in the early morning, and it is just as ephemeral as both.
possibly this makes no sense to the crusader-types, who do what they do because, well, what could be more meaningful than saving the world?
sometimes i wish i were shaped like that.
if you're trying to save the world, you have to know what you're saving it from. furthermore, you have to believe that there's a significant chance that the world might end. otherwise, what's the point?
but whenever i try to compute p(doom), a gaping maw opens wider and wider over my eyes, until i get so lost that the whole operation collapses and i am left clenching the glass so hard that i fear it might shatter. the soft bright thing, fleeting as it is, goes into hiding until it is safe to come out again.
so i simply live life as if we aren't doomed at all.
to be clear, i don't think that p(doom) is zero, and i'm glad you crusaders exist. it is heartening to know that there are smart people who are trying to keep everyone from dying, and i'll cheer you on from over here.
but i don't have a p(doom) because i just wouldn't be happy if i did.
thanks to gwern for the intro talk (i explained this three times, so it had to be a blog post).
also, thanks to steve hsu for helping me figure out what i was trying to say.