
Antb

Karma: 46

I strongly believe the alignment problem is fundamentally unsolvable, another instance of an undecidable problem. I would, however, prefer to die with dignity, so I study methods of minimizing the chances of being wiped out after the advent of ASI.

My current line of research is computational neuroscience for human cognitive augmentation. I work from the heavily flawed theory that the higher humanity's intelligence waterline, the better the chances that ASI employs us as part of its goals instead of 'recycling' us as biomass.

What does your philosophy maximize?

Antb · Mar 1, 2024, 4:10 PM
0 points
1 comment · 1 min read · LW link

Looking for Spanish AI Alignment Researchers

Antb · Jan 7, 2023, 6:52 PM
7 points
3 comments · 1 min read · LW link

[Question] What career advice do you give to software engineers?

Antb · Dec 31, 2022, 12:01 PM
15 points
4 comments · 1 min read · LW link

[Question] Creating superintelligence without AGI

Antb · Oct 17, 2022, 7:01 PM
7 points
3 comments · 1 min read · LW link