
Nathan1123

Karma: 35

[Question] Implication of Uncomputable Problems

Nathan1123 · Jan 30, 2025, 4:48 PM
−3 points
3 comments · 1 min read

Humans don't understand how we do most things

Nathan1123 · Jun 5, 2023, 2:35 PM
2 points
2 comments · 2 min read

The Stanley Parable: Making philosophy fun

Nathan1123 · May 22, 2023, 2:15 AM
6 points
3 comments · 3 min read

[Question] Is there any literature on using socialization for AI alignment?

Nathan1123 · Apr 19, 2023, 10:16 PM
10 points
9 comments · 2 min read

[Question] Could the simulation argument also apply to dreams?

Nathan1123 · Aug 17, 2022, 7:55 PM
6 points
4 comments · 3 min read

[Question] What is the probability that a superintelligent, sentient AGI is actually infeasible?

Nathan1123 · Aug 14, 2022, 10:41 PM
−3 points
6 comments · 1 min read

An Uncanny Prison

Nathan1123 · Aug 13, 2022, 9:40 PM
3 points
3 comments · 2 min read

Infant AI Scenario

Nathan1123 · Aug 12, 2022, 9:20 PM
1 point
0 comments · 3 min read

Dissected boxed AI

Nathan1123 · Aug 12, 2022, 2:37 AM
−8 points
2 comments · 1 min read

[Question] Do advancements in Decision Theory point towards moral absolutism?

Nathan1123 · Aug 11, 2022, 12:59 AM
0 points
4 comments · 4 min read

[Question] How would two superintelligent AIs interact, if they are unaligned with each other?

Nathan1123 · Aug 9, 2022, 6:58 PM
4 points
6 comments · 1 min read