
RomanS

Karma: 839

[Question] What are some good arguments against building new nuclear power plants?

RomanS · Aug 12, 2022, 7:32 AM
16 points
15 comments · 2 min read · LW link

A sufficiently paranoid paperclip maximizer

RomanS · Aug 8, 2022, 11:17 AM
17 points
10 comments · 2 min read · LW link

[Question] What if LaMDA is indeed sentient / self-aware / worth having rights?

RomanS · Jun 16, 2022, 9:10 AM
22 points
13 comments · 1 min read · LW link

[linkpost] The final AI benchmark: BIG-bench

RomanS · Jun 10, 2022, 8:53 AM
25 points
21 comments · 1 min read · LW link

[Linkpost] A Chinese AI optimized for killing

RomanS · Jun 3, 2022, 9:17 AM
−2 points
4 comments · 1 min read · LW link

Predicting a global catastrophe: the Ukrainian model

RomanS · Apr 7, 2022, 12:06 PM
5 points
11 comments · 2 min read · LW link

Consume fiction wisely

RomanS · Jan 21, 2022, 8:23 PM
−9 points
56 comments · 5 min read · LW link

A fate worse than death?

RomanS · Dec 13, 2021, 11:05 AM
−25 points
26 comments · 2 min read · LW link

[Linkpost] Chinese government’s guidelines on AI

RomanS · Dec 10, 2021, 9:10 PM
61 points
14 comments · 1 min read · LW link

Exterminating humans might be on the to-do list of a Friendly AI

RomanS · Dec 7, 2021, 2:15 PM
5 points
8 comments · 2 min read · LW link

Resurrecting all humans ever lived as a technical problem

RomanS · Oct 31, 2021, 6:08 PM
48 points
36 comments · 7 min read · LW link

Steelman arguments against the idea that AGI is inevitable and will arrive soon

RomanS · Oct 9, 2021, 6:22 AM
20 points
12 comments · 5 min read · LW link

A sufficiently paranoid non-Friendly AGI might self-modify itself to become Friendly

RomanS · Sep 22, 2021, 6:29 AM
5 points
2 comments · 1 min read · LW link