
Noosphere89

Karma: 2,349

[Question] When is reward ever the optimization target?

Noosphere89 · 15 Oct 2024 15:09 UTC
33 points
12 comments · 1 min read · LW link

[Question] What does it mean for an event or observation to have probability 0 or 1 in Bayesian terms?

Noosphere89 · 17 Sep 2024 17:28 UTC
1 point
22 comments · 1 min read · LW link

My disagreements with “AGI ruin: A List of Lethalities”

Noosphere89 · 15 Sep 2024 17:22 UTC
34 points
46 comments · 18 min read · LW link

[Question] Does a time-reversible physical law/Cellular Automaton always imply the First Law of Thermodynamics?

Noosphere89 · 30 Aug 2024 15:12 UTC
7 points
11 comments · 1 min read · LW link

Francois Chollet inadvertently limits his claim on ARC-AGI

Noosphere89 · 16 Jul 2024 17:32 UTC
12 points
3 comments · 1 min read · LW link
(x.com)

The problems with the concept of an infohazard as used by the LW community [Linkpost]

Noosphere89 · 22 Dec 2023 16:13 UTC
75 points
43 comments · 3 min read · LW link
(www.beren.io)

[Question] What’s the minimal additive constant for Kolmogorov Complexity that a programming language can achieve?

Noosphere89 · 20 Dec 2023 15:36 UTC
11 points
15 comments · 1 min read · LW link

Arguments for optimism on AI Alignment (I don’t endorse this version, will reupload a new version soon.)

Noosphere89 · 15 Oct 2023 14:51 UTC
26 points
127 comments · 25 min read · LW link

Hilbert’s Triumph, Church and Turing’s failure, and what it means (Post #2)

Noosphere89 · 30 Jul 2023 14:33 UTC
−5 points
16 comments · 15 min read · LW link

[Question] Does decidability of a theory imply completeness of the theory?

Noosphere89 · 29 Jul 2023 23:53 UTC
6 points
12 comments · 1 min read · LW link

Why you can’t treat decidability and complexity as a constant (Post #1)

Noosphere89 · 26 Jul 2023 17:54 UTC
6 points
13 comments · 5 min read · LW link

An Opinionated Guide to Computability and Complexity (Post #0)

Noosphere89 · 24 Jul 2023 17:53 UTC
10 points
10 comments · 3 min read · LW link

[Question] Conditional on living in an AI safety/alignment by default universe, what are the implications of this assumption being true?

Noosphere89 · 17 Jul 2023 14:44 UTC
26 points
10 comments · 1 min read · LW link

A potentially high impact differential technological development area

Noosphere89 · 8 Jun 2023 14:33 UTC
5 points
2 comments · 2 min read · LW link

[Question] Are computationally complex algorithms expensive to have, expensive to operate, or both?

Noosphere89 · 2 Jun 2023 17:50 UTC
7 points
5 comments · 1 min read · LW link

[Question] Does reversible computation let you compute the complexity class PSPACE as efficiently as normal computers compute the complexity class P?

Noosphere89 · 9 May 2023 13:18 UTC
6 points
14 comments · 1 min read · LW link

[Question] Are there AI policies that are robustly net-positive even when considering different AI scenarios?

Noosphere89 · 23 Apr 2023 21:46 UTC
11 points
1 comment · 1 min read · LW link

[Question] Can we get around Gödel’s Incompleteness theorems and Turing undecidable problems via infinite computers?

Noosphere89 · 17 Apr 2023 15:14 UTC
−11 points
12 comments · 1 min read · LW link

[Question] Best arguments against the outside view that AGI won’t be a huge deal, thus we survive.

Noosphere89 · 27 Mar 2023 20:49 UTC
4 points
7 comments · 1 min read · LW link

A case for capabilities work on AI as net positive

Noosphere89 · 27 Feb 2023 21:12 UTC
10 points
37 comments · 1 min read · LW link