D&D.Sci Hypersphere Analysis Part 1: Datafields & Preliminary Analysis

aphyer · 13 Jan 2024 20:16 UTC
29 points
1 comment · 5 min read · LW link

Some additional SAE thoughts

Hoagy · 13 Jan 2024 19:31 UTC
30 points
4 comments · 13 min read · LW link

(4 min read) An intuitive explanation of the AI influence situation

trevor · 13 Jan 2024 17:34 UTC
12 points
26 comments · 4 min read · LW link

AI #47: Meet the New Year

Zvi · 13 Jan 2024 16:20 UTC
36 points
7 comments · 57 min read · LW link
(thezvi.wordpress.com)

Takeaways from the NeurIPS 2023 Trojan Detection Competition

mikes · 13 Jan 2024 12:35 UTC
20 points
2 comments · 1 min read · LW link
(confirmlabs.org)

[Question] Why do so many think deception in AI is important?

Prometheus · 13 Jan 2024 8:14 UTC
23 points
12 comments · 1 min read · LW link

Eliminating Cookie Banners is Hard

jefftk · 13 Jan 2024 3:00 UTC
23 points
15 comments · 3 min read · LW link
(www.jefftk.com)

Introducing Alignment Stress-Testing at Anthropic

evhub · 12 Jan 2024 23:51 UTC
182 points
23 comments · 2 min read · LW link

D&D.Sci(-fi): Colonizing the SuperHyperSphere

abstractapplic · 12 Jan 2024 23:36 UTC
48 points
23 comments · 2 min read · LW link

Commonwealth Fusion Systems is the Same Scale as OpenAI

Jeffrey Heninger · 12 Jan 2024 21:43 UTC
22 points
13 comments · 2 min read · LW link

Throughput vs. Latency

12 Jan 2024 21:37 UTC
29 points
2 comments · 13 min read · LW link

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

12 Jan 2024 19:51 UTC
305 points
95 comments · 3 min read · LW link
(arxiv.org)

METAPHILOSOPHY—A Philosophizing through logical consequences

Seremonia · 12 Jan 2024 18:47 UTC
−7 points
7 comments · 1 min read · LW link

Idealism, Realistic & Pragmatic

Seremonia · 12 Jan 2024 18:16 UTC
−7 points
3 comments · 1 min read · LW link

The existential threat of humans.

Spiritus Dei · 12 Jan 2024 17:50 UTC
−24 points
0 comments · 3 min read · LW link

[Question] Concrete examples of doing agentic things?

Jacob G-W · 12 Jan 2024 15:59 UTC
13 points
10 comments · 1 min read · LW link

Land Reclamation is in the 9th Circle of Stagnation Hell

Maxwell Tabarrok · 12 Jan 2024 13:36 UTC
54 points
6 comments · 2 min read · LW link
(maximumprogress.substack.com)

What good is G-factor if you’re dumped in the woods? A field report from a camp counselor.

Hastings · 12 Jan 2024 13:17 UTC
137 points
22 comments · 1 min read · LW link

A Chinese Room Containing a Stack of Stochastic Parrots

RogerDearnaley · 12 Jan 2024 6:29 UTC
20 points
3 comments · 5 min read · LW link

Decent plan prize announcement (1 paragraph, $1k)

lemonhope · 12 Jan 2024 6:27 UTC
25 points
19 comments · 1 min read · LW link

introduction to solid oxide electrolytes

bhauth · 12 Jan 2024 5:35 UTC
17 points
0 comments · 4 min read · LW link
(www.bhauth.com)

Apply to the PIBBSS Summer Research Fellowship

12 Jan 2024 4:06 UTC
39 points
1 comment · 2 min read · LW link

A Benchmark for Decision Theories

StrivingForLegibility · 11 Jan 2024 18:54 UTC
10 points
0 comments · 2 min read · LW link

An even deeper atheism

Joe Carlsmith · 11 Jan 2024 17:28 UTC
125 points
47 comments · 15 min read · LW link

Motivating Alignment of LLM-Powered Agents: Easy for AGI, Hard for ASI?

RogerDearnaley · 11 Jan 2024 12:56 UTC
34 points
4 comments · 39 min read · LW link

Reprogramming the Mind: Meditation as a Tool for Cognitive Optimization

Jonas Hallgren · 11 Jan 2024 12:03 UTC
27 points
3 comments · 11 min read · LW link

AI-Generated Music for Learning

ethanmorse · 11 Jan 2024 4:11 UTC
9 points
1 comment · 1 min read · LW link
(210ethan.github.io)

Introduce a Speed Maximum

jefftk · 11 Jan 2024 2:50 UTC
36 points
28 comments · 2 min read · LW link
(www.jefftk.com)

[Question] Prediction markets are consistently underconfident. Why?

Sinclair Chen · 11 Jan 2024 2:44 UTC
11 points
4 comments · 1 min read · LW link

Trying to align humans with inclusive genetic fitness

peterbarnett · 11 Jan 2024 0:13 UTC
23 points
5 comments · 10 min read · LW link

Universal Love Integration Test: Hitler

Raemon · 10 Jan 2024 23:55 UTC
76 points
65 comments · 9 min read · LW link

The Perceptron Controversy

Yuxi_Liu · 10 Jan 2024 23:07 UTC
65 points
18 comments · 1 min read · LW link
(yuxi-liu-wired.github.io)

The Aspiring Rationalist Congregation

maia · 10 Jan 2024 22:52 UTC
86 points
23 comments · 10 min read · LW link

An Actually Intuitive Explanation of the Oberth Effect

Isaac King · 10 Jan 2024 20:23 UTC
60 points
33 comments · 6 min read · LW link

Beware the suboptimal routine

jwfiredragon · 10 Jan 2024 19:02 UTC
12 points
3 comments · 3 min read · LW link

The true cost of fences

pleiotroth · 10 Jan 2024 19:01 UTC
3 points
2 comments · 4 min read · LW link

“Dark Constitution” for constraining some superintelligences

Valentine · 10 Jan 2024 16:02 UTC
3 points
9 comments · 1 min read · LW link
(www.anarchonomicon.com)

[Question] rabbit (a new AI company) and Large Action Model (LAM)

MiguelDev · 10 Jan 2024 13:57 UTC
17 points
3 comments · 1 min read · LW link

Saving the world sucks

Defective Altruism · 10 Jan 2024 5:55 UTC
48 points
29 comments · 3 min read · LW link

[Question] Questions about Solomonoff induction

mukashi · 10 Jan 2024 1:16 UTC
7 points
11 comments · 1 min read · LW link

AI as a natural disaster

Neil · 10 Jan 2024 0:42 UTC
11 points
1 comment · 7 min read · LW link

Stop being surprised by the passage of time

10 Jan 2024 0:36 UTC
−2 points
1 comment · 3 min read · LW link

A discussion of normative ethics

9 Jan 2024 23:29 UTC
10 points
6 comments · 25 min read · LW link

On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche

Zack_M_Davis · 9 Jan 2024 23:12 UTC
44 points
31 comments · 4 min read · LW link

[Question] What’s the protocol for if a novice has ML ideas that are unlikely to work, but might improve capabilities if they do work?

drocta · 9 Jan 2024 22:51 UTC
6 points
2 comments · 2 min read · LW link

Goodbye, Shoggoth: The Stage, its Animatronics, & the Puppeteer – a New Metaphor

RogerDearnaley · 9 Jan 2024 20:42 UTC
47 points
8 comments · 36 min read · LW link

Bent or Blunt Hoods?

jefftk · 9 Jan 2024 20:10 UTC
23 points
0 comments · 1 min read · LW link
(www.jefftk.com)

2024 ACX Predictions: Blind/Buy/Sell/Hold

Zvi · 9 Jan 2024 19:30 UTC
33 points
2 comments · 31 min read · LW link
(thezvi.wordpress.com)

Announcing the Double Crux Bot

9 Jan 2024 18:54 UTC
52 points
8 comments · 3 min read · LW link

Does AI risk “other” the AIs?

Joe Carlsmith · 9 Jan 2024 17:51 UTC
59 points
3 comments · 8 min read · LW link