Estimates of GPU or equivalent resources of large AI players for 2024/5

CharlesD · 28 Nov 2024 23:01 UTC
53 points
7 comments · 9 min read · LW link

Automated monitoring systems

hiki_t · 28 Nov 2024 18:54 UTC
1 point
0 comments · 2 min read · LW link

AI #92: Behind the Curve

Zvi · 28 Nov 2024 14:40 UTC
82 points
7 comments · 44 min read · LW link
(thezvi.wordpress.com)

Is the mind a program?

EuanMcLean · 28 Nov 2024 9:42 UTC
14 points
62 comments · 7 min read · LW link

LLMs Do Not Think Step-by-step In Implicit Reasoning

Bogdan Ionut Cirstea · 28 Nov 2024 9:16 UTC
11 points
0 comments · 1 min read · LW link
(arxiv.org)

A Meritocracy of Taste

Daniele De Nuntiis · 28 Nov 2024 9:10 UTC
2 points
11 comments · 2 min read · LW link

Importing Bluesky Comments

jefftk · 28 Nov 2024 3:50 UTC
14 points
0 comments · 1 min read · LW link
(www.jefftk.com)

New o1-like model (QwQ) beats Claude 3.5 Sonnet with only 32B parameters

Jesse Hoogland · 27 Nov 2024 22:06 UTC
68 points
4 comments · 1 min read · LW link
(qwenlm.github.io)

“Map of AI Futures”—An interactive flowchart

swante · 27 Nov 2024 21:31 UTC
69 points
3 comments · 2 min read · LW link
(swantescholz.github.io)

How to solve the misuse problem assuming that in 10 years the default scenario is that AGI agents are capable of synthesizing pathogens

jeremtti · 27 Nov 2024 21:17 UTC
6 points
0 comments · 9 min read · LW link

ARENA 4.0 Impact Report

27 Nov 2024 20:51 UTC
43 points
3 comments · 13 min read · LW link

On AI Detectors Regarding College Applications

Kaustubh Kislay · 27 Nov 2024 20:25 UTC
4 points
2 comments · 2 min read · LW link

When the Scientific Method Doesn’t Really Help...

casualphysicsenjoyer · 27 Nov 2024 19:52 UTC
3 points
1 comment · 5 min read · LW link
(chillphysicsenjoyer.substack.com)

Causal inference for the home gardener

braces · 27 Nov 2024 17:55 UTC
42 points
1 comment · 5 min read · LW link

Repeal the Jones Act of 1920

Zvi · 27 Nov 2024 15:00 UTC
146 points
24 comments · 39 min read · LW link
(thezvi.wordpress.com)

Long Live the Usurper

pleiotroth · 27 Nov 2024 12:10 UTC
21 points
0 comments · 5 min read · LW link

Hope to live or fear to die?

Knight Lee · 27 Nov 2024 10:42 UTC
3 points
0 comments · 1 min read · LW link

The Queen’s Dilemma: A Paradox of Control

Daniel Murfet · 27 Nov 2024 10:40 UTC
24 points
11 comments · 3 min read · LW link

AXRP Episode 38.2 - Jesse Hoogland on Singular Learning Theory

DanielFilan · 27 Nov 2024 6:30 UTC
34 points
0 comments · 10 min read · LW link

Hierarchical Agency: A Missing Piece in AI Alignment

Jan_Kulveit · 27 Nov 2024 5:49 UTC
112 points
20 comments · 11 min read · LW link

Facets and Social Networks

jefftk · 27 Nov 2024 3:40 UTC
15 points
1 comment · 1 min read · LW link
(www.jefftk.com)

Call for evaluators: Participate in the European AI Office workshop on general-purpose AI models and systemic risks

27 Nov 2024 2:54 UTC
30 points
0 comments · 2 min read · LW link

Wagering on Will And Worth (Pascal’s Wager for Free Will and Value)

Robert Cousineau · 27 Nov 2024 0:43 UTC
−1 points
2 comments · 3 min read · LW link

Should you have children? All LessWrong posts about the topic

Sherrinford · 26 Nov 2024 23:52 UTC
17 points
0 comments · 16 min read · LW link

Dave Kasten’s AGI-by-2027 vignette

davekasten · 26 Nov 2024 23:20 UTC
49 points
8 comments · 5 min read · LW link

Fractals to Quasiparticles

James Camacho · 26 Nov 2024 20:19 UTC
5 points
0 comments · 5 min read · LW link

[Question] What epsilon do you subtract from “certainty” in your own probability estimates?

Dagon · 26 Nov 2024 19:13 UTC
14 points
6 comments · 1 min read · LW link

Implications—How Conscious Significance Could Inform Our Lives

James Stephen Brown · 26 Nov 2024 17:42 UTC
7 points
0 comments · 13 min read · LW link

Workshop Report: Why current benchmark approaches are not sufficient for safety

26 Nov 2024 17:20 UTC
3 points
1 comment · 3 min read · LW link

You are not too “irrational” to know your preferences.

DaystarEld · 26 Nov 2024 15:01 UTC
230 points
50 comments · 13 min read · LW link

AI & Liability Ideathon

Kabir Kumar · 26 Nov 2024 13:54 UTC
18 points
2 comments · 4 min read · LW link
(lu.ma)

Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts?

Bogdan Ionut Cirstea · 26 Nov 2024 9:58 UTC
9 points
0 comments · 1 min read · LW link
(arxiv.org)

Should you increase AI alignment funding, or increase AI regulation?

Knight Lee · 26 Nov 2024 9:17 UTC
3 points
1 comment · 4 min read · LW link

Mitigating Geomagnetic Storm and EMP Risks to the Electrical Grid (Shallow Dive)

Davidmanheim · 26 Nov 2024 8:00 UTC
16 points
4 comments · 6 min read · LW link

Filled Cupcakes

jefftk · 26 Nov 2024 3:20 UTC
21 points
2 comments · 1 min read · LW link
(www.jefftk.com)

notes on prioritizing tasks & cognition-threads

Emrik · 26 Nov 2024 0:28 UTC
3 points
1 comment · 4 min read · LW link

[Question] Why are there no interesting (1D, 2-state) quantum cellular automata?

Optimization Process · 26 Nov 2024 0:11 UTC
29 points
13 comments · 2 min read · LW link

Counting AGIs

26 Nov 2024 0:06 UTC
75 points
19 comments · 32 min read · LW link

The Problem with Reasoners by Aidan McLaughlin

t14n · 25 Nov 2024 20:24 UTC
7 points
1 comment · 1 min read · LW link
(aidanmclaughlin.notion.site)

Locally optimal psychology

Chipmonk · 25 Nov 2024 18:35 UTC
37 points
7 comments · 2 min read · LW link
(twitter.com)

a space habitat design

bhauth · 25 Nov 2024 17:28 UTC
55 points
13 comments · 9 min read · LW link
(bhauth.com)

Arthropod (non) sentience

Arturo Macias · 25 Nov 2024 16:01 UTC
9 points
8 comments · 4 min read · LW link

Crosspost: Developing the middle ground on polarized topics

juliawise · 25 Nov 2024 14:39 UTC
13 points
16 comments · 3 min read · LW link

Two flavors of computational functionalism

EuanMcLean · 25 Nov 2024 10:47 UTC
28 points
9 comments · 4 min read · LW link

Alignment is not intelligent

Donatas Lučiūnas · 25 Nov 2024 6:59 UTC
−23 points
18 comments · 5 min read · LW link

Zaragoza ACX/LW Meetup

Fernand0 · 25 Nov 2024 6:56 UTC
1 point
0 comments · 1 min read · LW link

A better “Statement on AI Risk?”

Knight Lee · 25 Nov 2024 4:50 UTC
5 points
6 comments · 3 min read · LW link

Reflections on ML4Good

james__p · 25 Nov 2024 2:40 UTC
12 points
0 comments · 1 min read · LW link

AI Specialized in ML Training Could Create ASI: AGI Is Unnecessary

satopi · 25 Nov 2024 2:31 UTC
−5 points
1 comment · 1 min read · LW link

I, Token

Ivan Vendrov · 25 Nov 2024 2:20 UTC
14 points
2 comments · 3 min read · LW link
(nothinghuman.substack.com)