Max H

Karma: 2,270

Most of my posts and comments are about AI and alignment. Posts I’m most proud of, which also provide a good introduction to my worldview:

I also created Forum Karma and wrote a longer self-introduction here.

PMs and private feedback are always welcome.

NOTE: I am not Max Harms, author of Crystal Society. I’d prefer for now that my LW postings not be attached to my full name when people Google me for other reasons, but you can PM me here or on Discord (m4xed) if you want to know who I am.

Bayesian updating in real life is mostly about understanding your hypotheses

Max H · 1 Jan 2024 0:10 UTC
63 points
4 comments · 11 min read · LW link

Emmett Shear to be interim CEO of OpenAI

Max H · 20 Nov 2023 5:40 UTC
21 points
5 comments · 1 min read · LW link
(www.theverge.com)

Concrete positive visions for a future without AGI

Max H · 8 Nov 2023 3:12 UTC
41 points
28 comments · 8 min read · LW link

Trying to deconfuse some core AI x-risk problems

17 Oct 2023 18:36 UTC
34 points
13 comments · 27 min read · LW link

An explanation for every token: using an LLM to sample another LLM

Max H · 11 Oct 2023 0:53 UTC
35 points
5 comments · 11 min read · LW link

Actually, “personal attacks after object-level arguments” is a pretty good rule of epistemic conduct

Max H · 17 Sep 2023 20:25 UTC
37 points
15 comments · 7 min read · LW link

Forum Karma: view stats and find highly-rated comments for any LW user

Max H · 1 Jul 2023 15:36 UTC
60 points
16 comments · 2 min read · LW link
(forumkarma.com)

10 quick takes about AGI

Max H · 20 Jun 2023 2:22 UTC
35 points
17 comments · 7 min read · LW link

Four levels of understanding decision theory

Max H · 1 Jun 2023 20:55 UTC
12 points
11 comments · 4 min read · LW link

Without a trajectory change, the development of AGI is likely to go badly

Max H · 29 May 2023 23:42 UTC
16 points
2 comments · 13 min read · LW link

Where do you lie on two axes of world manipulability?

Max H · 26 May 2023 3:04 UTC
30 points
15 comments · 3 min read · LW link

Reward is the optimization target (of capabilities researchers)

Max H · 15 May 2023 3:22 UTC
32 points
4 comments · 5 min read · LW link

Max H’s Shortform

Max H · 13 May 2023 0:17 UTC
5 points
2 comments · 1 min read · LW link

Gradient hacking via actual hacking

Max H · 10 May 2023 1:57 UTC
12 points
7 comments · 3 min read · LW link

LLM cognition is probably not human-like

Max H · 8 May 2023 1:22 UTC
26 points
15 comments · 7 min read · LW link

A test of your rationality skills

Max H · 20 Apr 2023 1:19 UTC
11 points
11 comments · 4 min read · LW link

Paying the corrigibility tax

Max H · 19 Apr 2023 1:57 UTC
14 points
1 comment · 13 min read · LW link

“Aligned” foundation models don’t imply aligned systems

Max H · 13 Apr 2023 4:13 UTC
39 points
11 comments · 5 min read · LW link

A decade of lurking, a month of posting

Max H · 9 Apr 2023 0:21 UTC
70 points
4 comments · 5 min read · LW link

Eliezer on The Lunar Society podcast

Max H · 6 Apr 2023 16:18 UTC
40 points
5 comments · 1 min read · LW link
(www.dwarkeshpatel.com)