
Jan_Kulveit

Karma: 3,973

My current research interests:
- alignment in complex, messy systems composed of both humans and AIs
- actually good mathematized theories of cooperation and coordination
- active inference
- bounded rationality

Research at the Alignment of Complex Systems Research Group (acsresearch.org), Centre for Theoretical Studies, Charles University in Prague. Formerly a research fellow at the Future of Humanity Institute, Oxford University.

Previously I was a researcher in physics, studying phase transitions, network science and complex systems.

Jan_Kulveit’s Shortform

Jan_Kulveit · 14 Nov 2024 0:00 UTC
7 points
5 comments · 1 min read · LW link

You should go to ML conferences

Jan_Kulveit · 24 Jul 2024 11:47 UTC
110 points
13 comments · 4 min read · LW link

The Living Planet Index: A Case Study in Statistical Pitfalls

Jan_Kulveit · 24 Jun 2024 10:05 UTC
24 points
0 comments · 4 min read · LW link
(www.nature.com)

Announcing Human-aligned AI Summer School

22 May 2024 8:55 UTC
50 points
0 comments · 1 min read · LW link
(humanaligned.ai)

InterLab – a toolkit for experiments with multi-agent interactions

22 Jan 2024 18:23 UTC
69 points
0 comments · 8 min read · LW link
(acsresearch.org)

Box inversion revisited

Jan_Kulveit · 7 Nov 2023 11:09 UTC
40 points
3 comments · 8 min read · LW link

[Question] Snapshot of narratives and frames against regulating AI

Jan_Kulveit · 1 Nov 2023 16:30 UTC
36 points
19 comments · 3 min read · LW link

We don’t understand what happened with culture enough

Jan_Kulveit · 9 Oct 2023 9:54 UTC
86 points
21 comments · 6 min read · LW link

Elon Musk announces xAI

Jan_Kulveit · 13 Jul 2023 9:01 UTC
75 points
35 comments · 1 min read · LW link
(www.ft.com)

Talking publicly about AI risk

Jan_Kulveit · 21 Apr 2023 11:28 UTC
180 points
9 comments · 6 min read · LW link

The self-unalignment problem

14 Apr 2023 12:10 UTC
146 points
24 comments · 10 min read · LW link

Why Simulator AIs want to be Active Inference AIs

10 Apr 2023 18:23 UTC
91 points
8 comments · 8 min read · LW link

Lessons from Convergent Evolution for AI Alignment

27 Mar 2023 16:25 UTC
54 points
9 comments · 8 min read · LW link

The space of systems and the space of maps

22 Mar 2023 14:59 UTC
39 points
0 comments · 5 min read · LW link

Cyborg Periods: There will be multiple AI transitions

22 Feb 2023 16:09 UTC
108 points
9 comments · 6 min read · LW link

The Cave Allegory Revisited: Understanding GPT’s Worldview

Jan_Kulveit · 14 Feb 2023 16:00 UTC
84 points
5 comments · 3 min read · LW link

Deontology and virtue ethics as “effective theories” of consequentialist ethics

Jan_Kulveit · 17 Nov 2022 14:11 UTC
63 points
9 comments · 1 min read · LW link · 1 review

We can do better than argmax

Jan_Kulveit · 10 Oct 2022 10:32 UTC
49 points
4 comments · 1 min read · LW link

Limits to Legibility

Jan_Kulveit · 29 Jun 2022 17:42 UTC
138 points
11 comments · 5 min read · LW link · 1 review

Continuity Assumptions

Jan_Kulveit · 13 Jun 2022 21:31 UTC
37 points
13 comments · 4 min read · LW link