Jonas Hallgren
Karma: 583

AI Safety person currently working on multi-agent coordination problems.
The Alignment Mapping Program: Forging Independent Thinkers in AI Safety—A Pilot Retrospective
Alvin Ånestrand, Jonas Hallgren and Utilop · 10 Jan 2025 16:22 UTC · 21 points · 0 comments · 4 min read · LW link
Meditation insights as phase shifts in your self-model
Jonas Hallgren · 7 Jan 2025 10:09 UTC · 8 points · 3 comments · 3 min read · LW link
Model Integrity: MAI on Value Alignment
Jonas Hallgren · 5 Dec 2024 17:11 UTC · 6 points · 11 comments · 1 min read · LW link (meaningalignment.substack.com)
Reprogramming the Mind: Meditation as a Tool for Cognitive Optimization
Jonas Hallgren · 11 Jan 2024 12:03 UTC · 27 points · 3 comments · 11 min read · LW link
How well does your research address the theory-practice gap?
Jonas Hallgren · 8 Nov 2023 11:27 UTC · 18 points · 0 comments · 10 min read · LW link
Jonas Hallgren’s Shortform
Jonas Hallgren · 11 Oct 2023 9:52 UTC · 3 points · 9 comments · 1 min read · LW link
Advice for new alignment people: Info Max
Jonas Hallgren · 30 May 2023 15:42 UTC · 27 points · 4 comments · 5 min read · LW link
Respect for Boundaries as non-arbitrary coordination norms
Jonas Hallgren · 9 May 2023 19:42 UTC · 9 points · 3 comments · 7 min read · LW link
Max Tegmark’s new Time article on how we’re in a Don’t Look Up scenario [Linkpost]
Jonas Hallgren · 25 Apr 2023 15:41 UTC · 39 points · 9 comments · 1 min read · LW link (time.com)
The Benefits of Distillation in Research
Jonas Hallgren · 4 Mar 2023 17:45 UTC · 15 points · 2 comments · 5 min read · LW link
Power-Seeking = Minimising free energy
Jonas Hallgren · 22 Feb 2023 4:28 UTC · 21 points · 10 comments · 7 min read · LW link
Black Box Investigation Research Hackathon
Esben Kran and Jonas Hallgren · 12 Sep 2022 7:20 UTC · 9 points · 4 comments · 2 min read · LW link
Announcing the Distillation for Alignment Practicum (DAP)
Jonas Hallgren and CallumMcDougall · 18 Aug 2022 19:50 UTC · 23 points · 3 comments · 3 min read · LW link
[Question] Does agent foundations cover all future ML systems?
Jonas Hallgren · 25 Jul 2022 1:17 UTC · 2 points · 0 comments · 1 min read · LW link
[Question] Is it worth making a database for moral predictions?
Jonas Hallgren · 16 Aug 2021 14:51 UTC · 1 point · 0 comments · 2 min read · LW link
[Question] Is there any serious attempt to create a system to figure out the CEV of humanity and if not, why haven’t we started yet?
Jonas Hallgren · 25 Feb 2021 22:06 UTC · 5 points · 2 comments · 1 min read · LW link