Michael Soareverix
Karma: 79

Pivotal Acts are easier than Alignment?
21 Jul 2024 12:15 UTC · 1 point · 4 comments · 1 min read · LW link

[Question] Optimizing for Agency?
14 Feb 2024 8:31 UTC · 10 points · 9 comments · 2 min read · LW link

The Virus—Short Story
13 Apr 2023 18:18 UTC · 4 points · 0 comments · 4 min read · LW link

Gold, Silver, Red: A color scheme for understanding people
13 Mar 2023 1:06 UTC · 17 points · 2 comments · 4 min read · LW link

A Good Future (rough draft)
24 Oct 2022 20:45 UTC · 10 points · 5 comments · 3 min read · LW link

A rough idea for solving ELK: An approach for training generalist agents like GATO to make plans and describe them to humans clearly and honestly.
8 Sep 2022 15:20 UTC · 2 points · 2 comments · 2 min read · LW link

Our Existing Solutions to AGI Alignment (semi-safe)
21 Jul 2022 19:00 UTC · 12 points · 1 comment · 3 min read · LW link

Musings on the Human Objective Function
15 Jul 2022 7:13 UTC · 3 points · 0 comments · 3 min read · LW link

Three Minimum Pivotal Acts Possible by Narrow AI
12 Jul 2022 9:51 UTC · 0 points · 4 comments · 2 min read · LW link

Could an AI Alignment Sandbox be useful?
2 Jul 2022 5:06 UTC · 2 points · 1 comment · 1 min read · LW link