Goals (tag)
Refinement of Active Inference agency ontology. Roman Leventov, 15 Dec 2023 9:31 UTC. 16 points, 0 comments, 5 min read. LW link (arxiv.org).

[Question] How to select a long-term goal and align my mind towards it? Alexander, 24 Dec 2021 11:40 UTC. 19 points, 8 comments, 2 min read. LW link.

Logical Foundations of Government Policy. FCCC, 10 Oct 2020 17:05 UTC. 2 points, 0 comments, 17 min read. LW link.

Ideation and Trajectory Modelling in Language Models. NickyP, 5 Oct 2023 19:21 UTC. 16 points, 2 comments, 10 min read. LW link.

Goal alignment without alignment on epistemology, ethics, and science is futile. Roman Leventov, 7 Apr 2023 8:22 UTC. 20 points, 2 comments, 2 min read. LW link.

AISC Project: Modelling Trajectories of Language Models. NickyP, 13 Nov 2023 14:33 UTC. 27 points, 0 comments, 12 min read. LW link.

Complex Behavior from Simple (Sub)Agents. moridinamael, 10 May 2019 21:44 UTC. 113 points, 13 comments, 9 min read. LW link. 1 review.

The Values-to-Actions Decision Chain. Remmelt, 30 Jun 2018 21:52 UTC. 29 points, 6 comments, 10 min read. LW link.

Distinguishing goals from chores. Amir Bolous, 10 Jan 2021 7:45 UTC. 5 points, 1 comment, 4 min read. LW link.

Do Humans Want Things? lukeprog, 4 Aug 2011 5:00 UTC. 40 points, 53 comments, 5 min read. LW link.

Unpacking “Shard Theory” as Hunch, Question, Theory, and Insight. Jacy Reese Anthis, 16 Nov 2022 13:54 UTC. 31 points, 9 comments, 2 min read. LW link.

Shifting Headspaces—Transitional Beast-Mode. Jonathan Moregård, 12 Aug 2024 13:02 UTC. 36 points, 9 comments, 2 min read. LW link (honestliving.substack.com).

Galatea and the windup toy. Nicolas Villarreal, 26 Oct 2024 14:52 UTC. −4 points, 0 comments, 13 min read. LW link (nicolasdvillarreal.substack.com).