
Koen.Holtman (Koen Holtman)

Karma: 543

Computing scientist and systems architect. Currently doing self-funded AI/AGI safety research. I participate in AI standardization under the company name Holtman Systems Research: https://holtmansystemsresearch.nl/

Open positions: Research Analyst at the AI Standards Lab

22 Dec 2023 16:31 UTC
17 points
0 comments · 1 min read · LW link

Demanding and Designing Aligned Cognitive Architectures

Koen.Holtman · 21 Dec 2021 17:32 UTC
8 points
5 comments · 5 min read · LW link

Safely controlling the AGI agent reward function

Koen.Holtman · 17 Feb 2021 14:47 UTC
8 points
0 comments · 5 min read · LW link

Graphical World Models, Counterfactuals, and Machine Learning Agents

Koen.Holtman · 17 Feb 2021 11:07 UTC
6 points
2 comments · 10 min read · LW link

Disentangling Corrigibility: 2015-2021

Koen.Holtman · 16 Feb 2021 18:01 UTC
22 points
20 comments · 9 min read · LW link

Creating AGI Safety Interlocks

Koen.Holtman · 5 Feb 2021 12:01 UTC
7 points
4 comments · 8 min read · LW link

Counterfactual Planning in AGI Systems

Koen.Holtman · 3 Feb 2021 13:54 UTC
10 points
0 comments · 5 min read · LW link

New paper: AGI Agent Safety by Iteratively Improving the Utility Function

Koen.Holtman · 15 Jul 2020 14:05 UTC
21 points
2 comments · 6 min read · LW link

[Question] The Simulation Epiphany Problem

Koen.Holtman · 31 Oct 2019 22:12 UTC
15 points
13 comments · 4 min read · LW link

New paper: Corrigibility with Utility Preservation

Koen.Holtman · 6 Aug 2019 19:04 UTC
44 points
11 comments · 2 min read · LW link