
Koen.Holtman

Karma: 543

Computing scientist and systems architect. Currently doing self-funded AI/AGI safety research. I participate in AI standardization under the company name Holtman Systems Research: https://holtmansystemsresearch.nl/

Demanding and Designing Aligned Cognitive Architectures

Koen.Holtman · Dec 21, 2021, 5:32 PM
8 points
5 comments · 5 min read · LW link

Safely controlling the AGI agent reward function

Koen.Holtman · Feb 17, 2021, 2:47 PM
8 points
0 comments · 5 min read · LW link

Graphical World Models, Counterfactuals, and Machine Learning Agents

Koen.Holtman · Feb 17, 2021, 11:07 AM
6 points
2 comments · 10 min read · LW link

Disentangling Corrigibility: 2015-2021

Koen.Holtman · Feb 16, 2021, 6:01 PM
22 points
20 comments · 9 min read · LW link

Creating AGI Safety Interlocks

Koen.Holtman · Feb 5, 2021, 12:01 PM
7 points
4 comments · 8 min read · LW link

Counterfactual Planning in AGI Systems

Koen.Holtman · Feb 3, 2021, 1:54 PM
10 points
0 comments · 5 min read · LW link

New paper: AGI Agent Safety by Iteratively Improving the Utility Function

Koen.Holtman · Jul 15, 2020, 2:05 PM
21 points
2 comments · 6 min read · LW link

[Question] The Simulation Epiphany Problem

Koen.Holtman · Oct 31, 2019, 10:12 PM
15 points
13 comments · 4 min read · LW link

New paper: Corrigibility with Utility Preservation

Koen.Holtman · Aug 6, 2019, 7:04 PM
44 points
11 comments · 2 min read · LW link