
Eris

Karma: 55

Currently an independent AI Safety researcher; ex-software developer, ex-QA engineer.

Prior to working in industry, I was involved in academic research on cognitive architectures (the old ones). I'm a generalist with a focus on human-like AIs (I know a couple of things about developmental psychology, cognitive science, ethology, and computational models of the mind).

Personal research vectors: ontogenetic curriculum and narrative theory. The primary theme is consolidating insights from various mind-related fields into a plausible explanation of human value dynamics.

A long-time LessWronger (~8 years). Mostly active in the local LW community (as a consumer and as an organizer).

Recently I organised a sort of peer-to-peer accelerator for anyone who wants to become an AI Safety researcher. Right now there are 17 of us.

I was part of AI Safety Camp 2023 (Positive Attractors team).

All pigeons are ugly!

Eris · Jan 28, 2025, 3:18 PM
−9 points
2 comments · 1 min read · LW link

Agency overhang as a proxy for Sharp left turn

Nov 7, 2024, 12:14 PM
6 points
0 comments · 5 min read · LW link

Narrative Theory. Part 6. Artificial Neural Networks

Eris · Jul 18, 2023, 9:22 AM
3 points
0 comments · 2 min read · LW link

Narrative Theory. Part 4. Neural Darwinism

Eris · Jul 17, 2023, 4:45 PM
3 points
0 comments · 2 min read · LW link

Narrative Theory. Part 3. Simplest to succeed

Eris · Jul 16, 2023, 2:41 PM
4 points
0 comments · 1 min read · LW link

Narrative Theory. Part 2. A new way of doing the same thing

Eris · Jul 15, 2023, 10:37 AM
2 points
0 comments · 1 min read · LW link

Introduction

Jun 30, 2023, 8:45 PM
8 points
0 comments · 2 min read · LW link

Inherently Interpretable Architectures

Jun 30, 2023, 8:43 PM
4 points
0 comments · 7 min read · LW link

Positive Attractors

Jun 30, 2023, 8:43 PM
6 points
0 comments · 13 min read · LW link

Anton Zheltoukhov’s Shortform

Eris · Jun 15, 2023, 8:56 AM
2 points
6 comments · 1 min read · LW link

Intro to Ontogenetic Curriculum

Eris · Apr 13, 2023, 5:15 PM
20 points
1 comment · 2 min read · LW link