
Neuromorphic AI


Neuromorphic AI (‘neuron-shaped’ AI) is a form of AI in which most of the functionality is copied from the human brain. This implies that its inner workings need not be understood by its creators any further than is necessary to simulate them on a computer. It is considered less safe than either Whole Brain Emulation or de novo AI: it lacks the former’s high-fidelity replication of human values, and it lacks the strong theoretical guarantees that the latter’s cleaner design may allow.
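As an illustrative sketch (not drawn from any of the posts below, with made-up parameter values), a leaky integrate-and-fire neuron is roughly the simplest brain-copied component that neuromorphic models simulate without needing any deeper theory of why it works:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau_m=20e-3, v_rest=-65e-3,
                 v_reset=-70e-3, v_thresh=-50e-3, r_m=10e6):
    """Return the membrane-voltage trace and spike times for an input current trace (amps).

    All parameter values here are illustrative defaults, not taken from the text.
    """
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential plus the input drive, one Euler step.
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau_m)
        if v >= v_thresh:              # threshold crossing: emit a spike...
            spike_times.append(step * dt)
            v = v_reset                # ...and reset the membrane potential
        voltages.append(v)
    return np.array(voltages), spike_times

# A constant 2 nA input for 200 ms yields a regular spike train.
trace, spikes = simulate_lif(np.full(200, 2e-9))
print(f"{len(spikes)} spikes in 200 ms")
```

Neuromorphic systems wire together very large numbers of such copied units (in software or on dedicated hardware), which is exactly why the resulting system can behave usefully while its builders understand it little beyond the simulation level.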

Posts tagged Neuromorphic AI

Safety of Self-Assembled Neuromorphic Hardware

Can · Dec 26, 2022, 6:51 PM
16 points
2 comments · 10 min read · LW link
(forum.effectivealtruism.org)

Book review: “A Thousand Brains” by Jeff Hawkins

Steven Byrnes · Mar 4, 2021, 5:10 AM
122 points
18 comments · 19 min read · LW link

Brain-Computer Interfaces and AI Alignment

niplav · Aug 28, 2021, 7:48 PM
35 points
6 comments · 11 min read · LW link

Human instincts, symbol grounding, and the blank-slate neocortex

Steven Byrnes · Oct 2, 2019, 12:06 PM
62 points
23 comments · 11 min read · LW link

[Intro to brain-like-AGI safety] 1. What’s the problem & Why work on it now?

Steven Byrnes · Jan 26, 2022, 3:23 PM
159 points
19 comments · 26 min read · LW link

[Intro to brain-like-AGI safety] 4. The “short-term predictor”

Steven Byrnes · Feb 16, 2022, 1:12 PM
64 points
11 comments · 13 min read · LW link

[Intro to brain-like-AGI safety] 5. The “long-term predictor”, and TD learning

Steven Byrnes · Feb 23, 2022, 2:44 PM
54 points
27 comments · 20 min read · LW link

[Intro to brain-like-AGI safety] 6. Big picture of motivation, decision-making, and RL

Steven Byrnes · Mar 2, 2022, 3:26 PM
69 points
17 comments · 16 min read · LW link

[Intro to brain-like-AGI safety] 7. From hardcoded drives to foresighted plans: A worked example

Steven Byrnes · Mar 9, 2022, 2:28 PM
78 points
0 comments · 10 min read · LW link

[Intro to brain-like-AGI safety] 8. Takeaways from neuro 1/2: On AGI development

Steven Byrnes · Mar 16, 2022, 1:59 PM
57 points
2 comments · 14 min read · LW link

FAI and the Information Theory of Pleasure

johnsonmx · Sep 8, 2015, 9:16 PM
14 points
21 comments · 4 min read · LW link

My take on Jacob Cannell’s take on AGI safety

Steven Byrnes · Nov 28, 2022, 2:01 PM
72 points
15 comments · 30 min read · LW link · 1 review

What’s Your Cognitive Algorithm?

Raemon · Jun 18, 2020, 10:16 PM
75 points
23 comments · 13 min read · LW link

[Intro to brain-like-AGI safety] 12. Two paths forward: “Controlled AGI” and “Social-instinct AGI”

Steven Byrnes · Apr 20, 2022, 12:58 PM
44 points
10 comments · 15 min read · LW link

Jeff Hawkins on neuromorphic AGI within 20 years

Steven Byrnes · Jul 15, 2019, 7:16 PM
170 points
24 comments · 12 min read · LW link

Connectomics seems great from an AI x-risk perspective

Steven Byrnes · Apr 30, 2023, 2:38 PM
101 points
7 comments · 10 min read · LW link · 1 review

EAI Alignment Speaker Series #1: Challenges for Safe & Beneficial Brain-Like Artificial General Intelligence with Steve Byrnes

Mar 23, 2023, 2:32 PM
28 points
0 comments · 27 min read · LW link
(youtu.be)

Brain-inspired AGI and the “lifetime anchor”

Steven Byrnes · Sep 29, 2021, 1:09 PM
65 points
16 comments · 13 min read · LW link

Human preferences as RL critic values - implications for alignment

Seth Herd · Mar 14, 2023, 10:10 PM
26 points
6 comments · 6 min read · LW link

[Question] How do biological or spiking neural networks learn?

Dom Polsinelli · Jan 31, 2025, 4:03 PM
2 points
1 comment · 2 min read · LW link

The alignment stability problem

Seth Herd · Mar 26, 2023, 2:10 AM
35 points
15 comments · 4 min read · LW link

Correcting a misconception: consciousness does not need 90 billion neurons, at all

bvbvbvbvbvbvbvbvbvbvbv · Mar 31, 2023, 4:02 PM
21 points
19 comments · 1 min read · LW link

Capabilities and alignment of LLM cognitive architectures

Seth Herd · Apr 18, 2023, 4:29 PM
86 points
18 comments · 20 min read · LW link

GPT-4 implicitly values identity preservation: a study of LMCA identity management

Ozyrus · May 17, 2023, 2:13 PM
21 points
4 comments · 13 min read · LW link

The Dark Side of Cognition Hypothesis

Cameron Berg · Oct 3, 2021, 8:10 PM
19 points
1 comment · 16 min read · LW link

AI researchers announce NeuroAI agenda

Cameron Berg · Oct 24, 2022, 12:14 AM
37 points
12 comments · 6 min read · LW link
(arxiv.org)

Large Language Models Suggest a Path to Ems

anithite · Dec 29, 2022, 2:20 AM
17 points
2 comments · 5 min read · LW link

Are you stably aligned?

Seth Herd · Feb 24, 2023, 10:08 PM
13 points
0 comments · 2 min read · LW link