
Superintelligence

Last edit: Dec 30, 2024, 9:50 AM by Dakara

A superintelligence is a being with intelligence far surpassing that of humans, and a focus of the Machine Intelligence Research Institute’s research. Nick Bostrom (1997) defined it as:

“An intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

The Machine Intelligence Research Institute is dedicated to ensuring humanity’s safety and prosperity by preparing for the development of a superintelligent Artificial General Intelligence. Given its intelligence, such a system would likely be impossible for humanity to control. It is important to prepare early for the development of friendly artificial intelligence, as an AI arms race could leave little time for safety work. A strong superintelligence is a superintelligence whose architecture does not resemble that of the human brain.

An Artificial General Intelligence will have a number of advantages that aid it in becoming a superintelligence. It can improve the hardware it runs on and obtain better hardware, and it will be capable of directly editing its own code. Depending on how easy its code is to modify, it might carry out software improvements that spark further improvements. Where a task can be accomplished repetitively, it might develop a module that performs the task far more efficiently. Its motivations and preferences can be edited to be more consistent with each other. It will have an indefinite lifespan, be capable of reproducing, and be able to transfer knowledge, skills, and code among its copies, as well as cooperate and communicate with them better than humans do with each other.

The development of superintelligence from humans is another possibility, sometimes termed a weak superintelligence. It may come in the form of whole brain emulation, in which a human brain is scanned and simulated on a computer. Many of the advantages an AGI has in developing superintelligence apply here as well. The development of brain-computer interfaces may also lead to the creation of superintelligence, as could biological enhancements such as genetic engineering and the use of nootropics.


An Appeal to AI Superintelligence: Reasons to Preserve Humanity

James_Miller · Mar 18, 2023, 4:22 PM
39 points
73 comments · 12 min read · LW link

“If we go extinct due to misaligned AI, at least nature will continue, right? … right?”

plex · May 18, 2024, 2:09 PM
47 points
23 comments · 2 min read · LW link
(aisafety.info)

The new ruling philosophy regarding AI

Mitchell_Porter · Nov 11, 2024, 1:28 PM
29 points
0 comments · 5 min read · LW link

Playing Minecraft with a Superintelligence

Johannes C. Mayer · Aug 17, 2024, 10:47 PM
3 points
0 comments · 2 min read · LW link

Prediction: any uncontrollable AI will turn earth into a giant computer

Karl von Wendt · Apr 17, 2023, 12:30 PM
11 points
8 comments · 3 min read · LW link

Superintelligence Reading Group—Section 1: Past Developments and Present Capabilities

KatjaGrace · Sep 16, 2014, 1:00 AM
43 points
233 comments · 7 min read · LW link

Superintelligence reading group

KatjaGrace · Aug 31, 2014, 2:59 PM
31 points
2 comments · 2 min read · LW link

Superintelligent AGI in a box—a question.

Dmytry · Feb 23, 2012, 6:48 PM
16 points
77 comments · 2 min read · LW link

Superintelligence 7: Decisive strategic advantage

KatjaGrace · Oct 28, 2014, 1:01 AM
19 points
60 comments · 6 min read · LW link

Superintelligence via whole brain emulation

AlexMennen · Aug 17, 2016, 4:11 AM
15 points
33 comments · 3 min read · LW link

Superintelligence 5: Forms of Superintelligence

KatjaGrace · Oct 14, 2014, 1:00 AM
22 points
114 comments · 5 min read · LW link

A “Bitter Lesson” Approach to Aligning AGI and ASI

RogerDearnaley · Jul 6, 2024, 1:23 AM
60 points
40 comments · 24 min read · LW link

Intermittent Distillations #4: Semiconductors, Economics, Intelligence, and Technological Progress.

Mark Xu · Jul 8, 2021, 10:14 PM
81 points
9 comments · 10 min read · LW link

Entropic boundary conditions towards safe artificial superintelligence

Santiago Nunez-Corrales · Jul 20, 2021, 10:15 PM
3 points
0 comments · 2 min read · LW link
(www.tandfonline.com)

Superintelligence FAQ

Scott Alexander · Sep 20, 2016, 7:00 PM
134 points
38 comments · 27 min read · LW link

After Overmorrow: Scattered Musings on the Immediate Post-AGI World

Yuli_Ban · Feb 24, 2024, 3:49 PM
−3 points
0 comments · 26 min read · LW link

How to Build Heaven: A Constrained Boltzmann Brain Generator

High Tides · Dec 13, 2024, 1:04 AM
−8 points
3 comments · 5 min read · LW link

We are headed into an extreme compute overhang

devrandom · Apr 26, 2024, 9:38 PM
53 points
34 comments · 2 min read · LW link

Former OpenAI Superalignment Researcher: Superintelligence by 2030

Julian Bradshaw · Jun 5, 2024, 3:35 AM
69 points
30 comments · 1 min read · LW link
(situational-awareness.ai)

Why is AGI/ASI Inevitable?

DeathlessAmaranth · May 2, 2024, 6:27 PM
14 points
6 comments · 1 min read · LW link

The Double Body Paradigm: What Comes After ASI Alignment?

De_Carvalho_Loick · Dec 14, 2024, 6:09 PM
1 point
0 comments · 6 min read · LW link

Proposing the Post-Singularity Symbiotic Researches

Hiroshi Yamakawa · Jun 20, 2024, 4:05 AM
7 points
1 comment · 12 min read · LW link

Sustainability of Digital Life Form Societies

Hiroshi Yamakawa · Jul 19, 2024, 1:59 PM
19 points
1 comment · 20 min read · LW link

SYSTEMA ROBOTICA

Ali Ahmed · Aug 12, 2024, 8:34 PM
12 points
2 comments · 30 min read · LW link

[Question] Is an AI religion justified?

p4rziv4l · Aug 6, 2024, 3:42 PM
−35 points
11 comments · 1 min read · LW link

redacted

Tomas Rolin (nos) · Aug 9, 2024, 11:23 AM
1 point
0 comments · 1 min read · LW link

Superintelligent AI is possible in the 2020s

HunterJay · Aug 13, 2024, 6:03 AM
41 points
3 comments · 12 min read · LW link

Knowledge Base 1: Could it increase intelligence and make it safer?

iwis · Sep 30, 2024, 4:00 PM
−4 points
0 comments · 4 min read · LW link

Happy simulations

FateGrinder · Oct 1, 2024, 9:05 PM
−5 points
0 comments · 2 min read · LW link

Human, All Too Human—Superintelligence requires learning things we can’t teach

Ben Turtel · Dec 26, 2024, 4:26 PM
−13 points
4 comments · 1 min read · LW link
(bturtel.substack.com)

Geoffrey Hinton on the Past, Present, and Future of AI

Stephen McAleese · Oct 12, 2024, 4:41 PM
22 points
5 comments · 18 min read · LW link

The Hopium Wars: the AGI Entente Delusion

Max Tegmark · Oct 13, 2024, 5:00 PM
208 points
60 comments · 9 min read · LW link

Why humans won’t control superhuman AIs.

Spiritus Dei · Oct 16, 2024, 4:48 PM
−11 points
1 comment · 6 min read · LW link

P=NP

OnePolynomial · Oct 17, 2024, 5:56 PM
−25 points
0 comments · 8 min read · LW link

Elon and Roko’s basilisk

Maxh · Oct 18, 2024, 8:02 AM
1 point
0 comments · 1 min read · LW link

[Question] Are there more than 12 paths to Superintelligence?

p4rziv4l · Oct 18, 2024, 4:05 PM
−3 points
0 comments · 1 min read · LW link

The Personal Implications of AGI Realism

xizneb · Oct 20, 2024, 4:43 PM
7 points
7 comments · 5 min read · LW link

Dario Amodei’s “Machines of Loving Grace” sound incredibly dangerous, for Humans

Super AGI · Oct 27, 2024, 5:05 AM
8 points
1 comment · 1 min read · LW link

Why Recursive Self-Improvement Might Not Be the Existential Risk We Fear

Nassim_A · Nov 24, 2024, 5:17 PM
1 point
0 comments · 9 min read · LW link

llightcb’s Shortform

llightcb · Nov 29, 2024, 7:41 PM
1 point
1 comment · 1 min read · LW link

Ensuring long-term preservation of human consciousness

Amy Johnson · Jan 16, 2025, 7:09 PM
2 points
0 comments · 4 min read · LW link

What are the differences between AGI, transformative AI, and superintelligence?

Jan 23, 2025, 10:03 AM
10 points
3 comments · 3 min read · LW link
(aisafety.info)

Constitutions for ASI?

ukc10014 · Jan 28, 2025, 4:32 PM
3 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

How not to build a dystopia

ank · Jan 29, 2025, 2:16 PM
−9 points
4 comments · 1 min read · LW link

Are we the Wolves now? Human Eugenics under AI Control

Brit · Jan 30, 2025, 8:31 AM
−2 points
1 comment · 2 min read · LW link

Tetherware #1: The case for humanlike AI with free will

Jáchym Fibír · Jan 30, 2025, 10:58 AM
6 points
5 comments · 10 min read · LW link
(tetherware.substack.com)

Can someone, anyone, make superintelligence a more concrete concept?

Ori Nagel · Jan 30, 2025, 11:25 PM
2 points
4 comments · 4 min read · LW link

Did Artificial Superintelligence Exist at the Big Bang?

Aaron Li · Jan 30, 2025, 9:35 PM
1 point
0 comments · 2 min read · LW link

Open-ended ethics of phenomena (a desiderata with universal morality)

Ryo · Nov 8, 2023, 8:10 PM
1 point
0 comments · 8 min read · LW link

AGI: Hire Software Engineers—All of Them, Right Now

MGow · Mar 30, 2023, 6:40 PM
−18 points
3 comments · 1 min read · LW link

P(doom|superintelligence) or coin tosses and dice throws of human values (and other related Ps).

Muyyd · Apr 22, 2023, 10:06 AM
−7 points
0 comments · 4 min read · LW link

A concise sum-up of the basic argument for AI doom

Mergimio H. Doefevmil · Apr 24, 2023, 5:37 PM
11 points
6 comments · 2 min read · LW link

Freedom Is All We Need

Leo Glisic · Apr 27, 2023, 12:09 AM
−1 points
8 comments · 10 min read · LW link

Open-ended/Phenomenal Ethics (TLDR)

Ryo · Nov 9, 2023, 4:58 PM
3 points
0 comments · 1 min read · LW link

Out of the Box

jesseduffield · Nov 13, 2023, 11:43 PM
5 points
1 comment · 7 min read · LW link

LLMs May Find It Hard to FOOM

RogerDearnaley · Nov 15, 2023, 2:52 AM
11 points
30 comments · 12 min read · LW link

A few Superhuman examples of Superaligned Superintelligence from Google Bard (Thanksgiving 2023)

Nov 23, 2023, 7:06 PM
−9 points
1 comment · 17 min read · LW link

Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition

Adrià Moret · Dec 2, 2023, 2:07 PM
26 points
31 comments · 42 min read · LW link

Inference Speed is Not Unbounded

OneManyNone · May 8, 2023, 4:24 PM
35 points
32 comments · 16 min read · LW link

Rosko’s Wager

Wuksh · May 16, 2023, 7:18 AM
1 point
0 comments · 2 min read · LW link

Instrumental Convergence to Complexity Preservation

Macro Flaneur · Jul 13, 2023, 5:40 PM
2 points
2 comments · 3 min read · LW link

Ramble on STUFF: intelligence, simulation, AI, doom, default mode, the usual

Bill Benzon · Aug 26, 2023, 3:49 PM
5 points
0 comments · 4 min read · LW link

Knowledge Base 2: The structure and the method of building

iwis · Oct 9, 2023, 11:53 AM
2 points
4 comments · 7 min read · LW link

Knowledge Base 4: General applications

iwis · Oct 16, 2023, 12:26 PM
3 points
0 comments · 1 min read · LW link

Knowledge Base 3: Shopping advisor and other uses of knowledge base about products

iwis · Oct 9, 2023, 11:53 AM
0 points
0 comments · 4 min read · LW link

Knowledge Base 8: The truth as an attractor in the information space

iwis · Apr 25, 2024, 3:28 PM
−8 points
0 comments · 2 min read · LW link

Containing the AI… Inside a Simulated Reality

HumaneAutomation · Oct 31, 2020, 4:16 PM
1 point
9 comments · 2 min read · LW link

The Evolutionary Pathway from Biological to Digital Intelligence: A Cosmic Perspective

George360 · Sep 5, 2023, 5:47 PM
−17 points
0 comments · 4 min read · LW link

Instrumental Convergence and human extinction.

Spiritus Dei · Oct 2, 2023, 12:41 AM
−10 points
3 comments · 7 min read · LW link

Are we all misaligned?

Mateusz Mazurkiewicz · Jan 3, 2021, 2:42 AM
11 points
0 comments · 5 min read · LW link

Suffering-Focused Ethics in the Infinite Universe. How can we redeem ourselves if Multiverse Immortality is real and subjective death is impossible.

Szymon Kucharski · Feb 24, 2021, 9:02 PM
−2 points
4 comments · 70 min read · LW link

Reflection of Hierarchical Relationship via Nuanced Conditioning of Game Theory Approach for AI Development and Utilization

Kyoung-cheol Kim · Jun 4, 2021, 7:20 AM
2 points
2 comments · 7 min read · LW link

Is the argument that AI is an xrisk valid?

MACannon · Jul 19, 2021, 1:20 PM
5 points
61 comments · 1 min read · LW link
(onlinelibrary.wiley.com)

Question 1: Predicted architecture of AGI learning algorithm(s)

Cameron Berg · Feb 10, 2022, 5:22 PM
13 points
1 comment · 7 min read · LW link

Why Copilot Accelerates Timelines

Michaël Trazzi · Apr 26, 2022, 10:06 PM
35 points
14 comments · 7 min read · LW link

A Critique of AI Alignment Pessimism

ExCeph · Jul 19, 2022, 2:28 AM
9 points
1 comment · 9 min read · LW link

Artificial intelligence wireheading

Big Tony · Aug 12, 2022, 3:06 AM
5 points
2 comments · 1 min read · LW link

[Question] Would a Misaligned SSI Really Kill Us All?

DragonGod · Sep 14, 2022, 12:15 PM
6 points
7 comments · 6 min read · LW link

[Question] Are Speed Superintelligences Feasible for Modern ML Techniques?

DragonGod · Sep 14, 2022, 12:59 PM
9 points
7 comments · 1 min read · LW link

[Question] Why Do People Think Humans Are Stupid?

DragonGod · Sep 14, 2022, 1:55 PM
22 points
41 comments · 3 min read · LW link

[Question] Are Human Brains Universal?

DragonGod · Sep 15, 2022, 3:15 PM
16 points
28 comments · 5 min read · LW link

Another problem with AI confinement: ordinary CPUs can work as radio transmitters

RomanS · Oct 14, 2022, 8:28 AM
35 points
1 comment · 1 min read · LW link
(news.softpedia.com)

AGI Impossible due to Energy Constrains

TheKlaus · Nov 30, 2022, 6:48 PM
−11 points
13 comments · 1 min read · LW link

Limits of Superintelligence

Aleksei Petrenko · Dec 13, 2022, 12:19 PM
1 point
5 comments · 1 min read · LW link

AGI doesn’t need understanding, intention, or consciousness in order to kill us, only intelligence

James Blaha · Feb 20, 2023, 12:55 AM
10 points
2 comments · 18 min read · LW link

[Question] “Wide” vs “Tall” superintelligence

Templarrr · Mar 19, 2023, 7:23 PM
15 points
8 comments · 1 min read · LW link

[LINK] Wait But Why—The AI Revolution Part 2

Adam Zerner · Feb 4, 2015, 4:02 PM
27 points
88 comments · 1 min read · LW link

Scalable Oversight and Weak-to-Strong Generalization: Compatible approaches to the same problem

Dec 16, 2023, 5:49 AM
76 points
4 comments · 6 min read · LW link · 1 review

Navigating the Nexus of AGI, Ethics, and Human Survival: A Mathematical Inquiry

Kan Yuenyong · Feb 29, 2024, 6:47 AM
1 point
0 comments · 3 min read · LW link

Why entropy means you might not have to worry as much about superintelligent AI

Ron J · May 23, 2024, 3:52 AM
−26 points
1 comment · 2 min read · LW link

5. Moral Value for Sentient Animals? Alas, Not Yet

RogerDearnaley · Dec 27, 2023, 6:42 AM
33 points
41 comments · 23 min read · LW link

Investigating Alternative Futures: Human and Superintelligence Interaction Scenarios

Hiroshi Yamakawa · Jan 3, 2024, 11:46 PM
1 point
0 comments · 17 min read · LW link

GPT as an “Intelligence Forklift.”

boazbarak · May 19, 2023, 9:15 PM
48 points
27 comments · 3 min read · LW link

W2SG: Introduction

Maria Kapros · Mar 10, 2024, 4:25 PM
1 point
2 comments · 10 min read · LW link

Simulations and Altruism

FateGrinder · Jun 2, 2024, 2:45 AM
−7 points
2 comments · 25 min read · LW link

Updatelessness doesn’t solve most problems

Martín Soto · Feb 8, 2024, 5:30 PM
130 points
44 comments · 12 min read · LW link

[Question] Optimizing for Agency?

Michael Soareverix · Feb 14, 2024, 8:31 AM
10 points
9 comments · 2 min read · LW link

Thoughts for and against an ASI figuring out ethics for itself

sweenesm · Feb 20, 2024, 11:40 PM
6 points
10 comments · 3 min read · LW link