
General Intelligence

Last edit: 27 Mar 2022 11:26 UTC by plex

General Intelligence or Universal Intelligence is the ability to efficiently achieve goals in a wide range of domains.

This tag is specifically for discussing intelligence in the broad sense: for discussion of IQ testing and psychometric intelligence, see IQ / g-factor; for discussion of, e.g., specific results in artificial intelligence, see AI. These tags may overlap with this one to the extent that they discuss the nature of general intelligence.

Examples of posts that fall under this tag include The Power of Intelligence, Measuring Optimization Power, Adaptation-Executers, not Fitness-Maximizers, Distinctions in Types of Thought, and The Octopus, the Dolphin and Us: a Great Filter tale.

On the difference between psychometric intelligence (IQ) and general intelligence:

But the word “intelligence” commonly evokes pictures of the starving professor with an IQ of 160 and the billionaire CEO with an IQ of merely 120. Indeed there are differences of individual ability apart from “book smarts” which contribute to relative success in the human world: enthusiasm, social skills, education, musical talent, rationality. Note that each factor I listed is cognitive. Social skills reside in the brain, not the liver. And jokes aside, you will not find many CEOs, nor yet professors of academia, who are chimpanzees. You will not find many acclaimed rationalists, nor artists, nor poets, nor leaders, nor engineers, nor skilled networkers, nor martial artists, nor musical composers who are mice. Intelligence is the foundation of human power, the strength that fuels our other arts.

-- Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk

Definitions of General Intelligence

After reviewing the extensive literature on the subject, Legg and Hutter[1] summarize the many proposed definitions in the informal statement “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” They then show that this definition can be mathematically formalized given reasonable definitions of its terms. Using Solomonoff induction—a formalization of Occam’s razor—they construct a universal measure of intelligence in which environments, and the hypotheses describing them, receive less weight the higher their complexity. They argue that the resulting formalization is a valid, meaningful, informative, general, unbiased, fundamental, objective, universal and practical definition of intelligence.
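For reference, the formal measure Legg and Hutter arrive at can be stated compactly. The following is a condensed restatement in the notation of their paper (a sketch of their definition, not a derivation):

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

Here $E$ is the set of computable environments with bounded total reward, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected total reward agent $\pi$ obtains in environment $\mu$. An agent scores higher the more reward it can secure across all environments, with simpler environments weighted more heavily.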

We can relate Legg and Hutter’s definition to the concept of optimization. According to Eliezer Yudkowsky, intelligence is efficient cross-domain optimization: it measures an agent’s capacity to optimize the world according to the agent’s preferences, across many domains and with few resources.[2] Optimization power reflects not only whether the desired goal is achieved but also how efficiently it is achieved; it is inversely proportional to the amount of resources used. It is the ability to steer the future into the small target of desired outcomes within the large space of all possible outcomes, using as few resources as possible. For example, when Deep Blue defeated Kasparov, it steered the game into the small set of outcomes in which its sequence of moves, given Kasparov’s replies, led to victory, out of the very large space of possible move sequences. In that domain it was a more powerful optimizer than Kasparov. However, Kasparov would have defeated Deep Blue in almost any other relevant domain, and hence he is considered the more intelligent of the two.
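To make “steering the future into a small target” concrete, here is a minimal toy sketch in the spirit of Measuring Optimization Power. The function name and the uniform toy outcome space are assumptions made for this illustration, not anything defined in the posts cited here:

    import math

    def optimization_power_bits(realized_utility, outcome_utilities):
        # Fraction of possible outcomes that are at least as preferred as the
        # outcome the agent actually realized; the smaller this fraction, the
        # smaller the target that was hit.
        at_least_as_good = sum(1 for u in outcome_utilities if u >= realized_utility)
        fraction = at_least_as_good / len(outcome_utilities)
        # Express the result in bits: each halving of the fraction adds one bit.
        return -math.log2(fraction)

    # Toy example: a million equally likely outcomes, and the agent realizes an
    # outcome preferred over all but three of the alternatives (about 18 bits).
    outcomes = list(range(1_000_000))
    print(optimization_power_bits(999_996, outcomes))

On this view, Deep Blue’s win corresponds to hitting an extremely small target in the space of possible game continuations, which is why its optimization power in that single domain was so high even though its abilities did not generalize.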

One could cast this definition in possible-world vocabulary: intelligence is

  1. the ability to precisely realize one member of a small set of possible future worlds that are preferred over the vast set of all other, less-preferred possible worlds; while

  2. using fewer resources than the alternative paths for getting there; and in the

  3. widest possible range of domains.

The more worlds there are with higher preference than the one realized by the agent, the less intelligent the agent is; the more worlds there are with lower preference than the one realized, the more intelligent the agent is. (Equivalently: the smaller the set of worlds at least as preferable as the one realized, the more intelligent the agent is.) The fewer paths there are that would have realized the desired world with fewer resources than the agent actually spent, the more intelligent the agent is. And finally, the more domains in which the agent can optimize this efficiently, the more intelligent it is. Restating it, the intelligence of an agent is directly proportional to:

  1. the number of possible worlds that are less preferred than the one it realizes; and

  2. the number of domains in which it can do this;

and it is, accordingly, inversely proportional to:

  1. the number of possible worlds that are at least as preferred as the one it realizes;

  2. the amount of resources it uses; and

  3. the number of alternative paths that would have realized the same world with fewer resources.

This definition avoids several problems common to many other definitions; in particular, it avoids anthropomorphizing intelligence.
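Putting these proportionalities together, one can write a deliberately crude toy score. This is not a formula from the literature; every name below is made up purely to make the directionality of the definition concrete:

    def toy_intelligence_score(worlds_ranked_below, total_worlds,
                               resources_used, domains):
        # Higher when the realized world outranks more of the alternatives and
        # when the agent can do this across more domains; lower the more
        # resources it spends getting there.
        fraction_outranked = worlds_ranked_below / total_worlds
        return fraction_outranked * domains / resources_used

    # The same realized world, reached with half the resources, scores twice as high.
    print(toy_intelligence_score(999_996, 1_000_000, resources_used=10, domains=3))
    print(toy_intelligence_score(999_996, 1_000_000, resources_used=5, domains=3))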

See Also

  1. ^ Legg, Shane & Hutter, Marcus. “Universal Intelligence: A Definition of Machine Intelligence”, Minds and Machines, 2007.
  2. ^ Yudkowsky, Eliezer. “Efficient Cross-Domain Optimization”, LessWrong, 2008.

Is General Intelligence “Compact”? (DragonGod, 4 Jul 2022 13:27 UTC) · 27 points · 6 comments · 22 min read · LW link
Artificial Addition (Eliezer Yudkowsky, 20 Nov 2007 7:58 UTC) · 90 points · 128 comments · 6 min read · LW link
Reframing Superintelligence: Comprehensive AI Services as General Intelligence (Rohin Shah, 8 Jan 2019 7:12 UTC) · 122 points · 77 comments · 5 min read · LW link · 2 reviews · (www.fhi.ox.ac.uk)
The Teacup Test (lsusr, 8 Oct 2022 4:25 UTC) · 99 points · 32 comments · 2 min read · LW link
Adaptation-Executers, not Fitness-Maximizers (Eliezer Yudkowsky, 11 Nov 2007 6:39 UTC) · 156 points · 33 comments · 3 min read · LW link
Humans Who Are Not Concentrating Are Not General Intelligences (sarahconstantin, 25 Feb 2019 20:40 UTC) · 187 points · 35 comments · 6 min read · LW link · 1 review · (srconstantin.wordpress.com)
[Book Review] “The Bell Curve” by Charles Murray (lsusr, 2 Nov 2021 5:49 UTC) · 94 points · 134 comments · 23 min read · LW link
AlphaStar: Impressive for RL progress, not for AGI progress (orthonormal, 2 Nov 2019 1:50 UTC) · 113 points · 58 comments · 2 min read · LW link · 1 review
Efficient Cross-Domain Optimization (Eliezer Yudkowsky, 28 Oct 2008 16:33 UTC) · 54 points · 38 comments · 5 min read · LW link
Yudkowsky and Christiano discuss “Takeoff Speeds” (Eliezer Yudkowsky, 22 Nov 2021 19:35 UTC) · 205 points · 176 comments · 60 min read · LW link · 1 review
Soares, Tallinn, and Yudkowsky discuss AGI cognition (29 Nov 2021 19:26 UTC) · 121 points · 39 comments · 40 min read · LW link · 1 review
The Power of Intelligence (Eliezer Yudkowsky, 1 Jan 2007 20:00 UTC) · 115 points · 6 comments · 4 min read · LW link
Measuring Optimization Power (Eliezer Yudkowsky, 27 Oct 2008 21:44 UTC) · 89 points · 37 comments · 6 min read · LW link
Complexity and Intelligence (Eliezer Yudkowsky, 3 Nov 2008 20:27 UTC) · 33 points · 78 comments · 11 min read · LW link
Hands-On Experience Is Not Magic (Thane Ruthenis, 27 May 2023 16:57 UTC) · 21 points · 14 comments · 5 min read · LW link
Quantifying General Intelligence (JasonBrown, 17 Jun 2022 21:57 UTC) · 9 points · 6 comments · 13 min read · LW link
The Principled Intelligence Hypothesis (KatjaGrace, 14 Feb 2018 1:00 UTC) · 34 points · 15 comments · 4 min read · LW link · (meteuphoric.wordpress.com)
Response to Blake Richards: AGI, generality, alignment, & loss functions (Steven Byrnes, 12 Jul 2022 13:56 UTC) · 62 points · 9 comments · 15 min read · LW link
How minimal is our intelligence? (Douglas_Reay, 25 Nov 2012 23:34 UTC) · 78 points · 214 comments · 6 min read · LW link
Belief in Intelligence (Eliezer Yudkowsky, 25 Oct 2008 15:00 UTC) · 111 points · 38 comments · 3 min read · LW link

What Intelligence Tests Miss: The psychology of rational thought (Kaj_Sotala, 11 Jul 2010 23:01 UTC) · 53 points · 55 comments · 9 min read · LW link
The Limits of Intelligence and Me: Domain Expertise (ChrisHallquist, 7 Dec 2013 8:23 UTC) · 45 points · 79 comments · 5 min read · LW link
The ground of optimization (Alex Flint, 20 Jun 2020 0:38 UTC) · 247 points · 80 comments · 27 min read · LW link · 1 review
Thoughts on Loss Landscapes and why Deep Learning works (beren, 25 Jul 2023 16:41 UTC) · 53 points · 4 comments · 18 min read · LW link
Book Review: The Eureka Factor (drossbucket, 4 Mar 2019 19:47 UTC) · 19 points · 2 comments · 13 min read · LW link
Instrumental convergence is what makes general intelligence possible (tailcalled, 11 Nov 2022 16:38 UTC) · 105 points · 11 comments · 4 min read · LW link
Everyday Lessons from High-Dimensional Optimization (johnswentworth, 6 Jun 2020 20:57 UTC) · 163 points · 44 comments · 6 min read · LW link
Chapter 24: Machiavellian Intelligence Hypothesis (Eliezer Yudkowsky, 14 Mar 2015 19:00 UTC) · 28 points · 1 comment · 14 min read · LW link
Two explanations for variation in human abilities (Matthew Barnett, 25 Oct 2019 22:06 UTC) · 87 points · 28 comments · 5 min read · LW link · 1 review
Is Clickbait Destroying Our General Intelligence? (Eliezer Yudkowsky, 16 Nov 2018 23:06 UTC) · 191 points · 65 comments · 5 min read · LW link · 2 reviews
What’s up with ChatGPT and the Turing Test? (4 Jan 2023 15:37 UTC) · 13 points · 19 comments · 3 min read · LW link
...Recursion, Magic (Eliezer Yudkowsky, 25 Nov 2008 9:10 UTC) · 27 points · 28 comments · 5 min read · LW link
How Should We Measure Intelligence Models: Why Use Frequency of Elemental Information Operations (hwj20, 24 Oct 2024 16:54 UTC) · 1 point · 0 comments · 5 min read · LW link
Many people lack basic scientific knowledge (lsusr, 29 Mar 2024 6:43 UTC) · 11 points · 20 comments · 1 min read · LW link
On Measuring Intellectual Performance—personal experience and several thoughts (Alexander Gufan, 20 Sep 2024 17:21 UTC) · 3 points · 2 comments · 8 min read · LW link
Some comments on intelligence (Viliam, 1 Aug 2024 15:17 UTC) · 30 points · 5 comments · 3 min read · LW link
The Power of Intelligence—The Animation (Writer, 11 Mar 2023 16:15 UTC) · 45 points · 3 comments · 1 min read · LW link · (youtu.be)
On some first principles of intelligence (Macheng_Shen, 17 Mar 2023 17:42 UTC) · −14 points · 0 comments · 4 min read · LW link
Humans are not automatically strategic (AnnaSalamon, 8 Sep 2010 7:02 UTC) · 549 points · 277 comments · 4 min read · LW link
A Case for the Least Forgiving Take On Alignment (Thane Ruthenis, 2 May 2023 21:34 UTC) · 100 points · 84 comments · 22 min read · LW link

The Flynn Effect Clarified (PeterMcCluskey, 12 Dec 2020 5:18 UTC) · 34 points · 2 comments · 4 min read · LW link · (www.bayesianinvestor.com)
Ben Goertzel’s “Kinds of Minds” (JoshuaFox, 11 Apr 2021 12:41 UTC) · 12 points · 4 comments · 1 min read · LW link
AGI will be made of heterogeneous components, Transformer and Selective SSM blocks will be among them (Roman Leventov, 27 Dec 2023 14:51 UTC) · 33 points · 9 comments · 4 min read · LW link
Distinctions in Types of Thought (sarahconstantin, 10 Oct 2017 3:36 UTC) · 37 points · 24 comments · 13 min read · LW link
The Octopus, the Dolphin and Us: a Great Filter tale (Stuart_Armstrong, 3 Sep 2014 21:37 UTC) · 76 points · 236 comments · 3 min read · LW link
The Dark Side of Cognition Hypothesis (Cameron Berg, 3 Oct 2021 20:10 UTC) · 19 points · 1 comment · 16 min read · LW link
Muehlhauser-Wang Dialogue (lukeprog, 22 Apr 2012 22:40 UTC) · 34 points · 288 comments · 12 min read · LW link
“The Universe of Minds”—call for reviewers (Seeds of Science) (rogersbacon, 25 Jul 2023 16:53 UTC) · 7 points · 0 comments · 1 min read · LW link
Contra Alexander on the Bitter Lesson and IQ (Andrew Keenan Richardson, 26 Jul 2023 0:07 UTC) · 9 points · 1 comment · 4 min read · LW link · (mechanisticmind.com)
[Question] Is Stupidity Expanding? Some Hypotheses. (David Gross, 15 Oct 2020 3:28 UTC) · 71 points · 42 comments · 5 min read · LW link
Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 (Max TK, 20 Aug 2023 15:18 UTC) · 8 points · 33 comments · 6 min read · LW link
Economic Definition of Intelligence? (Eliezer Yudkowsky, 29 Oct 2008 19:32 UTC) · 18 points · 9 comments · 7 min read · LW link
The One That Isn’t There (Annoyance, 20 Nov 2009 20:10 UTC) · 18 points · 6 comments · 3 min read · LW link
Automated intelligence is not AI (KatjaGrace, 1 Nov 2020 23:30 UTC) · 54 points · 10 comments · 2 min read · LW link · (meteuphoric.com)
Fundamental Philosophical Problems Inherent in AI discourse (AlexSadler, 16 Sep 2018 21:03 UTC) · 23 points · 1 comment · 17 min read · LW link
Can we create a function that provably predicts the optimization power of intelligences? (whpearson, 28 May 2009 11:35 UTC) · −7 points · 17 comments · 2 min read · LW link
Concrete vs Contextual values (whpearson, 2 Jun 2009 9:47 UTC) · −1 points · 32 comments · 3 min read · LW link
How to test your mental performance at the moment? (taw, 23 Nov 2009 18:35 UTC) · 24 points · 74 comments · 1 min read · LW link
Intelligence enhancement as existential risk mitigation (Roko, 15 Jun 2009 19:35 UTC) · 21 points · 244 comments · 3 min read · LW link
[Question] Will AGI have “human” flaws? (Agustinus Theodorus, 23 Dec 2020 3:43 UTC) · 1 point · 2 comments · 1 min read · LW link

Productivity as a function of ability in theoretical fields (Stefan_Schubert, 26 Jan 2014 13:16 UTC) · 45 points · 34 comments · 4 min read · LW link
Is the argument that AI is an xrisk valid? (MACannon, 19 Jul 2021 13:20 UTC) · 5 points · 61 comments · 1 min read · LW link · (onlinelibrary.wiley.com)
DeepMind: Generally capable agents emerge from open-ended play (Daniel Kokotajlo, 27 Jul 2021 14:19 UTC) · 247 points · 53 comments · 2 min read · LW link · (deepmind.com)
Motivations, Natural Selection, and Curriculum Engineering (Oliver Sourbut, 16 Dec 2021 1:07 UTC) · 16 points · 0 comments · 42 min read · LW link
Ngo and Yudkowsky on scientific reasoning and pivotal acts (21 Feb 2022 20:54 UTC) · 67 points · 14 comments · 35 min read · LW link
Practical everyday human strategizing (akaTrickster, 27 Mar 2022 14:20 UTC) · 6 points · 0 comments · 3 min read · LW link
Gato’s Generalisation: Predictions and Experiments I’d Like to See (Oliver Sourbut, 18 May 2022 7:15 UTC) · 43 points · 3 comments · 10 min read · LW link
Iterated Distillation-Amplification, Gato, and Proto-AGI [Re-Explained] (Gabe M, 27 May 2022 5:42 UTC) · 21 points · 4 comments · 6 min read · LW link
I No Longer Believe Intelligence to be “Magical” (DragonGod, 10 Jun 2022 8:58 UTC) · 27 points · 34 comments · 6 min read · LW link
[Yann Lecun] A Path Towards Autonomous Machine Intelligence (DragonGod, 27 Jun 2022 19:24 UTC) · 38 points · 13 comments · 1 min read · LW link · (openreview.net)
[Question] Wacky, risky, anti-inductive intelligence-enhancement methods? (Nicholas / Heather Kross, 14 Jul 2022 1:40 UTC) · 20 points · 27 comments · 1 min read · LW link
Are Intelligence and Generality Orthogonal? (cubefox, 18 Jul 2022 20:07 UTC) · 18 points · 16 comments · 1 min read · LW link
Relationship between subjective experience and intelligence? (Q Home, 24 Jul 2022 9:10 UTC) · 5 points · 4 comments · 9 min read · LW link
Most Ivy-smart students aren’t at Ivy-tier schools (Aaron Bergman, 7 Aug 2022 3:18 UTC) · 82 points · 7 comments · 8 min read · LW link · (www.aaronbergman.net)
[Question] Why do People Think Intelligence Will be “Easy”? (DragonGod, 12 Sep 2022 17:32 UTC) · 15 points · 32 comments · 2 min read · LW link
[Question] Why Do People Think Humans Are Stupid? (DragonGod, 14 Sep 2022 13:55 UTC) · 22 points · 41 comments · 3 min read · LW link
[Question] Are Human Brains Universal? (DragonGod, 15 Sep 2022 15:15 UTC) · 16 points · 28 comments · 5 min read · LW link
[Question] When trying to define general intelligence is ability to achieve goals the best metric? (jmh, 22 Oct 2022 3:09 UTC) · 5 points · 0 comments · 1 min read · LW link
What will the scaled up GATO look like? (Updated with questions) (Amal, 25 Oct 2022 12:44 UTC) · 34 points · 22 comments · 1 min read · LW link
[Question] Historical examples of people gaining unusual cognitive abilities? (Nicholas / Heather Kross, 24 Nov 2022 19:01 UTC) · 8 points · 2 comments · 1 min read · LW link

Take part in our giant study of cognitive abilities and get a customized report of your strengths and weaknesses! (spencerg, 25 Nov 2022 16:28 UTC) · 8 points · 1 comment · 1 min read · LW link · (www.guidedtrack.com)
Loose Threads on Intelligence (Shoshannah Tekofsky, 24 Dec 2022 0:38 UTC) · 11 points · 3 comments · 8 min read · LW link
Beyond Rewards and Values: A Non-dualistic Approach to Universal Intelligence (Akira Pyinya, 30 Dec 2022 19:05 UTC) · 10 points · 4 comments · 14 min read · LW link
Aiming at the Target (Eliezer Yudkowsky, 26 Oct 2008 16:47 UTC) · 40 points · 40 comments · 5 min read · LW link
[Question] [Discussion] How Broad is the Human Cognitive Spectrum? (DragonGod, 7 Jan 2023 0:56 UTC) · 29 points · 51 comments · 2 min read · LW link
Critique of some recent philosophy of LLMs’ minds (Roman Leventov, 20 Jan 2023 12:53 UTC) · 52 points · 8 comments · 20 min read · LW link
Powerful mesa-optimisation is already here (Roman Leventov, 17 Feb 2023 4:59 UTC) · 35 points · 1 comment · 2 min read · LW link · (arxiv.org)
Joscha Bach on Synthetic Intelligence [annotated] (Roman Leventov, 2 Mar 2023 11:02 UTC) · 9 points · 1 comment · 9 min read · LW link · (www.jimruttshow.com)
Recognizing Intelligence (Eliezer Yudkowsky, 7 Nov 2008 23:22 UTC) · 28 points · 30 comments · 4 min read · LW link
The virtuous circle: twelve conjectures about female reproductive agency and cultural self-determination (Miles Saltiel, 27 Dec 2023 18:25 UTC) · 0 points · 2 comments · 14 min read · LW link
Update on Developing an Ethics Calculator to Align an AGI to (sweenesm, 12 Mar 2024 12:33 UTC) · 4 points · 2 comments · 8 min read · LW link
[Question] Estimating Returns to Intelligence vs Numbers, Strength and Looks (TruePath, 31 Dec 2023 10:03 UTC) · 3 points · 6 comments · 1 min read · LW link
Speed of Failing (nano_brasca, 31 Dec 2023 10:39 UTC) · 8 points · 0 comments · 2 min read · LW link
[Question] What should a non-genius do in the face of rapid progress in GAI to ensure a decent life? (kaler, 1 Jan 2024 8:22 UTC) · 11 points · 16 comments · 1 min read · LW link
Thoughts for and against an ASI figuring out ethics for itself (sweenesm, 20 Feb 2024 23:40 UTC) · 6 points · 10 comments · 3 min read · LW link
On agentic generalist models: we’re essentially using existing technology the weakest and worst way you can use it (Yuli_Ban, 28 Aug 2024 1:57 UTC) · 10 points · 2 comments · 9 min read · LW link
[Question] What is the nature of humans general intelligence and it’s implications for AGI? (Will_Pearson, 26 Mar 2024 15:20 UTC) · 5 points · 4 comments · 1 min read · LW link
Artificial Intelligence and Living Wisdom (TMFOW, 29 Mar 2024 7:41 UTC) · −6 points · 1 comment · 17 min read · LW link · (tmfow.substack.com)
[Question] What are some posthumanist/more-than-human approaches to definitions of intelligence and agency? Particularly in application to AI research. (Eli Hiton, 9 Apr 2024 21:52 UTC) · 1 point · 0 comments · 1 min read · LW link
Towards a New Ontology of Intelligence (Tara, 4 Jun 2024 8:19 UTC) · 1 point · 0 comments · 3 min read · LW link

The Alignment Problem No One Is Talking About (James Stephen Brown, 10 May 2024 18:34 UTC) · 10 points · 10 comments · 2 min read · LW link · (nonzerosum.games)
What can we learn from orcas? (Jonasb, 10 Jun 2024 18:01 UTC) · 1 point · 0 comments · 8 min read · LW link · (www.denominations.io)
What bootstraps intelligence? (invertedpassion, 10 Sep 2024 7:11 UTC) · 2 points · 2 comments · 1 min read · LW link
[Question] How does someone prove that their general intelligence is above average? (M. Y. Zuo, 16 Sep 2024 21:01 UTC) · −3 points · 12 comments · 1 min read · LW link
Four Phases of AGI (Gabe M, 5 Aug 2024 13:15 UTC) · 11 points · 3 comments · 13 min read · LW link
The Hopium Wars: the AGI Entente Delusion (Max Tegmark, 13 Oct 2024 17:00 UTC) · 199 points · 55 comments · 9 min read · LW link
GPT-4 is bad at strategic thinking (Christopher King, 27 Mar 2023 15:11 UTC) · 22 points · 8 comments · 1 min read · LW link
Where Free Will and Determinism Meet (David Bravo, 4 Apr 2023 10:59 UTC) · 0 points · 0 comments · 3 min read · LW link
A basic mathematical structure of intelligence (Golol, 12 Apr 2023 16:49 UTC) · 4 points · 6 comments · 4 min read · LW link
The Soul of the Writer (on LLMs, the psychology of writers, and the nature of intelligence) (rogersbacon, 16 Apr 2023 16:02 UTC) · 11 points · 1 comment · 3 min read · LW link · (www.secretorum.life)
What is Intelligence? (IsaacRosedale, 23 Apr 2023 6:10 UTC) · 1 point · 0 comments · 1 min read · LW link
[Question] Can a stupid person become intelligent? (A. T., 8 Nov 2023 19:01 UTC) · 12 points · 24 comments · 2 min read · LW link
A Generally Intelligent Game (snerx, 28 Jun 2024 1:31 UTC) · −1 points · 1 comment · 4 min read · LW link
Implicit extortion (paulfchristiano, 13 Apr 2018 16:33 UTC) · 29 points · 16 comments · 6 min read · LW link · (ai-alignment.com)
How special are human brains among animal brains? (zhukeepa, 1 Apr 2020 1:35 UTC) · 79 points · 38 comments · 7 min read · LW link
Three ways that “Sufficiently optimized agents appear coherent” can be false (Wei Dai, 5 Mar 2019 21:52 UTC) · 65 points · 3 comments · 3 min read · LW link
A rant against robots (Lê Nguyên Hoang, 14 Jan 2020 22:03 UTC) · 65 points · 7 comments · 5 min read · LW link
Might humans not be the most intelligent animals? (Matthew Barnett, 23 Dec 2019 21:50 UTC) · 56 points · 41 comments · 3 min read · LW link
AGI and Friendly AI in the dominant AI textbook (lukeprog, 11 Mar 2011 4:12 UTC) · 73 points · 27 comments · 3 min read · LW link
My Best and Worst Mistake (Eliezer Yudkowsky, 16 Sep 2008 0:43 UTC) · 70 points · 17 comments · 5 min read · LW link

Another take on agent foundations: formalizing zero-shot reasoning (zhukeepa, 1 Jul 2018 6:12 UTC) · 64 points · 20 comments · 12 min read · LW link
[Linkpost] The AGI Show podcast (Soroush Pour, 23 May 2023 9:52 UTC) · 4 points · 0 comments · 1 min read · LW link
My Childhood Role Model (Eliezer Yudkowsky, 23 May 2008 8:51 UTC) · 76 points · 63 comments · 5 min read · LW link
Notes on Antelligence (Aurigena, 13 May 2023 18:38 UTC) · 2 points · 0 comments · 9 min read · LW link
If brains are computers, what kind of computers are they? (Dennett transcript) (Ben Pace, 30 Jan 2020 5:07 UTC) · 37 points · 9 comments · 27 min read · LW link
Expected Creative Surprises (Eliezer Yudkowsky, 24 Oct 2008 22:22 UTC) · 53 points · 44 comments · 4 min read · LW link
Surprised by Brains (Eliezer Yudkowsky, 23 Nov 2008 7:26 UTC) · 62 points · 28 comments · 7 min read · LW link
General intelligence: what is it, what makes it hard, and will we have it soon? (homeopathicsyzygy, 1 Jun 2023 6:46 UTC) · 2 points · 0 comments · 21 min read · LW link
Beyond Smart and Stupid (PhilGoetz, 17 May 2011 6:25 UTC) · 36 points · 44 comments · 3 min read · LW link
When Anthropomorphism Became Stupid (Eliezer Yudkowsky, 16 Aug 2008 23:43 UTC) · 56 points · 12 comments · 3 min read · LW link
Modest Superintelligences (Wei Dai, 22 Mar 2012 0:29 UTC) · 34 points · 100 comments · 1 min read · LW link
HELP: Do I have a chance at becoming intelligent? (johnbgone, 26 Oct 2010 21:41 UTC) · 36 points · 68 comments · 1 min read · LW link
Motivation in AI (nickasaf, 16 Jun 2023 9:50 UTC) · −1 points · 1 comment · 2 min read · LW link
OpenAI Launches Superalignment Taskforce (Zvi, 11 Jul 2023 13:00 UTC) · 149 points · 40 comments · 49 min read · LW link · (thezvi.wordpress.com)