
Narrow AI


A Narrow AI is capable of operating only in a relatively limited domain, such as chess or driving, rather than learning a broad range of tasks like a human or an Artificial General Intelligence. Narrow vs. general is not a perfectly binary classification: there are degrees of generality. Large language models, for example, have a fairly high degree of generality (since the domain of text is broad) without being as general as a human, and we may eventually build systems that are significantly more general than humans.

Strategies for keeping AIs narrow in the short term

Rossin · Apr 9, 2022, 4:42 PM
9 points
3 comments · 3 min read · LW link

[Question] Are there substantial research efforts towards aligning narrow AIs?

Rossin · Sep 4, 2021, 6:40 PM
11 points
4 comments · 2 min read · LW link

A summary of aligning narrowly superhuman models

gugu · Feb 10, 2022, 6:26 PM
8 points
0 comments · 8 min read · LW link

Reframing Superintelligence: Comprehensive AI Services as General Intelligence

Rohin Shah · Jan 8, 2019, 7:12 AM
122 points
77 comments · 5 min read · LW link · 2 reviews
(www.fhi.ox.ac.uk)

The algorithm isn’t doing X, it’s just doing Y.

Cleo Nardo · Mar 16, 2023, 11:28 PM
53 points
43 comments · 5 min read · LW link

Introducing Leap Labs, an AI interpretability startup

Jessica Rumbelow · Mar 6, 2023, 4:16 PM
103 points
12 comments · 1 min read · LW link

Could utility functions be for narrow AI only, and downright antithetical to AGI?

chaosmage · Mar 16, 2017, 6:24 PM
9 points
38 comments · 5 min read · LW link

Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence

avturchin · Jul 25, 2018, 5:12 PM
12 points
7 comments · 21 min read · LW link

Some thoughts on risks from narrow, non-agentic AI

Richard_Ngo · Jan 19, 2021, 12:04 AM
35 points
21 comments · 16 min read · LW link

AIOS

samhealy · Dec 31, 2023, 1:23 PM
−3 points
5 comments · 6 min read · LW link

Relevant pre-AGI possibilities

Daniel Kokotajlo · Jun 20, 2020, 10:52 AM
38 points
7 comments · 19 min read · LW link
(aiimpacts.org)

[Question] Danger(s) of theorem-proving AI?

Yitz · Mar 16, 2022, 2:47 AM
8 points
8 comments · 1 min read · LW link

[Question] Constraining narrow AI in a corporate setting

MaximumLiberty · Apr 15, 2022, 10:36 PM
28 points
4 comments · 1 min read · LW link

Misalignment Harms Can Be Caused by Low Intelligence Systems

DialecticEel · Oct 11, 2022, 1:39 PM
11 points
3 comments · 1 min read · LW link

The reward function is already how well you manipulate humans

Kerry · Oct 19, 2022, 1:52 AM
20 points
9 comments · 2 min read · LW link

[LINK]s: Who says Watson is only a narrow AI?

Shmi · May 21, 2013, 6:04 PM
6 points
27 comments · 1 min read · LW link

Skepticism About DeepMind’s “Grandmaster-Level” Chess Without Search

Arjun Panickssery · Feb 12, 2024, 12:56 AM
57 points
13 comments · 3 min read · LW link

The default scenario for the next 50 years

Julien · Nov 24, 2024, 2:01 PM
1 point
0 comments · 6 min read · LW link

GPT-4 is bad at strategic thinking

Christopher King · Mar 27, 2023, 3:11 PM
22 points
8 comments · 1 min read · LW link

We don’t need AGI for an amazing future

Karl von Wendt · May 4, 2023, 12:10 PM
18 points
32 comments · 5 min read · LW link