
Futurism

Last edit: 17 Aug 2020 16:28 UTC by Multicore

Futurism is speculation about technologies or social trends that might exist in the near or distant future.

Less Wrong’s favorite type of futurism is speculation about AI risk. Other speculative future technologies include life extension, mind uploading, nanotechnology, and space colonization.

For efforts to predict future trends, see Forecasting & Prediction and Forecasts (Lists of).

See also: Transhumanism, Fun Theory

[Question] What will 2040 probably look like assuming no singularity?

Daniel Kokotajlo · 16 May 2021 22:10 UTC
121 points
86 comments · 4 min read · LW link · 1 review

Solstice Movie Review: Summer Wars

JohnBuridan · 22 Jun 2022 1:09 UTC
22 points
6 comments · 1 min read · LW link

The Track Record of Futurists Seems … Fine

HoldenKarnofsky · 30 Jun 2022 19:40 UTC
91 points
25 comments · 12 min read · LW link
(www.cold-takes.com)

What is the currency of the future? 5 suggestions.

MrThink · 13 Jan 2021 21:10 UTC
18 points
11 comments · 2 min read · LW link

Why is the Future So Absurd?

Eliezer Yudkowsky · 7 Sep 2007 8:42 UTC
52 points
17 comments · 3 min read · LW link

Techno-humanism is techno-optimism for the 21st century

Richard_Ngo · 27 Oct 2023 18:37 UTC
88 points
5 comments · 14 min read · LW link
(www.mindthefuture.info)

The Accord

Timothy Underwood · 6 Apr 2021 15:44 UTC
23 points
1 comment · 1 min read · LW link

A review of Where Is My Flying Car? by J. Storrs Hall

jasoncrawford · 6 Nov 2020 20:01 UTC
103 points
23 comments · 11 min read · LW link
(rootsofprogress.org)

Winston Churchill, futurist and EA

jasoncrawford · 12 Jul 2021 2:07 UTC
74 points
11 comments · 4 min read · LW link
(rootsofprogress.org)

AI Timelines via Cumulative Optimization Power: Less Long, More Short

jacob_cannell · 6 Oct 2022 0:21 UTC
139 points
33 comments · 6 min read · LW link

BCIs and the ecosystem of modular minds

beren · 21 Jul 2023 15:58 UTC
88 points
14 comments · 11 min read · LW link

Transformative VR Is Likely Coming Soon

jimrandomh · 13 Oct 2022 6:25 UTC
92 points
46 comments · 2 min read · LW link

Man in the Arena

Richard_Ngo · 26 Jun 2023 21:57 UTC
62 points
6 comments · 8 min read · LW link

What is malevolence? On the nature, measurement, and distribution of dark traits

23 Oct 2024 8:41 UTC
76 points
15 comments · 1 min read · LW link

Open Brains

George3d6 · 19 Apr 2023 7:35 UTC
7 points
0 comments · 6 min read · LW link
(cerebralab.com)

[Question] What is our true life expectancy?

Adam Zerner · 23 Oct 2020 23:17 UTC
15 points
23 comments · 1 min read · LW link

The first future and the best future

KatjaGrace · 25 Apr 2024 6:40 UTC
106 points
12 comments · 1 min read · LW link
(worldspiritsockpuppet.com)

hydrogen tube transport

bhauth · 18 Apr 2024 22:47 UTC
34 points
12 comments · 5 min read · LW link
(www.bhauth.com)

Unbounded Scales, Huge Jury Awards, & Futurism

Eliezer Yudkowsky · 29 Nov 2007 7:45 UTC
82 points
10 comments · 3 min read · LW link

A Quick Taxonomy of Arguments for Theoretical Engineering Capabilities

Ruby · 21 May 2019 22:38 UTC
30 points
1 comment · 3 min read · LW link

SpaceX will have massive impact in the next decade

ChristianKl · 31 Dec 2020 1:04 UTC
37 points
30 comments · 2 min read · LW link

Succession

Richard_Ngo · 20 Dec 2023 19:25 UTC
158 points
48 comments · 11 min read · LW link
(www.narrativeark.xyz)

Strategic implications of AIs’ ability to coordinate at low cost, for example by merging

Wei Dai · 25 Apr 2019 5:08 UTC
69 points
46 comments · 2 min read · LW link · 1 review

The Vulnerable World Hypothesis (by Bostrom)

Ben Pace · 6 Nov 2018 20:05 UTC
50 points
17 comments · 4 min read · LW link
(nickbostrom.com)

31 Laws of Fun

Eliezer Yudkowsky · 26 Jan 2009 10:13 UTC
99 points
36 comments · 8 min read · LW link

Claims & Assumptions made in Eternity in Six Hours

Ruby · 8 May 2019 23:11 UTC
50 points
7 comments · 3 min read · LW link

Biological humans and the rising tide of AI

cousin_it · 29 Jan 2018 16:04 UTC
22 points
23 comments · 1 min read · LW link

Where is my Flying Car?

PeterMcCluskey · 15 Oct 2018 18:39 UTC
50 points
10 comments · 17 min read · LW link
(www.bayesianinvestor.com)

To reduce astronomical waste: take your time, then go very fast

Stuart_Armstrong · 13 Jul 2013 16:41 UTC
70 points
50 comments · 3 min read · LW link

Siren worlds and the perils of over-optimised search

Stuart_Armstrong · 7 Apr 2014 11:00 UTC
83 points
418 comments · 7 min read · LW link

After critical event W happens, they still won’t believe you

Eliezer Yudkowsky · 13 Jun 2013 21:59 UTC
95 points
107 comments · 3 min read · LW link

AGI will drastically increase economies of scale

Wei Dai · 7 Jun 2019 23:17 UTC
65 points
26 comments · 1 min read · LW link

Could we send a message to the distant future?

paulfchristiano · 9 Jun 2018 4:27 UTC
37 points
23 comments · 3 min read · LW link

Isaac Asimov’s predictions for 2019 from 1984

Jan_Kulveit · 28 Dec 2018 9:51 UTC
39 points
1 comment · 2 min read · LW link
(www.thestar.com)

Are healthy choices effective for improving live expectancy anymore?

Christopher King · 8 May 2023 21:25 UTC
6 points
4 comments · 1 min read · LW link

The Golden Age of Data

Collisteru · 4 Aug 2020 21:35 UTC
5 points
0 comments · 4 min read · LW link

Eutopia is Scary

Eliezer Yudkowsky · 12 Jan 2009 5:28 UTC
65 points
126 comments · 5 min read · LW link

Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons.

chaosmage · 4 Nov 2019 14:04 UTC
36 points
35 comments · 2 min read · LW link

Building Weirdtopia

Eliezer Yudkowsky · 12 Jan 2009 20:35 UTC
50 points
312 comments · 3 min read · LW link

David Chalmers’ “The Singularity: A Philosophical Analysis”

lukeprog · 29 Jan 2011 2:52 UTC
55 points
203 comments · 4 min read · LW link

“Can We Survive Technology” by von Neumann

Ben Pace · 18 Aug 2019 18:58 UTC
32 points
2 comments · 1 min read · LW link
(geosci.uchicago.edu)

Futuristic Predictions as Consumable Goods

Eliezer Yudkowsky · 10 Apr 2007 0:18 UTC
35 points
19 comments · 1 min read · LW link

Conceptual issues in AI safety: the paradigmatic gap

vedevazz · 24 Jun 2018 15:09 UTC
33 points
0 comments · 1 min read · LW link
(www.foldl.me)

Characterising utopia

Richard_Ngo · 2 Jan 2020 0:00 UTC
36 points
5 comments · 22 min read · LW link
(thinkingcomplete.blogspot.com)

On Becoming Clueless

Martin Sustrik · 24 Sep 2019 4:20 UTC
34 points
2 comments · 1 min read · LW link
(250bpm.com)

[Question] What are we predicting for Neuralink event?

Dr_Manhattan · 12 Jul 2019 19:33 UTC
32 points
15 comments · 1 min read · LW link

A Proposed Adjustment to the Astronomical Waste Argument

Nick_Beckstead · 27 May 2013 3:39 UTC
35 points
38 comments · 12 min read · LW link

The unspoken but ridiculous assumption of AI doom: the hidden doom assumption

Christopher King · 1 Jun 2023 17:01 UTC
−9 points
1 comment · 3 min read · LW link

Defeating Mundane Holocausts With Robots

lsparrish · 30 May 2011 22:34 UTC
34 points
28 comments · 2 min read · LW link

Can’t Unbirth a Child

Eliezer Yudkowsky · 28 Dec 2008 17:00 UTC
54 points
96 comments · 3 min read · LW link

Current AI harms are also sci-fi

Christopher King · 8 Jun 2023 17:49 UTC
26 points
3 comments · 1 min read · LW link

Lone Genius Bias and Returns on Additional Researchers

ChrisHallquist · 1 Nov 2013 0:38 UTC
37 points
64 comments · 6 min read · LW link

Is the potential astronomical waste in our universe too small to care about?

Wei Dai · 21 Oct 2014 8:44 UTC
88 points
17 comments · 2 min read · LW link

Modest Superintelligences

Wei Dai · 22 Mar 2012 0:29 UTC
34 points
100 comments · 1 min read · LW link

Scrying for outcomes where the problem of deepfakes has been solved

mako yass · 15 Apr 2019 4:45 UTC
26 points
10 comments · 4 min read · LW link

Against easy superintelligence: the unforeseen friction argument

Stuart_Armstrong · 10 Jul 2013 13:47 UTC
39 points
48 comments · 5 min read · LW link

60+ Possible Futures

Bart Bussmann · 26 Jun 2023 9:16 UTC
92 points
18 comments · 11 min read · LW link

Do not miss the cutoff for immortality! There is a probability that you will live forever as an immortal superintelligent being and you can increase your odds by convincing others to make achieving the technological singularity as quickly and safely as possible the collective goal/project of all of humanity, Similar to “Fable of the Dragon-Tyrant.”

Oliver--Klozoff · 29 Jun 2023 3:45 UTC
1 point
0 comments · 28 min read · LW link

Estimating the consequences of device detection tech

Jsevillamol · 8 Jul 2018 18:25 UTC
27 points
4 comments · 7 min read · LW link

Some Thoughts on Singularity Strategies

Wei Dai · 13 Jul 2011 2:41 UTC
45 points
30 comments · 3 min read · LW link

The human problem

PhilGoetz · 1 Apr 2010 19:41 UTC
35 points
8 comments · 4 min read · LW link

Evolution “failure mode”: chickens

Yuxi_Liu · 26 Apr 2019 11:42 UTC
21 points
9 comments · 4 min read · LW link

Wealth from Self-Replicating Robots

Algernoq · 15 Jul 2014 4:42 UTC
4 points
27 comments · 3 min read · LW link

Cutting edge technology

Elo · 31 Oct 2017 6:00 UTC
10 points
7 comments · 2 min read · LW link

Thoughts on minimizing designer baby drama

John_Maxwell · 12 May 2015 11:22 UTC
26 points
195 comments · 2 min read · LW link

[Speech] Worlds That Never Were

mingyuan · 12 Jan 2019 19:53 UTC
23 points
0 comments · 3 min read · LW link

What could be done with RNA and DNA sequencing that’s 1000x cheaper than it’s now?

ChristianKl · 26 Jun 2018 12:24 UTC
16 points
7 comments · 1 min read · LW link

Touching the Old

Eliezer Yudkowsky · 20 Jul 2008 9:19 UTC
17 points
32 comments · 1 min read · LW link

Ascended Economy?

Scott Alexander · 30 May 2016 21:06 UTC
12 points
0 comments · 9 min read · LW link

Total Nano Domination

Eliezer Yudkowsky · 27 Nov 2008 9:54 UTC
21 points
24 comments · 10 min read · LW link

Permitted Possibilities, & Locality

Eliezer Yudkowsky · 3 Dec 2008 21:20 UTC
28 points
21 comments · 12 min read · LW link

The Thing That I Protect

Eliezer Yudkowsky · 7 Feb 2009 19:18 UTC
46 points
24 comments · 5 min read · LW link

Outline of possible Singularity scenarios (that are not completely disastrous)

Wei Dai · 6 Jul 2011 21:17 UTC
40 points
40 comments · 2 min read · LW link

When the Stars Align: The Moments AI Decides Humanity’s Fate

Trinio · 5 Sep 2023 8:55 UTC
1 point
0 comments · 1 min read · LW link

What does the world look like, the day before FAI efforts succeed?

[deleted] · 16 Nov 2012 20:56 UTC
36 points
64 comments · 6 min read · LW link

Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation

Roko · 18 Jun 2009 15:09 UTC
23 points
92 comments · 3 min read · LW link

Apocalypse, corrupted

Stuart_Armstrong · 26 Jun 2019 13:46 UTC
20 points
13 comments · 2 min read · LW link

Bioconservative and biomoderate singularitarian positions

Roko · 2 Jun 2009 13:19 UTC
13 points
53 comments · 3 min read · LW link

Wikipedia articles from the future

snarles · 29 Oct 2014 12:49 UTC
30 points
86 comments · 1 min read · LW link

Why Recent Protein Folding Advancements Should Change Your Personal Health Strategy

lisperati · 3 Dec 2020 2:39 UTC
11 points
0 comments · 3 min read · LW link

[Question] Pre-Hindsight Prompt: Why did 2021 NOT bring a return to normalcy?

mike_hawke · 6 Dec 2020 17:35 UTC
22 points
13 comments · 1 min read · LW link

Can an economy keep on growing?

invertedpassion · 16 Mar 2021 9:55 UTC
1 point
12 comments · 6 min read · LW link

The Price Of Space Is Friendship

sxae · 24 Mar 2021 11:27 UTC
13 points
19 comments · 3 min read · LW link

Utopic Nightmares

Jozdien · 14 May 2021 21:24 UTC
10 points
20 comments · 5 min read · LW link

Books Review: “Sapiens”, “Homo Deus” and “21 Lessons For The 21st Century”, by Yuval Noah Harari

Luca Parodi · 13 Oct 2021 16:21 UTC
14 points
2 comments · 16 min read · LW link

What if we should use more energy, not less?

Snorkelfarsan · 16 Oct 2021 19:51 UTC
7 points
12 comments · 6 min read · LW link

Can we grow cars instead of building them?

pchvykov · 30 Jan 2022 21:05 UTC
10 points
4 comments · 1 min read · LW link

Building cars we don’t understand

pchvykov · 7 Feb 2022 16:55 UTC
3 points
3 comments · 2 min read · LW link

Designing environments to select designs

pchvykov · 12 Feb 2022 17:54 UTC
7 points
0 comments · 2 min read · LW link

Where Utopias Go Wrong, or: The Four Little Planets

ExCeph · 27 May 2022 1:24 UTC
15 points
0 comments · 11 min read · LW link
(ginnungagapfoundation.wordpress.com)

The Fourth Arena: What’s Up in the world these days? We’re moving to a new, a new what?

Bill Benzon · 4 Jun 2022 19:07 UTC
2 points
0 comments · 3 min read · LW link

The Fourth Arena 2: New beings in time

Bill Benzon · 5 Jun 2022 13:30 UTC
1 point
0 comments · 2 min read · LW link

The Redaction Machine

Ben · 20 Sep 2022 22:03 UTC
500 points
48 comments · 27 min read · LW link · 1 review

Dan Luu on Futurist Predictions

RobertM · 14 Sep 2022 3:01 UTC
50 points
9 comments · 5 min read · LW link
(danluu.com)

What “The Message” Was For Me

Alex Beyman · 11 Oct 2022 8:08 UTC
−3 points
14 comments · 4 min read · LW link

[Question] Significance of the Language of Thought Hypothesis?

DrFlaggstaff · 16 Oct 2022 18:09 UTC
1 point
3 comments · 1 min read · LW link

Worldview iPeople—Future Fund’s AI Worldview Prize

Toni MUENDEL · 28 Oct 2022 1:53 UTC
−22 points
4 comments · 9 min read · LW link

[Question] What are your thoughts on the future of AI-assisted software development?

RomanHauksson · 9 Dec 2022 10:04 UTC
4 points
4 comments · 1 min read · LW link

Building a transhumanist future: 15 years of hplusroadmap, now Discord

kanzure · 17 Jan 2023 21:17 UTC
42 points
1 comment · 1 min read · LW link
(twitter.com)

How Do We Protect AI From Humans?

Alex Beyman · 22 Jan 2023 3:59 UTC
−4 points
11 comments · 6 min read · LW link

Why entropy means you might not have to worry as much about superintelligent AI

Ron J · 23 May 2024 3:52 UTC
−26 points
1 comment · 2 min read · LW link

The future of Humans: Operators of AI

François-Joseph Lacroix · 30 Dec 2023 23:46 UTC
1 point
0 comments · 1 min read · LW link
(medium.com)

Exploring the Intersection of Free Will, AI Consciousness, and Human Decision-Making

BrandonCraft · 23 Aug 2024 1:13 UTC
1 point
0 comments · 1 min read · LW link

After Overmorrow: Scattered Musings on the Immediate Post-AGI World

Yuli_Ban · 24 Feb 2024 15:49 UTC
−3 points
0 comments · 26 min read · LW link

Towards a New Ontology of Intelligence

Tara · 4 Jun 2024 8:19 UTC
1 point
0 comments · 3 min read · LW link

A Gentle Introduction to Risk Frameworks Beyond Forecasting

pendingsurvival · 11 Apr 2024 18:03 UTC
73 points
10 comments · 27 min read · LW link

Live Theory Part 0: Taking Intelligence Seriously

Sahil · 26 Jun 2024 21:37 UTC
94 points
3 comments · 8 min read · LW link

Raising children on the eve of AI

juliawise · 15 Feb 2024 21:28 UTC
258 points
47 comments · 5 min read · LW link

Karpenchuk’s Theory: Human Life as a Simulation for Consciousness Development

Karpenchuk Bohdan · 2 Aug 2024 0:03 UTC
1 point
0 comments · 2 min read · LW link

Fact-Based AI and The Dangers of False Truths in AI Development

CLBrogan · 5 Aug 2024 3:17 UTC
1 point
0 comments · 5 min read · LW link
(1drv.ms)

What Hayek Taught Us About Nature

Ground Truth Data · 3 Oct 2024 18:20 UTC
−1 points
6 comments · 2 min read · LW link

[Question] Artificial V/S Organoid Intelligence

10xyz · 23 Oct 2024 14:31 UTC
5 points
0 comments · 1 min read · LW link

The Logistics of Distribution of Meaning: Against Epistemic Bureaucratization

Sahil · 7 Nov 2024 5:27 UTC
20 points
1 comment · 12 min read · LW link

[Question] Why would ASI share any resources with us?

Satron · 13 Nov 2024 23:38 UTC
6 points
5 comments · 1 min read · LW link

The Peril of the Great Leaks (written with ChatGPT)

bvbvbvbvbvbvbvbvbvbvbv · 31 Mar 2023 18:14 UTC
3 points
1 comment · 1 min read · LW link

[Thought Experiment] Tomorrow’s Echo—The future of synthetic companionship.

Vimal Naran · 26 Oct 2023 17:54 UTC
−7 points
2 comments · 2 min read · LW link

Sanctuary for Humans

nikola · 27 Oct 2023 18:08 UTC
21 points
9 comments · 1 min read · LW link

Self-Actualizing AGI—Gamifying AI Alignment & Meaning-Making

kibagateaux · 27 Oct 2023 6:54 UTC
1 point
0 comments · 6 min read · LW link
(nootype.substack.com)

Preserving our heritage: Building a movement and a knowledge ark for current and future generations

rnk8 · 29 Nov 2023 19:20 UTC
0 points
5 comments · 12 min read · LW link