
Futurism

Tag · Last edited Feb 8, 2025, 12:32 AM by lesswrong-internal

Futurism is speculation about technologies or social trends that might exist in the near or distant future.

Less Wrong’s favorite type of futurism is speculation about AI risk. Other speculative future technologies include life extension, mind uploading, nanotechnology, and space colonization.

For efforts to predict future trends, see Forecasting & Prediction and Forecasts (Lists of).

See also: Transhumanism, Fun Theory

[Question] What will 2040 probably look like assuming no singularity?
Daniel Kokotajlo · May 16, 2021, 10:10 PM · 122 points · 86 comments · 4 min read · LW link · 1 review

Solstice Movie Review: Summer Wars
SebastianG · Jun 22, 2022, 1:09 AM · 22 points · 6 comments · 1 min read · LW link

The Track Record of Futurists Seems … Fine
HoldenKarnofsky · Jun 30, 2022, 7:40 PM · 91 points · 25 comments · 12 min read · LW link · (www.cold-takes.com)

What is the currency of the future? 5 suggestions.
MrThink · Jan 13, 2021, 9:10 PM · 18 points · 11 comments · 2 min read · LW link

Why is the Future So Absurd?
Eliezer Yudkowsky · Sep 7, 2007, 8:42 AM · 52 points · 17 comments · 3 min read · LW link

Techno-humanism is techno-optimism for the 21st century
Richard_Ngo · Oct 27, 2023, 6:37 PM · 88 points · 5 comments · 14 min read · LW link · (www.mindthefuture.info)

The Accord
Timothy Underwood · Apr 6, 2021, 3:44 PM · 23 points · 1 comment · 1 min read · LW link

A review of Where Is My Flying Car? by J. Storrs Hall
jasoncrawford · Nov 6, 2020, 8:01 PM · 103 points · 23 comments · 11 min read · LW link · (rootsofprogress.org)

Winston Churchill, futurist and EA
jasoncrawford · Jul 12, 2021, 2:07 AM · 74 points · 11 comments · 4 min read · LW link · (rootsofprogress.org)

AI Timelines via Cumulative Optimization Power: Less Long, More Short
jacob_cannell · Oct 6, 2022, 12:21 AM · 138 points · 33 comments · 6 min read · LW link

BCIs and the ecosystem of modular minds
beren · Jul 21, 2023, 3:58 PM · 88 points · 14 comments · 11 min read · LW link

Transformative VR Is Likely Coming Soon
jimrandomh · Oct 13, 2022, 6:25 AM · 92 points · 46 comments · 2 min read · LW link

Man in the Arena
Richard_Ngo · Jun 26, 2023, 9:57 PM · 64 points · 6 comments · 8 min read · LW link

What is malevolence? On the nature, measurement, and distribution of dark traits
Oct 23, 2024, 8:41 AM · 93 points · 23 comments · 1 min read · LW link

Open Brains
George3d6 · Apr 19, 2023, 7:35 AM · 7 points · 0 comments · 6 min read · LW link · (cerebralab.com)

[Question] What is our true life expectancy?
Adam Zerner · Oct 23, 2020, 11:17 PM · 15 points · 23 comments · 1 min read · LW link

SpaceX will have massive impact in the next decade
ChristianKl · Dec 31, 2020, 1:04 AM · 37 points · 30 comments · 2 min read · LW link

hydrogen tube transport
bhauth · Apr 18, 2024, 10:47 PM · 34 points · 12 comments · 5 min read · LW link · (www.bhauth.com)

Unbounded Scales, Huge Jury Awards, & Futurism
Eliezer Yudkowsky · Nov 29, 2007, 7:45 AM · 83 points · 10 comments · 3 min read · LW link

A Quick Taxonomy of Arguments for Theoretical Engineering Capabilities
Ruby · May 21, 2019, 10:38 PM · 30 points · 1 comment · 3 min read · LW link

The first future and the best future
KatjaGrace · Apr 25, 2024, 6:40 AM · 106 points · 12 comments · 1 min read · LW link · (worldspiritsockpuppet.com)

Succession
Richard_Ngo · Dec 20, 2023, 7:25 PM · 159 points · 48 comments · 11 min read · LW link · (www.narrativeark.xyz)

The boat
RomanS · Nov 22, 2024, 12:56 PM · 4 points · 0 comments · 2 min read · LW link

Mathematical Futurology: From Pseudoscience to Rigorous Framework
Wenitte Apiou · Nov 30, 2024, 3:27 AM · −1 points · 1 comment · 2 min read · LW link

prÆy
oimrqs · Jan 11, 2025, 7:42 PM · 1 point · 0 comments · 1 min read · LW link

[Question] Whose track record of AI predictions would you like to see evaluated?
Jonny Spicer · Jan 29, 2025, 12:05 PM · 2 points · 3 comments · 1 min read · LW link

How To Prevent a Dystopia
ank · Jan 29, 2025, 2:16 PM · −3 points · 4 comments · 1 min read · LW link

Are we the Wolves now? Human Eugenics under AI Control
Brit · Jan 30, 2025, 8:31 AM · −2 points · 1 comment · 2 min read · LW link

2024 was the year of the big battery, and what that means for solar power
transhumanist_atom_understander · Feb 1, 2025, 6:27 AM · 36 points · 1 comment · 8 min read · LW link

Rational Utopia & Narrow Way There: Multiversal AI Alignment, Non-Agentic Static Place AI, New Ethics… (V. 4)
ank · Feb 11, 2025, 3:21 AM · 13 points · 8 comments · 35 min read · LW link

[Question] Could AGI result in a Dark Forest type of situation?
MagpieJack · Feb 12, 2025, 8:36 PM · 1 point · 0 comments · 1 min read · LW link

The Peril of the Great Leaks (written with ChatGPT)
bvbvbvbvbvbvbvbvbvbvbv · Mar 31, 2023, 6:14 PM · 3 points · 1 comment · 1 min read · LW link

[Thought Experiment] Tomorrow’s Echo—The future of synthetic companionship.
Vimal Naran · Oct 26, 2023, 5:54 PM · −7 points · 2 comments · 2 min read · LW link

Sanctuary for Humans
Nikola Jurkovic · Oct 27, 2023, 6:08 PM · 22 points · 9 comments · 1 min read · LW link

Self-Actualizing AGI—Gamifying AI Alignment & Meaning-Making
kibagateaux · Oct 27, 2023, 6:54 AM · 1 point · 0 comments · 6 min read · LW link · (nootype.substack.com)

Preserving our heritage: Building a movement and a knowledge ark for current and future generations
rnk8 · Nov 29, 2023, 7:20 PM · 0 points · 5 comments · 12 min read · LW link

Strategic implications of AIs’ ability to coordinate at low cost, for example by merging
Wei Dai · Apr 25, 2019, 5:08 AM · 69 points · 46 comments · 2 min read · LW link · 1 review

The Vulnerable World Hypothesis (by Bostrom)
Ben Pace · Nov 6, 2018, 8:05 PM · 50 points · 17 comments · 4 min read · LW link · (nickbostrom.com)

31 Laws of Fun
Eliezer Yudkowsky · Jan 26, 2009, 10:13 AM · 102 points · 36 comments · 8 min read · LW link

Claims & Assumptions made in Eternity in Six Hours
Ruby · May 8, 2019, 11:11 PM · 50 points · 7 comments · 3 min read · LW link

Biological humans and the rising tide of AI
cousin_it · Jan 29, 2018, 4:04 PM · 22 points · 23 comments · 1 min read · LW link

Where is my Flying Car?
PeterMcCluskey · Oct 15, 2018, 6:39 PM · 50 points · 10 comments · 17 min read · LW link · (www.bayesianinvestor.com)

To reduce astronomical waste: take your time, then go very fast
Stuart_Armstrong · Jul 13, 2013, 4:41 PM · 70 points · 50 comments · 3 min read · LW link

Siren worlds and the perils of over-optimised search
Stuart_Armstrong · Apr 7, 2014, 11:00 AM · 83 points · 418 comments · 7 min read · LW link

After critical event W happens, they still won’t believe you
Eliezer Yudkowsky · Jun 13, 2013, 9:59 PM · 95 points · 107 comments · 3 min read · LW link

AGI will drastically increase economies of scale
Wei Dai · Jun 7, 2019, 11:17 PM · 65 points · 26 comments · 1 min read · LW link

Could we send a message to the distant future?
paulfchristiano · Jun 9, 2018, 4:27 AM · 37 points · 23 comments · 3 min read · LW link

Isaac Asimov’s predictions for 2019 from 1984
Jan_Kulveit · Dec 28, 2018, 9:51 AM · 39 points · 1 comment · 2 min read · LW link · (www.thestar.com)

Are healthy choices effective for improving live expectancy anymore?
Christopher King · May 8, 2023, 9:25 PM · 4 points · 4 comments · 1 min read · LW link

The Golden Age of Data
Collisteru · Aug 4, 2020, 9:35 PM · 5 points · 0 comments · 4 min read · LW link

Eutopia is Scary
Eliezer Yudkowsky · Jan 12, 2009, 5:28 AM · 65 points · 127 comments · 5 min read · LW link

Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons.
chaosmage · Nov 4, 2019, 2:04 PM · 36 points · 35 comments · 2 min read · LW link

Building Weirdtopia
Eliezer Yudkowsky · Jan 12, 2009, 8:35 PM · 50 points · 312 comments · 3 min read · LW link

David Chalmers’ “The Singularity: A Philosophical Analysis”
lukeprog · Jan 29, 2011, 2:52 AM · 55 points · 203 comments · 4 min read · LW link

“Can We Survive Technology” by von Neumann
Ben Pace · Aug 18, 2019, 6:58 PM · 33 points · 2 comments · 1 min read · LW link · (geosci.uchicago.edu)

Futuristic Predictions as Consumable Goods
Eliezer Yudkowsky · Apr 10, 2007, 12:18 AM · 35 points · 19 comments · 1 min read · LW link

Conceptual issues in AI safety: the paradigmatic gap
vedevazz · Jun 24, 2018, 3:09 PM · 33 points · 0 comments · 1 min read · LW link · (www.foldl.me)

Characterising utopia
Richard_Ngo · Jan 2, 2020, 12:00 AM · 40 points · 5 comments · 22 min read · LW link · (thinkingcomplete.blogspot.com)

On Becoming Clueless
Martin Sustrik · Sep 24, 2019, 4:20 AM · 34 points · 2 comments · 1 min read · LW link · (250bpm.com)

[Question] What are we predicting for Neuralink event?
Dr_Manhattan · Jul 12, 2019, 7:33 PM · 32 points · 15 comments · 1 min read · LW link

A Proposed Adjustment to the Astronomical Waste Argument
Nick_Beckstead · May 27, 2013, 3:39 AM · 35 points · 38 comments · 12 min read · LW link

The unspoken but ridiculous assumption of AI doom: the hidden doom assumption
Christopher King · Jun 1, 2023, 5:01 PM · −9 points · 1 comment · 3 min read · LW link

Defeating Mundane Holocausts With Robots
lsparrish · May 30, 2011, 10:34 PM · 34 points · 28 comments · 2 min read · LW link

Can’t Unbirth a Child
Eliezer Yudkowsky · Dec 28, 2008, 5:00 PM · 57 points · 96 comments · 3 min read · LW link

Current AI harms are also sci-fi
Christopher King · Jun 8, 2023, 5:49 PM · 26 points · 3 comments · 1 min read · LW link

Lone Genius Bias and Returns on Additional Researchers
ChrisHallquist · Nov 1, 2013, 12:38 AM · 37 points · 64 comments · 6 min read · LW link

Is the potential astronomical waste in our universe too small to care about?
Wei Dai · Oct 21, 2014, 8:44 AM · 91 points · 17 comments · 2 min read · LW link

Modest Superintelligences
Wei Dai · Mar 22, 2012, 12:29 AM · 34 points · 100 comments · 1 min read · LW link

Scrying for outcomes where the problem of deepfakes has been solved
mako yass · Apr 15, 2019, 4:45 AM · 26 points · 10 comments · 4 min read · LW link

Against easy superintelligence: the unforeseen friction argument
Stuart_Armstrong · Jul 10, 2013, 1:47 PM · 39 points · 48 comments · 5 min read · LW link

60+ Possible Futures
Bart Bussmann · Jun 26, 2023, 9:16 AM · 93 points · 18 comments · 11 min read · LW link

Do not miss the cutoff for immortality! There is a probability that you will live forever as an immortal superintelligent being and you can increase your odds by convincing others to make achieving the technological singularity as quickly and safely as possible the collective goal/project of all of humanity, Similar to “Fable of the Dragon-Tyrant.”
Oliver--Klozoff · Jun 29, 2023, 3:45 AM · 1 point · 0 comments · 28 min read · LW link

Estimating the consequences of device detection tech
Jsevillamol · Jul 8, 2018, 6:25 PM · 27 points · 4 comments · 7 min read · LW link

Some Thoughts on Singularity Strategies
Wei Dai · Jul 13, 2011, 2:41 AM · 45 points · 30 comments · 3 min read · LW link

The human problem
PhilGoetz · Apr 1, 2010, 7:41 PM · 35 points · 8 comments · 4 min read · LW link

Wealth from Self-Replicating Robots
Algernoq · Jul 15, 2014, 4:42 AM · 4 points · 27 comments · 3 min read · LW link

Cutting edge technology
Elo · Oct 31, 2017, 6:00 AM · 10 points · 7 comments · 2 min read · LW link

Thoughts on minimizing designer baby drama
John_Maxwell · May 12, 2015, 11:22 AM · 26 points · 195 comments · 2 min read · LW link

[Speech] Worlds That Never Were
mingyuan · Jan 12, 2019, 7:53 PM · 23 points · 0 comments · 3 min read · LW link

What could be done with RNA and DNA sequencing that’s 1000x cheaper than it’s now?
ChristianKl · Jun 26, 2018, 12:24 PM · 16 points · 7 comments · 1 min read · LW link

Touching the Old
Eliezer Yudkowsky · Jul 20, 2008, 9:19 AM · 17 points · 32 comments · 1 min read · LW link

Ascended Economy?
Scott Alexander · May 30, 2016, 9:06 PM · 12 points · 0 comments · 9 min read · LW link

Total Nano Domination
Eliezer Yudkowsky · Nov 27, 2008, 9:54 AM · 21 points · 24 comments · 10 min read · LW link

Permitted Possibilities, & Locality
Eliezer Yudkowsky · Dec 3, 2008, 9:20 PM · 28 points · 21 comments · 12 min read · LW link

The Thing That I Protect
Eliezer Yudkowsky · Feb 7, 2009, 7:18 PM · 46 points · 24 comments · 5 min read · LW link

Outline of possible Singularity scenarios (that are not completely disastrous)
Wei Dai · Jul 6, 2011, 9:17 PM · 40 points · 40 comments · 2 min read · LW link

When the Stars Align: The Moments AI Decides Humanity’s Fate
Trinio · Sep 5, 2023, 8:55 AM · 1 point · 0 comments · 1 min read · LW link

What does the world look like, the day before FAI efforts succeed?
[deleted] · Nov 16, 2012, 8:56 PM · 36 points · 64 comments · 6 min read · LW link

Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation
Roko · Jun 18, 2009, 3:09 PM · 23 points · 92 comments · 3 min read · LW link

Apocalypse, corrupted
Stuart_Armstrong · Jun 26, 2019, 1:46 PM · 20 points · 13 comments · 2 min read · LW link

Bioconservative and biomoderate singularitarian positions
Roko · Jun 2, 2009, 1:19 PM · 13 points · 53 comments · 3 min read · LW link

Wikipedia articles from the future
snarles · Oct 29, 2014, 12:49 PM · 30 points · 86 comments · 1 min read · LW link

Why Recent Protein Folding Advancements Should Change Your Personal Health Strategy
lisperati · Dec 3, 2020, 2:39 AM · 11 points · 0 comments · 3 min read · LW link

[Question] Pre-Hindsight Prompt: Why did 2021 NOT bring a return to normalcy?
mike_hawke · Dec 6, 2020, 5:35 PM · 22 points · 13 comments · 1 min read · LW link

Can an economy keep on growing?
invertedpassion · Mar 16, 2021, 9:55 AM · 1 point · 12 comments · 6 min read · LW link

The Price Of Space Is Friendship
sxae · Mar 24, 2021, 11:27 AM · 13 points · 19 comments · 3 min read · LW link

Utopic Nightmares
Jozdien · May 14, 2021, 9:24 PM · 10 points · 20 comments · 5 min read · LW link

Books Review: “Sapiens”, “Homo Deus” and “21 Lessons For The 21st Century”, by Yuval Noah Harari
Luca Parodi · Oct 13, 2021, 4:21 PM · 19 points · 2 comments · 16 min read · LW link

What if we should use more energy, not less?
Snorkelfarsan · Oct 16, 2021, 7:51 PM · 7 points · 12 comments · 6 min read · LW link

Can we grow cars instead of building them?
pchvykov · Jan 30, 2022, 9:05 PM · 10 points · 4 comments · 1 min read · LW link

Building cars we don’t understand
pchvykov · Feb 7, 2022, 4:55 PM · 3 points · 3 comments · 2 min read · LW link

Designing environments to select designs
pchvykov · Feb 12, 2022, 5:54 PM · 7 points · 0 comments · 2 min read · LW link

Where Utopias Go Wrong, or: The Four Little Planets
ExCeph · May 27, 2022, 1:24 AM · 15 points · 0 comments · 11 min read · LW link · (ginnungagapfoundation.wordpress.com)

The Fourth Arena: What’s Up in the world these days? We’re moving to a new, a new what?
Bill Benzon · Jun 4, 2022, 7:07 PM · 2 points · 0 comments · 3 min read · LW link

The Fourth Arena 2: New beings in time
Bill Benzon · Jun 5, 2022, 1:30 PM · 1 point · 0 comments · 2 min read · LW link

The Redaction Machine
Ben · Sep 20, 2022, 10:03 PM · 502 points · 48 comments · 27 min read · LW link · 1 review

Dan Luu on Futurist Predictions
RobertM · Sep 14, 2022, 3:01 AM · 50 points · 9 comments · 5 min read · LW link · (danluu.com)

What “The Message” Was For Me
Alex Beyman · Oct 11, 2022, 8:08 AM · −3 points · 14 comments · 4 min read · LW link

[Question] Significance of the Language of Thought Hypothesis?
DrFlaggstaff · Oct 16, 2022, 6:09 PM · 2 points · 3 comments · 1 min read · LW link

Worldview iPeople—Future Fund’s AI Worldview Prize
Toni MUENDEL · Oct 28, 2022, 1:53 AM · −22 points · 4 comments · 9 min read · LW link

[Question] What are your thoughts on the future of AI-assisted software development?
RomanHauksson · Dec 9, 2022, 10:04 AM · 4 points · 4 comments · 1 min read · LW link

Building a transhumanist future: 15 years of hplusroadmap, now Discord
kanzure · Jan 17, 2023, 9:17 PM · 42 points · 1 comment · 1 min read · LW link · (twitter.com)

How Do We Protect AI From Humans?
Alex Beyman · Jan 22, 2023, 3:59 AM · −4 points · 11 comments · 6 min read · LW link

Why entropy means you might not have to worry as much about superintelligent AI
Ron J · May 23, 2024, 3:52 AM · −26 points · 1 comment · 2 min read · LW link

The future of Humans: Operators of AI
François-Joseph Lacroix · Dec 30, 2023, 11:46 PM · 1 point · 0 comments · 1 min read · LW link · (medium.com)

Exploring the Intersection of Free Will, AI Consciousness, and Human Decision-Making
BrandonCraft · Aug 23, 2024, 1:13 AM · 1 point · 0 comments · 1 min read · LW link

After Overmorrow: Scattered Musings on the Immediate Post-AGI World
Yuli_Ban · Feb 24, 2024, 3:49 PM · −3 points · 0 comments · 26 min read · LW link

Places of Loving Grace [Story]
ank · Feb 18, 2025, 11:49 PM · −1 points · 0 comments · 4 min read · LW link

Towards a New Ontology of Intelligence
Tara · Jun 4, 2024, 8:19 AM · 1 point · 0 comments · 3 min read · LW link

A Gentle Introduction to Risk Frameworks Beyond Forecasting
pendingsurvival · Apr 11, 2024, 6:03 PM · 73 points · 10 comments · 27 min read · LW link

Static Place AI Makes Agentic AI Redundant: Multiversal AI Alignment & Rational Utopia
ank · Feb 13, 2025, 10:35 PM · 1 point · 2 comments · 11 min read · LW link

ank’s Shortform
ank · Jan 21, 2025, 4:55 PM · 1 point · 11 comments · 1 min read · LW link

Live Theory Part 0: Taking Intelligence Seriously
Sahil · Jun 26, 2024, 9:37 PM · 101 points · 3 comments · 8 min read · LW link

A practical guide to tiling the universe with hedonium
Vittu Perkele · Dec 16, 2024, 9:25 PM · −9 points · 1 comment · 1 min read · LW link · (perkeleperusing.substack.com)

Intelligence–Agency Equivalence ≈ Mass–Energy Equivalence: On Static Nature of Intelligence & Physicalization of Ethics
ank · Feb 22, 2025, 12:12 AM · 1 point · 0 comments · 6 min read · LW link

Raising children on the eve of AI
juliawise · Feb 15, 2024, 9:28 PM · 274 points · 47 comments · 5 min read · LW link

Karpenchuk’s Theory: Human Life as a Simulation for Consciousness Development
Karpenchuk Bohdan · Aug 2, 2024, 12:03 AM · 1 point · 0 comments · 2 min read · LW link

Fact-Based AI and The Dangers of False Truths in AI Development
CLBrogan · Aug 5, 2024, 3:17 AM · 1 point · 0 comments · 5 min read · LW link · (1drv.ms)

Elon Musk and Solar Futurism
transhumanist_atom_understander · Dec 21, 2024, 2:55 AM · 30 points · 27 comments · 5 min read · LW link

What Hayek Taught Us About Nature
Ground Truth Data · Oct 3, 2024, 6:20 PM · −1 points · 6 comments · 2 min read · LW link

[Question] Artificial V/S Organoid Intelligence
10xyz · Oct 23, 2024, 2:31 PM · 9 points · 0 comments · 1 min read · LW link

Tetherware #2: What every human should know about our most likely AI future
Jáchym Fibír · Feb 28, 2025, 11:12 AM · 3 points · 0 comments · 11 min read · LW link · (tetherware.substack.com)

The Logistics of Distribution of Meaning: Against Epistemic Bureaucratization
Sahil · Nov 7, 2024, 5:27 AM · 27 points · 1 comment · 12 min read · LW link