Since the point is to de-echo-chamber it, I’m offering some optimizations of the language for general audiences (I will come back to this later, since I’m in the middle of moving to SF):
AI can be much smarter than humans and use information much more efficiently than humans to [make decisions/achieve objectives]. For example, AlphaZero learned to be superhuman at Go in only a few days.
An AI smarter than humans could become extremely dangerous by [leveraging its intelligence against humans the same way humans leveraged our intelligence against less-intelligent animals such as chickens and gorillas]. The AI would not need human technology because it could invent its own[,] and it wouldn’t need [actors/agents/a body] such as robots because it could use hacking or social manipulation to implement its plans [through humans and human systems].
There are lots of [computers] in the world and many people have access to them. AI software and hardware are continually getting better, and [even if it clearly becomes necessary,] it [will] be very difficult to completely halt progress in these areas.
Many actors are working on AGI research[,] and even if one or more of them refrain from making progress, the other actors can still [proceed and] create AGI. AGI progress would merely slow down if many organizations decided not to create it. [skip the sentence about “pivotal acts”, which are far too radical to have any reasonable chance of implementation in the current geopolitical environment and will disturb policymakers reading it]
Initially, the leading organization will have the ability to create AGI. After that milestone is reached, weaker organizations will also be able to create AGI. During this time, the leading organization might not have much time to solve the alignment problem [the alignment problem has not been properly described yet at this point in the report].
If one actor decides to limit [or weaken] the capabilities of their [own] systems, other actors can still create more capable systems. Also, a weaker system would be less useful. [An AGI] doing really useful work requires powerful general cognition[,] which has the potential to be unsafe [due to the risk of a rapidly growing gap in intelligence between an AGI and its human operators].
An AGI would have to be intelligent and general enough to solve a wide variety of problems to be really useful, but these capabilities would also make the AI dangerous. An AGI smart enough to [invent a cure for] cancer might also [be smart enough to invent a way to] cause human extinction[; humans invented nuclear weapons around the same time as inventing general treatments for a wide variety of cancers, such as radiation therapy]. Therefore, useful AGI is not guaranteed to be safe or harmless: it would have the [innovativeness needed] to destroy the world[,] and we [will] need safety measures to ensure that it [remains] beneficial [without glitches and unpredictable behavior emerging as the gap grows between human intelligence and that AGI’s intelligence].
[Skip the ENTIRE SECTION “We need an AGI that will perform a pivotal act to save the world” because that is wildly inappropriate to suggest to a tech executive or top military brass in today’s AI-powered geopolitical environment; if such a thing really is a necessity, it will be fine for them to receive that advice when it comes up]
I will continue this further; right now I have much less mental energy than I normally have, and I also have no idea whether the work I’m doing here will ever amount to anything.
Thanks for providing some feedback.
I’ve incorporated many of the edits you suggested to make it more readable and accessible.
I think the paragraph on pivotal acts is a key part of the original essay, so I decided not to remove it.
Instead, I made some significant edits to the paragraph. The edits put more emphasis on the definition of what a pivotal act is, and I tried to remove as much potentially offensive content as possible. For example, I removed the pivotal act example of ‘burn all GPUs’ and instead described the term more generally as an action that would reduce existential risk.