The assumption goes that after ingesting human data, the model can remix it (much as humans do for art, for example) and create its own synthetic data that it can then train on. The go-to example is AlphaGo, which became great at Go after playing a huge number of simulated games against itself. I am not qualified enough to give an informed opinion or prediction, but that's what I know.
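To make the self-play idea concrete, here is a minimal toy sketch of the loop AlphaGo-style systems use: a policy plays against a copy of itself, and whatever wins gets reinforced. Everything here is a made-up toy (a trivial "higher number wins" game and a tabular weight list), not how AlphaGo actually works, which uses deep networks and tree search.

```python
import random

# Toy self-play loop: two copies of the same policy play each other,
# and the winning move is reinforced. The "game" is deliberately
# trivial: whoever picks the higher number wins.

def train(rounds=1000, seed=0):
    rng = random.Random(seed)
    weights = [1.0] * 10  # preference for picking each move 0..9

    def policy():
        # Sample a move in proportion to its current weight.
        return rng.choices(range(10), weights=weights)[0]

    for _ in range(rounds):
        a, b = policy(), policy()   # self-play: same policy, both sides
        winner_move = max(a, b)     # higher pick wins this toy game
        weights[winner_move] += 1.0 # reinforce whatever won

    return weights

w = train()
print(w)  # high moves accumulate far more weight than low ones
```

The point of the sketch is the feedback loop: the policy generates its own training signal (game outcomes) without any external data, which is the sense in which self-play data is "synthetic".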
Person
If AGI happens this decade the risks are very much real and valid and should not be dismissed, certainly not for such a flimsy reason.
Especially considering that the near-term risks, which we can expect to become more and more visible and present, will likely shift the landscape when it comes to taking AI x-risk seriously. I posit x-risk won't remain speculative for long, on roughly the same timeline you gave.
I have read your comments on the EA forum and the points do resonate with me.
As a layman, I do have a personal distrust of (what I'd call) the anti-human ideologies driving the actors you refer to, and I suspect a majority of people do as well. It is hard to feel much joy at the prospect of going extinct and being replaced by synthetic beings, probably in a way most would characterize as dumb (Clippy being the extreme example).
I also believe that fundamentally changing the human subjective experience (radical bioengineering or, to an extent, uploading) in order to erase the ability to suffer in general (not just in medical cases like depression), as I have seen brought up in futurist circles, is also akin to death. It could possibly be a somewhat literal death, where my conscious experience actually stops if such radical changes occurred, but I am completely uneducated and unqualified on how consciousness works.
I think that a hypothetical me, even with my memories, who is physically unable to experience any negative emotions would be philosophically dead. It would be unable to learn or reflect, and its subjective experience would be so radically different from mine, and from that of any future biological me should I grow older naturally, that I do not think memories alone would be enough to preserve my identity.

To my awareness, the majority of people think similarly: there is value ascribed to our human nature, limitations included, and this has been reinforced by our media and culture. Whether this attachment is purely a product of coping, I do not know. What I do know is that it is the current reality for every functional human being and has been for thousands of years. I believe people would prefer to stick with it rather than relinquish it for vague promises of ascended consciousness. This is somewhat supported by my subjective observation that for many people who want a posthuman existence and what it entails, the end goal often seems to come back to creating simulations they themselves can live in normally.
I’m curious though if you have any hopes for the situation regarding the nebulous motivations of some AGI researchers, especially as AI and its risks have recently started becoming “mainstream”. Do you expect to see changes and their views challenged? My question is loaded, but it seems you are already invested in its answer.
Are you really doubting that 225,000 people flew to Liberia just to vote for him?
Jokes aside, yes, it was historically much easier back then to sabotage or rig elections. Liberia is a special case: the human rights abuses there, which included suspicions of practicing slavery, very nearly resulted in it being placed under a Polish protectorate (yes, Poland). The slow flow of information, mostly conveyed through newspapers or newsreels at the theater, really gave anyone in charge with decent executive power a better hand at manipulating election outcomes. Right now, increased scrutiny makes it much harder to pull off successfully, which is why states that do it tend not to even bother hiding it (Belarus, Russia's referendums in Ukraine, Algeria, etc.)