Sorry, but where/how would I do that?
When I write a post and select text, a menu appears where I can select text appearance properties, etc. However, in my latest post, this menu does not appear when I edit the post and select text. Any idea why that could be the case?
That would be great, but maybe it is covered much more in your bubble than in large newspapers, etc.? Moreover, if this is covered like the OpenAI-internal fight last year, the typical news outlet comment will be: “crazy sci-fi cult paranoid people are making noise about this totally sensible change in the institutional structure of this very productive firm!”
My impression is that the OpenAI thing has a larger effective negative impact on the world compared to the FTX thing, but fewer people will notice it.
It probably depends on whom you are communicating with. I guess there are people who are not used to using such analogies or thought experiments and would immediately think: “This is a silly question, orangutans cannot invent humans!”, but the same people would still think about the question in the way you intend if you break it down into several steps.
The goal would be to force forecasters to make internally consistent forecasts. That should reduce noise, firstly by reducing unintentional errors, secondly by cleaning up probabilities (by quasi-automatically adjusting the percentages of candidates who may previously have been considered low-but-relevant-probability candidates), and thirdly by crowding out forecasters who do not want to give consistent forecasts (which I assume correlates with low-quality forecasts). It should also make forecasts more legible and thus increase the demand for Metaculus.
Metaculus currently lists 20 people who could be elected US President (“This question will resolve as Yes for the person who wins the 2024 US presidential election, and No for all other options.”, “Closes Nov 7, 2024”), and the sum of their probabilities is greater than 104%. Either this is not consistent, or I don’t understand it; and, with all due modesty, if that is the reason for my confusion, then I think many people in the target audience will also be confused.
The link leads to a Facebook webpage telling me that I am about to leave Facebook. Is that intentional?
Metaculus should adjust election forecasting questions such that forecasters are forced to make their forecasts add up to 100% over all options (with an additional option “none of the above”).
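A minimal sketch of how such a constraint could be enforced (in Python, with made-up numbers; the function name and the proportional-rescaling rule are my own illustration, not anything Metaculus actually does):

```python
# Hypothetical illustration (not Metaculus's actual interface or API):
# force a set of candidate probabilities to be internally consistent,
# i.e. to sum to exactly 100% including a residual "none of the above" option.

def normalize_forecast(candidate_probs: dict[str, float]) -> dict[str, float]:
    """Rescale named-candidate probabilities and add a residual option.

    If the named candidates already exceed 100%, scale them down
    proportionally; otherwise assign the remainder to "none of the above".
    """
    total = sum(candidate_probs.values())
    if total > 100.0:
        scaled = {name: p * 100.0 / total for name, p in candidate_probs.items()}
        scaled["none of the above"] = 0.0
    else:
        scaled = dict(candidate_probs)
        scaled["none of the above"] = 100.0 - total
    return scaled


if __name__ == "__main__":
    # Made-up numbers that sum to 104%, like the current candidate list.
    raw = {"Candidate A": 45.0, "Candidate B": 40.0, "Candidate C": 19.0}
    print(normalize_forecast(raw))  # rescaled so the options sum to exactly 100%
```

Proportional rescaling is only one possible rule; the interface could just as well refuse to accept a forecast until the forecaster has adjusted the numbers by hand.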
I agree that nanobots are not a necessary part of AI takeover scenarios. However, I perceive them as a very illustrative example of “the AI is smart enough for plans that make resistance futile and make AI takeover fast” scenarios.
The word “typical” is probably misleading, sorry; most scenarios on LW do not include nanobots. OTOH, LW is a place where such scenarios are at least taken seriously.
So p(scenario contains nanobots | LW or the rationality community is the place of discussion of the scenario) is probably not very high, but p(LW or the rationality community is the place of discussion of the scenario | scenario contains nanobots) probably is...?
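Spelling out the relationship between the two conditionals via Bayes’ rule (writing N for “scenario contains nanobots” and LW for “LW or the rationality community is the place of discussion”, shorthand I am introducing here):

$$P(\mathrm{LW} \mid N) \;=\; \frac{P(N \mid \mathrm{LW})\, P(\mathrm{LW})}{P(N)}$$

So the left-hand side can be large even while P(N | LW) is small, as long as nanobot scenarios are rare overall, i.e. P(N) is small relative to P(N | LW) · P(LW).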
Yes, people care about things that are expected to happen today rather than in 1,000 years or later. That is a problem that people fighting against climate change have been pointing out for a long time. At the same time, with respect to AI, my impression is that many people do not react to developments that will quickly have strong implications, while some others write a lot about caring about humanity’s long-term future.
Thanks for the list! Yes, it is possible to imagine stories that involve a superintelligence.
I could not imagine a movie or successful story where everybody is killed by an AGI within seconds because it prepared that in secrecy, nobody realized it, and nobody could do anything about it. It seems to lack a happy ending, and even a story.
However, I am glad to be corrected and will check the links; the stories will surely be interesting!
Gnargh. Of course someone has a counterexample. But I don’t think that is the typical LW AGI warning scenario. However, this could become a “no true Scotsman” discussion...
I don’t understand this question. Why would the answer to that question matter? (In your post, you write “If the answer is yes to all of the above, I’d be a little more skeptical.”) Also, the “story” is not really popular. Outside of LessWrong discussions and a few other places, people seem to think that every expectation about the future that involves a superintelligent agentic AGI sounds like science fiction and therefore does not have to be taken seriously.
Actually, LessWrong AGI warnings don’t sound like they could be the plot of a successful movie. In a movie, John Connor organizes humanity to fight against Skynet. That does not seem plausible with LW-typical nanobot scenarios.
Wouldn’t way 2 likely create a new species unaligned with humans?
Congratulations! If it is not too personal, would you share your considerations that informed your answer to that question?
I don’t understand your point. Is it:
a) that life always ends with death, and many people believe that if their life ends with death, they don’t want to live at all, or
b) that giving birth always gives “joy to yourself and the newborn” while also causing “suffering of other newborns”? (If so, why?)
If you have a source on the Roman Empire, I’d be interested, both in plain descriptions of trends and in rigorous causal analysis. I’ve heard somewhere that population growth in the Roman Empire declined below the replacement level, which doesn’t seem to fit with the claims about the causes of population growth-rate decline that I have heard throughout my life.
Thanks for helping. In the end, I deleted the post and started from scratch and then it worked.