tonyoconnor
I was surprised by that as well, but I took that from an article by Jules Lobel, Professor of Law at the University of Pittsburgh Law School, based on a book he wrote:
Influential intellectuals such as Bertrand Russell and the famous physicist Leo Szilard supported preventive war arguments, as did noted mathematicians such as John Williams, a leading figure at the RAND Corporation, and John von Neumann, the founder of game theory.129 Von Neumann was a particularly strong advocate, remarking in 1950 that “[i]f you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not one o’clock?”130
For that claim he in turn cites Marc Trachtenberg’s History and Strategy, which I do not have access to.
Thanks for the clarification. If that’s the plausible scenario for Aligned AGI, then I was drawing a sharper line between Aligned and Unaligned than was warranted. I will edit some part of the text on my website to reflect that.
Thanks for your comment. This is something I should have stated a bit more explicitly.
When I mentioned “single state (or part thereof)”, the “part thereof” was referring to these groups, or to groups in other countries that have yet to be formed.
I think the chance of government intervention is quite high in the slow take-off scenario. It’s quite likely that any group successfully working on AGI will slowly but noticeably accumulate a lot of resources. If that cannot be concealed, it will start to attract a lot of attention. I think it is unlikely that the government and state bureaucracy would be content to let such resources accumulate untouched; see, e.g., the current shifting attitude to Big Tech in Brussels and Washington.
In a fast take-off scenario, I think we can frame things more provocatively: the group that develops AGI either becomes the government, or the government takes control while it still can. I’m not sure what the relative probabilities are here, but in both circumstances you end up with something that will act like a state, and be treated as a state by other states, which is why I model such groups as states in my analysis. For example, even if OpenAI and DeepMind are friendly to each other, and that persists over decades, I can easily imagine the Chinese state trying to develop an alternative that might not be friendly to those two groups, especially if the Chinese government perceives them as promoting a different model of government.
Thanks for your comment.
If someone wants to estimate the overall existential risk attached to AGI, then it seems fitting that they would estimate the existential risk attached to the scenarios where we have 1) only unaligned AGI, 2) only aligned AGI, or 3) both. The scenario you portray is a subset of 1). I find it plausible. But most relevant discussion on this forum is devoted to 1) so I wanted to think about 2). If some non-zero probability is attached to 2), that should be a useful exercise.
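To make that concrete, here is a minimal sketch using the law of total probability, conditioning on AGI being developed at all (the terms are placeholders, not estimates I'm asserting):

$$P(\text{x-risk}) = \sum_{i=1}^{3} P(S_i)\,P(\text{x-risk} \mid S_i)$$

where $S_1$, $S_2$, $S_3$ are the three scenarios above, which are mutually exclusive and jointly exhaust the AGI worlds. Even a rough estimate of $P(S_2)$ makes the exercise of thinking through 2) worthwhile.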
I thought it was clear I was referring to Aligned AGI in the intro and the section heading. And of course, exploring a scenario doesn’t mean I think it is the only scenario that could materialise.
AI and the Probability of Conflict
Thanks! There seems to be an openness towards error correction which is admirable and unfortunately uncommon.
I’ve started browsing and posting here a bit so I should introduce myself.
I’ve been writing online for around five months and put some draft chapters of a book on my website. The objective is to think about how to immunise a society from decline, which basically means trying to find the right balance between creativity and cohesion (not that they are inversely related—it’s quite possible to have neither). Because I can’t buy into any worldview out there today, I’ve tried to systematise my thoughts into a philosophy I call Metasophism. It’s a work in progress, and most of what I read and write links into that in some way.
Prediction mechanisms are commonly discussed here and are something I’ve partly integrated, but I need to think more about them, which this site will help with.
How did I end up here? A commenter on an early post of mine mentioned LW, which I didn’t then frequent even though I was familiar with some of the writers here. That caused me to check it out, and the epistemic culture caused me to stick around.
When it costs $20 to transport a kg to low-Earth orbit, we might find a way to mine palladium that can be sold for $34,115 per kg on Earth, or gold that can be sold for $60,882 per kg.
It would be interesting to see some kind of analysis of what the effect of asteroid mining could be on the prices of these commodities. For example, the global supply of palladium is just over 200 tonnes, so if asteroid mining could match that, the price could fall quite dramatically.
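As a very rough illustration of the kind of analysis I mean, here is a sketch in Python assuming a constant-elasticity demand curve; the elasticity of -0.5 is a placeholder assumption, and real commodity markets (stockpiles, substitution, producers cutting output) would complicate the picture considerably.

```python
# Rough sketch: price after a supply shock under constant-elasticity demand.
# The demand elasticity is a placeholder assumption, not an empirical estimate.

def new_price(old_price: float, old_supply: float, added_supply: float,
              demand_elasticity: float = -0.5) -> float:
    """P_new = P_old * (Q_new / Q_old) ** (1 / elasticity)."""
    quantity_ratio = (old_supply + added_supply) / old_supply
    return old_price * quantity_ratio ** (1.0 / demand_elasticity)

# Palladium figures from above: ~$34,115/kg, ~200 tonnes (200,000 kg) supply.
print(f"${new_price(34_115, 200_000, 200_000):,.0f}/kg")  # ~$8,529/kg
```

On that (admittedly crude) model, asteroid supply matching current production would cut the price to roughly a quarter of today's level.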
The support provided in the book is purely anecdotal (along the lines of what I discussed above), and the book doesn’t really discuss any other models. The alternative explanations I discuss, such as re-religiofication due to material conditions, are not mentioned in the book, which is written in a somewhat impressionistic manner.
Book Review: ‘History Has Begun: The Birth of a New America’ by Bruno Maçães
Thanks for elaborating.
I agree with the point about utilities, and that for utility-like services (more specifically, those with overwhelming network effects and economies of scale) it should be illegal to prevent access unless the person to whom service is being denied is doing something illegal.
Thanks for this very comprehensive review. It raises many interesting questions.
If people needed self-actualization, why choose anti-technology crusades? Why not self-actualize through invention, or art?
I think part of this is that you react against a system that doesn’t give you much status. If the social system allocates most status and resources to people who can master the creation of technology and the allocation of capital, but you’re not capable of that, then you will tend to criticise that system. And of course, most people are not capable of invention and art, or have never been given an opportunity to develop those faculties.
A stable social system needs to have a way of giving everyone access to meaning, especially those who don’t succeed in a conventional, material sense. Valorising technological progress and consumption can provide meaning for some, but not for those who don’t succeed materially. In contrast, a religion like Christianity gave extra meaning to those who suffered, and in this way counterbalanced unequal material outcomes. That’s my interpretation, anyway. As for how one might give everyone access to meaning in a postmodern world, I have some thoughts on that here (Section 11.1).
According to a study conducted by Tillinghast-Towers Perrin, the cost of the U.S. tort system consumes about two percent of GDP, on average.
It would be very interesting to compare this to other countries. My loose impression is that the number of cases relating to tort law increased quite dramatically in Ireland over the past twenty years, such that it has had a big effect on the price of insurance. There are regular news items about such cases. But I don’t see those in the media of other European countries.
In Ireland (and maybe in the US), this problem could be solved by two actions. First, imposing maximum damages via legislation. Consider whiplash. According to one article, “the average amount paid out in Ireland for whiplash was 4.4 times higher than for similar injuries in England and Wales” so “if whiplash claims were capped at a maximum of €5,000, average premiums would drop from €700 to between €550 and €590 for most insured people.”
Second, moving away from punitive damages, which seem to have been embraced by the US system but rejected in most European systems.
Thanks.
While there is probably value in getting the broader population to become more risk-tolerant, I agree with the general gist of your first point.
Regarding your second point, something that prevents people from speaking freely is the fear that unorthodox opinions will stop them rising in hierarchies where selection is performed by those above them. Most people like to be flattered and to have unquestioning followers, and will promote such people in turn. This could also be the case in non-organisational hierarchies, such as academia. I try to address this problem in Ch. 6 of my book by designing a different mode of selection: one where those at the top have little or no power to decide who gets positions and resources.
As for people moving money without restrictions, I haven’t really thought about that very much. Is there a particular example you are thinking of?
It does seem that the Trachtenberg reference basically relies upon individual recollections (which I don’t trust), and the following extract from a 1944 letter by Szilard to Vannevar Bush (my bold):
While one could argue that he is advocating a pre-emptive strike there, it is sufficiently ambiguous (controlling by force could also mean conventional forces, and “used” could imply a demonstration rather than a deployment on a city) that I would prefer to delete the reference to Szilard in this article. Another reason is that I’ve seen many more instances where this view was attributed to Russell and von Neumann; this is the only case where it has been attributed to Szilard.