My prediction is that giving such population-level arguments in response to being asked why they are by themselves is much less likely to result in being left alone (presumably, the goal) than saying their parents said it’s okay, and so would show lower levels of instrumental rationality rather than demonstrate more agency.
There’s nothing unjustified about appealing to your parents’ authority. Parents are legally responsible for their children: they have literal (not epistemic) authority over them, although it’s not absolute.
I think those are good lessons to learn from the episode, but it should be pointed out that Copernicus’ model also required epicycles in order to achieve approximately the same predictive accuracy as the most widely used Ptolemaic systems. Sometimes, later Kepler-inspired corrected versions of Copernicus’ model are projected back into the past, making the history both less accurate and less interesting, but better able to fit a simplistic morality tale.
...I (mostly) trust them to just not do things like build an AI that acts like an invasive species...
What is the basis of this trust? Anecdotal impressions of a few that you know personally in the space, opinion polling data, something else?
I don’t have a solution to this, but I have a question that might rule in or out an important class of solutions.
The US spent about $75 billion in assistance to Ukraine. If both the US and EU pitched in an amount of similar size, that’s $150 billion. There are about 2 million people in Gaza. If you split the money evenly between each person and the country that was taking them in, how much of the population could you relocate? That is, Egypt gets $37,500 for allowing Yusuf in and Yusuf gets $37,500 for emigrating, Morocco gets $37,500 for allowing Fatima in and Fatima receives $37,500 for emigrating, etc… How many such pairings would that facilitate?
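For concreteness, here is a back-of-the-envelope sketch in Python of the split implied by those figures; the funding total and population are just the rough assumptions above, not real policy numbers:

```python
# Rough relocation arithmetic using the assumed figures from the comment above.
total_funding = 150e9      # ~$75B from the US plus a similar amount from the EU
gaza_population = 2e6      # roughly 2 million people in Gaza

per_person_total = total_funding / gaza_population  # dollars available per person
emigrant_share = per_person_total / 2               # half goes to the emigrant
host_country_share = per_person_total / 2           # half goes to the host country

pairings_funded = total_funding / per_person_total  # pairings payable at this rate

print(f"Per pairing:   ${per_person_total:,.0f}")   # $75,000
print(f"Emigrant gets: ${emigrant_share:,.0f}")     # $37,500
print(f"Host gets:     ${host_country_share:,.0f}") # $37,500
print(f"Pairings:      {pairings_funded:,.0f}")     # 2,000,000
```

On those assumptions, $150 billion would cover a $37,500/$37,500 split for all 2 million people.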
Thanks, that’s getting pretty close to what I’m asking for. Since posting the above, I’ve also found Katja Grace’s Argument for AI x-risk from competent malign agents and Joseph Carlsmith’s Is Power-Seeking AI an Existential Risk, both of which seem like the kind of thing you could point an analytic philosopher at and ask them which premise they deny.
Any idea if something similar is being done to cater to economists (or other social scientists)?
Other intellectual communities often become specialized in analyzing arguments only of a very specific type, and because AGI-risk arguments aren’t of that type, their members can’t easily engage with those arguments. For example:
...if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including—and then models with data. I’m not saying you have to like those models. But the point is: there’s something you look at and then you make up your mind whether or not you like those models; and then they’re tested against data. So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we’ve been talking about this seriously, there isn’t a single model done. Period. Flat out.
So, I don’t think any idea should be dismissed. I’ve just been inviting those individuals to actually join the discourse of science. ‘Show us your models. Let us see their assumptions and let’s talk about those.’ The practice, instead, is to write these very long pieces online, which just stack arguments vertically and raise the level of anxiety. It’s a bad practice in virtually any theory of risk communication.
-- Tyler Cowen, Risks and Impact of Artificial Intelligence
is there a canonical source for “the argument for AGI ruin” somewhere, preferably laid out as an explicit argument with premises and a conclusion?
-- David Chalmers, Twitter
Is work already being done to reformulate AI-risk arguments for these communities?
IMO, Andrew Ng is the most important name that could have been there but isn’t. Virtually everything I know about machine learning I learned from him, and I think there are many others for whom that is true.
Consider the following rhetorical question:
Ethical vegans are annoyed when people suggest their rhetoric hints at violence against factory farms and farmers. But even if ethical vegans don’t advocate violence, it does seem like violence is the logical conclusion of their worldview—so why is it a taboo?
Do we expect the answer to this to be any different for vegans than for AI-risk worriers?
Does that mean the current administration is finally taking AGI risk seriously or does that mean they aren’t taking it seriously?
IIRC, he says that in Intuition Pumps and Other Tools for Thinking.
I noticed that Meta (Facebook) isn’t mentioned as being a participant. Is that because they weren’t asked to participate or because they were asked but declined?
...there is hardly any mention about memorization on either LessWrong or EA Forum.
I’m curious how you came to believe this. IIRC, I first learned about spaced repetition from these forums over a decade ago, and hovering over the Memory and Mnemonics and Spaced Repetition tags on this very post shows 13 and 67 other posts on those topics, respectively. In addition, searching for “Anki” specifically is currently returning ~800+ comments.
FWIW, if my kids were freshmen at a top college, I would advise them to continue schooling, but switch to CS and take every AI-related course that was available if they hadn’t already done so.
When I worked for a police department a decade ago, we used Zebra, not Zulu, for Z, but our phonetic alphabet started with Adam, Baker, Charles, etc...
Strictly speaking it is a (conditional) “call for violence”, but we often reserve that phrase for atypical or extreme cases rather than the normal tools of international relations. It is no more a “call for violence” than treaties banning the use of chemical weapons (which the mainstream is okay with), for example.
If anyone on this website had a decent chance of gaining capabilities that would rival or exceed those of the global superpowers, then spending lots of money/effort on a research program to align them would be warranted.
How many LessWrong users are there? What is the base rate for cult formation? Shouldn’t we answer these questions before speculating about what “should be done”?
Virtue ethics says to decide on rules ahead of time.
This may be where our understandings of these ethical views diverge. I deny that virtue ethicists are typically in the position to decide on the rules (ahead of time or otherwise). If what counts as a virtue isn’t strictly objective, then it is at least intersubjective, and is therefore not something that can be decided on by an individual (at least not unilaterally). It is absurd to think to yourself “maybe good knives are dull” or “maybe good people are dishonest and cowardly”, and when you do think such thoughts it is more readily apparent that you are up to no good. On the other hand, the sheer number of parameters the consequentialist can play with to get their utility calculation to come to the result they are (subconsciously) seeking supplies them with an enormous amount of ammunition for rationalization.
It seems to me that there is some tension in the creed between (6), (9), and (11). On the one hand, we are supposed to affirm that “changes to one’s beliefs should generally also be probabilistic, rather than total”, but on the other hand, we are using belief/lack of belief as a litmus test for inclusion in the group.