Interesting. I didn’t know Russia’s defences had degraded so much.
I’m curious what type of nuclear advantage you think America has. It is still bound by MAD because of nukes on submarines.
I think the US didn’t have sufficient intelligence capability to know where to inspect. Take Israel as an example.
The CIA was saying in 1968 that “...Israel might undertake a nuclear weapons program in the next several years”, when Israel had already built a bomb in 1966.
While I think the US could have threatened the Soviets into not producing nuclear weapons at that point in time, I have trouble seeing how the US could have put in the requisite controls/espionage to prevent India/China/UK etc. from developing nuclear weapons later on.
I think the generalised flinching away from hypocrisy is mainly a status thing. Of the explanations for hypocrisy given:
Deception
Lack of will power
Inconsistent thinking
None of them are desirable traits to have in allies (at least not visibly to other people).
I might take this up at a later date. I want to solve AI alignment, but I don’t want to solve it now. I’d prefer it if our society’s institutions (both governmental and non-governmental) were a bit more prepared.
Differential research that advances safety more than AI capability still advances AI capability.
Gambling on your knowledge might work, rather than on your luck (at least in a rationalist setting).
It is interesting to think about what this looks like as a societal norm. Physical risk gets you adrenaline junkies; risking social standing can get you many places (Burning Man culture is one, pushing the boundaries of social norms). Good ol’ Goodhart.
Another element of the excitingness of risk is the novelty. We are making risky choices every day. To choose to go to university is a risky choice: sometimes you make a good network, grow as a person or learn something useful. Other times it is just a complete waste of time and money. But it is seen as a normal option, so it has no cachet.
To choose not to do something has elements of risk too. If you never expose yourself to small risks, you risk struggling later in life, because you never got a big pay-off compared to the people that put themselves out there. But that kind of risk taking is rarely lauded.
I often like to bring questions of behaviour back to the question of what kind of society we want. How does risk fit into that society?
It is Fear, and the many ways it is used in society to make a potential problem seem bigger than it is. In general, things like FUD; a concrete example of that being the Red Scare. It often seems to have an existence bigger than any individual, which is why it got made a member of the pantheon, albeit a minor one.
With regard to the Group, people have found fear of the Other easier to form. Obligatory sociology potential non-replicability warning.
I personally wouldn’t fetishize being exciting too much. Boring stability is what allows civilisation to continue to do whatever functioning it somehow, against all the odds, manages to do. Too much excitement is just chaos.
That said, I would like more excitement in the world. One thing I’ve learnt from working on a live service is that any attempt at large-scale change, no matter how well planned and prepared for, has an element of risk.
what kinds of risks should we take?
It might be worth enumerating the things we can risk. Your example covers at least getting the feeling of risking the physical body. Other things I thought of off the top of my head:
Social Standing—E.g. Write an essay on something you are interested in that doesn’t link immediately to the interests of your community.
Money—Taking a large bet on something. This tends not to be exciting to me, but other people might like it.
Emotional—Hard to give non-specific examples here. Declaring your love or being vulnerable in front of someone, maybe? Probably not exciting for the rationalist community, but for others.
Other risks, such as risking your organisation’s or community’s status/well-being, seem like they would have thorny issues of consent.
I’ve probably missed some categories though.
I didn’t/don’t have time to do the science justice, so I just tried my hand at the esoteric. It was scratching a personal itch; if I get time I might revisit this.
I’m reminded of this Paul Graham essay. So maybe it is not all western cities, but the focus of the elite in those cities.
What happened? More generally, what makes a social role exciting or boring at a certain point in time?
So I think the question is what qualities are incentivised in the social role. For lots of bankers the behaviour that is incentivised is reliability and trustworthiness. It is not just the state that likes people to be predictable and boring; the people giving someone lots of money to keep safe will also select for predictability and boringness.
In Russia, I imagine the people selected for were those who could navigate the political/social scene. I imagine there is an amount of gambling involved in that, with lots of people failing and falling. Does that fit with your experience?
I suspect you wouldn’t find the Silicon Valley or Boston elite boring, because a certain amount of exploration and novelty is required there.
Shadow
I like arguing with myself, so it is fun to make the best case. But yup, I was going beyond what people might actually say. I find arguments against naive views less interesting, so I spice them up some.
In Accelerando, the participants in Economy 2.0 had a treacherous turn because of the pressure of being in a sharply competitive, resource-hungry environment. This could have happened even if they were EMs, or AGIs aligned to a subset of humanity, if they don’t solve co-ordination problems.
This kind of evolutionary problem has not been talked about for a bit (everyone seems focused on corrigibility etc.), so maybe people have forgotten? I think it is worth making explicit that that is what you need to worry about. But the question then becomes: should we worry about it now, or when we have cheaper intelligence and a greater understanding of how intelligences might co-ordinate?
Edit: One might even make the case that we should focus our thought on short-term existential risks, like avoiding nuclear war during the start of AGI, because if we don’t pass that test we won’t get to worry about superintelligence. And you can’t use the cheaper later intelligence to solve that problem.
I feel that this post is straw-manning “I don’t think superintelligence is worth worrying about because I don’t think that a hard takeoff is realistic” a bit.
A steel man might be:
I don’t feel superintelligence is worth worrying about at this point, as in a soft takeoff scenario we will have lots of small AGI-related accidents (people wireheading themselves with AI). This will provide financial incentives for companies to concentrate on safety, both to stop themselves getting sued and, if they are using the AI themselves, to stop the damage it causes them. It will also provide incentives for governments, under political pressure, to introduce regulation to make AGIs safe. Scientists on the cusp of creating AGI have incentives not to be associated with its bad consequences; they are also in the best position to understand what safeguards are needed.
Also, there will be general selective pressure towards safe AGI, as we would destroy the unaligned ones with the safer/more alignable ones. There is no reason to expect a treacherous turn when the machines get to a decisive strategic advantage, as we will already have seen treacherous behaviour in AGIs that are not super-rational or good at hiding their treachery, and designed against it.
It is only when there is a chance of foom that we, the current generation, need to worry about superintelligence right now.
As such, it would be better to save money now and use the compounded interest to later buy safer AGI from safety-focused AGI companies, to distribute to needy people. The safety-focused company will have greater knowledge of AGI and be able to buy a lot more AGI safety per dollar than we currently can with our knowledge.
If you want to make AGI, then worrying about the superintelligence case is probably a good exercise in seeing where the cracks are in your system, to avoid the small accidents.
I’m not sure I believe it. But it is worth seeing that incentives for safety are there.
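To make the compounding point in that steel man concrete, here is a minimal sketch. Every number in it (budget, return rate, how fast “safety per dollar” improves, the waiting period) is a made-up assumption for illustration, not an estimate of anything.

```python
# Toy comparison of "spend on safety now" vs "invest and buy safety later".
# All numbers are illustrative assumptions, not estimates.

budget = 1_000_000        # dollars available today (assumed)
annual_return = 0.05      # real return on invested money (assumed)
efficiency_growth = 0.10  # yearly growth in "safety per dollar" as the field matures (assumed)
years = 20                # how long we wait before spending (assumed)

# Baseline: spend today at 1 unit of safety per dollar.
safety_now = budget * 1.0

# Alternative: let the money compound, then spend it when each dollar buys more safety.
invested = budget * (1 + annual_return) ** years
efficiency_later = (1 + efficiency_growth) ** years
safety_later = invested * efficiency_later

print(f"Safety bought now:   {safety_now:,.0f} units")
print(f"Safety bought later: {safety_later:,.0f} units")

# The argument only goes through if both rates really are positive and sizeable,
# and if waiting doesn't forfeit the chance to buy safety at all
# (e.g. because AGI arrives before the waiting period is over).
```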
But when you’re an adult, you are independent. You have choice to decline interactions you find unpleasant. You don’t need everyone you know to like you to have a functioning life. There are still people and institutions to navigate, but they aren’t out to get you. They won’t thwart your cookie quests. You are free.
I think this depends a lot on the context. The higher profile you are, the more people might be out to get you, because they can gain something by dragging you down. See Twitter mobs etc.
Similarly, if you want to do something that might be controversial, you can’t just waltz out and do it unless you are damn sure it is right. Building strong alliances and not making too many enemies seems important as well. Sometimes you need to have the unpleasant interactions, because that is also part of what being an adult is about.
But nice post, I’m sure it will help some people.
Ah, makes sense. I saw something on Facebook by Robert Wiblin arguing against unnamed people in the “evidence-based optimist” group, and thought I must be missing something important going on for both you and cousin_it to react to. You have not been vocal on takeoff scenarios before. But it seems it is just coincidence.
Thanks for the explanation.
I have to say I am a little puzzled. I’m not sure who you and cousin_it are talking to with these moderate takeoff posts. I don’t see anyone arguing that a moderate takeoff would be okay by default.
Even more mainstream places like MIT seem to be saying it is too early to focus on AI safety, rather than that we should never focus on AI safety. I hope that there will be a conversation about when to focus on AI safety. While there is no default fire alarm, that doesn’t mean you can’t construct one. Get people working on AGI science to say what they expect their creations to be capable of, and formulate a plan for what to do if they are vastly more capable than they expect.
I suppose there is the risk that the AGI or IA is suffering while helping out humanity as well.
Questions for an AGI project
I didn’t know that!
I do still think there is a difference in strategy, though. In the foom scenario you want to keep the number of key players, or people that might become key players, small.
In the non-foom scenario you have the unhappy compromise between trying to avoid too many accidents and building up defences early, versus practically everyone in time being a key player and needing to know how to handle AGI.
Lots of people involved in thinking about AI seem to be in a zero sum, winner-take-all mode. E.g. Macron.
I think there will be significant founder effects from the strategies of the people that create AGI. The development of AGI will be used as an example of what types of strategies win during future technological development. Deliberation may tell people that there are better equilibria. But empiricism may tell people that they are too hard to reach.
Currently the positive-sum norm of free exchange of scientific knowledge is being tested. For good reasons, perhaps? But I worry for the world if lack of sharing of knowledge gets cemented as the new norm. It will lead to more arms races and make coordination harder on the important problems. So if the creation of AI leads to the destruction of science as we know it, I think we might be in a worse position.
I, perhaps naively, don’t think it has to be that way.