Much harder to put enough capital together to make it worthwhile.
Beat me to it. Yes, the lesson is perhaps not to create prediction markets that incentivise manipulating the market towards bad outcomes. The post could be expanded into a better question: given that prediction markets can incentivise bad behaviour, how can we create prediction markets that incentivise good behaviour?
This reminds me somewhat of the potentially self-fulfilling prophecy of defunding bad actors. E.g. suppose we expect that global society will react to climate change by ultimately preventing oil companies from extracting and selling their oil field assets. Then those assets are worth much less than their balance sheets claim, so we should divest from oil companies. That divestment reduces the power of oil companies, which in turn makes climate change legislation easier to implement, and the prophecy is fulfilled. Here the share price is the prediction market.
I’d ask whether things typically are aligned or not. There’s a good argument that many systems are not: ecosystems, societies, companies, families, etc. all often have very unaligned agents. AI alignment, as you pointed out, is a higher-stakes game.
Your proofs all rely on lotteries over infinitely many outcomes. Is that necessary? Maybe a restriction to finite lotteries avoids the paradox.
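For concreteness, here is a minimal sketch assuming the lotteries in question resemble the St. Petersburg lottery (an assumption on my part; the post’s actual construction may differ). Every finite truncation has a finite expected value; only the untruncated, infinite lottery has unbounded expectation:

```python
# St. Petersburg-style lottery: outcome k pays 2**k with probability 2**-k.
# (Hypothetical example, not taken from the post's proofs.)

def truncated_ev(n: int) -> float:
    """Expected value of the lottery truncated to its first n outcomes."""
    return sum((2.0 ** -k) * (2 ** k) for k in range(1, n + 1))

# Each finite truncation is finite (EV = n exactly)...
print(truncated_ev(10))   # 10.0
print(truncated_ev(100))  # 100.0
# ...but the expectation grows without bound as n -> infinity,
# which is where the paradox bites.
```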
Leibniz’s Law says that you cannot have separate objects that are indistinguishable from each other. It sounds like that is what you are doing with the three brains. That might be a good place to flesh out more to make progress on the question. What do you mean, exactly, by saying that the three brains are wired up to the same body and are redundant?
I’ve always thought that the killer app of smart contracts is creating institutions that are transparent, static and unstoppable. So for example uncensored media publishing, defi, identity, banking. It’s a way to enshrine in code a set of principles of how something will work that then cannot be eroded by corruption or interference.
There is the point that 80% of people can say that they are better-than-average drivers and actually be correct. People value different things in driving and optimise for those things: one person’s idea of a good driver may be a safe one, while someone else may value speed. So both can say truthfully and correctly that they are a better driver than the other. When you ask them about racing, it narrows the question to something more specific.
You can expand that to social hierarchies too. There isn’t one hierarchy, there are many based on different values. So I can feel high status at being a great musician while someone else can feel high status at earning a lot, and we can both be right.
I think a problem you would have is that the speed of information in the game is the same as the speed of, say, a glider. So an AI that is computing within Life would not be able to sense and react to a glider quickly enough to build a control structure in front of it.
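To make the timing concrete, here is a minimal Life simulation (my own sketch; the `step` helper is hypothetical, not from the post) showing that a glider advances one cell diagonally every four generations, i.e. speed c/4, where c (one cell per generation) is the fastest any signal can propagate in Life:

```python
from itertools import product

def step(live):
    """One Game of Life generation over a set of (x, y) live cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # A cell is alive next tick with exactly 3 neighbours,
    # or with 2 neighbours if it was already alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A standard glider; after 4 generations it reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
shifted = {(x + 1, y + 1) for (x, y) in glider}
assert state == shifted  # one diagonal cell per 4 ticks: speed c/4
```

So any controller inside the grid has at most a factor-of-four speed advantage over an incoming glider, which is the crux of the reaction-time problem.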
I’d say 1 and 7 (for humans). The way humans understand go is different to how bots understand go. We use heuristics. The bots may use heuristics too, but there’s no reason to think we could comprehend those heuristics. Considering the size of the state space, it seems that the bot has access to ways of thinking about go that we don’t, the same way a bot can look further ahead in a chess game than we could comprehend.
Why are we still paying taxes if we have AI this brilliant? Surely we would then have ridiculous levels of abundance.
I strongly disagree with your sentiments.
Advertising is bad because it’s fundamentally about influencing people to do things they wouldn’t do otherwise. That takes us all away from what’s actually important. It also drives the attention economy, which turns the process of searching for information and learning about the world into a machine for manipulating people. Advertising should really be called commercial propaganda—that reveals more clearly what it is. Privacy is only one aspect of the problem.
Your arguments are myopic in that they are all based on the system we have now, which is built around advertising models. Of course those models don’t work well without advertising. If we reduced advertising, the world would keep on turning and human ingenuity would come up with other ways for information to be delivered and funded. I don’t need to define what that new system would be to say that advertising is bad.
Maybe look at GAMS
I find the label “rationalist” cringey for some reason and won’t describe myself like that. As you said, it seems to discount intuition, emotion and instinct. 99% of human behaviour is driven by irrational forces, and that’s not necessarily a bad thing. The word “rationalist” to me feels like a denial of our true nature and a doomed attempt to be purely rational, rather than trying to be a bit more deliberate in action and beliefs.
What I want to know is how bad an effect, exactly, a solar storm is likely to have. It’s all very vague.
How long will it take to get the power back on? A couple of days? Weeks? Months? Those are very different scenarios.
And can we do something now to turn the months-long scenario into a week? Maybe we can stockpile a few transformers or something.
Just a writing tip: it might help to define initialisms at least once before using them. EA isn’t self-evidently effective altruism.
I’m in the UK. Rules are stricter than ever but also people are taking it seriously, more than the 2nd lockdown. And it’s January and freezing cold so no one wants to go out anyway.
With neuropreservation you might also lose the sense of embodiment, of being in a body and the body being a part of you. That could be extremely traumatic to the point where you wouldn’t want to come back without your body. It is unclear whether that could be successfully countered using a “grown” body or a sophisticated simulation if you are being uploaded.
Good point. I think it would depend on how useful the word is in describing the world. If your culture has very different norms between “boyfriend/girlfriend” and fiancé then a replacement for fiancé would likely appear.
I suppose that on one extreme you would have words that are fundamental to human life or psychology e.g. water, body, food, cold. These I’m sure would reappear if banned. Then on the other extreme you have words associated with somewhat arbitrary cultural behaviour e.g. thanksgiving, tinsel, Twitter, hatchback. These words may not come back if the thing they are describing is also banned.
Uncle/father is an interesting one. Those different meanings could be described with compound words: father could be “direct makuakane” and uncle “brother makuakane”, or something like that. We already use compound words for family relations in English, like “grandfather”, whereas in Spanish it is “abuelo”.
You might be interested in this paper, which supports the idea of a constant information-processing rate in text: “Different languages, similar encoding efficiency: Comparable information rates across the human communicative niche”, Coupé, Oh, Dediu & Pellegrino, 2019, Science Advances.
I would agree that language would likely adapt to newspeak by simply using other compound words to describe the same thing. Within a generation or two these would then just become the new word. Presumably the Orwellian government would have to continually ban these new words. Perhaps with enough pressure over enough years the ideas themselves would be forgotten, which is perhaps Orwell’s point.
I think the claim that sophisticated word use is caused by intelligence signalling requires more evidence. It is, I’m sure, one aspect of the behaviour. But a wider vocabulary is also beneficial in terms of being able to more clearly and efficiently disambiguate and communicate ideas. This could be especially true when communicating across contexts: having context-specific language may help prevent misunderstandings that would arise with a more limited vocabulary. It would be interesting to try to model that with ideas from information theory.
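As a toy information-theoretic model (my own illustration, with made-up numbers): if words were drawn uniformly from a vocabulary of size V, each word would carry log2(V) bits, so a richer vocabulary conveys the same message in fewer words:

```python
import math

def words_needed(message_bits: float, vocab_size: int) -> int:
    """Words required to convey message_bits if each word carries
    log2(vocab_size) bits (a uniform-use idealisation)."""
    return math.ceil(message_bits / math.log2(vocab_size))

# A 1,000-bit message with a 1,000-word vs a 30,000-word vocabulary:
print(words_needed(1000, 1_000))   # 101
print(words_needed(1000, 30_000))  # 68
```

Real word frequencies are Zipfian rather than uniform, so this overstates the gain, but the direction of the effect, shorter and less ambiguous messages from a larger vocabulary, is the point.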
There are extra costs here that aren’t being included. There’s a cost to maintaining the pill box—perhaps you consider that small but it’s extra admin and we’re already drowning in admin. There’s a cost to my self identity of being a person who carries around pills like this (don’t mean to disparage it, just not for me). There’s also potentially hidden costs of not getting ill occasionally, both mentally and physically.