Can anybody point me to what the choice of interpretation changes? As I understand it, it is just an interpretation, so Copenhagen and MWI make the same predictions and falsification isn't possible. But for some reason MWI seems to be highly esteemed on LW. Why?
Small observation of mine. While watching out for the sunk cost fallacy it's easy to go too far and assume that making the same purchase again is the rational thing to do. Imagine you bought a TV and on the way home you dropped it and it's destroyed beyond repair. Should you just go buy the same TV, since the cost is sunk? Not necessarily: when you were buying the TV the first time, you were richer by the price of the TV. Since you are now poorer, spending that much money might not be optimal for you.
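A rough way to see this (a toy sketch with numbers I made up, not anything from the decision theory literature): with a concave utility of wealth, whether the TV is worth its price depends on how much money you have left, so the purchase can be worth it at your original wealth but not after the loss.

```python
import math

TV_PRICE = 500     # hypothetical price of the TV
TV_UTILITY = 0.3   # hypothetical utility of owning the TV, in the same units as log-wealth

def log_utility(wealth):
    # Toy concave utility: each extra dollar matters less the richer you are.
    return math.log(wealth)

def worth_buying(wealth):
    # Buy iff the utility of (less money + TV) beats the utility of keeping the money.
    return log_utility(wealth - TV_PRICE) + TV_UTILITY > log_utility(wealth)

print(worth_buying(2000))        # True: at your original wealth the TV was worth its price
print(worth_buying(2000 - 500))  # False: after losing the first TV you are poorer, so it isn't
```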
Big thanks for pointing me to Sleeping Beauty.
It is a solution to me: it doesn't feel like suffering, just as a few minutes of teasing before sex doesn't feel that way.
What I had in mind isn't a matter of manually changing your beliefs, but rather making an accurate prediction about whether or not you are in a simulated world (one which is about to become distinct from the “real” world), based on your knowledge about the existence of such simulations. It could just as well be that you asked your friend to simulate 1000 copies of you in that moment and to teleport you to Hawaii as 11 AM strikes.
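To make the kind of prediction I mean concrete (a back-of-the-envelope framing of my own, not something established): if self-locating uncertainty is treated like ordinary probability and all instances share the same evidence at that moment, then with one original and 1000 simulated copies each instance should expect to be in a simulation with probability

$$P(\text{simulated}) = \frac{1000}{1000 + 1} \approx 0.999.$$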
By “me” I mean this particular instance of me, the one that feels it is sitting in a room and is making such a promise, which might of course be a simulated mind.
Now that I think about it, it seems to be a problem with a coherent definition of identity and the notion of “now”.
Anthropic measure (magic reality fluid) measures what the reality is—it’s like how an outside observer would see things. Anthropic measure is more properly possessed by states of the universe than by individual instances of you.
It doesn’t look like a helpful notion and seems very tautological. How do I observe this anthropic measure—how can I make any guesses about what the outside observer would see?
Even though you can make yourself expect (probability) to see a beach soon, it doesn’t change the fact that you actually still have to sit through the cold (anthropic measure).
Continuing: how do I know I'd still have to sit through the cold? Maybe I am in my simulated past; in the hypothetical scenario that's a very down-to-earth assumption.
Sorry, but the above doesn't clarify anything for me. I may accept that the concept of probability is out of scope here, that Bayesianism doesn't work for guessing whether one is or isn't in a certain simulation, but I don't know if that's what you meant.
What is R? LWers use it very often, but a Google search doesn't provide any answers, which isn't surprising, since it's only one letter.
Also: why is it considered so important?
I’d say the only requirement is spending some time living on Earth.
Thanks, I'll get to sketching drafts. But it'll take some time.
There's also an important difference in their environments. The underwater world (oceans, seas, lagoons) seems much poorer. There are no trees underwater to climb, whose branches or sticks could be used for tools; you can't use gravity to devise traps; there's no fire; the geology is much simpler; there's little prospect for farming; etc.
Or, conversely, the Great Filter doesn't prevent civilizations from colonising galaxies, and we were colonised a long time ago. Hail Our Alien Overlords!
And I'm serious here. The zoo hypothesis sounds very conspiracy-theory-ish, but generalised curiosity is one of the requirements for developing a civilization capable of galaxy colonisation, a powerful enough civilization can sacrifice a few star systems for research purposes, and it seems that the most efficient way of simulating biological evolution or the development of a civilization is actually having a planet develop on its own.
It's not impossible that human values are themselves conflicted. The mere existence of AGI would “rob” us of that, because even if the AGI refrained from doing all the work for humans, it would still be “cheating”: the AGI could do all of it better, so human achievement is still pointless. And since we may not want to be fooled (to be made to think that this is not the case), it is possible that in this regard even the best optimisation must result in some loss.
Anyway, I can think of at least two more ways. The first is creating games that closely simulate the “joy of work”. The second, my favourite, is humans becoming part of the AGI; in other words, the AGI sharing parts of its superintelligence with humans.
PD is not a suitable model for MAD. It would be if a pre-emptive attack on an opponent guaranteed his utter destruction and eliminated the threat. But that's not the case: even with a carefully orchestrated attack, there is a great chance of retaliation. Since the military advantage of a pre-emptive attack is not preferred over the absence of war, this game doesn't necessarily point to the defect-defect scenario.
This could probably be better modeled with some form of iterated PD, with the number of iterations and the values of the outcomes depending on decisions made along the way, which I guess would make it non-linear.
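A minimal sketch of the one-shot point (the payoff numbers are mine and purely illustrative, not a serious model of nuclear strategy): in the classic PD, defecting dominates; but if a first strike very likely triggers retaliation, defecting against a cooperating opponent no longer beats mutual peace, so the usual argument for the defect-defect outcome doesn't go through.

```python
# Toy one-shot payoffs (illustrative numbers only).
# Keys: (my move, opponent's move); values: my payoff.

classic_pd = {
    ("defect",    "cooperate"): 5,   # temptation: a first strike that fully disarms the opponent
    ("cooperate", "cooperate"): 3,   # mutual peace
    ("defect",    "defect"):    1,   # mutual destruction
    ("cooperate", "defect"):    0,   # being struck first
}

# MAD-like version: a first strike does NOT disarm the opponent, so retaliation is
# near-certain and the "temptation" payoff collapses below mutual peace.
mad_like = dict(classic_pd, **{("defect", "cooperate"): -10})

def best_reply(payoffs, opponent_move):
    # My payoff-maximising move, given the opponent's move.
    return max(["cooperate", "defect"], key=lambda m: payoffs[(m, opponent_move)])

for name, payoffs in [("classic PD", classic_pd), ("MAD-like", mad_like)]:
    print(name, "-> best reply to a cooperating opponent:", best_reply(payoffs, "cooperate"))
# classic PD -> best reply to a cooperating opponent: defect    (defection dominates)
# MAD-like   -> best reply to a cooperating opponent: cooperate (no dominant defection)
```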
It wasn't my intent to give a compelling definition. I meant to highlight which features of the internet I find important and novel as a concept.
Sounds very reasonable.
I'm not willing to engage in a discussion where I defend my guesses and attack your prediction. I have neither sufficient knowledge nor the desire to do that. My purpose was to ask for any stable basis for AI development predictions and to point out one possible bias.
I'll use this post to address some of your claims, but don't treat it as an argument about when AI will be created:
How are Ray Kurzweil's extrapolations empirical data? If I'm not wrong, all he takes into account is computational power. Why would that be enough to allow for AI creation? By 1900 the world had enough resources to create computers, and yet it wasn't possible, because the technology wasn't known. By 2029 we may have the proper resources (computational power), but still lack the knowledge of how to use them (what programs to run on those supercomputers).
I’m not sure what you’re saying here. That we can assume AI won’t arrive next month because it didn’t arrive last month, or the month before last, etc.? That seems like shaky logic.
I'm saying that, I guess, everybody would agree that AI will not arrive within a month. I'm interested in what basis we have for making such a claim. I'm not trying to make an argument about when AI will arrive; I'm genuinely asking.
You're right about the comforting factor of AI coming soon; I hadn't thought of that. But still, the development of AI in the near future would probably mean that its creators hadn't solved the friendliness problem. Current methods are very black-box. More than that, I'm a bit concerned about current morality and government control. I'm a bit scared of what people of today might do with such power. You don't like gay marriage? AI can probably “solve” that for you. Or maybe you want financial equality for humanity? Same story. I would agree, though, that it's hard to tell where our preferences would point.
If you assume the worst case that we will be unable to build AGI any faster than direct neural simulation of the human brain, that becomes feasible in the 2030s on technological pathways that can be foreseen today.
Are you taking into account that to this day we don't truly understand the biological mechanisms of memory formation and the development of neuron connections? Can you point me to any predictions made by brain researchers about when we may expect technology allowing a full scan of the human connectome, and how close we are to understanding brain dynamics (the creation of new synapses, control of their strength, etc.)?
Once you are able to simulate the brain of a computational neuroscientist and give it access to its own source code, that is certainly enough for a FOOM.
I'm tempted to call that bollocks. Would you expect a FOOM if you gave said scientist a machine telling him which neurons are connected and allowing him to manipulate them? Humans can't even understand a nematode's neural network. You expect them to understand a whole 100-billion-neuron human brain?
Sorry for the above; it would need a much longer discussion, but I really don't have the strength for that.
I hope this will be helpful in some way.
This whole debate makes me wonder if we can have any certainty about AI predictions. Almost all of it is based on personal opinions, highly susceptible to biases. And even people with vast knowledge of these biases aren't safe. I don't think anyone can trace their prediction back to empirical data; it all comes from our minds' black boxes, to which biases have full access and which we can't examine with our consciousness.
While I find Mark's prediction far from accurate, I know it might be just because I wouldn't like it. I like to think that I will have some impact on AGI research, that some new insights are needed rather than just pumping more and more money into Siri-like products. Development of AI in the next 10-15 years would mean that no qualitative research was needed and that all that remains is honing current technology. It would also mean there was little time for a thorough development of friendliness, and we may end up with an AI catastrophe.
While I guess that human-level AI will arise around the 2070s, I know I would LIKE it to happen in the 2070s. And I base this prediction on no solid grounds.
Can anybody point me to any near-empirical data concerning when AGI may be developed? Anything more solid than the hunch of even the most prominent AI researcher? Applying Moore's law seems a bit magical; it no doubt carries some Bayesian weight, but with little certainty.
The best thing I can think of is that we can all agree that AI will not be developed tomorrow. Or in a month. Why do we think that? It seems to come from some very reliable empirical data. If we can identify the factors which make us near-certain that AI will not be created within a few months from now, then on closer inspection they may provide us with some less shaky predictions for the further future.
Yeah. Though actually it’s more of a simplified version of a more serious problem.
One day you may give an AI a precise set of instructions which you think would do good. Like: find a way of curing diseases, but without harming patients, without harming people for the sake of research, and so on. And you may find that your AI seems perfectly friendly, but that wouldn't yet mean it actually is. It may simply have learned human values as a means of securing its existence and gaining power.
EDIT: And after gaining enough power it may just as well help improve human health even more, or reprogram the human race to believe unconditionally that diseases have been eradicated.
But Musk starts by mentioning “Terminator”. There's plenty of SF literature showing the danger of AI much more accurately, though none of it is as widely known as “Terminator”.
That AI may have unexpected dangers seems too vague a notion for me to expect Musk to be thinking along the lines of LWers.
It's not only unlikely; what's much worse is that it points to the wrong reasons. It suggests that we should fear an AI trying to take over the world or eliminate all people, as if an AI would have an incentive to do that. That stems from nothing more than anthropomorphisation of AI, imagining it as some evil genius.
This is very bad, because smart people can see that this reasoning is flawed and get the impression that these are the only arguments against the unbounded development of AGI. While reverse stupidity isn't smart, it's much harder to find good reasons why we should solve AI friendliness when there are lots of distracting strawmen.
That was me half a year ago. I used to think that anybody who fears AI may bring harm is a loony. All the reasons I heard from people were that AI wouldn't know emotions, that AI would try to harmfully save people from themselves, that AI would want to take over the world, that AI would be infected by a virus or hacked, or that AI would be just outright evil. I can easily debunk all of the above. And then I read about the Paperclip Maximizer and radically changed my mind. I might have got to that point much sooner if not for all the strawman distractions.
My problem with such examples is that they seem more like Dark Arts emotional manipulation than an actual argument. What your mind hears is that if you don't believe in God, people will come to your house and kill your family, and if you believed in God they wouldn't do that, because they'd somehow fear God. I don't see how this is anything but an emotional trick.
I understand that sometimes you need to cut out the nuance in morality thought experiments, like equating taxes with being threatened with kidnapping if you don't regularly pay a racket. But the opposite mistake is creating lurid, graphic scenarios. Watching your loved one be raped is not as bad as losing a loved one, but it creates a much stronger psychological effect, aimed at emotional blackmail.