You can use relativity to demonstrate that certain events can happen simultaneously in one reference frame and not in others, but I’m not seeing any way to do that in this case, assuming that the simulated and non-simulated future civilizations are both in the same inertial reference frame. Am I missing something?
Yosarian2
That’s one of the advantages of what’s known as “preference utilitarianism”. It defines utility in terms of the preferences of people; so, if you have a strong preference for remaining alive, then remaining alive is therefore the pro-utility option.
The answer to those objections, by the way, is that an “adequately objective” metaethics is impossible: the minds of complex agents (such as humans) are the only place in the universe where information about morality is to be found, and there are plenty of possible minds in mind-design space (paperclippers, pebblesorters, etc.) from which it is impossible to extract the same information.
Eliezer attempted to deal with that problem by defining a certain set of things as “h-right”, that is, morally right from the frame of reference of the human mind. He made clear that alien entities probably would not care about what is h-right, but that humans do, and that’s good enough.
I don’t think that’s actually true.
Even if it were, I don’t think you can say you have a belief if you haven’t actually deduced it yet. Even taking something simple like math, you might believe theorem A, theorem B, and theorem C, and it might be possible to deduce theorem D from those three theorems, but I don’t think it’s accurate to say “you believe D” until you’ve actually figured out that it logically follows from A, B, and C.
If you’ve never even thought of something, I don’t think you can say that you “believe” it.
Except by their nature, if you’re not storing them, then the next one is not true.
Let me put it this way.
Step 1: You have a thought that X is true. (Let’s call this 1 bit of information.)
Step 2: You notice yourself thinking step 1. Now you say “I appear to believe that X is true.” (Now this is 2 bits of information: X, and belief in X.)
Step 3: You notice yourself thinking step 2. Now you say “I appear to believe that I believe that X is true.” (3 bits of information: X, belief in X, and belief in belief in X.)
If at any point you stop storing one of those steps, the next step becomes untrue; if you are not storing, say, step 11 in your head right now (belief in belief in belief...), then step 12 would be false, because you don’t actually believe step 11. After all, “belief” is fundamentally a question of your state of mind, and if you don’t have state X in your mind, if you’ve never even explicitly considered state X, it can’t really be a belief, right?
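The storage point can be made concrete with a toy sketch (the representation here is purely hypothetical, just one way of modeling the tower of meta-beliefs as nested objects): each additional “I believe that...” level is a distinct stored structure, so memory use grows with depth, and an infinite tower can never actually fit in a finite mind.

```python
import sys

def belief_tower(depth):
    """Model 'X is true', 'I believe X', 'I believe I believe X', ...
    as nested tuples, one tuple per meta-level."""
    belief = "X is true"
    for _ in range(depth):
        belief = ("I believe", belief)
    return belief

def tower_size(depth):
    """Total bytes used by the tuples in a tower of the given depth."""
    total, node = 0, belief_tower(depth)
    while isinstance(node, tuple):
        total += sys.getsizeof(node)
        node = node[1]
    return total

# Deeper towers take strictly more memory, so "ad infinitum" is not storable:
print(tower_size(3) < tower_size(12))
```

Nothing hinges on the tuple representation; any encoding where each meta-level is explicitly represented will show the same linear growth.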
Fair.
I actually think a bigger weakness in your argument is here:
I believe that I believe that I believe that I exist. And so on and so forth, ad infinitum. An infinite chain of statements, all of which are exactly true. I have satisfied Eliezer’s (fatuous) requirements for assigning a certain level of confidence to a proposition.
That can’t actually be infinite. If nothing else, your brain cannot possibly store an infinite regression of beliefs at once, so at some point, your belief in belief must run out of steps.
I think the best possible argument against “I think, therefore I am” is that there may be something either confused or oversimplified about either your definition of “I”, your definition of “think”, or your definition of “am”.
“I” as a concept might turn out to not really have much meaning as we learn more about the brain, for example, in which case the most you could really say would be “Something thinks therefore something thinks” which loses a lot of the punch of the original.
Here’s a question. As humans, we have the inherent flexibility to declare that something has either a probability of zero or a probability of one, and then the ability to still change our minds later if somehow that seems warranted.
You might declare that there’s a zero probability that I have the ability to inflict infinite negative utility on you, but if I then take you to a warehouse where I have banks of computers that I can mathematically demonstrate contain uploaded minds which are going to suffer in the equivalent of hell for an infinite amount of subjective time, you would likely at that point change your estimate to something greater than zero. But if you had actually set the probability to zero, you couldn’t do that without violating Bayesian rules, right?
It seems there’s a disconnect here; it may be better, in terms of actually using our minds to reason, to be able to treat a probability as if it were 0 or 1, but only because we can later change our minds if we realize we made an error; in which case it probably wasn’t actually 0 or 1 to start with in the strictest sense.
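A minimal sketch of why a probability of exactly 0 or 1 is unrecoverable under Bayes’ rule (the function name and the particular numbers are my own illustration): the posterior is the prior times the likelihood, renormalized, so a zero prior zeroes out every term and no evidence can ever move it.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H|E) from prior P(H) and the likelihoods P(E|H), P(E|not-H)."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# A prior strictly between 0 and 1 can be moved a long way by strong evidence
# (the warehouse full of suffering uploads):
print(bayes_update(0.001, 0.99, 0.0001))  # roughly 0.91

# But a prior of exactly 0 stays at 0 no matter how strong the evidence:
print(bayes_update(0.0, 0.99, 0.0001))  # 0.0
```

This is the formal version of the disconnect above: if you can imagine any evidence that would change your mind, your real prior wasn’t 0 or 1 in the strictest sense.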
Uh. About 10 posts ago I linked you to a long list of published scientific papers, many of which you can access online. If you wanted to see the data, you easily could have.
Hmm. I could see that being a serious threat, at least a potentially civilization-ending one.
Again though, would you agree that the best way to reduce the risk of this threat is biotech research itself?
What probabilistic predictions do we have for the so called “Climate Change” or “to frack is to die” Green “science”?
There are models from the early 1990s that predicted how much the Earth would be expected to heat up given certain amounts of carbon released into the atmosphere, and those predictions have been pretty accurate.
Anyway, I’m not opposed to fracking, at least not in the short run. It’s still probably less harmful to people’s health and to the environment than coal is, even with the earthquakes. But I don’t think there’s any doubt it causes small earthquakes; if nothing else, you can just observe the fact that areas which do a lot of fracking now have lots of small earthquakes in places that simply never had them before. That’s just a fact, an empirical observation.
If you want to come up with an alternate scientific hypothesis to explain that fact, feel free, but I don’t see how you can deny the accuracy of the observation.
A probabilistic prediction is still a prediction. Or do you think nuclear physics isn’t science either?
Sure, that’s true. It’s hard to say for sure, but like I said, if overall research on how to treat viruses and diseases gets more resources than bioweapons research, it should be able to pull ahead, I would think. I think we will likely eventually get to a point where new diseases just aren’t that much of an issue, because they get picked up and dealt with quickly, or because we have special engineered defenses against them built into our bodies, etc., and then it wouldn’t matter whether they’re natural mutations or genetically engineered diseases.
I think there’s a bigger threat of someone recreating smallpox or Spanish influenza or something in their basement before we get to that point, and that could be catastrophic if we don’t have the tools to deal with it yet; but that’s not actually an existential threat, although it could kill millions. Creating a truly novel disease that would be both contagious and fatal enough to actually be an existential threat seems to me a much more difficult challenge; not that it’s impossible, but I don’t see someone doing it with a “CRISPR at home” kit in his basement anytime soon.
(shrug) They have models which predict that the frequency of the earthquakes will increase by a certain degree, and those models have proved extremely accurate so far. They can’t predict single earthquakes, no, nobody can do that, but that doesn’t mean they don’t have any understanding of what’s going on here.
It’s not the fracking itself that causes the earthquakes. It’s the act of pumping the contaminated water back underground, which changes the balance underground and allows tectonic plates to move in ways they otherwise would not be able to. You could do fracking without that last step, but then you’d have to find something else to do with the water.
And yes, the energy was already underground, but that doesn’t actually mean the earthquake was going to happen anyway. The dynamics are apparently more complicated than that. But there has been a dramatic increase in the number of earthquakes in areas with fracking, many of which had never really had earthquakes before, and a lot of those earthquakes are serious enough to cause property damage.
Nothing to do with “PR science” at all. It’s just science. The results are what the results are. You can’t just dismiss scientific evidence for political reasons.
Yeah, that’s a good point; on some level, any purely logical system always has to start with certain axioms that you can’t prove within that system, and in the real world that’s probably even more true.
I guess, ideally, you would want to be able to at least identify which of your ideas are axioms, and keep an eye on them in some sense to make sure that at least they don’t end up conflicting with other axioms?
Yeah, that’s very true. But in the future, I think that we’re going to get to a point where we figure out how to use the new tools of biotechnology to deal with viruses in a more direct way.
For example, there was some interesting research a few months ago about using CRISPR to remove HIV from live mice, by carefully snipping out the HIV DNA from infected cells directly.
http://sci-hub.io/10.1016/j.ymthe.2017.03.012
I’m not sure if that specific research will turn out to be significant or not, but in the long run, I think that biotech research is going to give us many new tools to deal with both viruses and bacteria in general, and that those will also be effective against bio-weapons.
Ok, reading your first essay, my first thought is this:
Let’s say that you are correct, and the future will see a wide variety of human and post-human species, cultures, and civilizations, which look at the universe in very different ways, have very different mindsets, and use technology and science in entirely different ways which may totally baffle outside observers. To quote your essay:
The point is that different species may be in the same situation with respect to each others’ ability to manipulate the physical world. A species X could observe something happening in the universe but have no way, in principle, of understanding the causal mechanisms behind the explanandum-phenomenon. By the same token, species X could wiggle some feature of the universe in a way that species Y finds utterly perplexing. The result could be a very odd and potentially catastrophic sort of “mutually asymmetrical warfare,” where the asymmetry here refers to fundamental differences in how various species understand the universe and, therefore, are able to weaponize it. Unlike a technologically “advanced” civilization on Earth fighting a more technologically “primitive” society, such space conflicts would be more like Homo sapiens engaged in an all-out war with bonobos—except that this asymmetry would be differently mirrored back toward us.
If that is true, then it seems to me that the civilizations which would have the biggest advantages would be very diverse civilizations, wouldn’t it? Suppose that at some point in the future, a certain civilization (say, for the sake of example, let’s call it the “inner solar system civilization”) has a dozen totally different varieties of humans and transhumans and post-humans and maybe uplifted animals and maybe some kind of AIs or whatever, living in wildly different environments, with very different goals and ideas and ways of looking at the universe, and these different groups develop in contact with each other in such a way that they are still generally on good terms and share ideas and information (even when they don’t really understand each other). It seems like that diverse civilization would have a huge advantage in any kind of conflict with monopolar civilizations where everyone had the same worldview. The diverse civilization would have a dozen different types of technologies and worldviews and ways of “wiggling some feature in the universe”, while the monopolar civilization would only have one; the diverse civilization would probably also advance more quickly overall.
So, if that is true, then I would think that in that kind of future situation, the civilizations with the greatest advantage in any possible conflict would be the very diverse ones, with a large number of different sub-civilizations living in harmony with each other; and those civilizations would, I suspect, also tend to be the most peaceful and the least likely to start an interstellar war just because another civilization seemed different or weird to them. More likely, a diverse civilization that is already sharing ideas between a dozen different species of posthumans would be more interested in sharing ideas and knowledge with more distant civilizations than in engaging in war with them.
Maybe I’m just being overly optimistic, but it seems like that may be a way out of the problem you are talking about.
The tech is getting cheaper, but I think there are a lot more resources going into developing biotech to fight viruses and bacteria than into developing genetically engineered bioweapons.
I don’t think the lack of life extension research funding actually comes from people not wanting to live; I think it has more to do with the fact that the vast majority of people don’t take it seriously yet and don’t believe that we could actually significantly change our lifespan. That’s compounded with a kind of “sour grapes” defensive reflex: when people think they can never get something, they try to convince themselves they don’t really want it.
I think that if progress is made, at some point there will be a phase change, where more people start to realize that it is possible and suddenly flip from not caring at all to caring a great deal.