I think the answers to 1 and 2 are about as close to 0 as calculated probabilities reasonably get. That may be independent of the question of how reasonable it is to step into the teleporters, however.
It looks like confused thinking to me when people associate their own conscious existence with the clone that comes out of the teleporter. Sure, you could believe that your consciousness gets teleported along with the information needed to construct a copy of your body, but that is just an assumption that isn’t needed to explain the physical process. From that assumption also stem the problems of explaining which copy your consciousness would “prefer” if there are multiples, or whether consciousness would somehow be split or combined.
The troubling issues that follow from the teleporter problem are the questions it raises about the actual makeup of the phenomenon we identify as our own consciousness. It seems to me that our conception of a persistent personal consciousness may well be fully illusory, in which case the original questions are all ambiguous about the referent of “you”. In this conception, an instance of “you” may have qualia, but those qualia are not connected to any actually persistent identity.
If the idea that we have an actually persistent conscious experience is an illusion, then the question of whether we should use the teleporter, cloning or otherwise, is mostly about how comfortable we are with it and how desirable the outcome of using it is likely to be as a practical matter. If you bite that bullet, then using the teleporter should have no more impact than going under anesthesia or even a deep dreamless sleep. If the illusion model is true, then selfishness with regard to experience is simply a mark of not being able to personally accept the falseness of your own identity, in which case you are likely not to use the teleporter for that reason.
For the record, I feel very uncomfortable with the idea of using the teleporter. Currently, the idea “feels” like suicide. But I don’t know that there’s any rational basis for that.
Ansel
Thanks for the response, especially including specific examples.
My motivation for asking these questions is to anticipate what will be obvious and of greatest humanitarian concern in hindsight, say in a year.
Here is a scenario that I think is moderately probable and that I’m worried about:
Part 1, most certain: Israeli airstrikes continue; it is unclear whether they are still using their knocking system much. Due in part to Hamas deliberately mixing combatants and non-combatants, the number of civilian casualties rises over time.
Part 2, less certain: Israel continues to withhold or significantly restrict electricity and/or food/medical supplies. Civilian casualties rise over time.
Part 3, less certain: Israel proceeds with an invasion/occupation of Gaza. Goals could be restricted to killing known members of Hamas, destroying Hamas materiel, rescuing hostages, or they could be expanded to some kind of occupation or even resettlement objectives.
With part 2 and 3, the possibilities for non-combatant casualties seem largely open ended. The results (if these things happen) will depend not just on Israel’s conduct, but also the reaction from Hamas and the general Palestinian population.
I think that those who are able to consider the situation dispassionately, both inside and outside of Israel, should be clear that the maximally aggressive Israeli response would be tragic and catastrophic. The question, therefore, is how much restraint can be shown, and, to a lesser extent, whether the response can do any good. As a backdrop to all this, I also consider it as yet uncertain whether, among other considerations, there could be more attacks against Israel still to come in the near term.
I understand that you might not have much to say about all this since it’s largely speculation, just thought I’d throw in my thoughts about the situation.
My utmost sympathy goes out to the civilians (and soldiers for that matter) who have been harmed in such a horrible way. The conduct of Hamas is unspeakable.
My guess is that you most likely do not expect the currently unfolding Israeli response to result in a massive humanitarian tragedy (please correct me if that’s wrong). Do you have any specific response to those who have concerns in this vein?
Specifically, the likely results of denying food supplies and electricity to Gaza seem disastrous for the civilians therein. Water disruption is also dangerous, though I read that water is being trucked in.
Also, Israel seems to be gearing up for a very large-scale operation in Gaza, with potentially tens of thousands of soldiers involved. What is your expectation of the casualties: combatants on both sides, and non-combatants on the Palestinian side?
I disagree with this. The fact that the active mechanism of any functional weight-loss strategy is keeping caloric intake lower than expenditure is obviously a critical aspect of dieting that makes sense to talk about, so I disagree with calling it a red herring.
Calorie counting doesn’t work well for everyone as a weight-loss strategy, but it does work for some people. Obviously a strategy that works well when adhered to, and which some people can successfully adhere to, is worth talking about. Just as obviously, people who have trouble implementing it themselves should try other strategies. Find the strategy that works for you, and combine it with a form of exercise that you enjoy.
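For concreteness, here is a rough back-of-envelope sketch of the energy-balance arithmetic behind calorie counting. It leans on the common (and only approximate) ~3500 kcal per pound of fat rule of thumb; the function and numbers are mine, purely illustrative, and real results vary a lot with metabolism and adherence.

```python
# Rough energy-balance arithmetic (illustrative only).
# Assumes the common ~3500 kcal-per-pound-of-fat approximation;
# actual results vary with metabolism, adherence, and body composition.
KCAL_PER_LB_FAT = 3500

def weekly_weight_change_lb(daily_intake_kcal: float, daily_expenditure_kcal: float) -> float:
    """Approximate weekly fat change (in pounds) from a sustained daily energy balance."""
    daily_balance = daily_intake_kcal - daily_expenditure_kcal
    return daily_balance * 7 / KCAL_PER_LB_FAT

# Eating 2000 kcal/day while expending 2500 kcal/day works out to roughly -1 lb/week.
print(weekly_weight_change_lb(2000, 2500))  # -> -1.0
```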
The parent post amusingly equated “accurately communicating your epistemic status”, which is the value I selected in the poll, with eating babies. So I adopted that euphemism (dysphemism?) in my tongue-in-cheek response.
Also, this: https://en.wikipedia.org/wiki/A_Modest_Proposal
I modestly propose that eating babies is more likely to have good outcomes, including with regard to the likelihood of apocalypse, compared to the literal stated goal of avoiding the apocalypse.
In my opinion, the risk analysis here is fundamentally flawed. Here’s my take on the two main SETI scenarios proposed in the OP:
Automatic disclosure SETI: all potential messages are disclosed to the public before analysis. This is dangerous if it is possible to send EDM (Extremely Dangerous Messages: world-exploding/world-hacking), and plausible to expect they would be sent.
Committee vetting SETI: all potential messages are reviewed by a committee of experts, who have the option of unilaterally concealing information they deem to be dangerous.
The argument in the OP hinges on portraying the first scenario as risky, with the second scenario motivated by avoiding that risk. But the risk to be avoided there is purely theoretical; there’s no concrete basis for EDM (obviously, if smart people think there can be or should be a concrete basis for them, I’d love to see it fleshed out).
But the second scenario carries a much more plausible risk! Conditioned on both scenarios eventually receiving alien messages, the second scenario could still be dangerous even if EDM aren’t real. By handling alien messages with unilateral secrecy, you’re creating a situation where normal human incentives for wealth, personal aggrandizement, or even altruistic principles could lead a small, insular group to try to seize power using alien technology. The main assumption needed for this risk to be a factor is that aliens sending us messages could have significantly superior technology. That seems more plausible than the existence of EDM, which is, after all, essentially the same claim but vastly stronger.
Some people probably even see the ability to seize power with alien tech as a feature. But I think this is an underdiscussed and essential aspect of the analysis of public disclosure SETI vs secret committee SETI. To my mind, it dominates the risk of EDM until there’s a basis for claiming that EDM are real.
Strongly upvoted, I think that the point about emotionally charged memeplexes distorting your view of the world is very valuable.
That does clarify where you’re coming from. I made my comment because it seems to me that it would be a shame for people to fall into one of the more obvious attractors for reasoning within EA about the SBF situation.
E.g., an attractor labelled something like “SBF’s actions were not part of EA because EA doesn’t do those Bad Things”. That is basically on the greatest-hits list for how (not necessarily centrally unified) groups of humans have defended themselves from losing cohesion over the actions of a subset, at any point in recorded history. Some portion of the reasoning on SBF in the past week looks motivated in service of the above.
The following isn’t really pointed at you, just my thoughts on the situation.
I think that there’s nearly unavoidable tension in trying to float arguments that deal with the optics of SBF’s connection to EA from within EA, which is a thing that is explicitly happening in this thread. Standards of epistemic honesty are in conflict with the group’s need to hold together. While the truth of the matter is and may remain uncertain, if SBF’s fraud was motivated wholly or in part by EA principles, that connection should be taken seriously.
My personal opinion is that, the more I think about it, the more obvious it seems that several cultural features of LW-adjacent EA are really ideal for generating extremist behavior. People are forming consensus thought groups around moral calculations that explicitly marginalize the value of all living people, to say nothing of the extreme side of negative consequentialism. This is all in an overall environment of iconoclasm and of disregarding established norms in favor of taking new ideas to their logical conclusion.
These are being held in an equilibrium by stabilizing norms. At the risk of stating the obvious, insofar as the group in question is a group at all, it is heterogeneous; the cultural features I’m talking about are also some of the unique positive values of EA. But these memes have sharp edges.
From what I’ve heard, SBF was controlling, and fucked over his initial (EA) investors as best he could without sabotaging his company, and fucked over parts of the Alameda founding team that wouldn’t submit to him. This isn’t very “EA” by the usual lights.
It’s not immediately clear to me that this isn’t a No True Scotsman fallacy.
I’d be interested in someone with legal expertise weighing in on whether the farm example violates child labor laws. There are special regulations and exemptions for farms, especially those run by a parent or a person standing in for the parent, but a nine-year-old driving that tractor seems very likely to be illegal to me. I broadly agree with all the stuff about letting children roam, and it comports well with my own experience, but tractors in particular can be very dangerous, and nine seems very young to be doing genuinely independent ag work like this. I’d be interested in other people’s thoughts.
It seems like you might be reading into the post what you want to see, to some extent. (After reading what I wrote, it looked like I was trying to be saucy by paralleling your first sentence; just want to be clear that to me this is a non-valenced discussion.) The OP returns to referring to K-type and T-type individual people after discussing their formal framework. That’s what makes me think that classifying people into the binary categories is meant to be the main takeaway.
I’m not going to pretend to be more knowledgeable than I am about this kind of framework, but I would not have commented anything if the post had been something like “Tradeoffs between K-type and T-type theory valuation” or anything along those lines.
Like I said, I don’t think the case has remotely been made for being able to identify well-defined camps of people, and I think it’s inconsistent to say that there are K-type and T-type people, that this is a “real classification”, and then talk about the spectrum between K-type and T-type people. That implies that K-type and T-type really aren’t exclusive camps, and that there are people with a mix of K-type and T-type decision making.
I’m not persuaded at all by the attempt to classify people into the two types. See: in your table of examples, you specify that you tried to include views you endorse in both columns. However, if you were effectively classified by your own system, your views should fit mainly or completely in one column, no?
The binary individual classification aspect of this doesn’t even seem to be consistent in your own mind, since you later talk about it as a spectrum.
Maybe you meant it as a spectrum the whole time, but that seems antithetical to putting people into two well-defined camps.
Setting those objections aside for a moment, there is an amusing meta-level question of which type would produce this framework.
One would expect a Prime Minister to be Prime over Ministers. I don’t see the need to rename everything Ministry of This or That, so Prime Minister doesn’t really seem appropriate.
Would you be willing to summarize the point you’re making at the object level? Is it something like “the Soviets had to make the Molotov-Ribbentrop pact, and that doesn’t say anything meaningful about their cultural approach to the interaction of world religions”? I don’t want to put words in your mouth or anything; I just want to understand the “extremely low-epistemics” bit.
It seems like you’ve retreated fully from your bailey:
“at the risk of being the Captain Obvious, I must remind the readers that mountain climbing is stupid”
to your motte:
“There is no greatness in being the 5001th man who climbed Everest”
I suspect most people responding take greater issue with the former position, so maybe if you still stand by it you could defend that one.
To me, it seems like the standard of “if it increases your chances of dying, it’s a stupid recreational activity” is one that is unlikely to be applied evenly by just about anyone.
E.g., if you want to apply that consistently, you should probably have a very restrictive diet, and definitely not play video games for moderate to long periods of time (risk of death from blood clots, sedentariness, etc.).
Conceptually I like the framing of “playing to your outs” taken from card games. In a nutshell, you look for your victory conditions and backchain your strategy from there, accepting any necessary but improbable actions that keep you on the path to possible success. This is exactly what you describe, I think, so the transposition works and might appeal intuitively to those familiar with card games. Personally, I think avoiding the “miracle” label has a significant amount of upside.
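As a loose illustration of that backchaining idea, here is a minimal sketch; the names and structure are mine and purely hypothetical, not anything from your post or from any particular game engine. The point is just that you enumerate candidate plans, keep only those that leave at least one winning line open, and choose among those, even when the winning lines are improbable.

```python
from typing import Callable, List, Tuple

# One possible line of play under a plan: (description, probability, is_win).
Outcome = Tuple[str, float, bool]

def plans_with_outs(plans: List[str],
                    simulate: Callable[[str], List[Outcome]]) -> List[Tuple[str, float]]:
    """Keep only plans that leave at least one winning line open (an "out"),
    ranked by the total probability mass of their winning lines."""
    ranked = []
    for plan in plans:
        win_mass = sum(p for _, p, win in simulate(plan) if win)
        if win_mass > 0:  # the plan still has an out, however unlikely
            ranked.append((plan, win_mass))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```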
Not every occupation is the same, but nations occupied by military force are often denied the ability to run their own affairs with regard to legal proceedings, defence, and so on. In particular, not being allowed final authority over legal matters on their own soil seems historically to have been a great sticking point: see the Austro-Hungarian demands on Serbia leading to WW1.
This is one of the key domains that define the authority of a sovereign nation, whereas it doesn’t seem that uncommon in history for there to be foreign military assets in a nation as a non-occupying force that does not damage that nation’s sovereignty: auxiliary troops, mercenaries, allied soldiers.
From this perspective, U.S. bases look like occupation insofar as they damage the sovereignty of the host nation, and look like anything but occupation to the degree that they protect or abide by that sovereignty. Russian propaganda would of course claim that the former dramatically outweighs the latter.
I think it’s useful to point out that training muscles for strength/size produces a well-documented phenomenon called supercompensation. However, training for other qualities, like speed, doesn’t really work the same way. There’s a lot of irrational training done because people make an inferential leap from the supercompensation they see in strength training and apply it to cases that intuitively seem like analogues (e.g., weighted sprints don’t make you faster).
I think counterexamples are relevant because sometimes intuition points out real analogues, and sometimes fake ones, so we should value evidence and mechanistic explanation over analogies and cultural beliefs.
Sorry if this is a little incoherent; I wrote it when I was really sleepy.
This seems transparently false based on even the most cursory research. Just from reading the Wikipedia article, the story of the Black War/Tasmanian genocide seems to have ended with only the last ~100 or so Aboriginal Tasmanians surviving, out of a possible initial pre-contact population of between 3,000 and 7,000 (with 30 years elapsing between contact and the final exile of the Tasmanians from their homeland).
So: that the nation was wiped out by the British? I would evaluate that as true. That nobody survived? Totally false. Was it a genocide? As a Wikipedia warrior I definitely don’t deserve to stake out a position without actually reading more, but the historiography as presented in the article seems ambiguous about the aptness of applying the genocide label. “Cultural genocide” I think needs a new label, but that category absolutely would apply as far as I can see.