Cultural identity, in any reasonable world, is about the people around you and your way of life, not where you are on a map.
Arcayer
(In theory you could buy a piece of land, but in practice, countries are unwilling to sell.)
Buying land from governments has never really been a legitimate concept to begin with. Even if a government is willing to sell, the people living there probably don’t want you ruling them. And where a government doesn’t want to sell, I fail to see the crime against humanity in paying people to move to another country until there are few enough left that you can walk in, become the supermajority, and declare yourself the new government.
Of course, that doesn’t mean men with guns won’t try to stop you. I can very much see why elites with guns like this environment, where no one ever has the option of forming a new country or of buying their own country out from under them. The problem here is that people who are not powerful elites tolerate this, and don’t consider that we could cut governments out of this equation entirely.
Realistically, Israel and the west already have their plans laid and aren’t going to change them. In that sense, there are no options.
Unrealistically, Israel should relocate. To Moldova, specifically. As for the Moldovans, buy them out: offer up enough money and choices for new citizenship that the vast majority accept and leave, so Israel can accept the remainder as full citizens without having to worry about cultural dilution, losing democratic elections, etc.
In an even more unrealistically reasonable world, Middle Eastern countries would be willing to fund this, as they’re the main beneficiaries.
On that note, Taiwan should relocate next.
Somewhat nitpicking
this has not led to a biological singularity.
I would argue it has. Fooms have a sort of relativistic element, where being inside a foom does not feel special. History running millions of times faster than before doesn’t really feel like anything from the inside.
With all of that said, what is and isn’t a foom is somewhat blurry at the edges, but I’d argue that biology, brains, and farming all qualify, and that more has happened in the last couple of centuries than in the previous couple of eons. Of course, this claim is heavily dependent on the definition of “things happening”: in terms of, say, mass moved, none of this has mattered at all, but in terms of things mattering, the gap seems nigh infinite.
Looking at the world from a perspective where fooms have happened, in fact, multiple times, doesn’t give me confidence that fooms just aren’t something physics allows.
I direct skepticism at boosters supporting timelines fast enough to reach AGI within the near future; that sounds like a doomer-only position.
In the end, children are still humans.
Half of childhood is a social construct. (In particular, most of the parts pertaining to the teenage years.)
Half of the remainder won’t apply to any given child. Humans are different.
A lot of that social construct was created as part of a jobs program. You shouldn’t expect it to be sanely optimized towards excuses made up fifty years after the fact.
Childhood has very little impact on future career, social status, or college results. There have been all sorts of studies, and various nations provide more or less education, and the only things I’ve seen that produce more impact than a couple of IQ points are things like not feeding your children. Given access to resources, after the very early years, children are basically capable of raising themselves.
In summary, it’s best not to concern yourself with social rituals more than necessary and just learn who the actual person in front of you is, and what they need.
I note that one of my problems with “trust the experts” style thinking is a guessing-the-teacher’s-password problem.
If the arguments for flat earth and round earth sound equally intuitive and persuasive to you, you probably don’t actually understand either theory. Sure, you can say “round earth correct”, and you can get social approval for saying correct beliefs, but you’re not actually believing anything more correct than “this group I like approves of these words.”
My experience is that rationalists are hard-headed and immune to evidence?
More specifically, I find that the median takeaway from rationalism is that thinking is hard, and you should leave it up to paid professionals to do that for you. If you are a paid professional, you should stick to your lane and never bother thinking about anything you’re not being paid to think about.
It’s a serious problem with rationalism that half of the teachings are about how being rational is hard, doesn’t work, and takes lots of effort. It sure sounds nice to be a black belt truth master who kicks and punches through fiction and superstition, but just like in a real dojo, the vast majority, upon seeing a real black belt, realize they’ll never stand a chance in a fight against him, and give up.
More broadly, I see a cooperate/defect dilemma: everybody is better off in a society of independent thinkers where everybody else is more wrong, but wrong in diverse ways that don’t correlate, such that truth is the only thing that does correlate. However, the individual is better off being less wrong by aping wholesale whatever everybody else is doing.
In summary, the pursuit of being as unwrong as possible is a ridiculous goodharting of rationality and doesn’t work at scale. To destroy that which the truth may destroy, one must take up his sword and fight, and that occasionally, or rather, quite frequently, involves being struck back, because lies are not weak and passive entities that merely wait for the truth to come slay them.
This is sort of restating the same argument in a different way, but:
it is not in the interests of humans to be Asmodeus’s slaves.
From there I would ask: does assigning the value [True] to [Asmodeus] via [Objective Logic] prove that humans should serve Asmodeus, or does it prove that humans should ignore objective logic? And if we had just proven that humans should ignore objective logic, were we ever really following objective logic to begin with? Isn’t it more likely that this thing we called [Objective Logic] was, in fact, not objective logic to begin with, that the entire structure should be thrown out, and that something else should instead be called [Objective Logic], something which is not that and doesn’t appear to say humans should serve Asmodeus?
Because AI safety sucks?
Yes, yes, convenient answer, but the phrasing of the question seriously does make me think the other side should take this as evidence that AI safety is just not a reasonable concern. This is basically saying that there’s a strong correlation between having a negative view of X and being reliable on issues that aren’t X, which would make a lot of sense if X were bad.
So, a number of issues stand out to me, some of which have been noted by others already, but:
My impression is that there are also less endorsable or less altruistic or more silly motives floating around for this attention allocation.
A lot of this list looks to me like heuristics of the sort where societies that don’t follow them inevitably crash, burn, and become awful: famous questions where the obvious answer is horribly wrong, where there’s a long list of groups who came to the obvious conclusion and became awful, and where it’s become accepted wisdom not to do that, except among the perpetually stubborn “It’ll be different this time” crowd and the doomers who insist “well, we just have to make it work this time, there’s no alternative”.
if anyone chooses to build, everything is destroyed
The problem with our current prisoner’s dilemma is that China has already openly declared its intentions. You’re playing against a defect bot. Also, your arguments are totally ineffective against them, because you’re not writing in Chinese. And the opposition is openly malicious: if alignment turns out to be easy, this ends with hell on earth, which is much worse than the false worst case of universal annihilation.
On the inevitability of AI: I find current attempts at AI alignment to be spaceships-with-slide-rules silliness, not serious work. Longer AI timelines are only useful if you can do something with the extra time. You’re missing necessary preconditions to both AI and alignment, and so long as those aren’t met, neither field is going to make any progress at all.
On qualia: I expect intelligence to be more interesting in general than the opposition expects. There are many ways to maximize paperclips, and even if, technically, one path is actually correct, it’s almost impossible to produce sufficient pressure to direct a utility function directly at that. I expect an alien superintelligence that’s a 99.9999% perfect paperclip optimizer, and plays fun games on the side, to get more than 99% of the quantity of games that a dedicated fun-game optimizer would get. I accuse the opposition of bigotry towards aliens, and assert that the range of utility functions that produce positive outcomes is much larger than the opposition believes. Also, excluding all AI that would eliminate humanity excludes lots of likable AI that would live good lives but reach the obviously correct conclusion that humans are worse than them and need to go, while failing to exclude any malicious AI that values human suffering.
On anthropics: We don’t actually experience the worlds that we fail to make interesting, so there’s no point worrying about them anyway. The only thing that actually matters is the utility ratio. It is granted that, if this worldline looked particularly heaven-oriented, and not hellish, it would be reasonable to maximize the amount of qualia attention by being protective of local reality, but just looking around me, that seems obviously not true.
On existential risk: I hold that the opposition massively underestimates current existential risks excluding AI, most of which AI is the solution to. The current environment is already fragile. Any stable evil government anywhere means that anything that sets back civilization threatens stagnation or worse; that is, every serious threat, even those that don’t immediately wipe out all life, most notably nuclear weapons, constitutes an existential risk. Propaganda and related techniques can easily drive society into an irrecoverable position using current methods. Genetics can easily wipe us out, and worse, in either direction: become too fit, and we’re the ones maximizing paperclips; alternatively, there’s the grow-giant-antlers-and-die problem, where species trap themselves in a dysgenic spiral. Evolution does not have to be slow, and especially if social factors accelerate the divide between losers and winners, we could easily breed ourselves to oblivion in a few generations. Almost any technology could get us all killed: super pathogens with a spread phase and a kill phase, space technology that slightly adjusts the pathing of large objects, very big explosions, cheap stealth, guns that fire accurately across massive distances, fast transportation, easy ways to produce various poison gasses. There seems to be this idea that just because it isn’t exotic it won’t kill you.
In sum: I fully expect that this plan reduces the chances of long term survival of life, while also massively increasing the probability of artificial hell.
Something I would really, really like anti-AI communities to consider is that regulations/activism/etc. aimed at harming AI development and slowing AI timelines do not have equal effects on all parties. Specifically, I argue that the time until the CCP develops CCP-aligned AI is almost invariant, whilst the time until Blender reaches sentience potentially varies greatly.
I have much, much more hope for likeable AI coming from open source software rooted in a desire to help people and make their lives better than from (worst case scenario) malicious government actors, or (second worst) corporate advertisers.
I want to minimize, first, the risk of building Zon-Kuthon; then, Asmodeus. Once you’re certain you’ve solved A and B, you can worry about not building Rovagug. I am extremely perturbed by the AI alignment community whenever I see any sort of talk of preventing the world being destroyed that moves any significant probability mass from Rovagug to Asmodeus. A sensible AI alignment community would not bother discussing Rovagug yet, and would especially not imply that the end of the world is the worst case scenario.
However, these hypotheses are directly contradicted by the results of the “win-win” condition, where participants were given the ability to either give to their own side or remove money from the opposition.
I would argue this is a simple “stealing is bad” heuristic. I would also generally expect subtraction to anger the enemy and cause them to stab more kittens.
Republicans are the party of the rich, and they get so much money that an extra $1,000,000 won’t help them.
Isn’t this a factual error?
With the standard warning that this is just my impression and is in no way guaranteed to be actually good advice:
My largest complaint is that the word-to-content ratio is too high. As an example:
It was an hour and a half trip for this guy when he flew and pushed himself, and about two and a half at what he thought was a comfortable pace.
Could drop one half and be almost as informative. Just:
This guy could’ve made the trip within a few hours at a comfortable pace.
Would’ve been fine. It can be inferred that he can go faster if that’s a comfortable pace, and even the flying can be inferred from surrounding statements.
There’s also no need to be super specific about these things if it’s not going to be plot relevant. Even if the exact number is plot relevant, I doubt many people are going to remember such details after reading a few more chapters. Focus on what’s important. Particularly, focus on what’s important to the character. Is his flight time really what matters most to him right now? A lot of characterization can flow from what a character does and doesn’t pay attention to. Dumping the entire sensorium on the reader, while technically accurate, leaves a shallow impression of the character.
I would argue that good writing tends to condense data as far as it will go, so long as the jargon count is kept at a subdued level.
Zelensky clearly stated at the Munich Security Conference that if the west didn’t give him guarantees (which he wasn’t going to get), he would withdraw from the Budapest Memorandum. This is a declared intent to develop nuclear weapons, and is neither in doubt nor vague in meaning.
Russia also accuses Ukraine of developing bioweapons. All of the evidence for this comes through Russia, so I wouldn’t expect someone who didn’t already believe Russia’s narrative to believe said accusations, but in any case, bioweapons development is held by Russia to be among the primary justifications of the invasion.
One thing I’ve been noting, which seems like the same concept as this, is:
Most “alignment” problems are caused by an imbalance between the size of the intellect and the size of the desire. Bad things happen when you throw ten thousand INT at the objective: [produce ten paperclips].
Intelligent actors should only ever be asked intelligent questions. Anything less leads at best to boredom, at worst, insanity.
A: Because Ukraine was shelling Donbas.
B: Because Ukraine was threatening to invade and conquer Crimea.
C: Because Ukraine was developing/threatening to develop weapons of mass destruction.
D: Because Russia is convinced that the west is out to get it, and the Russian people desire victory over the west, serving as a show of force and thus potential deterrent to future hostile actions.
E: Because Ukraine cut off Crimea’s water supply, and other such nettling actions.
2: No.
If an AI can do most things a human can do (which is achievable using neurons apparently because that’s what we’re made of)
This implies that humans are deep learning algorithms. The assertion is surprising, so I asked for confirmation that that’s what’s being said, and if so, on what basis.
3: I’m not asking what makes intelligent AI dangerous. I’m asking why people expect deep learning specifically to become (far more) intelligent (than it currently is). Specifically within that question, adding parameters to your model vastly increases memory use. If I understand the situation correctly, if GPT just keeps increasing the number of parameters, GPT-5 or -6 or so will require more memory than exists on the planet, and assuming someone built it anyway, I still expect it to be unable to wash dishes. Even assuming you have the memory, running the training would take longer than human history on modern hardware. Even assuming deep learning “works” in the mathematical sense, that doesn’t make it a viable path to high levels of intelligence in the near future.
Given doom in thirty years, or given that researching deep learning is dangerous, it should be the case that this problem either never existed to begin with and I’m misunderstanding something, or is easily bypassed by some cute trick, or means we’re going to need a lot better hardware in the near future.
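For concreteness, here is a minimal back-of-envelope sketch of the memory-scaling arithmetic I have in mind. Everything in it is an illustrative assumption, not a real figure for any future model: 2 bytes per parameter (fp16 weights only, ignoring optimizer state and activations), a starting point of 175 billion parameters (GPT-3’s published size), and a hypothetical 100x parameter jump per generation.

```python
# Back-of-envelope estimate of memory needed just to store model weights.
# Assumptions (illustrative only): 2 bytes per parameter, 175e9 starting
# parameters, and a hypothetical 100x parameter jump per generation.

BYTES_PER_PARAM = 2  # fp16 weights, ignoring optimizer state and activations

def weight_memory_terabytes(num_params: float) -> float:
    """Terabytes required to hold the weights alone, under the assumptions above."""
    return num_params * BYTES_PER_PARAM / 1e12

params = 175e9  # GPT-3 scale, as a starting point
for label in ("gen 0 (GPT-3 scale)", "gen +1", "gen +2", "gen +3"):
    print(f"{label}: ~{params:.0e} params -> ~{weight_memory_terabytes(params):,.2f} TB")
    params *= 100  # hypothetical jump per generation
```

Even under these crude assumptions, a few hypothetical 100x generations push weight storage alone from under a terabyte into the hundreds of petabytes, before counting training compute; whether that actually outruns planetary memory depends on growth and hardware assumptions I haven’t verified here.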
Moldova isn’t the only plausible option or anything; my reasoning is just that it has good land, the population is low enough that they could be bought out at a price that isn’t too absurd, they’re relatively poor and could use the money, it’s a relatively new country with a culture similar to a number of other countries, and it’s squarely in western territory and thus shouldn’t be much of a source of conflict.