That’s probably the only “military secret” that really matters.
The soldiers guarding the outer wall and the Citadel treasurer that pays their overtime wages would beg to differ.
Yes, that is the price she got for giving the information.
I think that people in AI who are very concerned about AI risk tend to view loss-of-control risk as very high, while viewing the risk of eternal authoritarianism as much lower.
I’m not sure how many people see the risk of eternal authoritarianism as much lower and how many people see it as being suppressed by the higher probability of loss of control[1]. Or in Bayesian terms:
P(eternal authoritarianism) = P(eternal authoritarianism | control is maintained) × P(control is maintained), treating the chance of eternal authoritarianism without control as negligible.
Both sides may agree that P(eternal authoritarianism | control is maintained) is high, only disagreeing on P(control is maintained).
Here, ‘control’ is short for all forms of ensuring AI alignment to humans, whether to all of humanity, some group, or a single person.
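As a toy illustration of that decomposition (all numbers are made up, not anyone’s actual estimates):

```python
# Toy illustration of the decomposition above; every number is made up.
# Suppose both sides agree the conditional risk is substantial:
p_auth_given_control = 0.5  # P(eternal authoritarianism | control is maintained)

# They disagree only on how likely control is to be maintained:
p_control_optimist = 0.8   # expects alignment/control efforts to succeed
p_control_pessimist = 0.1  # expects loss of control

print(p_auth_given_control * p_control_optimist)   # 0.4  -> authoritarianism looms large
print(p_auth_given_control * p_control_pessimist)  # 0.05 -> loss of control dominates instead
```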
As far as I understand, “a photon in a medium” is a quasiparticle. Actual photons always travel at the speed of light; the “photon” that travels through glass at a lower speed is the sum of an incredibly complicated process that cancels out perfectly into something that can be described as one or several particles if you squint a little, because the energy of the electromagnetic field excitation can’t be absorbed by the transparent material and because of conservation of momentum.
The model of the photon “passing by atoms and plucking them” is a lie to children, an attempt to describe a quantum phenomenon in classically comprehensible terms. As such, what “the momentum of the quasiparticle” is depends on what you consider to be part of the quasiparticle, or which parts of the quasiparticle(s) you measure when doing an experiment.
Specifically, for the mirror-in-a-liquid: when light hits the liquid it is refracted. That refraction makes the path of the light steeper, which means the mirror has to reflect momentum that is entering at a steeper angle, and so the momentum the mirror measures is multiplied by the refractive index. At the quasiparticle level, when hitting the liquid interface, the light interacts with the quasiparticle phonon [sic] field of the liquid, exchanging momentum to redirect the quasiparticle light, and the mirror has to reflect both the quasiparticle light and the phonon field, resulting in the phonons being included in “the momentum of the light”.
However, for the light-through-a-glass-fiber, you are measuring the momentum of the phonons as part of the not-light: the phonons belong to the medium, so their momentum shows up as the glass fiber getting nudged by the light beam.
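For reference (hedged as my own recollection of the standard formulas, not something from the post), the two competing expressions for the momentum of light in a medium of refractive index n are the Minkowski and Abraham forms; the mirror-in-a-liquid result lands on the Minkowski side and the nudged-fiber result on the Abraham side:

```latex
p_{\text{Minkowski}} = \frac{nE}{c}, \qquad p_{\text{Abraham}} = \frac{E}{nc}
```

with E the energy of the light pulse.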
I’m not sure how this works out in rigorous calculation, but this is my intuition pump for a [1]-ish answer.
Though compare and contrast Dune’s test of the gom jabbar:
You’ve heard of animals chewing off a leg to escape a trap? There’s an animal kind of trick. A human would remain in the trap, endure the pain, feigning death that he might kill the trapper and remove a threat to his kind.
Even if you are being eaten, it may be right to endure it so that you have an opportunity to do more damage later.
You’re suggesting angry comments as an alternative for mass retributive downvoting. That easily implies mass retributive angry comments.
As for policing against systemic bias in policing, that’s a difficult problem that society struggles with in many different areas because people can be good at excusing their biases. What if one of the generals genuinely makes a comment people disagree with? How can you determine to what extent people’s choice to downvote was due to an unauthorized motivation?
It seems hard to police without acting draconically.
Just check their profile for posts that do deserve it that you were previously unaware of. You can even throw a few upvotes at their well-written comments. It’s not brigading, it’s just a little systemic bias in your duties as a user with upvote-downvote authority.
Are you trying to prime people to harass the generals?
Besides, it’s not mass downvoting, it’s just that the increased attention to their accounts revealed a bunch of poorly written comments that people genuinely disagree with and happen to independently decide are worthy of a downvote :)
“why not just” is a standard phrase for saying that what you’re proposing would be simple or come naturally if you try. Combined with the rest of the comment talking about straightforwardness and how small the word count is, it does give off a somewhat combative vibe.
I agree with your suggestion, and it is good to hear that you don’t intend to imply that it is simple, so maybe it would be worth editing the original comment to prevent miscommunication for people who haven’t read it yet. For the time being I’ve strong-agreed with your comment to save it from a negativity snowball effect.
No. I would estimate that there are fewer rich people willing to sacrifice their health for more income than there are poor people willing to do the same. Rich people typically take more holidays, report higher job satisfaction, suffer fewer stress-related ailments, and spend more time and money on luxuries rather than reinvesting into their careers (including paying basic cost of living to be employable).
And not for lack of options. CEOs can get involved with their companies and provide useful labor by putting their nose to the grindstone, or kowtow to investors for growth opportunities. Investors can put a lot of labor into finding financial advisors and engaging in corporate intrigue to get an advantage on the market. Celebrities can work on their performance and their image to become more popular and get bigger signing deals.
Perhaps to clarify, “the grind” isn’t absolute economic value or hours worked, it’s working so hard that it cannibalizes other things you value.
There are rich people pushing themselves to work 60+ hour weeks, struggling to keep a smile on their face while people insult and demean them. And there are poor people who live as happy ascetics, enjoying the company of their fellows and eating simple meals, choosing to work few hours even if it means forgoing many things the middle class would call necessities.
More rich people than poor people choose to give up the grind. It’s tougher to accept a specific form of suffering if you see that 90% of your peers are able to solve it with work than if only 1% of your peers can. Right now, accepting your own mortality is normal even for rich people, but as soon as the richest 20% can live forever, suddenly everyone who isn’t immortal will feel poor for lacking immortality. Maybe some will still embrace death rather than endure professional abuse 60 hours per week, but that suddenly becomes a much harder decision.
My first-order guess for the mental definition of poverty: >X% of the population in your polity is able to afford a solution to various intense forms of suffering, but you can’t (where X seems to be around 50-80%).
A rich 5th-century BCE Athenian was a man for whom any illness was very likely fatal, who lost half his children between the ages of ½ and 18, whose teeth were slowly ground down to the roots by milldust, who lived one bad harvest away from famine, and more. But now we characterize poor people by their desperate struggle to escape the things that rich Athenian accepted as normal and inevitable.
I can see two ways UBI may solve poverty:
1. Given UBI makes up a sufficiently high fraction of all income, the poorest person does not feel like their wealth differs enough from that of everyone else to conceptualize their suffering as poverty.
2. Given sufficiently advanced technology, the UBI is enough to solve all major forms of suffering, from death to emotional abuse trauma, and even those with less stuff don’t feel like they’re missing out.
Option (1) may be possible already, especially in more laid-back cultures, but if not then UBI won’t solve poverty yet.
All of that said, why is this the standard you choose to measure it by? Even if UBI doesn’t solve poverty, it can relieve suffering. And while other anxieties might take the place of the old ones on the hedonic treadmill, it does feel like life is better when sources of suffering are taken away. If you offered me $5000 on the condition that, whenever I get depressed, a child gets eaten alive by wolves before my eyes, I would say no; so apparently I do prefer my worst days over a hunter-gatherer’s worst days.
Quality of life is what matters. UBI can take out financial anxiety and poverty traps, and likely improve exercise and physical health and nutrition and social satisfaction (assuming the economy doesn’t collapse). Those are all things anyone could recognize as valuable, even if they have more urgent matters themselves.
You seem to approach the possible existence of a copy as a premise, with the question being whether that copy is you. However, what if we reverse that? Given we define ‘a copy of you’ as another one of you, how certain is it that a copy of you could be made given our physics? What feats of technology are necessary to make that copy?
Also, what would we need to do to verify that a claimed copy is an actual copy? If I run ChatGPT-8 and ask it to behave like you would behave based on your brain scan and it manages to get 100% fidelity in all tests you can think of, is it a copy of you? Is a copy of you inside of it? If not, in what ways does the computation that determines an alleged copy’s behavior have to match your native neuronal computation for it to be or contain a copy? Can a binary computer get sufficient fidelity?
I’m fine with uploading to a copy of myself. I’m not as optimistic about a company with glowing reviews offering to upload/make a copy of me.
In the post you mention Epistemic Daddies, mostly describing them as sources that are deferred to for object-level information.
I’d say there is also a group of people who seek Epistemic Mommies. People looking for emotional assurance that they’re on the right path and their contribution to the field is meaningful; for assurance that making mistakes in reasoning is okay; for someone to do the annoying chores of epistemic hygiene so they can make the big conclusions; for a push to celebrate their successes and show them off to others; etc.
Both are ultimately about deferring to others (Epistemic Parents?) for information, but Epistemic Mommies are deferred to for procedural information. GiveWell makes for a tempting Epistemic Daddy, but the LessWrong Sequences make for a tempting Epistemic Mommy.
Your advice is pretty applicable to both, but I feel like the generalization makes it easier to catch more examples of unhealthy epistemic deference.
I kind of… hard disagree?
Effective Samaritans can’t be a perfect utility inverse of Effective Altruists while keeping the labels of ‘human’, ‘rational’, or ‘sane’. Socialism isn’t the logical inverse of Libertarianism; both are different viewpoints on how to achieve the common goal of societal prosperity.
Effective Samaritans won’t sabotage an EA social experiment any more than Effective Altruists will sabotage an Effective Samaritan social experiment. If I received a letter from Givewell thanking me for my donation that was spent on sabotaging a socialist commune, I would be very confused—that’s what the CIA is for. I frankly don’t expect either the next charter city or the next socialist commune to produce a flourishing society, but I do expect both to give valuable information that would allow both movements and society in general to improve their world models.
Also, our priors are not entirely trapped. It can seem that way because true political conversion rarely happens in public, and often not even consciously, but people join or leave movements regularly when their internal threshold is passed. Effective Altruist/Samaritan forums will always defend Effective Altruism/Samaritanism as long as there is one EA/ES on earth, but if evidence (or social condemnation) builds up against it, people whose thresholds are reached will just leave. Of course the movement as a whole can also update, but not in the same way as individual members.
People do tend to be resistant to evidence that goes against their (political) beliefs, but that resistance gets easier to surmount the fewer people there are who are less fanatical about the movement than you. Active rationalist practices like doubt can also help get your priors unstuck.
So in a world with EAs and ESs living side by side, there would constantly be people switching from one movement to the other or vice versa. And as either ES or EA projects get eaten by a grue, one of these rates will be greater than the other until one or both have too few supporters to do much of anything. This may have already happened (at least for the precise “Effective Samaritan” ideology of rationalism/bayesianism + socialism + individual effective giving).
So I don’t think we need to resort to frequentism. Sure we can use frequentism to share the same scientific journal, but in terms of cooperation we can just all run our own experiments, share the data, and regularly doubt ourselves.
On third order, people who openly worry about X-Risk may get influenced by their environment, becoming less worried as a result of staying with a company whose culture denies X-Risk, which could eventually even cause them to contribute negatively to AI Safety. Preventing them from getting hired prevents this.
That sounds like a cross between learned helplessness and madman theory.
The madman theory angle is “If I don’t respond well to threats of negative outcomes, people (including myself) have no reason to threaten me”. The learned helplessness angle is “I’ve never been able to get good sets of tasks and threats, and trying to figure something out usually leads to more punishment, so why put in any effort?”
Combine the two and you get “Tasks with risks of negative outcomes? Ugh, no.”
With learned helplessness, the standard mechanism for (re)learning agency is being guided through a productive sequence by someone who can ensure the negative outcomes don’t happen, getting more and more control over the sequence each time until you can do it on your own, then adapting it to more and more environments.
Avoiding tasks with possible negative outcomes isn’t really feasible, so getting hands-on help with handling threat of negative consequences seems useful. Probably from a mental coach or psychologist.
The app doesn’t help people who struggle with setting reasonable tasks with reasonable rewards and punishments. Akrasia is an umbrella term for “something somewhere in the chain to actually getting to do things is stopping the process”, so it makes sense that one person’s “solution” to akrasia isn’t going to work for a lot of people.
I think it’s healthy to see these kinds of posts as procedural inspiration. As a reader it’s not about finding something that works for you, it’s about analysing the technique someone used to iterate on their first hint of a good idea until it became something that thoroughly helped them.
I’d say “fuck all the people who are harming nature” is black-red/rakdos’s view of white-green/selesnya. The “fuck X” attitude implies a certain passion that pure black would call wasted motion. Black is about power. It’s not adversarial per se, just mercenary/agentic. Meanwhile the judginess towards others is an admixture of white. Green is about appreciating what is, not endorsing or holding on to it.
Black’s view of green is “careless idiots, easy to take advantage of if you catch them by surprise”. When black meets green, black notices how the commune’s rules would allow someone to scam them out of all their cash, and how the charms they’re wearing cost a tenth as much to produce as what they paid for them.
Black-red/Rakdos’ view of green is “tree huggers, weirdly in love with nature rather than everything else you can care about”. When rakdos meets green they’re inspired to throw a party in the woods, concluding it’s kinda lame without a lightshow or a proper toilet, and leaving tons of garbage when they return home.
Black’s view of white-green/selesnya is “people who don’t seem to grasp the tragedy of the commons and who can get obnoxiously intrusive about it. Sure, nature can be nice, but it’s not worth wasting that much political capital on.” When black meets selesnya, it tries to find an angle by which selesnya can give it more power. Maybe an ecological development grant that has shoddy selection criteria, or a lopsided political deal.
Meanwhile black-green/golgari is “It is natural for people to be selfish. Everyone chooses themselves eventually, that’s an evolutionary given. I will make selfish choices and appreciate the world, as any sane person would”. It views selesnya as a grift, green as passive, black as self-absorbed, and rakdos as irrational.
I would say ecofascism is white-green-black/Abzan: the hard agency of black, the communal approach of white, and the appreciation of nature of green, but lacking the academic rigor of blue or the wild passion of red.
Fighting with tigers is red-green, or Gruul by MTG terminology. The passionate, anarchic struggle of nature red in tooth and claw. Using natural systems to stay alive even as it destroys is black-green, or Golgari. Rot, swarms, reckless consumption that overwhelms.
Pure green is a group of prehistoric humans sitting around a campfire sharing ghost stories and gazing at the stars. It’s a cave filled with handprints of hundreds of generations that came before. It’s cats lounging in a sunbeam or birds preening their feathers. It’s rabbits huddling up in their dens until the weather is better, it’s capybaras and monkeys in hot springs, and bears lazily going to hibernate. These have intelligible justifications, sure, but what do these animals experience while engaging in these activities?
Most vertebrates seem to have a sense of green, of relaxation and watching the world flow by. Physiologically, when humans and other animals relax, the sympathetic nervous system is suppressed and the parasympathetic system stays/becomes active. This causes the muscles to relax and makes the bloodstream prioritize digestion. For humans at least, stress and the pressure to find solutions right now decrease, and the mind wanders. Attention loses its focus but remains high-bandwidth. This green state is where people most often come up with ‘creative’ solutions that draw on a holistic understanding of the situation.
Green is the notion that you don’t have to strive towards anything, and the moment an animal does need to strive for something, it mixes in red, blue, black, or white, depending on what the situation calls for and on the animal’s evolved toolkit of colors.
The colors exist because no color on its own is viable. Green can’t keep you alive, and that’s okay, it isn’t meant to.
That doesn’t seem like a good idea. You’re ignoring long-term harms and benefits of the activity—otherwise cycling would be net positive—and you’re ignoring activity duration. People don’t commute to work by climbing Mount Everest or going skydiving.
The Demon King does not solely attack the Frozen Fortress to profit on prediction markets. The story tells us that the demons engage in regular large-scale attacks, large enough to serve as demon population control. There is no indication that these attacks decreased in size when they were accompanied with market manipulation (and if they did, that would be a win in and of itself).
So the prediction market’s counterfactual is not that the Demon King’s forces don’t attack, but that they attack at an indeterminate time with the same approximate frequency and strength. By letting the Demon King buy and profit from “demon attack on day X” shares, the Circular Citadel learns with decently high probability when these attacks take place and can allocate its resources more effectively. Hire mercenaries on days the probability is above 90%, focus on training and recruitment on days of low-but-typical probability, etc.
This ability to allocate resources more efficiently has value, which is why the Heroine organized the prediction market in the first place. The only thing that doesn’t go according to the Heroine’s liking is that the Circular Citadel buys that information from the Demon King rather than from ‘the invisible hand of the market’.
The Demon King will only sell the information once she thinks doing so is in her best interests, but that is not the same as it being bad for the Circular Citadel. Especially considering the Circular Citadel doesn’t even have to pay the full cost of the information—everyone who bets is also paying.
It is very possible that the Demon King and the Circular Citadel both profit from the prediction market existing, while the demon ground forces and naive prediction market bettors lose.
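A minimal sketch of the resource-allocation logic described above, with hypothetical costs and thresholds (none of these numbers come from the story):

```python
# Hypothetical decision rule for the Circular Citadel, driven by the
# market-implied attack probability. All numbers are made up for illustration.

MERCENARY_COST = 100            # cost of hiring mercenaries for one day
UNDEFENDED_ATTACK_LOSS = 1000   # extra expected loss if an attack arrives without mercenaries

def plan_day(p_attack: float) -> str:
    """Pick whichever option is cheaper in expectation for a single day."""
    expected_loss_without_mercenaries = p_attack * UNDEFENDED_ATTACK_LOSS
    if expected_loss_without_mercenaries > MERCENARY_COST:
        return "hire mercenaries"
    return "focus on training and recruitment"

for p in (0.02, 0.30, 0.95):
    print(f"P(attack) = {p:.2f}: {plan_day(p)}")
```

The exact threshold only depends on the cost ratio; the point is that even information bought from the Demon King lets the Citadel spend its defense budget where it matters.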