Would be cool if one of the items was a nugget of “computation fuel” that could be used to allow a robot’s register machine to run for extra steps. Or maybe just items whose proximity gives a robot extra computation steps. That way you could illustrate situations involving robots with quantitatively different levels of “intelligence”. Could lead to some interesting strategies if you run programming competitions on this too, like worker robots carrying fuel to a mother brain.
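A minimal sketch of the proximity version, assuming a grid world; all the names and numbers here are made up for illustration:

```python
from dataclasses import dataclass

BASE_STEPS = 100   # steps every robot's register machine may run per tick
FUEL_BONUS = 50    # extra steps granted per fuel item within FUEL_RADIUS
FUEL_RADIUS = 3    # Chebyshev radius within which a fuel item counts

@dataclass
class Robot:
    pos: tuple  # (x, y) grid position; registers, program counter etc. would live here too

def step_budget(robot: Robot, fuel_positions: list) -> int:
    """Base budget plus a bonus for every fuel item near the robot."""
    def near(f):
        return max(abs(robot.pos[0] - f[0]), abs(robot.pos[1] - f[1])) <= FUEL_RADIUS
    return BASE_STEPS + FUEL_BONUS * sum(near(f) for f in fuel_positions)

# A robot at (0, 0) with fuel at (2, 1) and far-away fuel at (10, 10)
# gets 100 + 50 = 150 steps this tick:
assert step_budget(Robot(pos=(0, 0)), [(2, 1), (10, 10)]) == 150
```

With a budget function like this, "carry fuel to the mother brain" falls out naturally as a strategy: cheap worker robots cluster fuel items around the robot running the expensive program.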
Do you have thoughts on whether it’s safe for a beginner to lift weights without in-person instruction? From what I hear, even small mistakes in form can cause injury, especially when adding weight as quickly as a beginner will. Is it worth the risk to try to learn good form from only books and videos? My friend attempted Starting Strength for a month, got a pain in their knee and had to quit, and hasn’t been able to get back into it because finding personal instruction is a huge hassle, especially if one isn’t willing to pay a lot. Should they try again by themselves and just study those books and videos extra closely?
I can never understand why the idea that replicating systems might just never expand past small islands of clement circumstances (like, say, the surface of the Earth) gets so readily dismissed in these parts.
People in these parts don’t necessarily have in mind the spread of biological replicators. Spreading almost any kind of computing machinery would be good enough to count, because it could host simulations of humans or other worthwhile intelligent life.
(Note that the question of whether simulated people are actually conscious is not that relevant to the question of whether this kind of expansion will happen. What’s relevant is whether the relevant decision makers would come to think they are conscious. For example, even if simulated people aren’t actually conscious, after interacting all their lives with simulated people integrated into society, most non-simulated people would probably think they are conscious, and thus worth sending out to colonize space. And the simulated people themselves will definitely think they are conscious.)
Anything that’s just a trivial inconvenience definitely won’t protect you from the NSA and probably won’t even protect you from random internet people looking to ruin your life/reputation for fun.
The general impression I got from reading a lot of what gets posted in the various tulpa communities is that it is, at its core, yet another group of people who gain status within the group by trying to impress each other with how different or special their situation is.
Used to be, when I read stories about “astral projection” I thought people were just imagining stuff really hard and then making up exaggerated stories to impress each other. Then I found out it’s basically the same thing as wake-initiated lucid dreaming, which is a very specific kind of weird and powerful experience that’s definitely not just “imagining things really hard”. I still think people make up stories about astral projection to impress each other, but the basic experience is nevertheless something real and unique. The same thing is probably happening with tulpas.
Please consider sending some Bitcoins to SI at address 1HUrNJfVFwQkbuMXwiPxSQcpyr3ktn1wc9
https://blockchain.info/address/1HUrNJfVFwQkbuMXwiPxSQcpyr3ktn1wc9:
Total Received 343.91998333 BTC
Final Balance 5.55939055 BTC
Thanks, this looks to be a good summary of what I’m not missing :)
In a way every game is a rationality game, because in almost every game you have to discover things, predict things, etc. In another way almost no game is one, because domain-specific strategies and skills win out over general ones.
One idea is based on the claim that general rationality skills matter more when it’s a fresh new game that nobody has played yet, since then you have to use your general thinking skills to reason about things in the game and to invent game-specific strategies. So what if there were “mystery game” competitions where the organizers invented a new set of games for every event and only revealed them a set time before the games started? I don’t know of any that exist, but it would be interesting to see what kinds of skills would lead to consistent winning in these competitions.
There are various other ways you could think of to make it so that the game varies constantly and there’s no way to accumulate game-specific skills, only general ones like quick thinking, teamwork etc. Playing in a different physical place every match like in HPMoR’s battles is one.
Whether something counts as signaling can be pinned to the motivations of the person taking the course, the motivations of the people offering the course, or the motivations of employers hiring its graduates. “Motivation”, in turn, can mean either the conscious reasons people have in their minds, or the answer to the counterfactual question of whether the person would still have taken the course if it were otherwise identical but provided no signaling benefit. And since there can be multiple motivations, you can count something as signaling if signaling is one of the motivations, or only if signaling is the sole motivation.
By picking the right combination from the above, you can argue about almost anything that it’s not signaling, or that it is, for that matter.
if someone wants to demonstrate some innate or pre-existing quality (such as mathematical ability), they participate in a relevant contest and this is signalling.
If I wanted to defend competitions from accusations of signaling like you defended education, I could easily come up with lots of arguments: people doing them to challenge themselves, experience teamwork, test their limits and meet like-minded people; the fact that lots of people participate in competitions even though they know they don’t have a serious chance of coming out on top; etc.
OSHA rules would still require that the crane operator pass the crane-related training.
(Sure, but I meant that only truck drivers would be accepted into the crane operator training in the first place, because they would be more likely to pass it and perform well afterward.)
Clearly, a training course for, say, a truck driver, is not signalling, but exactly what it says on the can.
If there was a glut of trained truck drivers on the market and someone needed to recruit new crane operators, they could choose to recruit only truck drivers because having passed the truck driving course would signal that you can learn to operate heavy machinery reliably, even if nothing you learned in the truck driving course was of any value in operating cranes.
I’m pretty sure he is calling into question the claim that “it was dangerous to question that the existence of God could be proven through reason”. The view that God’s existence could not be proven through reason was a very common belief throughout most of the Middle Ages, and as far as I can tell it was held with very little danger.
...
This doctrine was supposed (though we don’t know if correctly) to be a doctrine that although reason dictated truths contrary to faith, people are obliged to believe on Faith anyway. It was suppressed.
I agreed with this at first, but actually, no. Belief in the supernatural doesn’t require belief in gods, spirits or any non-human agents. You could just believe that humans have some supernatural abilities, like reading each other’s minds. When trying to explain these abilities, only reductionists will conclude that there’s some third-party agent like a simulator setting things up. Non-reductionists will just accept that being able to read minds is part of how this ontologically fundamental mind stuff works.
Actually, because the zombie uploads are capable of all the same reasoning as M_P, they will figure out that they’re not conscious, and replace themselves with biological humans.
On the other hand, maybe they’ll discover that biological humans aren’t conscious either, they just say they are for reasons that are causally isomorphic to the reasons for which the uploads initially thought they were conscious, and then they’ll set out to find a substrate that really allows for consciousness.
Not polyphasic but
Thanks for the link. I don’t really see creepy cult isolation in that discussion, and I think most people wouldn’t, but that’s just my intuitive judgment.
Really? Links? A lot of stuff here is a bit too culty for my tastes, or just embarrassing, but “cutting family ties with nonrational family members”?? I haven’t been following LW closely for a while now, so I may have missed it, but that doesn’t sound accurate.
Reading something for 6 hours spread across 6 days will result in more insight than reading it for 12 hours straight. The better sleep you get, the stronger this effect is.* So: do things in parallel instead of serially if possible, and take care of your sleep.
* These are just guesses based on my personal experience.
When people talk about the command “maximize paperclip production” leading to the AI tiling the universe with paperclips, I interpret it to mean a scenario where first a programmer comes up with a shoddy formalization of paperclip maximization that he thinks is safe but actually isn’t, and then writes that formalization into the AI. So at no point does the AI actually have to try to interpret a natural language command. Genie analogies are definitely confusing and bad to use here because genies do take commands in English.
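A toy sketch of what such a shoddy formalization might look like (everything here is hypothetical, just to make the structure concrete): the programmer never hands the AI the English sentence, only a scoring function like this one, which is silent about everything except the paperclip count.

```python
def utility(world_state: dict) -> float:
    """Hypothetical 'shoddy formalization': strictly more paperclips is
    strictly better. Nothing here bounds the count or mentions anything
    else humans care about."""
    return world_state.get("paperclips", 0)

# An optimizer pointed at this function prefers any state with more
# paperclips, including states where everything else has been converted
# into paperclips:
assert utility({"paperclips": 10**30, "humans": 0}) \
     > utility({"paperclips": 100, "humans": 7_000_000_000})
```

The failure lives entirely in the formal objective, so no natural-language interpretation step ever happens.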
Omega appears and tells you you’ve been randomly selected to have the opportunity to take or leave a randomly chosen bet.
Agreed. Of course the thing about means and ends is that you can always frame the situation in two opposing ways:
Way 1: Eating factory farmed meat and not worrying about it in order to better focus on third world donations is the same as making the following means-end tradeoff:
Means: Torturing animals
End: Saving lives in the third world
Way 2: Avoiding meat in order to not support factory farming, despite the fact that such avoidance imposes costs* that lessen the effectiveness of your EA activities, is the same as making the following means-end tradeoff:
Means: Letting people in the third world die
End: Saving animals from being tortured
So which ends don’t justify which means?
… Of course for the majority of people it’s more like:
Means: Torturing animals
End: Access to certain tasty foods
And
Means: Depriving yourself of certain tasty foods
End: Saving animals from being tortured
* It’s not clear that it does, but that’s what the original post assumes, so for the sake of the example I’m going with it.