I see trading bots as a not unlikely source of human-indifferent AI, but I don’t see how a transaction tax would help. Penalizing high-frequency traders just incentivizes smarter trades over faster trades.
From my experience doing group study for classes, there don’t seem to be any major advantages or disadvantages for pairs vs small groups. The most relevant factor is how many eyeballs are looking at something, but even that isn’t a huge effect. Both are more effective than working alone (as the article concludes).
For a lot of things, getting together IRL looks like it would work best, but the logistics there can be difficult. For people who have Lesswrong meetups nearby, those are an obvious way to potentially coordinate meatspace study groups.
Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host’s, and thus counts toward its power over reality.
Ah. I see what you mean. That makes sense.
As someone with personal experience with a tulpa, I agree with most of this.
I estimate its ontological status to be similar to a video game NPC, recurring dream character, or schizophrenic hallucination.
I agree with the last two, but I think a video game NPC has a different ontological status than any of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how “well-realized” they are.
I estimate a well-developed tulpa’s moral status to be similar to that of a newborn infant, late-stage Alzheimer’s victim, dolphin, or beloved family pet dog.
I have no idea what a tulpa’s moral status is, besides not less than a fictional character and not more than a typical human.
I estimate its power over reality to be similar to a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.
I would expect most of them to have about the same intelligence, rather than lower intelligence.
Or even a non-category theorist?
He didn’t actually synthesize a whole living thing. He synthesized a genome and put it into a cell. There’s still a lot of chemical machinery we don’t understand yet.
It doesn’t directly relate. I’m currently learning Korean and don’t want to try learning multiple languages at the same time. Also, I want a broader experience with languages before I try to make my own.
The mark of a great man is one who knows when to set aside the important things in order to accomplish the vital ones.
-- Tillaume, The Alloy of Law
In local parlance, “terminal” values are a decision maker’s ultimate values, the things they consider ends in themselves.
A decision maker should never want to change their terminal values.
For example, if a being has “wanting to be a music star” as a terminal value, then it should adopt “wanting to make music” as an instrumental value.
For humans, how these values feel psychologically is a different question from whether they are terminal or not.
See here for more information.
We’re curious how you’ve used information theory in RPGs. It sounds like there are some interesting stories there.
It’s much more like choosing not to have kids when you’re in a situation where those kids’ lives will be horrible.
I think the easiest way to steelman the loneliness problem presented by the given scenario is to just have a third person, let’s say Jane, who stays around regardless of whether you kill Frank or not.
They could probably get a decent amount from fusing light elements as well.
I would have liked to see a proper DefectBot as well; however, contestant K defected every time, and only one of the bots that cooperated with it would have defected against DefectBot, so it makes a fairly close proxy.
I like this plan. I’d be willing to run it, unless AlexMennan wants to.
Several of the bots using simulation also used multithreading to create timer processes so they could quit and defect against anyone who took too long to simulate.
I was also thinking of doing something similar, which was to loop infinitely if the opposing program’s code was small enough, since that probably meant something more complex was simulating it against something.
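For concreteness, both tricks look roughly like this in Racket (a simplified sketch with made-up names and thresholds, not code from any actual entry):

    #lang racket

    ;; Timer trick: run the opponent simulation in a worker thread and
    ;; defect against anyone who takes too long to simulate.
    (define (simulate-with-timeout run-simulation timeout-seconds)
      (define results (make-channel))
      (define worker (thread (lambda () (channel-put results (run-simulation)))))
      (define move (sync/timeout timeout-seconds results))
      (cond
        [move move]                       ; simulation finished in time
        [else (kill-thread worker) 'D]))  ; too slow: quit and defect

    ;; Stalling trick: if the opponent's source is suspiciously small, we're
    ;; probably inside someone else's simulation, so loop forever and eat
    ;; the simulator's time budget.
    (define (maybe-stall opponent-source)
      (when (< (string-length (format "~s" opponent-source)) 100)
        (let loop () (loop))))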
I checked the behavior of all the bots that cooperated with K, and all but two (T and Q) would have always cooperated with a DefectBot. Specifically, this DefectBot:
(lambda (opp) 'D)
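The check itself only needs a small harness along these lines (a simplified reconstruction, not necessarily the exact code used):

    #lang racket

    ;; Eval a bot's quoted source and hand it DefectBot's source to see
    ;; which move it returns.
    (define defect-bot-source '(lambda (opp) 'D))

    (define (move-against-defectbot bot-source)
      ((eval bot-source (make-base-namespace)) defect-bot-source))

    ;; e.g. (move-against-defectbot defect-bot-source) => 'D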
Sometimes they cooperated for different reasons. For example, U cooperates with K because K’s source contains “quine”, while it cooperates with DefectBot because DefectBot’s source contains none of “quine”, “eval”, or “thread”.
Q, of course, acts randomly. T is the only one that doesn’t cooperate with DefectBot but was tricked by K into cooperating, though I’m having trouble figuring out why, because I’m not sure what T is doing.
Anyway, it looks like K is a reasonable proxy for how DefectBot would have done here.
Of course, many works traditionally labeled fantasy also prefer to explore the consequences of worlds with different physics (HPMoR, for example). I’ve heard this called “Hard fantasy”.
I find that the internet is generally better indexed, though I suppose that if you can afford it, a large enough private library could give more easily accessible depth. I also suspect that, like me, most people here with many more books than they have read have libraries that are composed mostly of fiction, which is less useful for research purposes.
Unless you’re using timeless decision theory, if I understand TDT correctly (which I very well might not). In that case, the calculations by Zack show the amount of causal entanglement for which cooperation is a good choice. That is, P(others cooperate | I cooperate) and P(others defect | I defect) should be more than 0.8 for cooperation to be a good idea.
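In expected-value terms, that threshold comes from a comparison like this (a generic sketch; the actual payoff numbers behind the 0.8 figure are in Zack’s calculation, not rederived here):

$$
p\,U_{CC} + (1-p)\,U_{CD} \;>\; q\,U_{DC} + (1-q)\,U_{DD},
$$

where $p = P(\text{others cooperate} \mid \text{I cooperate})$, $q = P(\text{others cooperate} \mid \text{I defect})$, and $U_{XY}$ is my payoff when I play $X$ and the others play $Y$. Per Zack’s numbers, the inequality requires $p$ and $1-q$ to both be above roughly 0.8.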
I do not think my decisions have that level of causal entanglement with other humans, so I defected.
Though, I just realized, I should have been basing my decision on my entanglement with lesswrong survey takers, which is probably substantially higher. Oh well.