No hypothesis is a prefix of another hypothesis.
Peter_de_Blanc
Shannon is a network hub. I spent some time at her previous house and made a lot of connections, including my current partners.
What happens when an antineutron interacts with a proton?
I now realise you might be asking “how does this demonstrate hyperbolic, as opposed to exponential, discounting”, which might be a valid point, but hyperbolic discounting does lead to discounting the future too heavily, so the player’s choices do sort of make sense.
That is what I was wondering. Actually, exponential discounting values the (sufficiently distant) future less than hyperbolic discounting. Whether this is too heavy depends on your parameters (unless you think that any discounting is bad).
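To make that concrete, here's a quick numeric sketch comparing the standard exponential discount factor e^(−kt) with the standard hyperbolic factor 1/(1 + kt). The parameter values (k = 0.05 and k = 0.5) are just illustrative choices picked so the curves cross:

```python
import math

def exponential(t, k=0.05):
    # standard exponential discount factor: e^(-k t)
    return math.exp(-k * t)

def hyperbolic(t, k=0.5):
    # standard hyperbolic discount factor: 1 / (1 + k t)
    return 1.0 / (1.0 + k * t)

for t in [1, 10, 100]:
    print(t, round(exponential(t), 4), round(hyperbolic(t), 4))
# at t = 1 the exponential discounter values the reward more (0.9512 vs 0.6667),
# but at t = 100 it values it less (0.0067 vs 0.0196)
```

So with these parameters the exponential discounter weights the near future more heavily, but the sufficiently distant future less, than the hyperbolic one.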
Another player with Hyperbolic Discounting went further: he treated cities (any city near him) while carrying 5 red city cards in his hand, pointing out, in response to entreaties to cure red, that red wasn’t much of an issue right now.
How does this demonstrate hyperbolic discounting?
What’s special about a mosquito is that it drinks blood.
Phil originally said this:
My point was that vampires were by definition not real—or at least, not understandable—because any time we found something real and understandable that met the definition of a vampire, we would change the definition to exclude it.
Note Phil’s use of the word “because” here. Phil is claiming that if vampires weren’t unreal-by-definition, then the audience would not have changed their definition whenever provided with a real example of a vampire as defined. It follows that the original definition would have been acceptable had it been augmented with the “not-real” requirement, and so this is the claim I was responding to with the unreal mosquito example.
I understand that Phil was not suggesting that all non-real things are vampires. That’s why my example was a mosquito that isn’t real, rather than, say, a Toyota that isn’t real.
My point was that vampires were by definition not real
So according to you, a mosquito that isn’t real is a vampire?
My fencing coach emphasizes modeling your opponent more accurately and setting up situations where you control when stuff happens. Both of these skills can substitute somewhat for having faster reflexes.
Sounds like you should do more Tae Kwon Do.
This argument does not show that.
I still don’t see why you would want to transform probabilities using a sigmoidal function. It seems unnatural to apply a sigmoidal function to something in the domain [0, 1] rather than the domain R. You would be reducing the range of possible values. The first sigmoidal function I think of is the logistic function. If you used that, then 0 would be transformed into 1⁄2.
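A two-line sketch of that point, assuming the standard logistic function σ(x) = 1/(1 + e^(−x)):

```python
import math

def logistic(x):
    # standard logistic sigmoid: maps all of R onto (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(logistic(0.0))  # a probability of 0 maps to 0.5
print(logistic(1.0))  # a probability of 1 maps to ~0.731
```

So feeding probabilities in [0, 1] through the logistic function compresses them into roughly [0.5, 0.731], which is why the transformation seems unnatural on that domain.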
I have no idea how something like this could be a standard “game design” thing to do, so I think we must not be understanding Chimera correctly.
The standard “game design” thing to do would be to push the probabilities through a sigmoid function (to reward correct choices much more often than not, as well as punish incorrect choices more often than not).
I don’t understand. You’re applying a sigmoid function to probabilities… what are you doing with the resulting numbers?
The setting in my paper allows you to have any finite amount of background knowledge.
There are robots that look like humans, but if you want an upload to experience it as a human body, you would want it to be structured like a human body on the inside too, e.g. by having the same set of muscles.
Evolution.
You can’t use the mind that came up with your preferences if no such mind exists. That’s my point.
Why wouldn’t I just discard the preferences, and use the mind that came up with them to generate entirely new preferences?
What makes you think a mind came up with them?
There was a specific set of algorithms that got me thinking about this topic, but now that I’m thinking about the topic I’d like to look at more stuff. I would proceed by identifying spaces of policies within a domain, and then looking for learning algorithms that deal with those sorts of spaces. For sequential decision-making problems in simple settings, dynamic Bayesian networks can be used both as models of an agent’s environment and as action policies.
I’d be interested in talking. You can e-mail me at peter@spaceandgames.com.
I’m really excited about software similar to Anki, but with task-specialized user interfaces (vs. self-graded tasks) and better task-selection models (incorporating something like item response theory), ideally to be used for both training and credentialing.
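For context on the item-response-theory idea, here is a minimal sketch of the one-parameter logistic (Rasch) model that such a task-selection system might build on. The function name is illustrative, not from any particular library:

```python
import math

def rasch_probability(ability, difficulty):
    # One-parameter logistic (Rasch) model from item response theory:
    # the probability that a learner with the given ability level
    # answers an item of the given difficulty correctly.
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# An item whose difficulty matches the learner's ability is answered
# correctly about half the time, which makes it maximally informative
# for both training and credentialing.
print(rasch_probability(0.0, 0.0))  # 0.5
```

A task selector could then prefer items whose predicted success probability is near 0.5 for the current estimate of the learner's ability.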