I would say it’s perhaps indicative of a problem with academic philosophy. Unless that 62% is mostly moral corporalists; in that case it’s fine by me if they insist that “some moral propositions are objectively true or false”, I guess.
I don’t recall saying that recently, though it’s true. I don’t know what you’re getting at.
I am making guesses about what you might be saying, because you are being unclear.
I was responding to your correction of my definition of moral realism. I somewhat jokingly expressed shame for defining it idiosyncratically.
Well, it doesn’t, and research will tell you that.
It can still be true of my impressions of it, like every time I saw someone arguing for moral realism.
Which debate?
I think it was this one; regrettably, I’m being forced to embed it in my reply.
Hmm yea, gameability might not be as interesting a property of metrics as I’ve made it out to be.
(though I still feel there is something in there. Fixing your calibration chart after the fact by predicting one-sided coins is maybe a lot like taking a foot off the bathroom scale. But, for example, predicting every event as a constant p%, is that even cheating in the calibration game? Though neither of these directly applies to the case of prediction market platforms)
Most shameful of me to use someone’s term and define it as my beef with them. In my impression, moral realism has also always involved moral non-corporalism, if you will. As long as morality is safely stored in animal bodies, I’m fine with that.
The one in the YouTube debate identified as a moral non-realist. But you see, his approach to the subject was different from mine, and that is a problem.
I think there more or less is a rationalist-lesswrongist view of what morality is, shared not by all but by most rationalists (I wanted to say it’s explained in the Sequences, but suspiciously I can’t find it in there).
Has anyone looked into the “philosophers believe in moral realism” problem? (in the sense that morality is not physically contained in animal bodies and human-created artifacts)
I saw a debate on YouTube with this Michael Huemer guy, but it was with another academic philosopher. Was there ever an exchange recorded between a moral realist philosopher and a rationalist-lesswrongist?
Yea I would be impressed if a human showed me they have a good calibration chart.
(though part of it is that humans usually put few questions in their calibration charts. It would be nice to look at people’s performance across a range of calibration-improvement exercises)
I don’t think anyone is brute-forcing calibration with fake predictions; it would be easy to see if the predictions are public. But if a metric is trivially gameable, surely that makes it sus and less impressive, even if someone is not trivially, or even at all, gaming it.
I don’t claim that any entity is not impressive, just that we shouldn’t be impressed by calibration (humans get a pass, it takes so much effort for us to do anything).
There is probably some bravery-debate aspect here: if you look at my linked tweets, it’s like in my world people are just going around saying good calibration implies good predictions, which is false. (edit 1: for human calibration exercises, note that with a stream of questions where p% resolve true, always predicting p% is perfectly calibrated. Humans who do calibration exercises have goals other than calibration. Maybe I should pivot to activism in favor of prediction scores.)
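A toy sim of that edit-note claim (the question count and base rate are made up, just for illustration):

```python
import random

random.seed(0)
# 10,000 yes/no questions, ~30% of which resolve true (both numbers invented)
outcomes = [random.random() < 0.30 for _ in range(10_000)]

# The "forecaster" predicts a constant 30% on every question, so the
# calibration chart has a single bucket; check how often it resolved true:
rate = sum(outcomes) / len(outcomes)
print(f"predicted 30%, resolved true {rate:.1%} of the time")
# ~30%: one point, right on the diagonal, with zero per-question knowledge
```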
What I’m getting at is: it seems to me the predictions given by the platform can be almost arbitrarily bad, but under some assumptions the above strategy will still work and will make the platform calibrated. So calibration does not imply anything about the goodness of the predictions. So it’s not impressive.
You skip over the not-very-impressive way for a prediction market platform to be calibrated that I already mentioned. If things predicted at 20% actually happen 30% of the time, you can buy up random markets that are at 20% and profit.
Good epistemic calibration of a prediction source is not impressive.
I see people being impressed by calibration charts, for example https://x.com/ESYudkowsky/status/1924529456699641982 , or stronger: https://x.com/NathanpmYoung/status/1725563206561607847
But it’s trivial to have a straight-line calibration graph: if it’s not straight, just fix it at each probability by repeatedly predicting a one-sided coin’s outcome as that probability.
If you’re a prediction market platform where the probability has to be decided by dumb monkeys, just make sure that the vast majority of questions are of the form “will my p-weighted coin land heads?”.
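A toy sketch of the padding trick (all counts and rates here are invented): a forecaster whose real predictions are miscalibrated drowns them in coin-flip questions predicted at the same number.

```python
import random

random.seed(0)

# 1,000 real questions, predicted at 20% but actually resolving 30% of the time
real = [(0.20, random.random() < 0.30) for _ in range(1_000)]
# 20,000 padding questions: p-weighted coins predicted at the same 20%
coins = [(0.20, random.random() < 0.20) for _ in range(20_000)]

def bucket_rate(records):
    # fraction of the 20%-bucket's questions that resolved true
    return sum(outcome for _, outcome in records) / len(records)

print(f"20% bucket, real questions only: {bucket_rate(real):.1%}")          # ~30%
print(f"20% bucket, after coin padding:  {bucket_rate(real + coins):.1%}")  # ~20.5%
# the chart straightens out while the real predictions stay just as bad
```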
---
If a calibration graph isn’t straight, that implies an epistemic free lunch: if things that you predict at 20% actually happen 30% of the time, just shift those predictions to 30%. This is probably the reason why actual prediction markets are calibrated, since miscalibration leads to an easy trading strategy. But the presence of calibration is not a very interesting property.
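A toy sketch of that easy trading strategy (the price, true rate, and market count are all assumed):

```python
import random

random.seed(0)
PRICE, TRUE_RATE, N = 0.20, 0.30, 10_000  # assumed numbers

# buy one YES share (costs PRICE, pays 1.0 on YES) in each such market
profit = sum((1.0 if random.random() < TRUE_RATE else 0.0) - PRICE
             for _ in range(N))
print(f"average profit per share: {profit / N:+.3f}")  # ~ +0.10 = TRUE_RATE - PRICE
# traders chasing this push the price from 20 toward 30, which is what
# drags the platform's calibration chart onto the diagonal
```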
it’s the title-based impact optimization for me
I don’t find it in my memory
Is this April Fools?
Nail biting offers no intrinsic reward or pleasure. It isn’t something I consciously enjoy or value.
couldn’t work for me cause I lowkey love nail biting. didn’t know other people were getting fucked up nails without the enjoyment
But how do you distinguish this argument from other arguments that prove false things?
I remember the mom saying it was a wig in the Tucker Carlson interview.
what was up with the alleged wig?
Android phone with Google Chrome
(related phenomena can be observed by scrolling to the 16-17 boundary and lowering the browser window width)
There are typos in the articles, for example in the category theory one:
A statement about terminal object is that any
maybe “terminal object” was a link with an “s” added at the end, but it reverted to its natural form in the importing process
that’s a trick to make me be like them!
(I listened to some of that Michael Huemer talk and it seemed pretty dumb)