I think you’d probably consider the way we approach problems of this scale to be Popperian in character. We open ourselves to lots of claims, talk about them, and criticize them. We try to work them into a complete picture of all of the options and then pick the best one, according to a metric that is not quite utility, because the desire for a pet is unlikely to be an intrinsic one.
The gun to our head in this situation is our mortality, or the aging that will eventually close the window of opportunity to enjoy having a pet.
I’m not sure how to relate to this as an analytical epistemological method, though. Most of the work, for me, would involve sitting with my desire for a pet and interrogating it, knowing that it isn’t at all immediately clear why I want one. I would try to see whether it was a malformed expression of hunting or farming instincts. If I found that it was, the desire would dissipate once the impulse recognized that having a pet wouldn’t get me closer to the thing it actually wants.
Barring that, I would be inclined to focus on dogs, because I know that no other companion animal has evolved to live alongside humans and enjoy it in the same way that dogs have. I’m not sure where that resolution came from.
What I’m getting at is that most of the interesting parts of this problem are inarticulable. Looking for opportunities to apply analytic methods or articulable epistemic processes doesn’t seem interesting to me at all.
A reasonable person’s approach to solving most problems right now is to ask a huge machine learning system that nobody understands to recommend some articles from an incomprehensibly huge set of people.
Real-world examples of decision-making generally aren’t solvable, or reducible to optimal methods.
belief
Do I believe in mathematics? I can question the applicability of a mathematical model to a situation.
It’s probably worth mentioning that even mathematical claims aren’t beyond doubt, since mathematical claims can be arrived at in error (cosmic rays flipping bits, say), and it’s important that we’re able to notice this and revert our position when it happens.
risible
My impression from observed usage was that “risible” meant “spurious, inspiring anger”. Finding that the dictionary definition of a word disagrees with my natural impression of it is a very common experience for me. I could just stop using words I’ve never explicitly looked up the meaning of, but that doesn’t seem ideal. I’m starting to wonder if dictionaries are the problem. Maybe there aren’t supposed to be dictionaries. Maybe there’s something very unnatural about them, and they’re preventing linguistic evolution that would have been useful. OTOH, there is also something very unnatural about a single language being spoken by like a billion humans or whatever it is, and English’s unnaturally large vocabulary should probably be celebrated.
To clarify, it makes me angry to see someone assuming moral realism with such confidence that they might declare the most important industrial safety project in the history of humanity to be a foolish waste of time. The claim that there could be a single objectively correct human morality is not compatible with anthropology, human history, or the present political reality. It could still be true, but there is not sufficient reason to act as if it is definitely true. There should be more humility here than there is.
My first impression of a person who then goes on to claim that there is an objective, not just human but universal, morality that could bring unaligned machines into harmony with humanity is that they are lying to sell books. This is probably not the case (lying about that would be very stupid), but it’s a hypothesis I have to take seriously. When a person says something like that, it has the air of a preacher telling a nice lie that they think will endear them to people and gather a pleasant congregation around that unifying myth of the objectively correct universal morality. Maybe it will, but they need to find a different myth, because this one will endanger everything they value.
I haven’t read BoI. I’ve been thinking about it.