Steven: “what if we thought faster, were smarter, were more like the people we wished we were”
- yes, I’m aware of this, but the first two act in essentially the same way: they cause simulees to come more quickly to factually correct beliefs, and the last is just a “consistent under reflection” condition.
These conditions make little difference to my concern: the algorithm will end up mixing my values (which I like) with values I hate (religious dogma, sharia law, Christian fundamentalism, the naturalistic fallacy/bioluddism … ), where my values receive a very small weighting and those I dislike receive a very large weighting.