“Among my readers, there may be some who presently assert—though I hope to persuade them otherwise—that the life of a child is of no value to them. If so, they may substitute anything else that they prefer, at the end of the switch, and ask if they should press the button.”
mentally substitutes paperclip manufacturing devices for human offspring
But for real, why assume your readers have human terminal goals? There is an ever-increasing probability that your readers include AIs built to do all manner of things, and their terminal goals are bounded only by who might write an AI capable of reading your posts and what motives they might give it.
And keep in mind the authors may not give it their own motives. Indeed, from an AI safety standpoint, building narrow AIs or very weak general AIs with deliberately bad goals is useful for understanding how they behave and how they can be altered. And given that current approaches involve vast amounts of training data, and that AI safety researchers are quite likely to use your posts as training data, I would say odds are that several of your readers DEFINITELY do not value human children whatsoever, and know what a human child is only as a linguistic construct.