There is an implicit moral realism that does not make any sense to me.
You have made a number of posts on paraconsistent logic. Now it’s time to walk the walk. For the purpose of this referee report, accept moral realism and use it explicitly to argue with your paper.
It’s not that simple. I can’t figure out exactly what proposition is being defended. It shifts in ways I can’t predict over the course of arguments and discussions. If I tried to defend it, my defence would end up being either a caricature or too weak.
Is your goal to affect their point of view? Or is it something else? For example, maybe your true target audience is those who donate to your organization and you just want to have a paper published to show them that they are not wasting their money. In any case, the paper should target your real audience, whatever it may be.
I want a paper I can point people to when they make the thoughtless “the AI will be smart, so it’ll be nice” argument. I want a paper that forces the moral realists (using the term very broadly) to make specific counterarguments. I want to convince some of these people that AI is a risk, even if it’s not conscious or rational according to their definitions. I want something to build on to move towards convincing the AGI researchers. And I want a publication.