“The first strategy involves sending a hypothetical example’s equivalent back in time and using the present knowledge of the outcome as a justification for the validity or not of the argument.”
It has basically the same problem as any “reasoning by analogy”-type argument. Reality is built from relatively simple components from the ground up and becomes complex quickly the “further up” you go. What you do is take a slice from the middle, compare it to some other slice from the middle, and say: X is like Y; Z applies to Y; therefore Z also applies to X.
In a perfect world you’d never even have to rely on reasoning by analogy, because instead of comparing a slice of reality to some other slice you’d just explain something from the ground up. Often we can’t do that with sufficient detail, let alone with enough time, so reasoning by analogy is not always the wrong way to go—but the examples you picked are too far apart and too different, I think.
Here’s an example of reasoning by analogy that I once put to a friend:
Your brain is like a vast field of wheat, and the paths that lead through the field are like your neural connections. The more often you walk down the same path, the deeper that path becomes ingrained, and eventually habits form. Doing something you’ve never done before is like leaving the path and pushing through a thick patch of wheat—it requires a lot more energy from you, and it will for some time. But what you need to trust in is that—as you know—eventually there will be a new path in the field if you just walk it often enough. In exactly the same fashion, your brain will develop new connections; you just have to trust that it will actually happen, just as you completely trust your intuition that eventually you’ll walk a deep path into the field.
And he replied: “So what if I took a tractor and just mowed all the field down?” WOW, I never saw that one coming; I never even expected anyone could miss the point so completely...
Obviously I wasn’t claiming a brain IS LIKE a field in every way you can possibly think of. It just shares a few abstract features with fields, and those were the features I was interested in, so it seemed like a reasonable analogy.
Coming back to your story: the strength of an argument by analogy relies on how well you can actually connect the two things and make a persuasive case that they work similarly in the features / structural similarities you are trying to compare. It’s not clear to me how your analogy helps your case. A superintelligent AI is the most intelligent thing you can imagine, but it could still turn the universe into paperclips, which I don’t care much for—so I for one do not value intelligence above literally ALL else.
If your friend says feature X is the most important thing we should value about humans, the obvious counterargument would be: “perhaps it could be, yet there are many features we also care about in humans apart from their intelligence, and someone who is only intelligent but cannot/does not do Y, Z, and W would not be good company for any normal human—so these other things must matter too.”
Alternatively, you could try to transcend the whole argument and point out how meaningless it is. To explain how, here’s a more mathematical approach: if “human value” is the outcome variable of a function, your friend rips out one particular variable X and says this variable contributes most to the outcome. For him that may be true, or at least he may genuinely believe it, but the whole exercise seems ridiculous: we all know we care about a lot of things in other humans, like loyalty and friendship and reciprocity and humor and whatnot.
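To make the function framing concrete, here’s a toy sketch. The feature names, weights, and scores are entirely made-up illustrations, not a real model of human value—the point is only that zeroing out every variable but one throws away most of what the function was tracking:

```python
# Toy model: "human value" as a weighted function of many valued features.
# All names, weights, and scores below are illustrative assumptions.
def human_value(features, weights):
    """Weighted sum over the features we claim to care about."""
    return sum(weights[k] * features[k] for k in weights)

weights = {"intelligence": 0.3, "loyalty": 0.2, "humor": 0.2,
           "reciprocity": 0.15, "friendship": 0.15}

# A hypothetical person, scored 0..1 on each feature.
alice = {"intelligence": 0.9, "loyalty": 0.8, "humor": 0.7,
         "reciprocity": 0.8, "friendship": 0.9}

# The friend's move: keep only one variable and discard the rest.
only_iq = {k: (v if k == "intelligence" else 0.0) for k, v in alice.items()}

print(human_value(alice, weights))    # full function, all features counted
print(human_value(only_iq, weights))  # intelligence alone, much smaller
```

However you pick the weights, as long as more than one of them is nonzero, the single-variable version systematically undercounts—which is the sense in which declaring one variable “the” most important one is an arbitrary move, not an argument.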
To make this argument meaningful, he’s really the one who needs to argue precisely how it helps to focus on only one of the very many parts that we clearly all value. What purpose does he think it would accomplish if he managed to make others believe it?
Now I don’t think he or most people actually care, it seems like a classic case of “here’s random crap I believe, and you should believe it too for some clever reason I’m making up on the fly—both so I can dominate you by making you give in and so we can better relate”.
You seem to be misunderstanding the argument structure. It is not an analogy. I am using an equivalent example from the past.