I would like to suggest that I do not regard the problems of "values" and "poor predictions" as potentially resolvable problems, for the following reasons:
1. Among humans there are infants, younger children, and growing adults who (taking 19 years of age as a rough cut-off, for brevity of the construct) develop toward their natural physical and mental potential. Granting this, it is no longer logically valid to treat the "values problem" as a problem for developing an AI/Oracle AI, because before that age a person's values cannot be known; their development is only at its onset. Beyond being merely a theoretical ideal, assigning or aligning values to humans might itself prove dangerous to the natural development of human civilization.
2. Given the current status quo of "Universal Basic Education" and the "values development" argument in (1.), it is not a logical claim that humans would be able to predict AI/Oracle AI behaviour at a time when even AI researchers cannot predict with full certainty the potential of an Oracle AI, or of an AI developing itself into an AGI (a remote case, but one that cannot be dismissed as having no potential for now). I therefore hold the "poor predictions" case to be logically irresolvable as a problem.
That said, halting development on the basis of either of these two cases, especially the "poor predictions" one, is not logical for academic purposes.