True; however, they didn’t blow up their entire financial system and, in turn, seemingly that of the entire planet.
I’m not sure if this should be taken literally, but see wiki:Japanese asset price bubble and wiki:Carry trade#Currency.
Surely there is a transform that would convert the “hard” space into the terms of the “easy” space, so that the size of the targets could be compared apples to apples.
But isn’t this the same as computing a different measure (i.e. not the counting measure) on the “hard” space? If so, you could normalize it to a probability measure and then compute its Kullback–Leibler divergence from the uniform distribution (the normalized counting measure) to obtain a measure of information gain.
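For concreteness, here’s a minimal sketch of that calculation, assuming the “hard” space is finite and the transform assigns nonnegative weights to its points; the divergence is taken from the uniform distribution, and the weights below are made up purely for illustration:

    import math

    def kl_from_uniform(weights):
        """D(P || U) in bits, where P is the normalized weight measure
        and U is the uniform (normalized counting) measure on the same
        finite space."""
        n = len(weights)
        total = sum(weights)
        p = [w / total for w in weights]
        # Zero-weight points contribute nothing (0 * log 0 = 0 by convention).
        return sum(pi * math.log2(pi * n) for pi in p if pi > 0)

    # Hypothetical weights the transform might assign to five points:
    print(kl_from_uniform([8.0, 4.0, 2.0, 1.0, 1.0]))  # ≈ 0.45 bits

The result is 0 exactly when the transformed measure is uniform, so it behaves like an “information gain over the counting measure”.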
Rational choice theory is probably the closest analogue to teleological thinking in modern academic research. Regarding all such reasoning as fallacious seems to be an extreme position; to what extent do they regard the “three fallacies” of teleology as genuine fallacies of reasoning as opposed to useful heuristics?
Many philosophers are convinced that, because you can in principle construct a prior that updates to any given conclusion on a given stream of evidence, Bayesian reasoning must be “arbitrary” and the whole schema of Bayesianism flawed: it relies on “unjustifiable” assumptions, and is indeed “unscientific”, because you cannot force any possible journal editor in mindspace to agree with you.
Could you clarify what you mean here? From the POV of your own argument, Bayesian updating is simply one of many possible belief-revision systems. What’s the difference between calling Bayesian reasoning an “engine of accuracy” because of its information-theoretic properties, as you’ve done in the past, and saying that any argument based on it ought to be universally compelling?
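For what it’s worth, the in-principle construction seems easy to exhibit in a toy setting: pick any binary evidence stream, gerrymander a hypothesis that predicts it bit-for-bit, and even a tiny prior on that hypothesis swamps a fair-coin alternative. All names and numbers below are hypothetical:

    def gerrymandered_posterior(stream, prior_g=1e-6):
        """Posterior of a hypothesis that predicts `stream` exactly,
        against a fair-coin hypothesis, after observing `stream`."""
        prior_fair = 1.0 - prior_g
        lik_g = 1.0                    # predicts every bit with certainty
        lik_fair = 0.5 ** len(stream)  # fair coin assigns (1/2)^n
        num = prior_g * lik_g
        return num / (num + prior_fair * lik_fair)

    stream = [1, 0, 1, 1, 0] * 10      # any 50-bit evidence stream will do
    print(gerrymandered_posterior(stream))  # ≈ 1.0: the rigged prior wins

The updating is impeccable in both cases; the work is done by the choice of hypothesis space and prior, which is exactly the point at issue.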
Why is Eliezer assuming that sustainable cycles of self-improvement are necessary in order to build an UberTool that will take over most industries? The Japanese Fifth Generation Computer Systems project was a credible attempt to build such an UberTool, but it did not rely much on recursive self-improvement (apart from such things as using current computer systems to design next-generation electronics). Contrary to common misconceptions, it did not even rely on human-level AI, let alone superhuman intelligence.
If this was a credible project (check the contemporary literature and you’ll find extensive discussion of its political implications and the like), why not Douglas Engelbart’s set of tools?