Eliezer, I have to second Hopefully, Recovering, et al.: good points (as almost always), but the Science versus Bayescraft rhetoric is a disaster. Lone autodidacts railing against the failings of Mainstream Science are almost always crackpots—that you’re probably right doesn’t mean you can expect people to ignore that likelihood ratio when deciding whether or not to pay attention to you. “Meaning does not excuse impact!”
Concerning the qualitative vs. quantitative Bayescraft issue: taking qualitative lessons like Conservation of Expected Evidence from probability theory is clearly fruitful, but I wonder if we shouldn’t be a little worried about Solomonoff induction. Take the example of Maxwell’s equations being a simpler computer program than anger. Even though we have reason to suppose that it’s possible in principle to make a computer program simulating anger-in-general—anger runs on brains; brains run on physics; physics is computable (isn’t it?)—I wonder if it shouldn’t make us a bit nervous that we really have no idea how to even begin writing such a program (modulo that “No One Knows What Science,” &c.). The obvious response would be to say that all we need is “just” a computer program that duplicates whatever angry human brains do, but I don’t think that counts as a solution if we don’t know exactly how to reduce anger-in-general to math. A convincing knockdown of dualism doesn’t make the Hard Problem any less confusing.
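(To make the worry concrete, with the caveat that this is my own notation and possibly-garbled understanding rather than anything from the post: the Solomonoff prior weights a hypothesis x by the lengths of the programs that output it on a universal machine U,

m(x) = \sum_{p : U(p) = x} 2^{-|p|},

so a hypothesis whose shortest program has length K(x) gets weight on the order of 2^{-K(x)}. The Maxwell-versus-anger comparison is then just the claim that K(Maxwell’s equations) is vastly smaller than K(anger), even though nobody can exhibit either program.)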
Maybe all this is properly answered by repeating that the math is out there, whether or not we actually know how to do the calculation. After all, given that there is a program for anger, it would obviously be longer than the one for electromagnetism. Still, I worry about putting too much trust in a formalism that is not just computationally intractable, but that we don’t really know how to use, for if anyone really knew in concrete detail how to reduce thought to computation in any but the most trivial of cases, she’d essentially have solved the AGI problem, right?
Or take Pascal’s Mugging. If I recall correctly from the discussion at the February meetup, the current best solution to the problem is that given a universe big enough to contain 3^^^^3 minds, the prior probability of any one causal node exerting so much influence is low enough to overcome the vast disutility of the mugger’s threat. Eliezer noted that this would imply that you’re not allowed to believe the mugger even if she takes you out of the Matrix and shows you the hardware. This seems much like ruling out the mugger’s claim a priori—which I guess is the result we “want,” but it seems far too convenient.
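(To spell out the arithmetic as I understood it, with the caveat that this is my own back-of-the-envelope reconstruction and not anything Eliezer committed to: if the universe contains N = 3^^^^3 minds, the prior probability that any one causal node, such as you standing in front of the mugger, gets to determine the fate of all N of them should carry a penalty on the order of 1/N, so that

Pr(mugger’s claim) × |disutility threatened| ≲ (1/3^^^^3) × 3^^^^3 ≈ 1,

and the expected disutility never gets large enough to dominate the decision.)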
Of course, it is possible that I simply don’t know enough math to see that everything I just said is actually nonsense. Sorry for the long comment.
but the Science versus Bayescraft rhetoric is a disaster.
What’s wrong with you? It’s true that people who don’t already have a reason to pay attention to Eliezer could point to this and say, “Ha! An anti-science crank! We should scorn him and laugh!”, and it’s true that being on the record saying things that look bad can be instrumentally detrimental towards achieving one’s other goals.
But all human progress depends on someone having the guts to just do things that make sense or say things that are true in clear language, even if it looks bad to people whose heads are stuffed with the memetic detritus of the equilibrium of the crap that everyone else is already doing and saying. Eliezer doesn’t need your marketing advice.
But you probably won’t understand what I’m talking about for another eight years, ten months.
What do you expect to happen in January 2026, and why? (And why then?)
Also, are you the same person[1] as the “Z. M. Davis” you are replying to?
[1] Adopting the usual rather broad notion of “same person”.
I think the current-day ZMD is talking to his past self (8 years and 10 months after the replied-to comment).
D’oh!