My guess is #2.
The whole point of acausal trading is that it doesn’t require any causal link. I don’t think there’s any rule that says it’s inherently hard to model people a long way away.
Imagine an AI running on some high-quality silicon hardware that splits itself into two halves, and one half falls into a rotating black hole (but has engines that let it avoid the singularity, at least for a while). The two halves are now causally disconnected (well, the one outside can send messages to the one inside, but not vice versa) but still have very accurate models of each other.
This sounds like an XY problem—what are you trying to achieve by reducing the number of apps?
I’m not convinced on the international diversification example, particularly if the best argument is “some hard-to-measure risks”. Most of the time the things you want to buy are in your own country, so any diversification is taking on a large foreign exchange risk.
Maybe be more specific/detailed?
Not quite—rather the everyday usage of “real” refers to the model with the currently-best predictive ability. http://lesswrong.com/lw/on/reductionism/ - we would all say “the aeroplane wings are real”.
I’ve known plenty of cases where people’s programs were more agentive than they expected. And we don’t have a good track record on predicting which parts of what people do are hard for computers—we thought chess would be harder than computer vision, but the opposite turned out to be true.
Is there a difference between “x is y” and “assuming that x is y generates more accurate predictions than the alternatives”? What else would “is” mean?
I’m a professional software engineer, feel free to get technical.
Why are you so confident your program is a nonagent? Do you have some formula for nonagent-ness? Do you have a program that you can feed some source code to and it will output whether that source code forms an agent or not?
Does an amoeba want anything? Does a fly? A dog? A human?
You’re right, of course, that we have better models of a calculator than treating it as an agent. But that’s only because we understand calculators and they have a very limited range of behaviour. As a program gets more complex and creative, it becomes more predictive to think of it as wanting things (or rather, the alternative models become less predictive).
A program designed to answer a question necessarily wants to answer that question. A superintelligent program trying to answer that particular question runs the risk of acting as a paperclip maximizer.
Suppose you build a superintelligent program that is designed to make precise predictions, by being more creative and better at predictions than any human would. Why are you confident that one of the creative things this program does to make itself better at predictions isn’t turning the matter of the Earth into computronium as step 1?
we shouldn’t wear seatbelts, get fire insurance, or eat healthy to avoid getting cancer, since all of those can be classified as Pascal’s Muggings
How confident are you that this is false?
It’s about where I expected. I think 6 is probably the best you can do under ideal circumstances. Legitimate, focussed work is exhausting.
If you’re looking for bias, this is a community where people who are less productive probably prefer to think of themselves as intelligent and akratic. Also, for any students here, you’ve asked at the end of a long holiday.
I’d rather people actually said “Do you want to come back to my room for sex?” rather than “Do you want to come back to my room for coffee?” where coffee is a euphemism for sex, because some people will take coffee at face value. That can lead either to uncomfortable situations, including fear of assault, or to people missing opportunities because they are bad at reading between the lines.
I’d rather that too, and I’ve had it go wrong in both directions. But the whole point of much of this site is that outcomes are more important than principles. Saying “do you want to come back to my room for sex?” is not going to change society, it’s just going to make you personally come off as a creep.
Ah, sorry to get your hopes up, it’s a degenerate approach: http://pastebin.com/Jee2P6BD
Thank you for publishing. Before this I think the best public argument from the AI side was Khoth’s, which was… not very convincing, although it apparently won once.
I still don’t believe the result. But I’ll accept (unlike with nonpublic iterations) that it seems to be a real one, and that I am confused.
I want to talk about the group (well, cluster of people) that calls itself “rationalists”. What should I call it if not that?
You seem a very enthusiastic participant here, despite a lot of downmodding. I admire that, on here. In real life my fear would be that it would translate into clinginess: wanting to come to all my parties, wanting to talk forever, and the like. (And perhaps that it reflects being socially unpopular, and that there might be a reason for that.) So I’d lean slightly towards avoiding.