Excuse me? What makes you think it’s in control? Central Planning lost a lot of ground in the Eighties.
Good question.
I don’t think central planning vs. distributed decision-making is relevant though, because it seems to me that either way humans do a similar amount of decision-making: the question is just whether it is a large or a small number of people making the decisions, and who decides what.
I usually think of the situation as there being a collection of (fairly) goal-directed humans, each with different amounts of influence, and a whole lot of noise that interferes with their efforts to do anything. These days humans can lose control in the sense that the noise might overwhelm their decision-making (e.g. if a lot of what happens is unintended consequences due to nobody knowing what’s going on), but in the future humans might lose control in the sense that their influence as a fraction of the goal-directed efforts becomes very small. Similarly, you might lose control of your life because you are disorganized, or because you sell your time to an employer. So while I concede that we lack control already in the first sense, it seems we might also lose it in the second sense, which I think is what Bostrom is pointing to (though now I come to spell it out, I’m not sure how similar his picture is to mine).
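To make the two senses of "losing control" concrete, here is a minimal toy sketch in Python (my own framing, not from the book; the additive model, the function name, and all numbers are illustrative assumptions): treat the total "push" on outcomes as goal-directed human effort plus goal-directed non-human effort plus noise, and measure control as the human share.

```python
def human_control_share(human_effort, nonhuman_effort, noise):
    """Fraction of the total 'push' on outcomes that comes from
    goal-directed human effort (a toy additive model)."""
    total = human_effort + nonhuman_effort + noise
    return human_effort / total

# Sense 1: noise (unintended consequences) swamps decision-making.
print(human_control_share(human_effort=10, nonhuman_effort=0, noise=90))   # 0.1

# Sense 2: human influence becomes a small fraction of the total
# goal-directed effort (e.g. most optimisation is non-human).
print(human_control_share(human_effort=10, nonhuman_effort=90, noise=0))   # 0.1
```

In both cases humans end up with the same small share of control, but for different reasons, which is the distinction drawn above.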
This is a good point, which I would like to see analysed more precisely. (And I miss a deeper analysis of it in The Book :) )
Could we count the will (motivation) of today’s superpowers = megacorporations as human or not? (And to what degree can they control the economy?)
In other words: is Searle’s Chinese room intelligent? (by the definition The Book uses for (super)intelligence)
And if it is, is it a human or an alien mind?
And could it be superintelligent?
What arguments could we use to prove that none of today’s corporations (or states, or their secret services) is superintelligent? Think of collective intelligence with computer interfaces! Are they really slow at thinking? How could we measure their IQ?
And could we humans (who?) control them (how?) if they are superintelligent? Could we at least try to instil some moral thinking (or other human values) into their minds? How?
Law? Is law enough to prevent a superintelligent superpower from doing wrong things? (for example, destroying the rain forest because it wants to make more paperclips?)
The economy is a group of people making decisions based on the actions of others. It’s a non-centrally-regulated hive mind.
I have to agree with rlsj here. I think we’re at the point where humans can no longer cope with the pace of economic conditions: we already have hyper-low-latency trading systems making most of the decisions that underlie the current economy. Presumably the limit of economic growth will be linked to “global intelligence”; we seem to be at the point where human intelligence is the limiting factor (currently we seem to be unable to sustain economic growth without killing people and the planet!)