I think that if SAIs have a social part, we need to think altruistically about them.
It could be wrong (and dangerous too) to think that they will be just slaves.
We need to start thinking positively about our children. :)
Just a little idea:
In one advertisement I saw an interesting pyramid with these levels (from top to bottom): vision → mission → goals → strategy → tactics → daily planning.
I think that if we want to analyse cooperation between SAI and humanity, then we need interdisciplinary work (philosophy, psychology, mathematics, computer science, …) on the (vision → mission → goals) part. (If humanity defines the vision and mission and the SAI derives the goals, that could be good.)
I am afraid that humanity has not properly defined or analysed either its vision or its mission. And different groups and individuals have mutually contradictory visions, missions, and goals.
One big problem with SAI is not the SAI itself but that we will have BIG POWER while we still don't know what we really want (and what we really want to want).
Bostrom's book seems to assume a paradigm in which a goal is something at the top, rigid and stable, and could not be dynamic and flexible like a vision. It is probably true that one stupidly defined goal (the paperclipper) could be unchangeable and ultimate. But we probably have more possibilities for defining an SAI's personality.
This is what I meant: https://neurokernel.github.io/faq.html
But it is probably a bit less finished than I expected.
Other sources of information: http://www.cell.com/current-biology/abstract/S0960-9822%2810%2901522-8 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3704784/ http://www.flycircuit.tw/ https://en.wikipedia.org/wiki/Drosophila_connectome
I am suggesting that a metastasis-like method of growth could be good for the first multicellular organisms, but it is unstable, not very successful in evolution, and would probably be rejected by every superintelligence as malign.
One mode could have the goal of being something like the graphite moderator in a nuclear reactor: to prevent an unmanaged explosion.
At this moment I just wanted to improve our view of the probability of there being only one SI in the starting period.
Think prisoner’s dilemma!
What would aliens do?
Is a selfish (self-centered) reaction really the best possibility?
What would a superintelligence constructed by aliens do?
(No dispute that human history is brutal and selfish.)
Let us try to free our mind from associating AGIs with machines.
Very good!
But be honest! Aren't we (sometimes?) more machines that serve our genes/instincts than spiritual beings with free will?
When I was thinking about past discussions, I realized something like:
(selfish) gene → meme → goal.
When Bostrom thinks about the probability of a singleton, I am afraid he overlooks the possibility of running several 'personalities' on one substrate. (We could suppose several teams having the possibility to run their projects on one piece of hardware, just as several teams can use the Hubble telescope to observe different objects.)
And it is not only a possibility but probably also a necessity.
If we want to prevent a destructive goal from being realized (and destroying our world), then we have to think about multipolarity.
We need to analyze how slightly different goals could control each other.
A moral, humour, and spiritual analyzer/emulator. I would like to know more about these phenomena.
When we discussed evil AI, I was thinking (and still count it as plausible) about the possibility that self-destruction might not be an evil act. That the Fermi paradox could be explained as a natural law = the best moral answer for a superintelligence at some level.
Now I am thankful because your comment enlarges the possibilities for thinking about Fermi.
We need not think only of self-destruction; we could think of modesty and self-sustainability.
Sauron's ring could be superpowerful, but the clever Gandalf could (and did!) resist the offer to use it. (And used another ring to destroy the strongest one.)
We could think of hidden places (like Lothlorien or Rivendell) in the universe where clever owners use limited but nondestructive powers.
The market is more or less stabilized. There are powers and superpowers in some balance. (Gaining money could sometimes be an illusion, like betting (and winning) more and more in a casino.)
If you are thinking about money-making, you have to count the sum of all money in society: whether investments mean a bigger sum of values, or just an exchange in economic wars, or just inflation. (If foxes invest more in hunting and eat more rabbits, there could be more foxes, right? :)
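Just to make the fox/rabbit point concrete, here is a minimal sketch of a standard predator-prey model (logistic prey growth plus linear predation); all parameter values are my own illustrative assumptions, not data. It suggests that "investing more in hunting" raises the sustainable number of foxes only up to a point; beyond that the prey base shrinks and the foxes end up worse off:

```python
# Toy predator-prey sketch (logistic prey + linear predation).
# Every number here is an illustrative assumption, not real data.

def settled_populations(hunting_rate, steps=60_000, dt=0.01):
    """Integrate the system and return (rabbits, foxes) after it settles."""
    rabbits, foxes = 50.0, 5.0
    growth, capacity, conversion, death = 1.0, 100.0, 0.1, 0.5
    for _ in range(steps):
        d_rabbits = (growth * rabbits * (1 - rabbits / capacity)
                     - hunting_rate * rabbits * foxes) * dt
        d_foxes = (conversion * hunting_rate * rabbits * foxes
                   - death * foxes) * dt
        rabbits = max(rabbits + d_rabbits, 0.0)
        foxes = max(foxes + d_foxes, 0.0)
    return rabbits, foxes

for rate in (0.07, 0.10, 0.20, 0.40):
    rabbits, foxes = settled_populations(rate)
    print(f"hunting rate {rate:.2f}: rabbits ~{rabbits:5.1f}, foxes ~{foxes:4.1f}")
```

The analogy to money-making is loose, of course, but it is one way to see why we have to count the whole sum and not only one side's gains.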
In the AI sector there is a much higher probability of a phase transition (= explosion). I think that's the difference.
How?
Possibility: there is probably enough hardware already, and we are just waiting for the spark of a new algorithm.
Possibility: if we count the agricultural revolution as an explosion, then we could also count a massive change in productivity from AI (which is probably obvious).
Well, no life form has achieved what Bostrom calls a decisive strategic advantage. Instead, they live their separate lives in various environmental niches.
Ants are probably a good example of how organisational intelligence (?) can be an advantage.
According to Wikipedia, "Ants thrive in most ecosystems and may form 15–25% of the terrestrial animal biomass." See also the Google answer, the wiki table, or StackExchange.
Although we have to think carefully: apex predators do not usually form a large biomass. So it could be more complicated to define the success of a life form.
The problem for humanity is not only a global replacer, something which erases all other lifeforms. It could be enough to replace us in our niche, something which globally (from life's viewpoint) means nothing.
And we don't need to be totally erased to meet a huge disaster. A decline of the population to several millions or thousands… (pets or AI) … is also unwanted.
We are afraid not of a decisive strategic advantage over ants but over humans.
It seems to again come down to the possibility of a rapid and unexpected jump in capabilities.
We could test it in a thought experiment.
A chess game: human grandmaster against AI.
It is not rapid (no checkmate at the beginning).
We could also suppose one move per year to slow it down. That brings the AI a further advantage, because of its ability to concentrate for such a long time.
Capabilities:
a) intellectual capabilities we could suppose stay at the same level during the game (if it is played in one day; otherwise we have to consider Moore's law)
b) the human loses (step by step) positional and material capabilities during the game. And this is expected.
Could we still talk about a decisive advantage if it is not rapid and not unexpected? I think so. At least if we don't break the rules.
One possibility for preventing a smaller group from gaining a strategic advantage is something like Operation Opera.
And that was only about nukes (see Elon Musk's statement)...
Lemma 1: A superintelligence could be slow. (Imagine for example an IQ test between Earth and Mars, where the delay between question and answer is about half an hour. Or imagine a big clever tortoise which can understand only one sentence per hour but can then solve the Riemann hypothesis.)
Lemma 2: A human organization could rise quickly. (It is imaginable that billions join an organization within several hours.)
The next theorem is obvious. :)
This is similar to the question about a ten-times-quicker mind and economic growth. I think there are some natural processes which are hard to "cheat".
One woman can give birth in 9 months, but two women cannot do it in 4.5 months. Twice as much money for the education process is more likely to give 2*N graduates after X years than N graduates after X/2 years.
Some parts of science acceleration have to wait years for new scientists. And twice as many scientists does not mean twice as many discoveries. Etc.
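One rough way to see this limit is an Amdahl's-law-style sketch (the 90/10 split and the speedup values below are numbers I assume purely for illustration): if some fraction of a process is inherently serial, like gestation or waiting for students to graduate, then accelerating everything else only helps so much.

```python
# Amdahl's-law-style sketch. The 0.9 parallelizable fraction and the
# speedup values are assumed purely for illustration, not measured.

def overall_speedup(parallel_fraction: float, part_speedup: float) -> float:
    """Speedup of the whole process when only part of it can be accelerated."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / part_speedup)

for s in (2, 10, 100, 1_000_000):
    print(f"accelerate the 90% that can be sped up by {s:>9}x "
          f"-> whole process only {overall_speedup(0.9, s):.2f}x faster")
# Even an unbounded speedup of the 90% caps the whole process at 10x.
```

So twice as many scientists (or ten-times-quicker minds) can buy a lot, but the serial parts still set a ceiling.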
But also 1.5x more discoveries could bring 10x bigger profit!
We cannot assume only linear dependencies in such complex problems.
A difficult question. Do you also mean ten times faster to burn out? 10x more time needed to rest? Or, thanks to simulation, no rest, just a reboot?
Or a permanent reboot to a drug-boosted level of brain emulation on a ten-times-quicker substrate? (I am afraid of a drugged society here.)
And I am also afraid that a ten-times-quicker farmer cannot have ten summers per year. :) So economic growth could be limited by some bottlenecks. Probably not much faster.
What about ten-times-faster philosophical growth?
Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.
We must not underestimate slow superintelligences. Our judiciary is also slow, so some of the acts we could take are very slow.
Humanity could also be overtaken by a slow (and alien) superintelligence.
It does not matter that you would quickly see that things are going the wrong way. You could still slowly lose, step by step, your rights and your power to act… (like slowly losing pieces in a chess game).
If strong entities in our world will be (or already are?) driven by poorly designed goals, for example "maximize profit", then they could really be very dangerous to humanity.
I really don't want to spoil our discussion with politics; rather, I would like to see a rational discussion about all the existential threats which could arise from superintelligent beings/entities.
We must not underestimate any form, and not underestimate any method, of our possible doom.
With big data coming, our society is more and more ruled by algorithms. And the algorithms are getting smarter and smarter.
Algorithms are not independent of the entities which have enough money or enough political power to use them.
BTW, Bostrom wrote (sorry, not in a chapter we have discussed yet) about possible perverse instantiation, which could happen due to a goal not well designed by the programmer. I am afraid that in our society it will be a manager or politician who will design (or is designing) the goal. (We have to find a way for a philosopher and a mathematician to be there too.)
In my opinion the first (if not singleton) superintelligence will most probably be (or is) a 'mixed form': some group of well-organized people (don't forget lawyers) with a big database and a supercomputer.
The next stages after an intelligence explosion could take any other form.
This probably needs more explanation. You could say that my reaction is not in the appropriate place, and that is probably true. BCI we could define as a physical interconnection between brain and computer.
But I think at this moment we could (and have to) also analyse trained "horses" with trained "riders". And also trained "pairs" (or groups?).
A better interface between computer and human could also be achieved along a noninvasive path = a better visual+sound+touch interface. (The horse-human analogy.)
So yes = I expect they could be substantially useful even if a direct physical interface would be too difficult in the next decade(s).
Positive emotions are useful too. :)