We would need to use complexity theory to try to see how powerful the super-intelligence has to be. It may end up needing to be substantially bigger than a Jupiter Brain (cubed, squared, to the 100th power?) to construct anything relevant to self-preservation from such a coarse data set. As an intuition pump, as I said before: a computing system that is to mankind as mankind is to an amoeba should be expected to predict the weather for only about twice as long as mankind can, and that is given perfect information; given limited information the gain may be quite small and entirely unimportant. To go from coarse brain scans to a detailed brain state, one has to somehow run the simulation in reverse (or, worse yet, brute-force the state), which I think explodes even worse than the forward butterfly effect. I'd say we don't know enough about the brain to tell what it would take to do this kind of thing, but what we do know about chaotic non-linear systems in general does not inspire optimism.
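(To make the intuition pump concrete, here is a minimal sketch; the logistic map stands in for the weather and the tolerance and precision values are chosen purely for illustration. In a chaotic system the prediction horizon grows roughly linearly in the number of bits of initial precision, i.e. logarithmically in raw measuring/computing power, so vastly more power buys only a modestly longer forecast.)

```python
# Minimal sketch (assumptions: logistic map as a stand-in for any chaotic
# system; tolerance and bit counts chosen only for illustration).
def logistic(x, r=4.0):
    return r * x * (1.0 - x)  # classic chaotic map, Lyapunov exponent ln(2)

def horizon(delta0, tolerance=0.1, steps=10_000):
    """Steps until two trajectories seeded delta0 apart diverge past tolerance."""
    x, y = 0.3, 0.3 + delta0
    for n in range(steps):
        if abs(x - y) > tolerance:
            return n
        x, y = logistic(x), logistic(y)
    return steps

for bits in (16, 32, 48):  # doubling the bits ~ squaring the precision
    print(f"{bits:2d} bits of initial precision -> horizon ~ {horizon(2.0**-bits)} steps")
# The horizon grows by roughly one step per extra bit: squaring the
# precision (doubling the bits) only doubles the forecast length.
```

That is the amoeba-to-mankind point in miniature: each enormous multiplicative jump in capability adds only an additive slice of prediction horizon.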
The jump from "it might be possible in principle" to "it would therefore be doable by a super-intelligence" only works if you think theological thoughts about AI.
It may end up needing to be substantially bigger than a Jupiter Brain (cubed, squared, to the 100th power?) to construct anything relevant to self-preservation from such a coarse data set.
Yeah, I’m aware of this and said as much in a previous comment. I appreciate you giving a more detailed explanation, but wish you hadn’t also included a sentence implying that I “think theological thoughts about AI”.
As you say, this needs to be investigated more using complexity theory, but my guess is that we won't reach any definitive conclusions, due to the difficulty of the problem. (For example, we can't even prove the hardness of inverting cryptographic hash functions specifically designed to be secure, so how can we expect to prove the hardness of doing this kind of brain reconstruction?) What we should do in that case isn't clear, but it seems worth taking a chance if the cost of (sufficiently detailed) neuroimaging is low enough. What do you think?
I just thought of a good analogy: suppose the hash input combines random gibberish you don't need recovered (thermal fluctuations amplified by neurons) with the data you do want to recover (the personality). If the random gibberish is larger than the hash, you'll probably not be able to recover useful data. That leaves open the question of how much random noise gets hashed into the result.
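(Here is a toy numerical sketch of that analogy. The tiny "personality" and "noise" fields and the truncated SHA-256 digest are purely illustrative assumptions, not claims about real brains: once the noise entropy exceeds the digest size, almost every candidate personality is consistent with the observed digest.)

```python
# Toy sketch (assumed parameters: 6-bit "personality", 10-bit truncated
# SHA-256 digest standing in for the coarse scan; nothing here models a brain).
import hashlib, os

P_BITS, DIGEST_BITS = 6, 10

def digest(personality, noise):
    msg = personality.to_bytes(1, "big") + noise.to_bytes(4, "big")
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    return h >> (256 - DIGEST_BITS)  # keep only a tiny 10-bit digest

for noise_bits in (4, 10, 14):
    true_p = 42
    true_n = int.from_bytes(os.urandom(4), "big") % (1 << noise_bits)
    target = digest(true_p, true_n)
    # Brute-force every (personality, noise) pair consistent with the digest.
    fitting = {p for p in range(1 << P_BITS)
                 for n in range(1 << noise_bits)
                 if digest(p, n) == target}
    print(f"noise={noise_bits:2d} bits -> {len(fitting):2d}/{1 << P_BITS} personalities fit")
# With 4 bits of noise the digest nearly pins down the personality; by 14
# bits virtually all 64 personalities match some noise value.
```

Note that even an unbounded brute-forcer ends up holding the whole candidate set, not the one true preimage.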
Nah, I don't think you think theological thoughts, but I do admit I have that opinion of some other people. Theological thoughts are strangely attractive to minds :/ . That's how we got religion in the first place. Can't be too careful about recreating religion.
With hash functions there is a very strong disparity: the definition of a hash fits on one page, yet our entire combined might cannot invert it. And that's while we deliberately try to use as few iterations as possible.
Here we are speaking of a hash whose definition is trillions of times larger, and which mixes in random thermal noise. Complexity theory is not the only tool… there are also Lyapunov-exponent considerations: it may well be that too wide a range of minds is compatible with the dataset, even if the dataset contains enough bits, if the thermal noise got hashed in. Then there may also be future ethical considerations against any form of brute-force process that creates minds only to destroy them.
On whether it is worth a try, for me the strategic considerations apply: if I adopt standards loose enough to deem something like this worth a try, I will end up buying snake oil too. Also, the money, for most of us, could be spent on living a better life now.