It may turn out that it would need to be substantially bigger than a Jupiter Brain (squared, cubed, raised to the 100th power?) to reconstruct anything relevant to self-preservation from such a coarse data set.
Yeah, I’m aware of this and said as much in a previous comment. I appreciate you giving a more detailed explanation, but wish you hadn’t also included a sentence implying that I “think theological thoughts about AI”.
As you say, this needs to be investigated more using complexity theory, but my guess is that we won’t reach any definitive conclusions due to the difficulty of the problem. (For example we can’t even prove the hardness of inverting cryptographic hash functions specifically designed to be secure, so how can we expect to prove the hardness of doing this kind of brain reconstruction?) What we should do in that case isn’t clear, but it seems worth taking a chance if the cost of (sufficiently detailed) neuroimaging is low enough. What do you think?
I just thought of a good analogy: suppose you have a hash of random gibberish you don't need to recover (thermal fluctuations amplified by neurons) combined with the data you do want to recover (personality). If the random gibberish is larger than the hash, you probably won't be able to recover any useful data. That leaves open the question of how much random noise gets hashed into the result.
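Here is a minimal toy sketch of that intuition (the hash width, noise width, and candidate "personalities" are all made-up illustrative values, not anything about real brains): once the noise entropy exceeds the hash width, every candidate personality is consistent with the observed hash, so the hash carries no recoverable information about which one was inside.

```python
import hashlib

HASH_BITS = 16    # toy hash width, kept tiny so brute force is feasible
NOISE_BITS = 20   # noise entropy deliberately exceeds the hash width

def toy_hash(noise: int, personality: bytes) -> int:
    # Hash noise-plus-personality, then keep only the low HASH_BITS bits.
    digest = hashlib.sha256(noise.to_bytes(4, "big") + personality).digest()
    return int.from_bytes(digest[:2], "big")

target = toy_hash(noise=123456, personality=b"alice")

# Brute-force "reconstruction": which personalities are consistent with the hash?
candidates = {b"alice", b"bob", b"carol", b"dave"}
consistent = {p for p in candidates
              if any(toy_hash(n, p) == target for n in range(2 ** NOISE_BITS))}
print(consistent)  # expect all four: each has ~2^(20-16) = 16 matching noise
                   # values, so the hash pins down nothing about the personality
```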
Nah, I don’t think you think theological thoughts, but I do admit I have that opinion of some other people. Theological thoughts are strangely attractive to minds :/ . That’s how we got religion in the first place. Can’t be too careful about recreating religion.
With hash functions there is a very strong disparity: the definition of a hash fits on one page, yet all our combined computing might cannot invert it. And that’s with the designs deliberately trying to use as few iterations as possible.
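For scale, a back-of-envelope on that disparity (the hardware numbers are deliberately generous assumptions, not real figures):

```python
# Brute-force preimage search on a 256-bit hash, with absurdly generous hardware.
evaluations = 2 ** 256           # ~1.16e77 candidate inputs
rate = 1e12 * 1e9                # assume 10^12 hashes/s on each of 10^9 machines
seconds = evaluations / rate
age_of_universe = 4.35e17        # seconds
print(f"{seconds:.2e} s, about {seconds / age_of_universe:.2e} universe ages")
```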
Here we are speaking of a hash whose definition is trillions of times larger, and which mixes in random thermal noise. Complexity theory is not the only tool, either; there are also Lyapunov-exponent considerations. It may well be that too large a range of minds is compatible with the dataset, even if the dataset contains enough bits, if the thermal noise got hashed in. And then there may be future ethical objections to any brute-force process that creates minds only to destroy them.
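A minimal sketch of the sensitivity point, using the logistic map as a stand-in for noise-amplifying dynamics (my assumption for illustration, not a model of actual neurons): a thermal-noise-sized perturbation grows exponentially until the two states are macroscopically different, which is what makes running the dynamics backward from coarse data so ill-posed.

```python
# Logistic map at r = 4: chaotic, with Lyapunov exponent ln(2) per step.
r = 4.0
x = 0.4            # "true" initial state
y = x + 1e-12      # the same state plus a thermal-noise-sized perturbation
for step in range(1, 60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if abs(x - y) > 0.1:
        print(f"macroscopically different after {step} steps")
        break
```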
On whether it is worth a try: for me the strategic considerations apply. If I lower my bar enough to deem something like this worth a try, I will end up buying snake oil. Also, for most of us the money can be spent on living a better life now.