I’m really skeptical that parametric functions which vary on dimensions concerning omelettes (Egg species? Color? Ingredients? How does this even work?) are a more efficient or more accurate way of preserving what our wetware encodes than simulating the neural networks devoted to dealing with omelettes. I wouldn’t even know how to start working on the problem of mapping a conceptual representation of an omelette into parametric functions (unless we’re just using the parametric functions to model the properties of individual neurons, which is fine).
Can you give an example of the sort of dimension you would parametrize, so I have a better idea of what you mean?
Incidentally and unrelatedly, I’m not nearly as committed as you sound to preserving our current ignorance of one another’s perspective in this new architecture.
I was more worried that it might break stuff (as in, the resulting beings would need to be built quite differently in order to function) if one another’s perspectives overlapped. Also, that brings us back to the original question I was raising about living forever: what exactly is it that we value and want to preserve?
Can you give an example of the sort of dimension you would parametrize, so I have a better idea of what you mean?
Not really. If I were serious about implementing this, I would start collecting distinct instances of omelette-concepts and analyzing them for variation, but I’m not going to do that. My expectation is that if I did, the most useful dimensions of variability would not map to any attributes that we would ordinarily think of or have English words for.
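For what it’s worth, here is a minimal sketch of how that analysis could go, with every number and feature name invented for illustration: encode each collected instance as a feature vector and extract the principal axes of variation. The axes that fall out are generically linear mixtures of features rather than anything with a ready English name.

```python
import numpy as np

# Hypothetical sketch: suppose each collected omelette-concept instance
# were encoded as a feature vector (the features and data are invented).
rng = np.random.default_rng(0)
n_instances, n_features = 200, 6

# Construct data whose real variation lies along mixtures of features,
# not along any single named feature.
latent = rng.normal(size=(n_instances, 2))
mixing = rng.normal(size=(2, n_features))
data = latent @ mixing + 0.1 * rng.normal(size=(n_instances, n_features))

# PCA via SVD: each principal axis is a linear combination of features.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
top_axis = vt[0]  # a unit vector loading on several features at once
print(top_axis)
```

The printed axis typically spreads its weight across several coordinates, which is the point: the most useful dimensions of variability need not correspond to attributes we have words for.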
Perhaps what I have in mind can be said more clearly this way: there’s a certain amount of information that picks out the space of all human omelette-concepts from the space of all possible concepts… call that bitstring S1. There’s a certain amount of information that picks out my particular omelette-concept from the space of all human omelette-concepts… call that bitstring S2.
S2 is much, much shorter than S1.
It’s inefficient to have 7 billion human minds each taking up valuable bits storing its own copy of S1 alongside its individual S2: 7 billion redundant copies of S1 in all. Why in the world would we do that, positing an architecture that didn’t physically require it? Run a bloody compression algorithm, store S1 somewhere, and have each human mind refer to it.
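A toy sketch of the bookkeeping being proposed, with all strings and sizes invented for illustration: store the long shared component once, and let each “mind” keep only its short delta plus a reference to the shared copy.

```python
# Hypothetical illustration of the S1/S2 storage argument.
# A long shared component stands in for S1; short per-individual
# strings stand in for each S2.
S1 = "shared-omelette-concept-space-" * 100     # one long common component
deltas = [f"variant-{i}" for i in range(1000)]  # short individual components

# Naive scheme: every "mind" stores its own full copy of S1 plus its delta.
naive_bits = sum(len(S1 + d) for d in deltas)

# Shared scheme: store S1 once; each mind stores only its delta
# (plus a reference to the shared copy, negligible here).
shared_bits = len(S1) + sum(len(d) for d in deltas)

print(naive_bits, shared_bits)  # the shared scheme is far smaller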
I have no idea what S1 or S2 are.
And I don’t expect that they’re expressible in words, any more than I can express which pieces of a movie are stored as indexed substrings… it’s not like MPEG compression of a movie of an auto race creates an indexed “car” data structure with parameters representing color, make, model, etc. It just identifies repeated substrings and indexes them, and takes advantage of the fact that sequential frames share many substrings in common if properly parsed.
But I’m committed enough to a computational model of human concept storage that I believe they exist. (Of course, it’s possible that our concept-space of an omelette simply can’t be picked out by a bit-string, but I can’t see why I should take that possibility seriously.)
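The repeated-substring point above can be demonstrated with any generic compressor, e.g. DEFLATE via Python’s zlib; the two “frames” here are invented stand-ins for consecutive movie frames that share most of their content.

```python
import zlib

# Two "frames" that share most of their content (contents invented).
frame1 = b"red car on track, blue sky, crowd in stands " * 50
frame2 = frame1.replace(b"red car", b"red car slightly left")

# Compress each frame separately vs. both in one stream.
separate = len(zlib.compress(frame1)) + len(zlib.compress(frame2))
together = len(zlib.compress(frame1 + frame2))

# In the combined stream the second frame is encoded largely as
# back-references to substrings of the first - no "car" data structure
# is ever built; the compressor just indexes repeats.
print(separate, together)
```

Compressing the frames together comes out smaller than compressing them separately, purely because the shared substrings get indexed rather than re-stored.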
Oh, and agreed that we would change if we were capable of sharing one another’s perspectives. I’m not particularly interested in preserving my current cognitive isolation from other humans, though… I value it, but I value it less than I value the ability to easily share perspectives, and they seem to be opposed values.