But do you accept that “what you experience when you see red” has a cogent physical explanation?
Yes, so much so that I think that whenever you self-reflect and think about your own cognitive process underlying an experience X, the experience itself will always necessarily differ from any symbolic/linguistic version of X.
I might be wrong; it might be the case that thinking precisely about the process that generates a quale would let one know exactly what that quale ‘felt like’. This would be interesting, to say the least, even if my brain is only big enough to think precisely about ant qualia.
This doesn’t make qualia magical or even all that important.
The fact that something is a physical process doesn’t mean it’s not important. The fact that I don’t know the process makes it hard for me to decide how important it is.
The link lost me at “The fact is that the human mind (and really any functional mind) has a strong sense of self-identity simply because it has obvious evolutionary value,” because I’m talking about non-evolved minds.
Consider two different records: one is a memory you have that commonly guides your life; the other is the last log file you deleted. They might both be many megabytes detailing the history of an entity, but the latter one just doesn’t matter anymore.
So I guess I’d want to create an FAI that never integrates any of its experiences into itself in a way that we (or it) would find precious, or unique and meaningfully irreproducible.
Or at least not valuable in any way other than as event logs from the saving of humanity.
This is the longest reply/counter-reply set of postings I’ve ever seen, with very few (fewer than 5?) branches. I had to click ‘continue reading’ 4 or 5 times to get to this post. Wow.
My suggestion is to take it to email or instant messaging way before reaching this point.
While I was doing it, I told myself I’d come back later and add edits with links to the points in the Sequences that cover what I’m talking about. If I did that, would it be worth it?
This was partly a self-test to see if I could support my conclusions with my own current mind, or if I was just repeating past conclusions.
So I guess I’d want to create an FAI that never integrates any of its experiences into itself in a way that we (or it) would find precious, or unique and meaningfully irreproducible.
It’s only a concern during the initial implementation. Once things get rolling, the FAI is just another pattern in the world, so it optimizes itself according to the same criteria as everything else.
While I was doing it, I told myself I’d come back later and add edits with links to the points in the Sequences that cover what I’m talking about. If I did that, would it be worth it?
Doubtful, unless it’s useful to you for future reference.