hmm, I don’t understand something, but we are closer to the crux :)
You say:
To the question, “Would you update if this experiment is conducted and is successful?” you answer, “Well, it’s already my default assumption that something like this would happen”.
To the question, “Is it possible at all?” you answer 70%.
So you answer 99-ish% to the first question and 70% to the second, which seems incoherent.
It seems to me that you don’t bite the bullet on the first question if you expect this to happen. Saying “Looks like I was right” feels like dodging the question.
Hmm, it seems there is something I don’t understand; I don’t think this violates the law of conservation of expected evidence.
I don’t see how it does? It just suggests a possible approach by which the meta-problem could be solved in the future.
I agree I only gave a sketch of the proof; it seems to me that if you can build the pyramid, brick by brick, then this solves the meta-problem.
For example, when I give the example of the meta-cognition brick, I point out that there is a paper that already implements this in an LLM (and I don’t find this mysterious, because I know roughly how I would implement a database that behaves like this).
And it seems all the other bricks are “easily” implementable.
> hmm, I don’t understand something, but we are closer to the crux :)
Yeah I think there’s some mutual incomprehension going on :)
> To the question, “Would you update if this experiment is conducted and is successful?” you answer, “Well, it’s already my default assumption that something like this would happen”.
> To the question, “Is it possible at all?” you answer 70%.
> So you answer 99-ish% to the first question and 70% to the second, which seems incoherent.
For me “the default assumption” is anything with more than 50% probability. In this case, my default assumption has around 70% probability.
> It seems to me that you don’t bite the bullet on the first question if you expect this to happen. Saying “Looks like I was right” feels like dodging the question.
Sorry, I don’t understand this. What question am I dodging? If you mean the question of “would I update”, what update do you have in mind? (Of course, if I previously gave an event 70% probability and then it comes true, I’ll update from 70% to ~100% probability of that event happening. But it seems pretty trivial to say that if an event happens then I will update to believing that the event has happened, so I assume you mean some more interesting update.)
> Hmm, it seems there is something I don’t understand; I don’t think this violates the law of conservation of expected evidence.
I may have misinterpreted you; I took you to be saying “if you expect to see this happening, then you might as well immediately update to what you’d believe after you saw it happen”. Which would have directly contradicted “Equivalently, the mere expectation of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs”.
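The law being quoted here (conservation of expected evidence) can be made concrete with the 70% figure from this exchange. The conditional posteriors below are illustrative assumptions for the sake of the arithmetic, not numbers either of us stated:

```python
# Conservation of expected evidence: your prior must equal the
# expectation of your posterior over the possible observations:
#   P(H) = P(H|E) * P(E) + P(H|not E) * P(not E)

p_success = 0.70            # stated 70% credence that the experiment succeeds
p_elim_if_success = 0.95    # assumed posterior in eliminativism if it succeeds
p_elim_if_failure = 0.30    # assumed posterior if it fails

# The only coherent prior on eliminativism is the weighted average:
p_elim_prior = (p_elim_if_success * p_success
                + p_elim_if_failure * (1 - p_success))

# Merely *expecting* success does not license jumping to 0.95 now;
# before the experiment the coherent credence is the average,
# and it moves to 0.95 only once success is actually observed.
print(round(p_elim_prior, 3))  # prints 0.755
```

This is exactly what the quoted sentence means: the expectation of the posterior is already baked into the prior, so anticipating the evidence shifts nothing.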
> I agree I only gave a sketch of the proof; it seems to me that if you can build the pyramid, brick by brick, then this solves the meta-problem.
> For example, when I give the example of the meta-cognition brick, I point out that there is a paper that already implements this in an LLM (and I don’t find this mysterious, because I know roughly how I would implement a database that behaves like this).
Okay. But that seems more like an intuition than even a sketch of a proof to me. After all, part of the standard argument for the hard problem is that even if you explained all of the observable functions of consciousness, the hard problem would remain. So just the fact that we can build individual bricks of the pyramid isn’t significant by itself—a non-eliminativist might be perfectly willing to grant that yes, we can build the entire pyramid, while also holding that merely building the pyramid won’t tell us anything about the hard problem nor the meta-problem. What would you say to them to convince them otherwise?
Thank you for clarifying your perspective. I understand you’re saying that you expect the experiment to resolve to “yes” 70% of the time, making you 70% eliminativist and 30% uncertain. You can’t fully update your beliefs based on the hypothetical outcome of the experiment because there are still unknowns.
For myself, I’m quite confident that the meta-problem and the easy problems of consciousness will eventually be fully solved through advances in AI and neuroscience. I’ve written extensively about AI and the path to autonomous AGI here, and I would ask people: “Yo, what do you think AI is not able to do? Creativity? Ok, do you know....”. At the end of the day, I would aim to convince them that anything humans are able to do, we can reconstruct with AIs. I’d put my confidence in this at around 95%. Once we reach that point, I think it will become increasingly difficult to argue that the hard problem of consciousness is still unresolved, even if part of my intuition remains somewhat perplexed. Maintaining a belief in epiphenomenalism once all the “easy” problems have been solved is a tough position to defend; I’m about 90% confident of this.
So while I’m not a 100% committed eliminativist, I’m at around 90% (whereas I was at 40% in chapter 6 of the story). Yes, even after considering the ghost argument, there’s still a small part of my thinking that leans towards Chalmers’ view. However, the more progress we make on solving the easy and meta-problems through AI and neuroscience, the more untenable it seems to insist that the hard problem remains unaddressed.
> a non-eliminativist might be perfectly willing to grant that yes, we can build the entire pyramid, while also holding that merely building the pyramid won’t tell us anything about the hard problem nor the meta-problem.
I actually think a non-eliminativist would concede that building the whole pyramid does solve the meta-problem. That’s the crucial aspect. If we can construct the entire pyramid, with the final piece being the ability to independently rediscover the hard problem in an experimental setup like the one I described in the post, then I believe even committed non-materialists would be at a loss and would need to substantially update their views.