Saying that we’ll figure out an answer in the future when we have better data isn’t actually giving an answer now.
Okay, fair enough, but I predict this would happen: in the same way that AlphaZero rediscovered chess theory on its own, it seems to me that if you just let the AIs grow, you could create a civilization of AIs. Those AIs would have to create some form of language or communication, and eventually some AI philosopher would come along and start talking about the hard problem.
I’m curious how you answer those two questions:
Let’s say we implement this simulation in 10 years and everything works the way I’m telling you now. Would you update?
What is the probability that this simulation is possible at all?
If you expect to update in the future, just update now.
To me, this thought experiment solves the meta-problem and so dissolves the hard problem.
Let’s say we implement this simulation in 10 years and everything works the way I’m telling you now. Would you update?
Well, it’s already my default assumption that something like this would happen, so the update would mostly just be something like “looks like I was right”.
What is the probability that this simulation is possible at all?
You mean one where AIs that were trained with no previous discussion of the concept of consciousness end up reinventing the hard problem on their own? 70% maybe.
If you expect to update in the future, just update now.
That sounds like it would violate conservation of expected evidence:
… for every expectation of evidence, there is an equal and opposite expectation of counterevidence.
If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction. If you’re very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis, this can only provide a very small increment to your belief (it is already close to 1); but the unexpected failure of your prediction would (and must) deal your confidence a huge blow. On average, you must expect to be exactly as confident as when you started out. Equivalently, the mere expectation of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs.
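(To make the principle concrete in symbols, purely as a sketch: write H for the hypothesis and E for the anticipated evidence. The law just says that the prior has to equal the probability-weighted average of the possible posteriors:

P(H) = P(E) · P(H | E) + P(¬E) · P(H | ¬E)

so no amount of merely anticipating E, before actually seeing it, can shift P(H).)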
To me, this thought experiment solves the meta-problem and so dissolves the hard problem.
I don’t see how it does? It just suggests a possible approach by which the meta-problem could be solved in the future.
Suppose you told me that you had figured out how to create a cheap and scalable source of fusion power. I’d say: oh wow, great! What’s your answer? And you said that, well, you have this idea for a research program that might, in ten years, produce an explanation of how to create cheap and scalable fusion power.
I would then be disappointed, because I thought you had an explanation that would let me build fusion power right now. Instead, you’re just proposing another research program that hopes to one day achieve fusion power. I would say that you don’t actually have it figured out yet; you just think you have a promising lead.
Likewise, if you tell me that you have a solution to the meta-problem, then I would expect an explanation that lets me understand the solution to the meta-problem today. Not one that only lets me do so ten years in the future, when we investigate the logs of the AIs to see what exactly it was that made them think the hard problem was a thing.
I also feel like this scenario is presupposing the conclusion—you feel that the right solution is an eliminativist one, so you say that once we examine the logs of the AIs, we will find out what exactly made them believe in the hard problem in a way that solves the problem. But a non-eliminativist might just as well claim that once we examine the logs of the AIs, we will eventually be forced to conclude that we can’t find an answer there, and that the hard problem still remains mysterious.
Now, personally I do lean toward thinking that examining the logs will probably give us an answer, but that’s just my intuition (and yours) against the non-eliminativist’s intuition. Just having a strong intuition that a particular experiment will prove us right isn’t the same as actually having the solution.
Hmm, I don’t understand something, but we’re getting closer to the crux :)
You say:
To the question, “Would you update if this experiment is conducted and is successful?” you answer, “Well, it’s already my default assumption that something like this would happen”.
To the question, “Is it possible at all?” you answer 70%.
So, you answer 99-ish% to the first question and 70% to the second question, which seems incoherent.
It seems to me that you don’t bite the bullet for the first question if you expect this to happen. Saying, “Looks like I was right,” seems to me like you are dodging the question.
Hmm, it seems there is something I don’t understand; I don’t think this violates the law.
I don’t see how it does? It just suggests a possible approach by which the meta-problem could be solved in the future.
I agree I only gave a sketch of the proof; it seems to me that if you can build the pyramid, brick by brick, then this solves the meta-problem.
For example, when I give the example of the meta-cognition brick, I point out that there is a paper that already implements this in an LLM (and I don’t find this mysterious, because I know approximately how I would implement a database that would behave like this).
And it seems all the other bricks are “easily” implementable.
Hmm, I don’t understand something, but we’re getting closer to the crux :)
Yeah I think there’s some mutual incomprehension going on :)
To the question, “Would you update if this experiment is conducted and is successful?” you answer, “Well, it’s already my default assumption that something like this would happen”.
To the question, “Is it possible at all?” you answer 70%.
So, you answer 99-ish% to the first question and 70% to the second question, which seems incoherent.
For me “the default assumption” is anything with more than 50% probability. In this case, my default assumption has around 70% probability.
It seems to me that you don’t bite the bullet for the first question if you expect this to happen. Saying, “Looks like I was right,” seems to me like you are dodging the question.
Sorry, I don’t understand this. What question am I dodging? If you mean the question of “would I update”, what update do you have in mind? (Of course, if I previously gave an event 70% probability and then it comes true, I’ll update from 70% to ~100% probability of that event happening. But it seems pretty trivial to say that if an event happens then I will update to believing that the event has happened, so I assume you mean some more interesting update.)
Hmm, it seems there is something I don’t understand; I don’t think this violates the law.
I may have misinterpreted you; I took you to be saying “if you expect to see this happening, then you might as well immediately update to what you’d believe after you saw it happen”. Which would have directly contradicted “Equivalently, the mere expectation of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs”.
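(For what it’s worth, my actual position does seem consistent with that law; as a quick arithmetic check, using my 70% credence that the experiment would succeed:

E[posterior] = 0.7 × 1.0 + 0.3 × 0.0 = 0.7 = prior

i.e. the likely small move up from 0.7 to ~1 is exactly balanced by the less likely large move down from 0.7 to ~0, so expecting that I will probably update later doesn’t license updating now.)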
I agree I only gave a sketch of the proof; it seems to me that if you can build the pyramid, brick by brick, then this solves the meta-problem.
For example, when I give the example of the meta-cognition brick, I point out that there is a paper that already implements this in an LLM (and I don’t find this mysterious, because I know approximately how I would implement a database that would behave like this).
Okay. But that seems more like an intuition than even a sketch of a proof to me. After all, part of the standard argument for the hard problem is that even if you explained all of the observable functions of consciousness, the hard problem would remain. So just the fact that we can build individual bricks of the pyramid isn’t significant by itself—a non-eliminativist might be perfectly willing to grant that yes, we can build the entire pyramid, while also holding that merely building the pyramid won’t tell us anything about the hard problem nor the meta-problem. What would you say to them to convince them otherwise?
Thank you for clarifying your perspective. I understand you’re saying that you expect the experiment to resolve to “yes” 70% of the time, making you 70% eliminativist and 30% uncertain. You can’t fully update your beliefs based on the hypothetical outcome of the experiment because there are still unknowns.
For myself, I’m quite confident that the meta-problem and the easy problems of consciousness will eventually be fully solved through advancements in AI and neuroscience. I’ve written extensively about AI and the path to autonomous AGI here, and I would ask people: “Yo, what do you think AI is not able to do? Creativity? Ok do you know....”. At the end of the day, I would aim to convince them that anything humans are able to do, we can reconstruct with AIs. I’d put my confidence level for this at around 95%. Once we reach that point, I agree that it will become increasingly difficult to argue that the hard problem of consciousness is still unresolved, even if part of my intuition remains somewhat perplexed. Maintaining a belief in epiphenomenalism while all the “easy” problems have been solved is a tough position to defend; I’m about 90% confident of this.
So while I’m not a 100% committed eliminativist, I’m at around 90% (whereas I was at 40% in chapter 6 of the story). Yes, even after considering the ghost argument, there’s still a small part of my thinking that leans towards Chalmers’ view. However, the more progress we make in solving the easy and meta-problems through AI and neuroscience, the more untenable it seems to insist that the hard problem remains unaddressed.
a non-eliminativist might be perfectly willing to grant that yes, we can build the entire pyramid, while also holding that merely building the pyramid won’t tell us anything about the hard problem nor the meta-problem.
I actually think a non-eliminativist would concede that building the whole pyramid does solve the meta-problem. That’s the crucial aspect. If we can construct the entire pyramid, with the final piece being the ability to independently rediscover the hard problem in an experimental setup like the one I described in the post, then I believe even committed non-materialists would be at a loss and would need to substantially update their views.
Thank you for the kind words!