It just takes some imagination. Hollow out both the Earth and the Moon to reduce their gravitational pull; support the ladder with carbon nanotube filaments; stave off collapse by pushing it around with high-efficiency ion impulse engines; etc.
I agree, though, that philosophers often make too much of the distinction between “logically impossible” and “physically impossible.” There’s probably no way, even in principle, to hollow out the Earth significantly while retaining its structure; etc.
So basically, build a second ladder out of some other material that’s feasible (unlike steel), and then just tie the steel ladder to it so it doesn’t have to bear any weight.
I think that often “logically possible” means “possible if you don’t think too hard about it”. Which is exactly Dennett’s point in context: the idea that you are a brain in a vat is only conceivable if you don’t think about the computing power that would be necessary for a convincing simulation.
Which is exactly Dennett’s point in context: the idea that you are a brain in a vat is only conceivable if you don’t think about the computing power that would be necessary for a convincing simulation.
Dreams can be quite convincing simulations that don’t need that much computing power.
The worlds that people who do astral traveling perceive can be quite complex. Complex enough to convince people who engage in that practice that they really are on an astral plane. Does that mean that the people are really on an astral plane and aren’t just imagining it?
The way I like to think about it is that convincingness is a 2-place function—a simulation is convincing to a particular mind/brain. If there’s a reasonably well-defined interface between the mind and the simulation (e.g. the 5 senses and maybe a couple more) then it’s cheating to bypass that interface and make the brain more gullible than normal, for example by introducing chemicals into the vat for that purpose.
From that perspective, dreams are not especially convincing compared to waking experience; rather, dreamers are especially convincible.
Dennett’s point seems to be that a lot of computing power would be needed to make a convincing simulation for a mind as clear-thinking as a reader who was awake. Later in the chapter he talks about other types of hallucinations.
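To make the 2-place point concrete, here is a minimal sketch in Python; the names Simulation, Mind, fidelity and scrutiny are illustrative inventions of mine, not anything from Dennett or this thread:

from dataclasses import dataclass

@dataclass
class Simulation:
    fidelity: float   # how detailed and consistent the fed-in sense data is, 0..1

@dataclass
class Mind:
    scrutiny: float   # how hard this mind interrogates its inputs, 0..1

def is_convincing(sim: Simulation, mind: Mind) -> bool:
    # Convincingness takes two arguments: the same low-fidelity simulation
    # can convince a low-scrutiny mind (a dreamer) yet fail against an
    # alert, critical one.
    return sim.fidelity >= mind.scrutiny

dream = Simulation(fidelity=0.3)
print(is_convincing(dream, Mind(scrutiny=0.1)))   # True: a dreamer is easily convinced
print(is_convincing(dream, Mind(scrutiny=0.9)))   # False: an awake, careful reader is not

On this toy model, Dennett’s claim is that pushing fidelity high enough to clear an awake reader’s scrutiny is computationally expensive, whereas a dream gets by on low fidelity because the dreamer’s scrutiny is low.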
The way I like to think about it is that convincingness is a 2-place function—a simulation is convincing to a particular mind/brain. If there’s a reasonably well-defined interface between the mind and the simulation (e.g. the 5 senses and maybe a couple more)
The 5 senses are brain events; they aren’t simple input channels into the brain. Take taste. How many different tastes of food can you perceive through your taste sense? More than five. Why? Your brain takes data from your nose, your tongue and your memory and fits them together into something that you perceive as taste.
You have no direct access, in your conscious qualia, to the data that your nose or tongue sends to your brain.
If someone is open to suggestion and you give him the hypnotic suggestion that an apple tastes like an orange, you can then wake him. If he eats the thing, he will tell you that the apple is an orange. He might even get angry when someone tells him that it isn’t an orange, because it obviously tastes like one.
it’s cheating to bypass that interface and make the brain more gullible than normal, for example by introducing chemicals into the vat for that purpose.
You don’t need to introduce any chemicals. Millions of years of evolution have trained brains to have an extremely high prior for thinking that they aren’t “brains in a vat”.
Doubting your own perception is an incredibly hard cognitive task.
There are experiments where an experimenter uses a single electrode to trigger a subject to perform a particular task, like raising his arm.
If the experimenter afterwards asks the subject why he raised his arm, the subject makes up a story and believes that story. It takes effort for the experimenter to convince the subject that he made the story up and that there was no reason he raised his arm.
I suggest you read the opening chapter of Consciousness Explained. Someone’s posted it online here.
Dennett doesn’t seem to cite any actual scientific paper in the paragraph, or otherwise to really know what the brain does.
You don’t need to provide detailed feedback to the brain. Dennett should be well aware that humans have a blind spot in their eyes and that the brain makes up information to fill the blind spot.
It’s the same with suggesting to a brain in a vat that it’s acting in the real world. The brain makes up the missing information needed to provide an experience of being in the real world.
To produce a strong hallucination (as I understand Dennett, he equates strong hallucination with complex hallucination) you might need a channel through which you can insert information into the brain, but you don’t need to provide every detail. Missing details get made up by the brain.
Dennett should be well aware that humans have a blind spot in their eyes and that the brain makes up information to fill the blind spot.
No, Dennett explicitly denies that the brain makes up information to fill the blind spot. This is central to his thesis. He creates a whole concept called ‘figment’ to mock this notion.
His position is that nothing within the brain’s narrative generators expects, requires, or needs data from the blind spot; hence, in consciousness, the blind spot doesn’t exist. No gaps need to be filled in, any more than HJPEV can be aware that Eliezer has removed a line that he might, counterfactually, have spoken.
For a hallucination to be strong does not require it to have great internal complexity. It suffices that the brain happens not to ask too many questions.
For a hallucination to be strong does not require it to have great internal complexity.
That’s a question of how “strong” is defined. But it seems that I read Dennett too charitably for that purpose. He defines it as:
Another conclusion it seems that we can draw from this is that strong hallucinations are simply impossible! By a strong hallucination I mean a hallucination of an apparently concrete and persisting three-dimensional object in the real world — as contrasted to flashes, geometric distortions, auras, afterimages, fleeting phantom-limb experiences, and other anomalous sensations. A strong hallucination would be, say, a ghost that talked back, that permitted you to touch it, that resisted with a sense of solidity, that cast a shadow, that was visible from any angle so that you might walk around it and see what its back looked like.
Given that definition, Dennett just seems wrong.
He continues saying:
Reports of very strong hallucinations are rare
I know multiple people in real life who report hallucinations of that strength. If you want an online source, the Tulpa forum has plenty of people who manage to have strong hallucinations of tulpas.
The tulpa way seems to take months or a year. If you have a strongly hypnotically suggestible person, a good hypnotist can create such a hallucination in less than an hour.
I think I must be misreading you. I’m puzzled that you believe this about hallucinations—that it’s possible for the brain to devote enough processing power to create a “strong” hallucination in the Dennettian sense—but upthread, you seemed to be saying that dreams did not require such processing power.
Dreams are surely the canonical example, for people who believe that whole swaths of world-geometry are actually being modelled, rendered and lit inside their heads? After all, there is nothing else occupying the brain’s horsepower; no conflicting signal source.
If I may share with you my own anecdote; when asleep, I often believe myself to be experiencing a fully sensory, qualia-rich environment. But often as I wake, there is an interim moment when I realise—it seems to be revealed—that there never was a dream. There was only a little voice making language-like statements to itself—“now I am over here now I am talking to Bob the scenery is so beautiful how rich my qualia are”.
I think Dennett’s position is just this; that there never was a dream, only a series of answers to spurious questions, which don’t have to be consistent because nothing was awake to demand consistency.
Do you think he’s wrong about dreams, too, or are you saying that waking hallucinations are importantly different? I had a quick look at the Tulpa forum and am unimpressed so far. Could you point to any examples you find particularly compelling?
If you have a strongly hypnotically suggestible person, a good hypnotist can create such a hallucination in less than an hour.
Ok, so I flat out don’t believe that. If waking consciousness were that unstable, a couple of hours of immersive video gaming would leave me psychotic; and all it would take to see angels would be a mildly-well-delivered Latin Mass, rather than weeks of fasting and self-flagellation.
I’ll go read about it, though.
there is an interim moment when I realise—it seems to be revealed—that there never was a dream. There was only a little voice making language-like statements to itself
I don’t think I’ve ever had an experience quite like that. I’ve perhaps had experiences that are transitional between images and propositions—I’m thinking by visualizing a little story to myself, and the images themselves are seamlessly semantic, like I’m on the inside of a novel and the narration is a deep component of the concrete flow of events. But to my knowledge I’ve never felt a sudden revelation that my mental images were ‘only a little voice making language-like statements to itself’, à la Dennett’s suggestion that all experiences are just judgments.
Perhaps we’re conceptualizing the same experience after-the-fact in different ways. Or perhaps we just have different phenomenologies. A lot of people have suggested (sometimes tongue-in-cheek) that Dennett finds his own wilder hypotheses credible because he has an unusually linguistic, abstract, qualitatively impoverished phenomenology. (Personally, I wouldn’t be surprised if that’s a little bit true, but I think it’s a small factor compared to Dennett’s philosophical commitments.)
A lot of people have suggested (sometimes tongue-in-cheek) that Dennett finds his own wilder hypotheses credible because he has an unusually linguistic, abstract, qualitatively impoverished phenomenology.
He is known to be a wine connoisseur. Sydney Shoemaker once asked him why he doesn’t just read the label.
I’ve occasionally had dreams where elements have backstories—I just know something about something in my dream, without having any way of having found it out.
This is common, I think, or at least I’ve seen other people discuss it before ( http://adamcadre.livejournal.com/172934.html ), and it fits my own experience as well. From which I had the rather obvious-in-hindsight insight that the experience of knowledge is itself just another sort of experience, just another type of qualia, just like color or sound.
In dreams knowledge doesn’t need to have an origin-via-discovery, the same way that dream images don’t need to originate in our eyes and dream sounds don’t need to originate in vibrations of our eardrums...
Is this any different from how it feels to know something in waking life, in cases where you’ve forgotten where you learned it?
Probably way too late to this old thread, but I’ve had multiple experiences relevant to it.
Once I had a dream and then, in the dream, I remembered I had dreamt this exact thing before, and wondered if I was dreaming now, and everything looked so real and vivid that I concluded I was not.
I can create a kind of half-dream, where I see random images and moving sequences, each at most 3 seconds or so long, in succession. I am very drowsy but not asleep, and I am aware in the back of my head that they are only schematic and vague.
I would say the backstories in dreams are different in that they can be clearly nonsensical. E.g., I am holding and looking at a glass relief; there is no movement at all, yet I know it to be a movie. I know nothing of its content, and I don’t believe the image of the relief to be in the movie.
It’s hard to be sure, but I think dream elements have less of a feeling of context for me. On the other hand, is the feeling of context the side effect of having more connections to my web of memories, or is it just another tag?
(nods) Me too. I’ve also had the RPG-esque variation where I’ve had a split awareness of the dream… I am aware of the broader narrative context, but I am also experiencing being a character in the narrative who is not aware. E.g., I know that there’s something interesting behind that door, and I’m walking around the room, but I can’t just go and open that door because I don’t actually know that in my walking-around-the-room capacity.
I’m puzzled that you believe this about hallucinations—that it’s possible for the brain to devote enough processing power to create a “strong” hallucination in the Dennettian sense—but upthread, you seemed to be saying that dreams did not require such processing power.
It is perfectly consistent to both believe that (some people) can have fully realistic mental imagery, and that (most people’s) dreams tend to exhibit sub-realistic mental imagery.
I have one friend who claims to have eidetic mental imagery, and I have no reason to doubt her. Thomas Metzinger discusses in Being No-One the notion of whether the brain can generate fully realistic imagery, and holds that it usually cannot, but notes the existence of eidetic imaginers as an exception to the rule.
Thanks for the cite: sadly, on clicking through, I get a menacing error message in a terrifying language, so evidently you can’t share it that way?
You are quite right that it’s consistent. It’s just that it surprised my model, which was saying “if realistic mental imagery is going to happen anywhere, surely it’s going to be dreams, that seems obviously the time-of-least-contention-for-visual-workspace.”
I’m beginning to wonder whether any useful phenomenology at all survives the Typical Mind Fallacy. Right now, if somebody turned up claiming that their inner monologue was made of butterscotch and unaccountably lapsed into Klingon from three to five PM on weekdays, I’d be all “cool story bro”.
and unaccountably lapsed into Klingon from three to five PM on weekdays
Hmmm. Well, I don’t speak Klingon, but I am bilingual (English/Afrikaans); my inner monologue runs in English all the time in general but, after reading this, I decided to try running it in Afrikaans for a bit. Just to see what happens. Now, my Afrikaans is substantially poorer than my English (largely, I suspect, due to lack of practice).
My inner monologue switches languages very quickly on command; however, there are some other interesting differences that happen. First of all, my inner monologue is rather drastically slowed down. I have a definite sense of having to wait for my brain to look up the right word to describe the concept I mean; that is, there is a definite sense that I know what I am thinking before I wrap it in the monologue. (This is absent when my internal monologue is in the default English; possibly because my English monologue is fast enough that I don’t notice the delay). I think that that delay is the first time that I’ve noticed anticipatory thinking in my own head without the monologue.
There are also grammatical differences between the two languages; an English sentence translated into Afrikaans will come out with a different word order (most of the time). This has its effect on my internal monologue as well; there’s a definite sense of the meanings being delivered to my language centres (or at least to the word-looking-up part thereof) in the order that would be correct for an English sentence, and of the language centre having to hold certain meanings in a temporary holding space (or something) until I get to the right part of the sentence.
I also notice that my brain slips easily back into the English monologue; that’s no doubt due mainly to force of habit, and did not come as a surprise.
Thanks for the cite: sadly, on clicking through, I get a menacing error message in a terrifying language, so evidently you can’t share it that way?
That’s odd, it works on three different browsers and two different machines for me. I guess there’s some geographical restriction. Here’s a PDF instead then, I was citing what’s page 45 by the book’s page numbering and page 60 by the PDF’s.
Curiously, the first time I clicked the Google Books link, I got the “Yksi sormus hallitsemaan niitä kaikkia...” (“One ring to rule them all...”) message (not an exact transcription), but the second time, it let me in.
Agreed
There’s probably no way, even in principle, to hollow out the Earth significantly while retaining its structure;
My tulpa, which belongs to a Kardashev 3b civilization (but has its own penpal tulpas higher up), disagrees.
For example, you can construct a gravitational shell around the Earth to guard against collapse by compensating for its gravity. Use superglue so the wabbits and stones don’t start floating. Edit: This is incorrect, stupid Tulpa. More like Kardashev F!
I think your tulpa is playing tricks on you. A shell around the Earth will have no effect on the interactions of bodies within it, or their interactions with everything outside the shell.
It could counteract the gravitational pull which would cause the surface of a hollow Earth to collapse otherwise. Edit: It would not :-(
A spherically symmetric shell has no effect on the gravitational field inside. It will not pull the surface of a hollow Earth outwards.
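A minimal sketch of why this is so, in standard textbook notation rather than anything from the thread: Gauss’s law for gravity says $\oint \vec{g} \cdot d\vec{A} = -4\pi G M_{\text{enc}}$ over any closed surface. For a spherical Gaussian surface drawn anywhere inside a spherically symmetric hollow shell, the enclosed mass is $M_{\text{enc}} = 0$, and by symmetry $\vec{g}$ is radial and uniform over that surface, so $\vec{g} = 0$ throughout the interior. The shell therefore exerts no outward pull that could hold up the surface of a hollow Earth.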
You’re correct. There are other ways to guard against the collapse of an empty shell; it’s a similar scenario to guarding against the collapse of a Dyson sphere.
Hey, that’s a great idea—lots of little black hole-fueled satellites in low-earth orbit, suspending the crust so it doesn’t collapse in on itself. I think we can build this ladder, after all!
Edit: I think this falls prey to the shell theorem if they’re in a geodesic orbit, but not if they’re using constant acceleration to maintain their altitude and vectoring their exhaust so it doesn’t touch the Earth.