Whilst I really, really like the last picture, it seems a little odd to include it in the article.
Isn’t this meant to seem like a hard-nosed introduction to non-transhumanist/sci-fi people? And doesn’t the picture sort of act against that—by being slightly sci-fi and weird?
Actually, both that and the Earth image at the beginning of the article seem a little out of place. At least the latter would fit well into a print article (where you can devote half a page or a page to thematic images and still have plenty of text for your eyes to seek to), but online it forces scrolling on mid-sized windows before you can read comfortably. I think it’d read more smoothly if it was smaller, along the lines of the header images in “Philosophy by Humans” or (as an extreme on the high end) “The Cognitive Science of Rationality”.
Agreed, especially since it is presented with no explanation or context. If the aim was “here’s a picture of what we might achieve,” I would personally aim for more of a Shock Level 2 image rather than an SL3 one—presuming, of course, that this is being written for someone around SL1 (which seems likely). That said, I might omit it altogether.
I thought this article was for SL0 people—that would give it the widest audience possible, which I thought was the point?
If it’s aimed at the SL0s, then we’d be wanting to go for an SL1 image.
SL0 people think “hacker” refers to a special type of dangerous criminal and don’t know or have extremely confused ideas of what synthetic biology, nanotechnology, and artificial intelligence are.
Point taken. This post seems unlikely to reach those people. Is it possible to communicate the importance of x-risks in such a short space to SL0s—maybe without mentioning exotic technologies? And would they change their charitable behavior?
I suspect the first answer is yes and the second is no (not without lots of other bits of explanation).
I agree with your estimates/answers. There are certainly SL0 existential risks (most people in the US understand nuclear war), but I think the issue in question is that the risks most targeted by the “x-risks community” are above those levels—asteroid strikes are SL2, nanotech is SL3, AI-foom is SL4. I think most people understand that x-risks are important in an abstract sense but have very limited understanding of what the risks the community is targeting actually represent.
Not only is the picture slightly sci-fi and weird, it’s also wrong. I mean, my thought processes on seeing it went something like this: “Oh, hey, it’s a ringworld. Presumably this is meant to hint at the glorious future that might be ahead of us if we don’t get wiped out, and therefore the importance of not getting wiped ou … no, wait a moment, it’s kinda like a ringworld but it’s really really really small. Much smaller than the earth. What the hell’s the point of that?”
The picture is of a Stanford torus.
Don’t those have to be fully enclosed?
Yes. The part that looks like a sky in the picture is some transparent material that holds the atmosphere in.
Faster build, reduced cost, not such heavy stresses placed on the materials.
I meant “what’s the point of that, as opposed to not bothering?”. Not “what’s the point of that, as opposed to building a full-sized ringworld?”.
Not much smaller than the earth at all!
With more physics and attention, one could produce better numbers, but as a crude ballpark (using data from Wikipedia):
Surface area of the Earth: 510,072,000 km^2
Circumference of ring, if it’s placed at 1 AU: 2 * pi AU = 939,951,956 km
So, if the ring is a little over half a kilometer in width, it has the same surface area as the Earth—and could be smaller still, if we just compare habitable area.
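For anyone who wants to check the arithmetic, here is a minimal Python sketch of the same back-of-the-envelope calculation (the constants are the rounded Wikipedia figures used above):

```python
import math

EARTH_SURFACE_KM2 = 510_072_000  # surface area of the Earth, in km^2 (Wikipedia)
AU_KM = 149_598_000              # 1 astronomical unit, in km (rounded)

# Circumference of a ring with a radius of 1 AU.
ring_circumference_km = 2 * math.pi * AU_KM

# Width the ring would need for its area to match Earth's surface area.
ring_width_km = EARTH_SURFACE_KM2 / ring_circumference_km

print(f"circumference: {ring_circumference_km:,.0f} km")  # ~939,951,956 km
print(f"required width: {ring_width_km:.3f} km")          # ~0.543 km
```

Running it gives a required width of about 0.543 km, which is where the “little over half a kilometer” figure comes from.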
The scale of curvature there makes it clear it’s not 1 AU in radius.
Fair enough, I suppose. But then it’s not really a ring world so much as a… what? Space station?
Yeah, pretty much. If it were bigger, I might call it a Culture orbital.
Agreed on this. The ringworld thing comes out of nowhere and doesn’t clearly follow from the content of the article.
Unless the point is to wink-wink-nudge-nudge at the idea that we might have to do some weird-looking and weird-sounding things in order to save the world… in which case I still don’t like the picture.
I read it as “but there’s still hope for a big wonderful future”, but this is tentative.
In any case, thanks for the exposure to Richard Fraser’s art.
Or, apparently, a small wonderful future. Look how tiny that ring is!
Also, I’d say both of those pictures seem to have the effect of inducing far mode.
I’m in favor of including the last picture as part of the article, because it shows the possible world we gain by averting existential risk. I don’t believe that “context” is necessary; the image is self-explanatory.
Nitpicking on ringworld vs. Stanford torus is not relevant, or interesting. The overall connotations and message are clear.
“Sci-fi” of today becomes “reality” of tomorrow. Non-transhumanists ought to open up their eyes to the potential of the light cone, and introducing them to that potential, whether directly or indirectly, is one of the biggest tasks that we have. Otherwise people are just stuck with what they see right in front of their eyes.
For a big picture issue like existential risk, it fits that one would want to also introduce a vague sketch of the possibilities of the big picture future.
Suggesting that the Earth picture itself doesn’t belong in the post shows some kind of general bias against visuals, or something. You think that a picture about saving human life on earth isn’t appropriately paired with a picture of the Earth? What image could be more appropriate than that?
I didn’t understand it. It didn’t self-explain to me.
“Non-transhumanists ought to open up their eyes to the potential of the light cone, and introducing them to that potential, whether directly or indirectly, is one of the big tasks we have.”
Woah! That’s quite a leap! But hold on a second! This isn’t meant to be literature, is it? It doesn’t seem to me that an explanation of this kind benefits from having hidden meanings and whatnot, especially ideological ones like that.
“Nitpicking on ringworld vs. Stanford torus is not relevant, or interesting.”
Agreed.
“Suggesting that the Earth picture itself doesn’t belong in the post shows some kind of general bias against visuals, or something.”
This is a Fully General Counterargument that you could use on objections to any image, no matter what the image is, and no matter what the objection is.
As for me, I’m not really Blue or Green on whether to keep the image. It’s really pretty, but the relevance is dubious at best and nonexistent at worst.
I’m a genius transhumanist who likes sci-fi, and the connotations and message of the image were not clear to me. I wasn’t even sure what it was supposed to be a picture of (my first guess was something from the Halo games, though I couldn’t imagine the relevance). Is this more something that would be clear to the general populace and not folks like me, and thus should be included in a post to appeal to the general populace?
Strange enough. After all, while I am a transhumanist to some degree and also enjoy sci-fi, I am far from being a genius. Still, the message of the pictures was immediately obvious. This would support what you said: they may be appealing to general people, while not necessarily as appealing to those already very familiar with sci-fi and transhumanism.
I would count myself among “general people”. I didn’t get it at all. In fact, having read the comments, I’m still not sure I get it. It’s a pretty picture and all, but why is it there?
The first picture is a dark image of a planet with a slightly threatening atmosphere. It looks like the upper half of a mushroom cloud, but it could also be seen as the earth violently torn apart. This is why I think, given the context, that it symbolises the threat of a nuclear war and, more universally, the threat of a dystopia.
The last picture shows a beautiful utopia. I thought it’s there to give a message of the type: “If everything goes well, we can still achieve a very good future.” That is, while the first picture symbolises the threat of a dystopia, the last one symbolises the hope and possibility of a utopia.
Of course, this is merely my interpretation. There are very many ways one can interpret these pictures.
Note: “interesting”, “clear”, and perhaps even “relevant” are 2-place words.
“You think that a picture about saving human life on earth isn’t appropriately paired with a picture of the Earth? What image could be more appropriate than that?”
Well, how about a picture of human life? Or even a picture of human life being saved; it might not be a bad idea to suggest a similarity between a doctor saving a patient’s life and an x-risk-reduction policy saving many people’s lives.
Well, or something like that but a little more subtle as a metaphor.
Needlessly distracting. Most people have so much trouble appreciating the scale of existential risk that their minds often shut down when thinking about it, or they just try to change the subject. Adding in other ideas that are larger in scale and even more controversial is not a recipe for getting them to pay attention.
The squid-shaped dingbats are pretty bad, too.