I assign much lower value than a lot of people here to some vast expansionist future… and I suspect that even if I’m in the minority, I’m not the only one. It’s not an arithmetic error.
Can you be more explicit about the arithmetic? Would increasing the probability of civilization existing 1000 years from now from 10^{-7} to 10^{-6} be worth more or less to you than receiving a billion dollars right now?
Do I get any information about what kind of civilization I’m getting, and/or about what it would be doing during the 1000 years or after the 1000 years?
On edit: Removed the “by how much” because I figured out how to read the notation that gave the answer.
I guess by “civilization” I meant “civilization whose main story is still being meaningfully controlled by humans who are individually similar to modern humans”. Other than that, I just mean your current expectations about what that civilization is like, conditioned on it existing.
(It seems like you could be disagreeing with “a lot of people here” about what those futures look like or how valuable they are or both—I’d be happy to get clarification on either front.)
Hmm. I should have asked what the alternative to civilization was going to be.
Nailing it down to a very specific question, suppose my alternatives are...
I get a billion dollars. My life goes on as normal otherwise. Civilization does whatever it’s going to do; I’m not told what. Omega tells me that everybody will suddenly drop dead at some time within 1000 years, for reasons I don’t get to know, with probability one minus one in ten million.
… versus...
I do not get a billion dollars. My life goes on as normal otherwise. Civilization does whatever it’s going to do; I’m not told what. Omega tells me that everybody will suddenly drop dead at some time within 1000 years, for reasons I don’t get to know, with probability one minus one in one million.
… then I don’t think I take the billion dollars. Honestly the only really interesting thing I can think of to do with that kind of money would be to play around with the future of civilization anyway.
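(To make the arithmetic explicit, under a naive expected-value reading that I don’t claim is how I actually weigh things: declining the money buys an increase in survival probability of 10^{-6} − 10^{-7} = 9 × 10^{-7}, so the trade only “pencils out” if the surviving future is worth at least

$$\frac{\$10^9}{9 \times 10^{-7}} \approx \$1.1 \times 10^{15}$$

to me, i.e. on the order of a quadrillion dollars. I don’t think that number is very meaningful; it’s just the multiplication spelled out.)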
I think that’s probably the question you meant to ask.
However, that’s a very, very specific question, and there are lots of other hypotheticals you could come up with.
The “civilization whose main story is still being meaningfully controlled by humans etc.” thing bothers me. If a utopian godlike friendly AI were somehow on offer, I would actively pay money to take control away from humans and hand it to that AI… especially if I or people I personally care about had to live in that world. And there could also be valuable modes of human life other than civilization. Or even nonhuman things that might be more valuable. If those were my alternatives, and I knew that to be the case, then my answer might change.
For that matter, even if everybody were somehow going to die, my answer could depend on how civilization was going to end and what it was going to do before ending. A jerkass genie Omega might be withholding information and offering me a bum deal.
Suppose I knew that civilization would end because everybody had agreed, for reasons I cannot at this time guess, that the project was in some sense finished, all the value extracted, so they would just stop reproducing and die out quietly… and, perhaps implausibly, that conclusion wasn’t the result of some kind of fucked up mind control. I wouldn’t want to second-guess the future on that.
Similarly, what if I knew civilization would end because the alternative was some also as yet unforeseen fate worse than death? I wouldn’t want to avoid x-risk by converting it into s-risk.
In reality, of course, nobody’s offering me clearcut choices at all. I kind of bumble along, and thereby I (and of course others) sculpt my future light cone into some kind of “work of art” in some largely unpredictable way.
Basically what I’m saying is that, insofar as I consciously control that work of art, pure size isn’t the aesthetic I’m looking for. Beyond a certain point, size might be a negative. 1000 years is one thing, but vast numbers of humans overrunning galaxy after galaxy over billions of years, while basically doing the same old stuff, seems pointless to me.
Thanks for all the detail, and for looking past my clumsy questions!
It sounds like one disagreement you’re pointing at is about the shape of possible futures. You value “humanity colonizes the universe” far less than some other people do (maybe Rob in particular?). That seems sane to me.
The near-term decision questions that brought us here were about how hard to fight to “solve the alignment problem,” whatever that means. For that, the real question is about the difference in total value of the future conditioned on “solving” it and conditioned on “not solving” it. You think there are plausible distributions on future outcomes such that one-millionth of the expected value of those futures is worth more to you than personally receiving a billion dollars.
Putting these bits together, I would guess that the amount of value at stake is not really the thing driving the disagreement here, but rather the level of futility? Say you think humanity overall has about a 1% chance of succeeding with a current team of 1000 full-time-equivalents working on the problem. Do you want to join the team in that case? What if we have a one-in-one-thousand chance and a current team of 1 million? Do these seem like the right units to talk about the disagreement in?
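(One crude way to put units on this, assuming, almost certainly wrongly, that a marginal person contributes about as much as the average person already on the team: in the first scenario joining moves the probability by roughly

$$\frac{0.01}{1000} = 10^{-5},$$

and in the second by roughly

$$\frac{0.001}{10^{6}} = 10^{-9},$$

i.e. about ten thousand times less. The linearity assumption is doing all the work; it’s only meant to make the “futility” question concrete.)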
(Another place that I thought there might be a disagreement: do you think solving the alignment problem increases or decreases s-risk? Here “solving the alignment problem” is the thing that we’re discussing giving up on because it’s too futile.)
In some philosophical sense, you have to multiply the expected value by the estimated chance of success. They both count. But I’m not sitting there actually doing multiplication, because I don’t think you can put good enough estimates on either one to make the result meaningful.
In fact, I guess that there’s a better than 1 percent chance of avoiding AI catastrophe in real life, although I’m not sure I’d want to (a) put a number on it, (b) guess how much of the hope is in “solving alignment” versus the problem just not being what people think it will be, (c) guess how much influence my or anybody else’s actions would have on moving the probability [edited from “property”...], or even (d) necessarily commit to very many guesses about which actions would move the probability in which directions. I’m just generally not convinced that the whole thing is predictable down to 1 percent at all.
In any case, I am not in fact working on it.
I don’t actually know what values I would put on a lot of futures, even the 1000 year one. Don’t get hung up on the billion dollars, because I also wouldn’t take a billion dollars to single-mindedly dedicate the remainder of my life, or even my “working time”, to anything in particular unless I enjoyed it. Enjoying life is something you can do with relative certainty, and it can be enough even if you then die. That can be a big enough “work of art”. Everybody up to this point has in fact died, and they did OK.
For that matter, I’m about 60 years old, so I’m personally likely to die before any of this stuff happens… although I do have a child and would very much prefer she didn’t have to deal with anything too awful.
I guess I’d probably work on it if I thought I had a large, clear contribution to make to it, but in fact I have absolutely no idea at all how to do it, and no reason to expect I’m unusually talented at anything that would actually advance it.
do you think solving the alignment problem increases or decreases s-risk
If you ended up enacting a serious s-risk, I don’t understand how you could say you’d solved the alignment problem. At least not unless the values you were aligning with were pretty ugly ones.
I will admit that sometimes I think other people’s ideas of good outcomes sound closer to s-risks than I would like, though. If you solved the problem of aligning with those people, I might see it as an increase.
Have you considered local movement building? Perhaps something simple like organising dinners or a reading group to discuss these issues? Maybe no-one would come, but it’s hard to say unless you give it a go. In any case, a small group of two or three thoughtful people is more valuable than a much larger group of people who are just there to pontificate without really thinking through anything deeply.