When someone proposes what we should do, where by "we" he implicitly refers to a large group of people he has no real influence over (as in the proposal to ban AGI and hardware development), I wonder what the value of this kind of speculation is, other than amusing oneself with a picture of "what would this button do" on a simulation of Earth under one's hands.
As I see it, there's no point in thinking about these kinds of "large-scale" interventions that are closely interwoven with politics. Better to focus on what relatively small groups of people can do (this includes, e.g., influencing a few other AGI development teams to work on FAI). In this context, I think our best hope is in deeply understanding the mechanics of intelligence and thus having at least a chance at creating FAI before some team that doesn't care in the least about safety dooms us all. And there will be such teams, regardless of what we do today; just take a look at some of the "risks from AI" interviews...
When someone proposes what we should do, where by "we" he implicitly refers to a large group of people he has no real influence over
I'm not sure whether that's true. Government officials who are tasked with researching future trends might read the article.
Just because you yourself have no influence on politics doesn’t mean that the same is true for everyone who reads the article.
Even if you think that at the moment nobody with political power reads LessWrong, it's valuable to signal status. If you want to convince a billionaire to fund your project, it might be beneficial to speak about options that require a large amount of resources to pull off.
In the early stages it is not easy to focus directly on organization X or Y, mostly because a good number of researchers are working on projects that could end up producing an AGI expert in numerous specific domains. Furthermore, large-scale coordination is important too, even if it is not a top priority. Slowing down one project or funding another is a targeted intervention that could buy some time while the technical problems remain unsolved.