The Republicans are effectively pro-Russia: with full US support, Ukraine is holding or marginally winning. Were US support reduced, or not increased significantly, the outcome of this war would be Russia's theft of a significant chunk of Ukraine, roughly 20 percent of its territory.
It is possible that if the Republicans regain control of both houses and the presidency, they will evolve their views toward full support for Ukraine; they may be feigning concern over the cost as a negotiating tactic.
The issue with AI/AGI research is that there are reasons for a very strong pro-AGI group to exist. If for no other reason than this: if international rivals refuse any meaningful agreement to slow or pause AGI research (which I think is the 90 percent outcome), the USA will have no choice but to keep up.
Whether this continues as a collection of private companies or becomes a centralized national defense effort, I don't know.
In addition, many parties, from tech-company shareholders to state governments, stand to benefit financially if AGIs are built and deployed at full scale. They want to see the 100x or 1,000x returns that are theoretically possible, and they can spend a lot of money to manipulate the refs here. They will probably demand evidence that the technology is too dangerous to make them rich, rather than the speculation and models of the future we have now.
The Republicans are effectively pro-Russia: with full US support, Ukraine is holding or marginally winning. Were US support reduced, or not increased significantly, the outcome of this war would be Russia's theft of a significant chunk of Ukraine, roughly 20 percent of its territory.
I think the framing of the question plays a big role here. If your claim were added as an implication, for example, I expect the answers would look very different. There are other issues with bipartisan support as well; these were just the first two that came readily to my mind.
The issue with AI/AGI research is that there are reasons for a very strong pro-AGI group to exist.
Yes, but I do not think Eliezer going on a conservative podcast and talking about the issue will increase those reasons or that likelihood.
Sure. I was simply explaining that there are two factions and several likely outcomes. The thing about the pro-AGI side is that it offers the opportunity to make an immense amount of money. Tera-dollars get a vote in US politics, and obviously in Chinese decision-making; I am unsure how the EU makes its rules, but probably there as well.
What even funds the anti-AGI side? What money is there to be made by impeding a technology? Union dues? Individual donations from citizens concerned about imminent existential threats? As a business case it doesn't pencil out. Instead, there could be bipartisan support for massive investment in AI, with neither political party taking the side of the views here on LessWrong, though one party might favor more regulation than the other.
https://www.pewresearch.org/short-reads/2023/06/15/more-than-four-in-ten-republicans-now-say-the-us-is-providing-too-much-aid-to-ukraine/