I’d also add that historically, I believe, about two-thirds of LTFF’s money has come from OpenPhil, so LTFF doesn’t represent a fully independent funder (though the decision-making around grants is pretty independent).
This is a great poll and YouGov is a highly reputable pollster, but there is a significant caveat to note about the pause finding.
The way the question is framed provides information about “1000 technology leaders” who have signed a letter in favor of the pause but does not mention any opposition to the pause. I think this would push respondents to favor the pause. An ideal question construction would be more neutral, presenting both support and oppose statements.
Good to know you won’t be out of a job just yet!
I asked GPT-4 the same prompt, and here is what it said:
Of course! Here’s a new contra dance for you:
Contra Dance
Whirlwind Delight
By ChatGPT
(Duple improper)

A1
(8) Circle left 3⁄4
(8) Neighbor swing

A2
(8) Long lines forward and back
(8) Ladies chain

B1
(8) Star right 1x
(8) Gents aleman left 1.5

B2
(8) Half hey, ladies pass right shoulders to start
(8) Partner balance and swing, end facing down in a line of four

Enjoy dancing Whirlwind Delight!
How did it do?
Yep! I was wrong and this is false!
If we want to know which arguments resonate with New York Times readers, we can actually use surveys, message testing, and focus groups to check, and we don’t need to guess! (Disclaimer: My company sells these services.)
Cool—I’ll follow up when I’m back at work.
That makes a lot of sense. We can definitely test a lot of different framings. I think the issue with a lot of these kinds of questions is that they are low salience, so people tend not to have pre-existing opinions and instead generate one on the spot. We have a lot of experience polling on low-salience issues, though, because we’ve done a lot of polling on animal farming policy, which has similar framing effects.
I’ll shill here and say that Rethink Priorities is pretty good at running polls of the electorate if anyone wants to know what a representative sample of Americans think about a particular issue such as this one. No need to poll Uber drivers or Twitter when you can do the real thing!
Yeah, it came from a lawyer. The point is that if you confess to something bad, we may be legally required to report that, so be careful.
Feel free to skip questions if you feel they aren’t applicable to you.
Vaguely interested in Effective Altruism? Please Take the Official 2022 EA Survey
Does the chance evolution got really lucky cancel out the chance that evolution got really unlucky? So maybe this doesn’t change the mean but does increase the variance?

As for how much to increase the variance, maybe an additional +/-1 OOM tacked on to the existing evolution anchor?
I’m kinda thinking there’s like a 10% chance you’d have to increase it by 10x and a 10% chance you’d have to decrease it by 10x. But maybe I’m not thinking about this right?
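To make the “same mean, more variance” idea concrete, here’s a minimal sketch with made-up numbers (the 1e41 FLOP median and 3-OOM spread below are placeholders I picked for illustration, not the actual evolution anchor values). If the anchor is a lognormal over FLOP, tacking on an independent +/-1 OOM term widens the spread in log space while leaving the median untouched:

```python
import numpy as np

# Hedged sketch: treat the evolution anchor as a lognormal over FLOP.
# Both constants below are assumptions for illustration only.
MEDIAN_OOM = 41          # assumed median of 1e41 FLOP
BASE_SIGMA_OOM = 3.0     # assumed existing spread, in orders of magnitude

# Adding an independent +/-1 OOM uncertainty term: variances of
# independent normal terms add in log space, so the median (mean of
# the log) is unchanged while the spread widens.
EXTRA_SIGMA_OOM = 1.0
new_sigma = np.sqrt(BASE_SIGMA_OOM**2 + EXTRA_SIGMA_OOM**2)

rng = np.random.default_rng(0)
samples_oom = rng.normal(MEDIAN_OOM, new_sigma, size=100_000)

print(f"median: 1e{np.median(samples_oom):.1f} FLOP (unchanged)")
print(f"sigma:  {BASE_SIGMA_OOM} -> {new_sigma:.2f} OOMs")
```

Note that the extra uncertainty only moves the sigma from 3.0 to about 3.16 OOMs here, which is part of why I’m unsure how much this consideration matters in practice.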
There are a lot of different ways you can talk about “efficiency” here. The main thing I am thinking about, with regard to the key question “how much FLOP would we expect transformative AI to require?”, is whether, when using a neural net anchor (not evolution), we should add a 1-3 OOM penalty to FLOP needs because 2022-AI systems are less sample efficient than humans (requiring more data to produce the same capabilities), with this penalty decreasing over time given expected algorithmic progress. The next question would be how much more efficient potential AI (e.g., 2100-AI, not 2022-AI) could be given the fundamentals of silicon vs. neurons, so we might know how much algorithmic progress could affect this.
I think it is pretty clear right now that 2022-AI is less sample efficient than humans. I think other forms of efficiency (e.g., power efficiency, efficiency of SGD vs. evolution) are less relevant to this.
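Here’s a minimal sketch of the adjustment I have in mind, with illustrative numbers only (the 1e35 FLOP base, 2-OOM penalty, and 10-year halving time are assumptions I made up, not figures from the bio anchors report):

```python
import math

# Hedged sketch of a neural-net-anchor adjustment.
# All constants are assumptions for illustration.
BASE_LOG10_FLOP = 35        # assumed anchor median of 1e35 FLOP
PENALTY_OOM_2022 = 2.0      # assumed penalty: midpoint of the 1-3 OOM range
HALVING_TIME_YEARS = 10.0   # assumed years for algorithmic progress
                            # to halve the sample-efficiency penalty

def tai_flop_estimate(year: int) -> float:
    """FLOP estimate with a sample-efficiency penalty that decays over time."""
    years_elapsed = year - 2022
    penalty = PENALTY_OOM_2022 * 0.5 ** (years_elapsed / HALVING_TIME_YEARS)
    return 10 ** (BASE_LOG10_FLOP + penalty)

for y in (2022, 2032, 2052):
    print(f"{y}: ~1e{math.log10(tai_flop_estimate(y)):.1f} FLOP")
```

The design choice worth flagging is the exponential decay: it encodes the expectation that algorithmic progress steadily erodes the penalty, so the estimate asymptotes back toward the unpenalized anchor for later arrival years.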
Yeah, ok, 80%. I also concede this is a very trivial thing, not some “gotcha, look at what stupid LMs can’t do, no AGI until 2400”.
This is admittedly pretty trivial, but I am 90% sure that if you prompt GPT-4 with “Q: What is today’s date?” it will not answer correctly. I think something like this would literally be the least impressive thing that GPT-4 won’t be able to do.
Thanks!
Is it ironic that the link to “All the posts I will never write” goes to a 404 page?
Does it get better at Metaculus forecasting?
This was very helpful of you to put together—thank you!