Question themes I would like:
Should open-source LLMs be allowed or regulated out of existence?
What are your AI timelines?
AI questions we currently have:
P(GPT-5 Release)
What is the probability that OpenAI will release GPT-5 before the end of 2024?
Singularity
By what year do you think the Singularity will occur?
Tangential AI questions we currently have:
P(Global catastrophic risk)
What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity?
P(Simulation)
What is the probability that our universe is a simulation?
I think Singularity basically gets at your AI timelines question, though not in a lot of detail. You said themes; are you hoping for a subsection of multiple questions that approach each theme from different angles, or one good question for each?
I’d be tempted to reword the open-source LLM question to something like “How would you describe your opinion on open-source LLMs? [pro-open-source, lean open-source, neutral, lean regulated, pro-regulation]” or something along those lines. I also have an instinct to define LLM with context in the question (“How would you describe your opinion on open-source Artificial Intelligence such as LLMs” perhaps), but maybe that’s unnecessary here.
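If it helps to see the proposal concretely, here is a minimal sketch of the reworded question encoded as a plain data structure. The field names and the dict shape are illustrative assumptions, not tied to any particular survey tool; the question text and the five options are taken from the suggestion above.

```python
# Hypothetical encoding of the proposed open-source LLM question.
# Field names ("id", "text", "options") are illustrative, not from
# any specific survey platform.
open_source_llm_question = {
    "id": "open_source_llm_opinion",
    "text": ("How would you describe your opinion on "
             "open-source Artificial Intelligence such as LLMs?"),
    "options": [
        "pro-open-source",
        "lean open-source",
        "neutral",
        "lean regulated",
        "pro-regulation",
    ],
}

# The five options form a symmetric Likert-style scale centered on "neutral".
assert len(open_source_llm_question["options"]) == 5
assert open_source_llm_question["options"][2] == "neutral"
```

One nice property of this framing: because the scale is symmetric around a neutral midpoint, responses can be averaged or compared across years without the "allowed vs. regulated out of existence" binary forcing fence-sitters to pick a side.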
Thinking in terms of “the Singularity” might not be the most effective way to frame the timelines question. Something like “When will AI be able to do all tasks that expert human knowledge workers currently do?” seems better at getting people to give a concrete timeline.
Do you have a stance on “all the tasks that expert human knowledge workers currently do” vs. “all the intellectual tasks that humans currently do”? I ask because “expert human knowledge workers” is an uncommon phrase. As with many uncommon phrases, we have more latitude for a LessWrong census than for the general population, but it’s not LW-specific jargon either.