I thought the criticism of that specific quote was that the “higher intelligence” group, while taking more time, did solve the hard questions correctly, as opposed to not solving them at all.
I wonder how much the survey paying a few RMB and asking for a WeChat ID influenced the results. Great work running and sharing this poll, though!
While this survey’s responses are anonymous, respondents did submit their WeChat IDs so that they could be remunerated for participation.
Top 10 donations in 2023, since the HTML page offers no sorting and lists grants by date:
$2,800,000 Cooperative AI Foundation, General support
$1,846,000 Alignment Research Center, General support for ARC Evals Team
$1,733,000 Center for Applied Rationality, General support for Lightcone Infrastructure
$1,327,000 Center on Long-Term Risk, General support
$1,241,000 Manifold for Charity, General support for Manifold Markets
$1,159,000 Alliance to Feed the Earth in Disasters, General support
$1,000,000 Carnegie Mellon University, Foundations of Cooperative AI Lab
$1,000,000 Massachusetts Institute of Technology, Gift to the Tegmark research group at MIT for general support
$1,000,000 Meridian Prime, General support
$909,000 Center for Artificial Intelligence Safety, General support
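For anyone who wants to reproduce the ranking, here is a minimal sketch of how I sorted the rows by amount. It assumes the page text pastes as one line per grant with a leading dollar figure; the `rows` sample below is just an illustrative subset, not the full table.

```python
import re

# Illustrative subset of rows copied from the grants page (amount, recipient, purpose).
rows = [
    "$1,000,000 Carnegie Mellon University, Foundations of Cooperative AI Lab",
    "$2,800,000 Cooperative AI Foundation, General support",
    "$1,846,000 Alignment Research Center, General support for ARC Evals Team",
]

def amount(row: str) -> int:
    """Extract the leading dollar amount as an integer."""
    match = re.match(r"\$([\d,]+)", row)
    return int(match.group(1).replace(",", "")) if match else 0

# Sort descending by dollar amount, since the page itself only sorts by date.
for row in sorted(rows, key=amount, reverse=True):
    print(row)
```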
The podcast is currently available here: https://munkdebates.com/podcast/the-rise-of-thinking-machines
It seems one could convince this hypothetical emperor to invest in industrialization by offering to build things other than a steam engine, or by outlining what a steam engine leads to: telegraphs or semaphore towers to send news of invasions or changes in distant towns and provinces, better manufacturing capability for tools and weapons, food storage and transport mechanisms, and so on.
I looked over a bit of David’s public-facing work, e.g. https://www.youtube.com/watch?v=I7hJggz41oU
I think there is a fundamental difference between robust, security-minded alignment and tweaking smaller language models to produce output that “looks” correct. It seems David is very optimistic about how easy these problems are to solve.
I tracked down the exact quote where Prof. Marcus talks about timelines with regard to jobs. He mentions 20-100 years (right before the timestamp) and then goes on to say: https://youtu.be/TO0J2Yw7usM?t=2438
“In the long run, so-called AGI really will replace a large fraction of human jobs. We’re not that close to AGI, despite all the media hype and so forth … in 20 years people will laugh at this … but when we get to AGI, let’s say it is 50 years, that is really going to have profound effects on labor...”
Christina Montgomery is explicitly asked “Should we have one” [referring to a new agency] by Senator Lindsey Graham and says “I don’t think so” at https://youtu.be/TO0J2Yw7usM?t=4920
A couple more takeaways I jotted down:
PaLM 2 closely followed Chinchilla-optimal scaling (a quick sketch of that arithmetic follows these notes). There is no explicit mention of the number of parameters, and the training data details are withheld. They claim performance generally equivalent to GPT-4. Chain-of-thought reasoning is called out explicitly quite a bit.
There are claims of longer context length, but no specific size is given in the technical report. From the API page: “75+ tokens per second and a context window of 8,000 tokens.”
“The largest model in the PaLM 2 family, PaLM 2-L, is significantly smaller than the largest PaLM model but uses more training compute.”
“The pre-training corpus is significantly larger than the corpus used to train PaLM [which was 780B tokens].”
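As promised above, a minimal sketch of the Chinchilla-optimal arithmetic, assuming the usual heuristics from Hoffmann et al. (2022): roughly 20 training tokens per parameter, and training compute of about 6 FLOPs per parameter per token. The 340B figure below is a purely hypothetical model size for illustration, since PaLM 2’s actual parameter count is withheld; 70B is Chinchilla’s own size.

```python
# Rough Chinchilla-optimal arithmetic (Hoffmann et al. 2022).

def chinchilla_optimal_tokens(params: float) -> float:
    """Approximate compute-optimal training tokens: ~20 tokens per parameter."""
    return 20 * params

def training_flops(params: float, tokens: float) -> float:
    """Standard estimate of training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# 70B is Chinchilla's published size; 340B is an illustrative guess, not PaLM 2's.
for n in (70e9, 340e9):
    d = chinchilla_optimal_tokens(n)
    print(f"{n/1e9:.0f}B params -> ~{d/1e12:.1f}T tokens, ~{training_flops(n, d):.2e} FLOPs")
```

For the 70B case this recovers Chinchilla’s reported ~1.4T training tokens, which is a useful sanity check on the heuristic.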
I was somewhat disturbed by the enthusiastic audience applause in response to dire, serious warnings. What techniques or approaches could anchor conversations like this and keep them more serious?
As a new user, is it acceptable to create a new post? I have read the discussions in this community in logged-out mode for quite some time, but never contributed.
I wanted to make a post titled “10 Questions and Prompts that only an AGI or ASI could answer”
A bit of feedback: the “We get a second chance at building AGI” outcome should not be listed as an outcome, or should perhaps be rephrased.