Interesting paradox.
As others have commented, I see multiple flaws:
We believe we know that there is a reality that exists. I doubt we can truly conceive of reality; at best we have a vague understanding of it. Moreover, we have no experience of “not existing”, so it’s hard to argue that we have a strong grasp on the claim that there is a reality that exists.
The biggest issue is here, imho (this is a very common misunderstanding): math is just a tool which we use to describe our universe; it is not our universe (unless you take some approach like the mathematical universe hypothesis). The fact that it works so well is selection bias: we keep the math that describes our universe well and discard the rest (see e.g. the negative solutions to the equations of motion in Newtonian mechanics). Math by itself is infinite; we use only a small subset of it to describe our universe. We also take inspiration from our universe to build math in the first place.
Basic Post Scarcity Q&A
Not conclusive, but still worth doing in my view, given the relative ease. Create the spreadsheet, make it public, and let’s see how it goes.
I would add the actual year in which you think it will happen.
Yea, what I meant is that the slides of the Full Stack Deep Learning course materials provide a decent outline of all the significant architectures worth learning.
I would personally not go down to that level of abstraction (e.g. implementing NNs in a new language) unless you really feel your understanding is shaky. Try building an actual side project, e.g. an object classifier for cars, and problems will arise naturally.
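Roughly the kind of side project I have in mind, as a minimal sketch: fine-tuning a pretrained backbone into a car classifier with PyTorch. The dataset path and two-class folder layout are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for a pretrained backbone.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: data/train/car/*.jpg, data/train/other/*.jpg
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                    # freeze the backbone...
model.fc = nn.Linear(model.fc.in_features, 2)  # ...train only a new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point is not the model itself: getting the data pipeline, labels and evaluation right is where the instructive problems show up.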
I fear that measuring modifications is like measuring a moving target. I suspect it will be very hard to account for all the modifications, and many AIs may blend into each other under large modifications. Also, it’s not clear how hard some modifications will be without actually carrying them out.
Why not fix a target instead, and measure the inputs needed (e.g. FLOPs, memory, time) to achieve it?
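A toy sketch of what I mean (the `train_step`/`evaluate` interface and the accuracy threshold are placeholders):

```python
import time
import tracemalloc

TARGET_ACCURACY = 0.95  # the fixed goal

def measure_cost(train_step, evaluate):
    """Train until the fixed target is reached; report the inputs consumed.

    Peak Python-heap memory is only a crude proxy; FLOPs would need a
    profiler or an analytic per-step count.
    """
    tracemalloc.start()
    start = time.perf_counter()
    steps = 0
    while evaluate() < TARGET_ACCURACY:
        train_step()
        steps += 1
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"steps": steps, "seconds": elapsed, "peak_bytes": peak}
```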
I’m working on this topic too, I will PM you.
Also feel free to reach out if the topic is of interest.
Other useful references:
- F. Chollet, On the Measure of Intelligence, 2019. https://arxiv.org/abs/1911.01547
- S. Legg and M. Hutter, A collection of definitions of intelligence, Frontiers in Artificial Intelligence and Applications, 157 (2007).
- S. Legg and M. Hutter, Universal intelligence: A definition of machine intelligence, Minds and Machines, 17 (2007), pp. 391-444. https://arxiv.org/pdf/0712.3329.pdf
- P. Wang, On Defining Artificial Intelligence, Journal of Artificial General Intelligence, 10 (2019), pp. 1-37.
- J. Hernández-Orallo, The Measure of All Minds: Evaluating Natural and Artificial Intelligence, Cambridge University Press, 2017.
This is the most likely scenario, with AGI getting heavily regulated, similarly to nuclear technology. It doesn’t get much publicity because it’s “boring”.
Nice link, thanks for sharing.
The 1 million prize problem should be “clearly define the AI alignment problem”. I’m not even joking: actually understanding the problem, and establishing that there is a problem in the first place, may give us hints towards the solution.
In research there are a lot of publications, but few stand the test of time. I would suggest you look at the architectures which brought significant changes and ideas; those are still very relevant, as they:
- often form the building blocks of current solutions
- help you build intuition on how architectures can be improved
- are often assumed knowledge in the field
- are often still useful, especially when resources are limited
You should not need to look at more than 1-2 architectures per year in each field (computer vision, NLP, RL). Only then would I focus on SOTA.
You may want to check https://fullstackdeeplearning.com/spring2021/: it should have enough historical material to get you covered, while also moving quickly to modern topics, and you can expand from there.
Thanks for the link, I will check it out.
ARC is a nice attempt. I also participated in the original challenge on Kaggle. The issue is that the test can be gamed (as everyone on Kaggle did) by brute-forcing over solution strategies.
An open-ended or interactive version of ARC may solve this issue.
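For concreteness, the kind of brute force that dominated the Kaggle leaderboard looked roughly like this: enumerate compositions of hand-written grid primitives until one explains all the training pairs. The primitives here are just illustrative.

```python
from itertools import product

import numpy as np

PRIMITIVES = {
    "identity": lambda g: g,
    "flip_h": lambda g: np.fliplr(g),
    "flip_v": lambda g: np.flipud(g),
    "rot90": lambda g: np.rot90(g),
    "transpose": lambda g: g.T,
}

def solve_by_search(train_pairs, max_depth=3):
    """Return the first composition of primitives consistent with all pairs."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(grid, names=names):
                for name in names:
                    grid = PRIMITIVES[name](grid)
                return grid
            if all(np.array_equal(program(np.array(x)), np.array(y))
                   for x, y in train_pairs):
                return names  # the "solution strategy" that happens to fit
    return None
```

Nothing here understands the task; it just searches until something fits, which is exactly why a static test like this can be gamed.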
I’m working along these lines to create an easy-to-understand numeric evaluation scale for AGIs. The dream would be something like: “Gato is AGI level 3.5, while the average human is 8.7.” I believe the scale should factor in that no single static test can be a reliable test of intelligence (any test can be gamed and overfitted).
A good reference on the subject is “The Measure of All Minds” by Hernández-Orallo.
Happy to share a draft, send me a DM if interested.
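Not the draft itself, just a toy illustration of the aggregation idea (the agent interface, the task generators and the 0-10 normalization are all made up):

```python
import random
import statistics

def agi_level(agent, task_generators, trials_per_task=10):
    """Score an agent on freshly sampled task instances, on a 0-10 scale.

    Assumes agent.solve() returns a score in [0, 1]. Fresh instances per
    run are the point: a fixed test set could be gamed and overfitted.
    """
    per_task = []
    for generate in task_generators:
        scores = [agent.solve(generate(seed=random.random()))
                  for _ in range(trials_per_task)]
        per_task.append(statistics.mean(scores))
    # Penalize narrowness: average skill alone rewards specialists, so
    # blend the mean with the worst-task score.
    return 10 * 0.5 * (statistics.mean(per_task) + min(per_task))
```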
When you say “switching”, it reminds me of the “big switch” approach of the General Problem Solver: https://en.wikipedia.org/wiki/General_Problem_Solver
Regarding how they do it, I believe the relevant passage to be:
Because distinct tasks within a domain can share identical embodiments, observation formats and action specifications, the model sometimes needs further context to disambiguate tasks. Rather than providing e.g. one-hot task identifiers, we instead take inspiration from (Brown et al., 2020; Sanh et al., 2022; Wei et al., 2021) and use prompt conditioning.
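A minimal sketch of what that amounts to: sample a (near-)expert demonstration for the target task and prepend its tokens to the current episode, so the prefix itself identifies the task. The buffer structure and helper names are my assumptions, not Gato’s actual code.

```python
import random
import torch

def build_model_input(demos_by_task: dict, task_id: str,
                      episode_tokens: torch.Tensor) -> torch.Tensor:
    """Prefix the live episode with a demonstration from the same task."""
    demo_tokens = random.choice(demos_by_task[task_id])  # tokenized demo
    return torch.cat([demo_tokens, episode_tokens], dim=0)
```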
I guess it should be possible to locate the activation paths for different tasks, as the tasks are pretty well separated. Something along the lines of https://github.com/jalammar/ecco
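ecco offers richer tooling, but the core idea can be sketched with plain PyTorch forward hooks (the model and inputs are placeholders):

```python
import torch

def record_activations(model, inputs):
    """Run the model once and capture each module's output activations."""
    activations, hooks = {}, []
    for name, module in model.named_modules():
        def hook(mod, inp, out, name=name):
            if isinstance(out, torch.Tensor):
                activations[name] = out.detach()
        hooks.append(module.register_forward_hook(hook))
    with torch.no_grad():
        model(inputs)
    for h in hooks:
        h.remove()
    return activations

# Comparing which units activate for, say, Atari frames vs. captioning
# tokens would show how separated the task-specific paths really are.
```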
Fair analysis, I agree with the conclusions. The main contribution seems to be a demonstration that transformers can handle many tasks at the same time.
Not sure if you sorted the tests in order of relevance, but I also consider the “held-out” test to be the most revealing. Besides finetuning, it would be interesting to test the zero-shot capabilities.
Understanding Gato’s Supervised Reinforcement Learning
A single network is solving 600 different tasks spanning different areas. 100+ of the tasks are solved at 100% human performance. Let that sink in.
While not a breakthrough in arbitrarily scalable generality, the fact that so many tasks can be fitted into one architecture is surprising and novel. For many real-life applications, being good at 100-1000 tasks makes an AI general enough to be deployed as an error-tolerant robot, say in a warehouse.
The main point imho is that this architecture may be enough, once scaled (10-1000x parameters) over the next few years, to yield a useful proto-AGI product.
Pretty disappointing and unexpected to hear this in 2022, after all we learned from the pandemic.
What’s stopping the companies from hiring a new researcher? People are queueing for tech jobs.
Yes, the code is open source: https://gitlab.com/postscarcity/map