Thanks for sharing, just played my first round and it was a lot of fun!
How about making bets here? I expect many people would be willing to bet against an AI winter. It would additionally give you some social credit if you win. I’d be interested in seeing some concrete proposals.
Really enjoyed reading this. The section on “AI pollution” leading to a loss of control over the development of prepotent AI really interested me.
Avoiding [the risk of uncoordinated development of Misaligned Prepotent AI] calls for well-deliberated and respected assessments of the capabilities of publicly available algorithms and hardware, accounting for whether those capabilities have the potential to be combined to yield MPAI technology. Otherwise, the world could essentially accrue “AI-pollution” that might eventually precipitate or constitute MPAI.
I wonder how realistic it is to predict this, e.g. would you basically need the knowledge to build it in order to have a good sense of that potential?
I also thought the idea of AI orgs dropping all their work once the potential for this concentrates in another org is relevant here. Are there concrete plans for when that happens?
Are there discussions about when AI orgs might want to stop publishing things? I only know of MIRI, but would they advise others like OpenAI or DeepMind to follow their example?
Thanks a lot for the elaboration!
in particular I still can’t really put myself in the head of Friston, Clark, etc. so as to write a version of this that’s in their language and speaks to their perspective.
Just a sidenote, one of my profs is part of the Bayesian CogSci crowd and was fairly frustrated with and critical of both Friston and Clark. We read one of Friston’s papers in our journal club and came away thinking that Friston is reinventing a lot of wheels and using odd terms for known concepts.
For me, this paper by Sam Gershman helped a lot in understanding Friston’s ideas, and this one by Laurence Aitchison and Máté Lengyel was useful, too.
I would say that the generative models are a consortium of thousands of glued-together mini-generative-models
Cool, I like that idea. I previously thought about the models as fairly separate and bulky entities; this sounds much more plausible.
That’s really interesting, I haven’t thought about this much, but it seems very plausible and big if true (though I am likely biased as a Cognitive Science student). Do you think this might be turned into a concrete question to forecast for the Metaculus crowd, e.g. “Reverse-engineering neocortex algorithms will be the first way we get AGI”? The resolution might get messy if an org like DeepMind, with its fair share of computational neuroscientists, turns out to be the one that gets there first, right?
As a (maybe misguided) side comment, model sketches like yours make me intuitively update toward shorter AI timelines, because they give me a sense of a maturing field of computational cognitive science. I would be really interested in what others think about that.
That’s super fascinating. I’ve dabbled a bit in all of those parts of your picture, and seeing them put together like this feels really illuminating. I wish some predictive coding researcher would be so kind as to give it a look; maybe somebody here knows someone?
During reading, I was a bit confused about the set of generative models or hypotheses. Do you have an example of what this could concretely look like? For example, when somebody tosses me an apple, is there a separate generative model for each combination of velocity and weight, or one generative model with an uncertainty distribution over those quantities? In the latter case, one would expect another updating process acting “within” each generative model, right?
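Purely to make my question concrete, here is a toy sketch of the two framings I have in mind (my own illustration with made-up numbers and Gaussian assumptions, not anything from your post):

```python
import numpy as np

rng = np.random.default_rng(0)
true_velocity = 4.0                             # m/s, unknown to the observer
noise_sd = 0.5
obs = true_velocity + rng.normal(0, noise_sd)   # one noisy velocity observation

# (a) A set of discrete generative models, one per candidate velocity;
#     updating = Bayesian model comparison across the set.
candidates = np.linspace(1.0, 8.0, 50)
prior = np.full(len(candidates), 1.0 / len(candidates))
likelihood = np.exp(-0.5 * ((obs - candidates) / noise_sd) ** 2)
posterior_over_models = prior * likelihood
posterior_over_models /= posterior_over_models.sum()

# (b) One generative model with an uncertainty distribution over its velocity
#     parameter; updating = inference "within" the model
#     (here a conjugate Gaussian update of mean and variance).
prior_mean, prior_var = 4.5, 4.0
post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_sd**2)
post_mean = post_var * (prior_mean / prior_var + obs / noise_sd**2)

print("MAP over discrete models:", candidates[np.argmax(posterior_over_models)])
print("Posterior within one model:", post_mean, post_var)
```

In the limit of many candidate models, (a) and (b) of course blur into each other, which is partly why I’m unsure where you would draw the line.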
That was really interesting! :)
Your idea of subcortical spider detection reminded me of this post by Kaj Sotala, discussing the argument that it’s more about “peripheral” attentional mechanisms having evolved to attend to spiders etc., which consequently makes them easier to learn as dangerous.
These results suggest that fear of snakes and other fear-relevant stimuli is learned via the same central mechanisms as fear of arbitrary stimuli. Nevertheless, if that is correct, why do phobias so often relate to objects encountered by our ancestors, such as snakes and spiders, rather than to objects such as guns and electrical sockets that are dangerous now [10]? Because peripheral, attentional mechanisms are tuned to fear-relevant stimuli, all threat stimuli attract attention, but fear-relevant stimuli do so without learning (e.g., [56]). This answer is supported by evidence from conditioning experiments demonstrating enhanced attention to fear-relevant stimuli regardless of learning (Box 2), studies of visual search [57–59], and developmental psychology [60,61]. For example, infants aged 6–9 months show a greater drop in heart rate – indicative of heightened attention rather than fear – when they watch snakes than when they watch elephants [62].
From a Nature news article last week:
One study [7] of 143 people with COVID-19 discharged from a hospital in Rome found that 53% had reported fatigue and 43% had shortness of breath an average of 2 months after their symptoms started. A study of patients in China showed that 25% had abnormal lung function after 3 months, and that 16% were still fatigued [8].
I haven’t read the Stanovich papers you refer to, but in his book “Rationality and the Reflective Mind” he proposes a separation of Type 2 processing into 1) serial associative cognition with a focal bias and 2) fully decoupled simulations of alternative hypotheses. (Just noting it because I found it useful for my own thinking.)
In fact, an exhaustive simulation of alternative worlds would guarantee correct responding in the [Wason selection] task. Instead [...] subjects accept the rule as given, assume it is true, and simply describe how they would go about verifying it. They reason from a single focal model— systematically generating associations from this focal model but never constructing another model of the situation. This is what I would term serial associative cognition with a focal bias. It is how I would begin to operationalize the satisficing bias in Type 2 processing posited in several papers by Evans (2006b; Evans, Over, & Handley, 2003).
I always understood bias to mean systematic deviations from the correct response (as in the bias-variance decomposition [1], e.g. a bias towards overconfidence, or the bias of being anchored to arbitrary numbers). I read your and Evans’ use of the term more as bias meaning incorrect in some areas. As Type 2 processing seems to be very flexible and unconstrained, I thought that it might not necessarily be biased, but simply sufficiently unconstrained and high-variance to cause plenty of errors in many domains (see the decomposition below).
[1] https://miro.medium.com/max/2567/1*CgIdnlB6JK8orFKPXpc7Rg.png
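For reference, the standard squared-error decomposition behind the figure in [1], with $f$ the true function, $\hat f$ the estimator, and $\sigma^2$ the irreducible noise:

$$
\mathbb{E}\big[(y - \hat f(x))^2\big] \;=\; \underbrace{\big(f(x) - \mathbb{E}[\hat f(x)]\big)^2}_{\text{bias}^2} \;+\; \underbrace{\mathbb{E}\big[(\hat f(x) - \mathbb{E}[\hat f(x)])^2\big]}_{\text{variance}} \;+\; \underbrace{\sigma^2}_{\text{noise}}
$$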
PS: Thanks for your writing, I really enjoy it a lot.
LessWrong Darmstadt Meetup
LessWrong Meetup Darmstadt
Welcome to LessWrong Darmstadt
For the past month I have been doing some sort of productivity gamification: I rate my mornings on a 1-5 scale with regard to
1) time spent doing something useful and
2) degree of distraction.
Plus, if I get out of bed immediately after waking up, I get a bonus point.
For every point that I don’t achieve on these scales, I pay 50 cents to a charity.
A random morning of mine:
1) time was well spent, I started working early and kept at it until lunch → 5⁄5
2) I had some problems focussing while reading → 3⁄5
+1 because I got out of bed immediately
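In code, one way that morning’s tally could work (a minimal sketch; I’m assuming here that the get-up bonus simply offsets one missed point):

```python
# Minimal sketch of the scoring rule described above.
# Assumption: the get-up bonus simply offsets one missed point.
CENTS_PER_MISSED_POINT = 50
MAX_SCORE = 5

def morning_payment(useful_time, distraction, got_up_immediately):
    missed = (MAX_SCORE - useful_time) + (MAX_SCORE - distraction)
    if got_up_immediately:
        missed = max(missed - 1, 0)
    return missed * CENTS_PER_MISSED_POINT

print(morning_payment(5, 3, True), "cents to charity")  # -> 50 cents to charity
```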
The major noticeable impact it has so far is that I get out of bed in the morning. Plus, it gives me a chance to review, e.g. one hypothesis I made on the basis of the example: coffee decreases my ability to focus.
Thanks, I find your neocortex-like AGI approach really illuminating.
Random thought:
I was wondering whether this is necessarily the best “everything is unpalatable” policy. I could imagine that the best fallback option could also be something like “preserve your options while gathering information, strategizing and communicating with relevant other agents”, assuming that this is not unpalatable, too. I guess we may not yet trust the AGI to do this, since option preservation might cause much more harm than doing nothing. But I still wonder if there are cases in which every option is unpalatable but doing nothing is clearly worse.