The location is confirmed :)
ACX Schelling Meetup
small typo
Location: (49.0142290, 8.4038544) https://maps.app.goo.gl/qrzC1mseUChj2cvFA?g_st=ic
Picnic in Schlossgarten
Karlsruhe, Germany – ACX Meetups Everywhere Spring 2023
ACX Schelling Meetup
Karlsruhe Rationality Meetup: Online Hangout
Karlsruhe Rationality Meetup: Inadequate Equilibria pt. 4/4
Karlsruhe Rationality Meetup: Inadequate Equilibria pt3
Karlsruhe Rationality Meetup: Inadequate Equilibria pt2
Karlsruhe Rationality Meetup: Nomic
Karlsruhe Rationality Meetup: The moments that matter
Hike with Freiburg Rationality
Karlsruhe Rationality Meetup: Longtermism
I’ve unfortunately been quite distracted, but better a late reply than no reply.
By capabilities I mean how well a system accomplishes different tasks. This is potentially high-dimensional (there can be many tasks that two systems are not equally good at). Capabilities can also be more or less general (optical character recognition is very narrow because it can only be used for one thing; generating / predicting text is quite general). Furthermore, systems without agency can have strong and general capabilities (a system might generate text or images without being agentic). This is quite different from the definition by Legg and Hutter, which is more specific to agents. However, since last week I have updated towards strongly and generally capable non-agentic systems being less likely to actually be built (especially before agentic systems). Consequently, the difference between my notion of capabilities and a more agent-related notion of intelligence is less important than I thought.
Thanks for your replies. I think our intuitions regarding intelligence and agency are quite different. I deliberately mostly stuck to the word ‘capabilities’, because in my intuition you can have systems with very strong and quite general capabilities that are not agentic.
One very interesting point is where you write: “Presumably the problem happens somewhere between “the smartest animal we know” and “our intelligence”, and once we are near that, recursive self-improvement will make the distinction moot”. Can you explain this position more? In my intuition, building and improving intelligent systems is far harder than that.
I hope to come back later to your answer regarding information about the real world.
Capability and Agency as Cornerstones of AI risk — My current model
I’m already there, wearing a red T-shirt
Careful: this event page is a duplicate of https://www.lesswrong.com/events/JWWaknqcT3zJHrdNu/acx-schelling-meetup-6