Lesswrong UK planning thread
A few of us got together in the pub after the Friendly AI meet and agreed we should have a meetup for those of us familiar with LessWrong, Bostrom, etc. This is a post for discussing when and where.
Venue: London still seems to be the nexus, though I might be convinced to go to Oxford. A Starbucks-type place is okay, although it would be nice to have a whiteboard or some other way to present.
Date/Time: Weekends are fine with me, and I suspect for most people. Julian suggested meeting after the next UKTA meeting. Will that be in April, at the Humanity+ event? It would depend on whether we are still mentally fresh afterwards, and whether any of our group are attending the dinner that follows.
Activities: I think it would be a good idea to have some structure or set topics, so that we don’t fall back too much into the “what do you do” type of discussion. Maybe mini-presentations.
My two current interests:
1) Evidence for intelligence explosion: I don’t want to rehash what we already know, but I would like to try to figure out what experiments we could (safely) do, or proofs we could construct, that would increase or decrease our belief that it will occur. This is more of a brainstorming session.
2) The nature of the human brain: Specifically, it doesn’t appear to have a goal (in the decision-theory sense) built in, although it can become a goal optimizer to a greater or lesser extent. How might it do this? Since we aren’t neuroscientists, a more fruitful question might be what the skeleton of a computer system that can do this would look like, even if we can’t fill in all the interesting details (there’s a toy sketch of what I mean below). I’d discuss this with regard to akrasia, neural enhancement, volition extraction, and non-exploding AI scenarios. I can probably pontificate on this for a while, if I prepare myself.
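To make 2) a bit more concrete, here is a toy sketch of the kind of skeleton I have in mind: a system with no built-in goal that can nevertheless have one bolted on and pursue it imperfectly. This is purely my own illustration, with made-up names (GoallessAgent, adopt_goal), not a proposal or anything from the literature:

```python
# Toy sketch only: an agent with no decision-theoretic goal built in,
# which can later adopt a goal and optimize it to a greater or lesser extent.
import random


class GoallessAgent:
    """Starts with habits/reflexes only; a goal can be installed afterwards."""

    def __init__(self):
        self.goal = None    # no built-in goal
        self.habits = {}    # state -> habitual action

    def adopt_goal(self, utility_fn):
        """Becoming a goal optimizer: a utility function is bolted on later."""
        self.goal = utility_fn

    def act(self, state, options):
        if self.goal is not None and random.random() < 0.7:
            # Imperfect optimization: the goal is only consulted some of the
            # time -- one crude way to picture akrasia.
            return max(options, key=lambda a: self.goal(state, a))
        # Otherwise fall back on habit, or pick arbitrarily.
        return self.habits.get(state, random.choice(options))


if __name__ == "__main__":
    agent = GoallessAgent()
    print(agent.act("hungry", ["eat", "browse the web"]))  # habit/arbitrary
    agent.adopt_goal(lambda s, a: 1.0 if a == "eat" else 0.0)
    print(agent.act("hungry", ["eat", "browse the web"]))  # mostly goal-driven now
```

The hand-wavy “only sometimes consults the goal” part is exactly where akrasia, volition extraction and the other interesting details would live, and that’s what I’d like to discuss.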
I think Ciphergoth wanted to talk about consequentialist ethics.
Shout in the comments if you have a topic you’d like to discuss, or would rather not discuss.
Perhaps we should also look at the “multiplicity of AIs” bias, where people seem to assume by default that there will be multiple AIs even when talking about superintelligent singularity scenarios (many questions at the meeting had this property). I suspect this could be countered somewhat by reading A Fire Upon the Deep.