Each organiser on the team is allowed to accept projects independently. So far, Remmelt hasn’t accepted any projects that I would have rejected, so I’m not sure how his unorthodox views could have affected project quality.
Do you think people are avoiding AISC because of Remmelt? I’d be surprised if that was a significant effect.
After we accept projects, each project is pretty much in the hands of its research lead, with very light involvement from the organisers.
I’d be interested to learn more about how you think, or have heard, that the program has gotten worse.
Absolutely. I have been part of at least 3-4 conversations where I’ve seen people considering AISC, or heard others discuss people considering AISC, who had substantial hesitations related to Remmelt. I certainly would recommend that someone not participate because of Remmelt, and my sense is this isn’t a particularly rare opinion.
I currently would be surprised if I could find an informed person I have an existing relationship with for whom it wouldn’t be among their top 3 considerations on whether to attend or participate.
I have also heard from many people near AI Safety Camp that they have judged it to have gotten worse as a result of this.
Hm. This does give me serious pause. I think I’m pretty close to the camps but I haven’t heard this. If you’d be willing to share some of what’s been relayed to you here or privately, that might change my decision. But what I’ve seen of the recent camps still just seemed very obviously good to me?
I don’t think Remmelt has gone more crank on the margin since I interacted with him in AISC6. I thought AISC6 was fantastic, and everything I’ve heard about the camps since then has still seemed pretty great.
I am somewhat worried about how it’ll do without Linda. But I think there’s a good shot Robert can fill the gap. I know he has good technical knowledge, and from what I hear, integrating him as an organiser has worked well. My edition didn’t have Linda as organiser either.
I think I’d rather support this again than hope something even better will come along to replace it when it dies. Value is fragile.
I vouch for Robert as a good replacement for me.
Hopefully there is enough funding to onboard a third person for the next camp. Running AISC at the current scale is a three-person job. But I need to take a break from organising.
Lucius, the text exchanges I remember us having during AISC6 were about the question of whether ‘ASI’ could comprehensively control for the evolutionary pressures it would be subjected to. You and I were commenting on a GDoc with Forrest. I took your counterarguments against his arguments seriously – continuing to investigate them after you had bowed out.
You held the notion that ASI would be so powerful that it could control for any of its downstream effects that evolution could select for. This is a common opinion in the community. But I’ve looked into this opinion and people’s justifications for it enough to consider it unsound.[1]
I respect you as a thinker, and generally think you’re a nice person. It’s disappointing that you wrote me off as a crank in one sentence. I expect more care, including that you also question your own assumptions.
A shortcut way of thinking about this:
The more you increase ‘intelligence’ (as a capacity for transforming patterns in data), the more you have to increase the number of underlying information-processing components. But the degrees of freedom those components have in their interactions with each other and with their larger surroundings grow faster than the component count does.
This results in a strict inequality between:
1. the space of possible downstream effects that evolution can select across; and
2. the subspace of effects that the ‘ASI’ (or any control system connected with/in ASI) could detect, model, simulate, evaluate, and correct for.
The hashiness model is a toy model for demonstrating this inequality (incl. how the mismatch between 1. and 2. grows over time). Anders Sandberg and two mathematicians are working on formalising that model at AISC.
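To make the shape of that claim concrete, here is a minimal toy sketch in Python – my own illustration, not the hashiness model itself – resting on two loudly assumed scaling laws: the effects evolution can select across grow combinatorially with the number of components, while the controller’s modelling capacity grows only linearly. Under those assumptions, the fraction of selectable effects the controller can cover shrinks toward zero as the system scales:

```python
# Toy sketch (illustrative assumptions only, not the hashiness model):
# compare how the space of possible interaction effects among n components
# grows against a control capacity that scales only linearly in n.
from math import comb

def selectable_effects(n: int, max_order: int = 3) -> int:
    """Assumed: evolution can select on the joint behaviour of any subset
    of up to `max_order` components (a combinatorial space)."""
    return sum(comb(n, k) for k in range(1, max_order + 1))

def control_capacity(n: int, channels_per_component: int = 100) -> int:
    """Assumed: the controller can detect/model/correct a number of effect
    channels that grows only linearly with the number of components."""
    return channels_per_component * n

for n in (10, 100, 1_000, 10_000):
    coverage = control_capacity(n) / selectable_effects(n)
    print(f"n={n:>6}  coverage ~ {coverage:.2e}")
```

On these assumptions, the printed coverage falls from above 1 at n=10 to roughly 6e-6 at n=10,000 – i.e. the mismatch between 1. and 2. grows as the system scales. Whether the real scaling laws look anything like this is exactly what the formalisation work would need to establish.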
There’s more that could be discussed about why and how such fully autonomous machinery would be subjected to evolutionary pressures. But that’s a longer discussion, and the researchers I talked with often lacked the bandwidth for it.
I think it is very fair that you are disappointed. But I don’t think I can take it back. I probably wouldn’t have introduced the word crank myself here. But I do think there’s a sense in which Oliver’s use of it was accurate, if maybe needlessly harsh. It does vaguely point at the right sort of cluster in thing-space.
It is true that we discussed this and you engaged with a lot of energy and in good faith. But I did not find Forrest’s arguments convincing at all, and I couldn’t seem to communicate to you why. Eventually, I felt like I wasn’t getting through to you, Quintin Pope also wasn’t getting through to you, and continuing started to feel draining and pointless to me.
I emerged from this still liking you and respecting you, but thinking that you are wrong about this particular technical matter in a way that does seem like the kind of thing people imagine when they hear ‘crank’.
I kinda appreciate you being honest here.
Your response is also emblematic of what I find concerning here, which is that you are not offering a clear argument for why something does not make sense to you before writing ‘crank’.
Writing that you do not find something convincing is not an argument – it’s a statement of conviction, which could as easily reflect a poor understanding of the argument as a failure to question one’s own premises. Because it’s not transparent about one’s thinking, yet still comes across as though there must be legitimate thinking underneath, it can be used as a deflection tactic (I don’t think you are doing that, but others who did not engage much did end the discussion on that note). Frankly, I can’t convince someone if they’re not open to the possibility of being convinced.
I explained above why I consider your opinion flawed – the opinion that ASI would be so powerful that it could cancel all of evolution across its constituent components (or at least any effects that could build up to lethality through some pathway).
I similarly found Quintin’s counterarguments (e.g. hinging on modelling AGI as trackable internal agents) to be premised on assumptions that, considered comprehensively, looked very shaky.
I relate to why discussing this feels draining for you. But that does not justify writing ‘crank’ when you have not had the time to examine the actual argumentation (note: you introduced the word ‘crank’ in this thread; Oliver wrote something else).
Overall, this is bad for community epistemics. It’s better if you can write what you thought was unsound about my thinking, and I can write what I found unsound about yours. Barring that exchange, some humility that you might be missing stuff is well-placed.
Besides this point, the respect is mutual.
Is this because they think it would hurt their reputation, or because they think Remmelt would make the program a bad experience for them?