I mean, I’m not sure if an intervention is necessary—I do in fact engage with people who share my viewpoint, or at least understand it well; many of them are at CHAI. It just doesn’t happen on LW/AF.
Yeah, I figured as much, which is why I said I’d prefer having an online place for such discussions, so that I could listen in on them. :) Another advantage would be encouraging more discussion across organizations and from independent researchers, students, and others considering going into the field.
Maybe what I mean is more that there’s an emphasis on any particular idea needing to have a connection, via a sequence of logical steps, to a full solution to AI safety.
It’s worth noting that many MIRI researchers seem to have backed away from this (or clarified that they didn’t think this in the first place). This was pretty noticeable at the research retreat and also reflected in their recent writings. I want to note though how scary it is that almost nobody has a good idea how their current work logically connects to a full solution to AI safety.
Note that I’m not saying I disagree with all of these points; I’m trying to point at a cluster of beliefs / modes of thinking that I tend to see in people who have viewpoint X.
I’m curious what your strongest disagreements are, and what bugs you the most, as far as disincentivizing you from participating on LW/AF.
It’s worth noting that many MIRI researchers seem to have backed away from this (or clarified that they didn’t think this in the first place).
Agreed that this is reflected in their writings. I think this usually causes them to move towards trying to understand intelligence, as opposed to proposing partial solutions. (A counterexample: Non-Consequentialist Cooperation?) When others propose partial solutions, I’m not sure whether or not this belief is reflected in their upvotes or engagement through comments. (As in, I actually am uncertain—I can’t see who upvotes posts, and for the most part MIRI researchers don’t seem to engage very much.)
I want to note though how scary it is that almost nobody has a good idea how their current work logically connects to a full solution to AI safety.
Agreed.
I’m curious what your strongest disagreements are, and what bugs you the most, as far as disincentivizing you from participating on LW/AF.
I don’t think any of those features strongly disincentivizes me from participating on LW/AF; it’s more the lack of people close to my own viewpoint that does.
Maybe the focus on exact precision instead of robustness to errors is a disincentive, as well as the focus on expected utility maximization with simple utility functions. A priori I assign somewhat high probability that I will not find useful a critical comment on my work from anyone holding that perspective, but I’ll feel obligated to reply anyway.
Certainly those two features are the ones I most disagree with; the other three seem pretty reasonable in moderation.
I don’t think any of those features strongly disincentivizes me from participating on LW/AF; it’s more the lack of people close to my own viewpoint that does.
I see. Hopefully the LW/AF team is following this thread and thinking about what to do, but in the meantime I encourage you to participate anyway, as it seems good to get ideas from your viewpoint “out there” even if no one is currently engaging with them in a way that you find useful.
as well as the focus on expected utility maximization with simple utility functions
I don’t think anyone talks about simple utility functions? Maybe you mean explicit utility functions?
A priori I assign somewhat high probability that I will not find useful a critical comment on my work from anyone holding that perspective, but I’ll feel obligated to reply anyway.
If this feature request of mine were implemented, you’d be able to respond to such comments with a couple of clicks. In the meantime it seems best to just not feel obligated to reply.
I encourage you to participate anyway, as it seems good to get ideas from your viewpoint “out there” even if no one is currently engaging with them in a way that you find useful.
Yeah, that’s the plan.
I don’t think anyone talks about simple utility functions? Maybe you mean explicit utility functions?
Yes, sorry. I said that because they feel very similar to me: any utility function that can be explicitly specified must be reasonably simple. But I agree “explicit” is more accurate.
In the meantime it seems best to just not feel obligated to reply.
That seems right, but also hard to do in practice (for me).