The way I tend to think of ‘simulators’ is as simulating a distribution over worlds (i.e., latent variables) that increasingly collapses as prompt information picks out specific processes with higher probability.
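As a toy illustration of that framing (this is my own sketch, not anything from the original post; the “worlds” and token probabilities below are made up), conditioning on more prompt tokens concentrates a posterior over latent processes:

```python
# Toy Bayesian sketch of the "distribution over worlds" framing.
# Each hypothetical "world" is just a unigram next-token distribution;
# observing more prompt tokens collapses the posterior onto the worlds
# that assign those tokens high probability.

worlds = {
    "pirate_fiction":  {"arr": 0.4, "ship": 0.3, "the": 0.3},
    "python_tutorial": {"def": 0.4, "return": 0.3, "the": 0.3},
    "news_article":    {"the": 0.5, "said": 0.3, "ship": 0.2},
}

def posterior(prompt_tokens, prior=None):
    """Posterior over worlds after conditioning on the prompt tokens."""
    prior = prior or {w: 1.0 / len(worlds) for w in worlds}
    unnorm = {}
    for w, dist in worlds.items():
        likelihood = 1.0
        for tok in prompt_tokens:
            likelihood *= dist.get(tok, 1e-6)  # tiny probability for unseen tokens
        unnorm[w] = prior[w] * likelihood
    z = sum(unnorm.values())
    return {w: p / z for w, p in unnorm.items()}

# A generic token leaves the posterior spread out; tokens specific to one
# process collapse it almost entirely onto that world.
print(posterior(["the"]))
print(posterior(["the", "def", "return"]))
```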
I agree this is the correct interpretation of the original post. It just doesn’t match typical usage of ‘world simulation’ imo. (I’m sorry my post is making such a narrow pedantic point.)
I probably agree that Simulators improved the thinking of people on LessWrong on average.
I don’t disagree that some people came away with the wrong impression (though they’ve been at most a small minority of the people I’ve talked to; you’ve plausibly spoken to more). But I think that might be owed more to generative models being intrinsically confusing to think about. Speaking of them purely as predictive models probably nets you points for technical accuracy, but I’d bet it would still lead a fair number of people to think about them the wrong way.