Most long-term users of Less Wrong understand the concept of optimization power and how a system can be called intelligent if it can restrict the future in significant ways.
Probably harder than you expect; try to define that (more) formally. There is only one actual future, and I know of no way to define optimization power if you only say that a system is “intelligent” without assuming enough about its goal. Defining when a system is powerful would be simpler: measure the extent to which it wreaks havoc. But it doesn’t follow that in wreaking havoc the system furthers its goal, unless the havoc was carefully designed to hit the exact target it wants to hit; and to get the havoc rolling, it only needs to be intelligent enough not to self-destruct quickly.
Perhaps reasoning about something like ‘causal measure’ would work, where you can just talk about ‘havoc’ as ‘large effect on agents at whatever seems the most germane level of organization’. ‘Intuitively intelligent’ but not goal-optimizing things will at least have a lot of causal significance, which I think is sufficient for this exercise. (Which is moving towards less formality, not more, so I’m not disputing your comment in any way.)