I think it is a mistake to import “democracy” at the vision level. A vision is essentially a very high-level plan, a creative engineering task, and such things are not decided by averaging opinions. “If you want to kill any idea in the world, get a committee working on it.” Deutsch also writes about this in “The Beginning of Infinity”, in the chapter about democracy.
We should aggregate desiderata and preferences (see “Preference Aggregation as Bayesian Inference”), but not decisions (plans, engineering designs, visions). The latter should be created by a coherent creative entity. The same idea is evident in the design of the Open Agency Architecture.
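To make the distinction concrete, here is a minimal sketch of what “aggregate preferences, not decisions” could look like. This is a hypothetical toy model, not taken from the cited post: each agent’s reported utility for a candidate desideratum is treated as a noisy observation of a latent shared utility, and a conjugate Gaussian update gives the posterior over that latent. The function name and parameters are illustrative assumptions.

```python
# Toy sketch of "preference aggregation as Bayesian inference"
# (hypothetical model, not from the cited post): each agent's reported
# utility for an option is a noisy observation of a latent shared
# utility; we compute the Gaussian posterior over that latent.

import numpy as np

def aggregate_preferences(reports, prior_mean=0.0, prior_var=1.0, noise_var=0.5):
    """Posterior mean/variance of latent utility given noisy agent reports.

    reports: array of shape (n_agents, n_options), each agent's utility
    estimate for each option. noise_var models how unreliable reports are.
    """
    reports = np.asarray(reports, dtype=float)
    n_agents = reports.shape[0]
    # Conjugate Gaussian update: precision-weighted combination of the
    # prior and the n_agents observations, done per option.
    post_var = 1.0 / (1.0 / prior_var + n_agents / noise_var)
    post_mean = post_var * (prior_mean / prior_var + reports.sum(axis=0) / noise_var)
    return post_mean, post_var

# Three agents rating two candidate desiderata. Note we aggregate the
# *preferences* and hand the resulting weights to a single designer;
# we never vote on the design itself.
reports = [[0.9, 0.2],
           [0.7, 0.4],
           [0.8, 0.1]]
mean, var = aggregate_preferences(reports)
print(mean, var)
```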
Democracy is a mistake, for all of the obvious reasons. As is the belief amongst engineers that every problem is an engineering problem :P
We have a whole bunch of tools going mostly unused and unnoticed that could, plausibly, enable a great deal more trust and collaboration than is currently possible.
We have a whole bunch of people both thinking about and working on the polycrisis already.
My proposal is that we’re far more likely to achieve our ultimate goal—a future we’d like to live in—if we simply do our best to empower, rather than direct, others.
I expect that attempts to direct, no matter how brilliant the plan or the mind(s) behind it, are likely to fail. For all the obvious reasons.
(Caveat: yes, AGI changes this, but it changes everything. My whole point is that we need to keep the ship from sinking long enough for AGI to take the wheel.)