XKCD on my desk at work: http://xkcd.com/974/
This topic made me wonder whether it's not that we can't make things, but that we make things that are harder to put your finger on: better ideas. Another way of putting it is that we have conceptual OCD, and compulsively spend our time straightening up our ideas.
Instead of paper clip maximizers, we’re good idea maximizers.
Or sounds-superficially-good-enough-to-make-you-feel-good idea maximizers?
Most definitely. That's sort of what I meant by conceptual OCD. We survey our ideas internally, they seem a mess, and we feel a compulsion to tidy them up. Once we're tidy internally, we feel OK and take a nap.
We are good idea maximizers, once all the data is stuffed into our heads. But applying those ideas to make a change outside ourselves is not something that motivates us. We're model builders, not model appliers.
As you suggest, that probably leaves our ideas half-baked. We haven't caught the inconsistencies and inadequacies, because we've never tested the model against reality, or even against a clear PowerPoint presentation outside our own heads.
We're internal-inconsistency minimizers. Once the internals seem coherent, we feel fine and move on. We're powerful and useful machines, if used properly.