Brains seem like the closest metaphor one could have for these. Lizards, insects, goldfish, and humans all have brains. We don’t know how they work. They can be intelligent, but are not necessarily so. They have opaque, convoluted processes inside which are not random, but often have unexpected results. They are not built, they are grown.
They’re often quite effective at accomplishing something that would be difficult to do any other way. Their structure is based around neurons of some sort. Input, mystery processes, output. They’re “mushy” and don’t have clear lines, so much of their insides blur together.
AI companies are growing brainware at larger and larger scales, raising more and more powerful brainware. Want to understand why the chatbot did something? Try some new techniques for probing its brainware.
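As a toy illustration of what "probing" can mean here, one common interpretability technique is a linear probe: fit a simple classifier on a layer's activations to test whether some concept is linearly readable from them. The sketch below uses synthetic activations standing in for a real model's internals, and the "concept" is planted by construction; it is a minimal illustration of the method, not any particular lab's tooling.

```python
import numpy as np

# Hypothetical setup: synthetic "activations" with a planted concept direction,
# standing in for hidden states extracted from a real model.
rng = np.random.default_rng(0)
n, d = 500, 32
concept_dir = rng.normal(size=d)          # direction along which the concept is encoded
acts = rng.normal(size=(n, d))            # fake layer activations, one row per input
labels = (acts @ concept_dir > 0).astype(float)  # 1 if the concept is "present"

# Fit a logistic-regression probe with plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-(acts @ w)))     # predicted probability of the concept
    w -= 0.1 * acts.T @ (p - labels) / n  # gradient step on the logistic loss

preds = (acts @ w > 0).astype(float)
accuracy = (preds == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

High probe accuracy suggests the concept is linearly decodable from that layer, which is one small way of asking "why did the brainware do that?" without grokking the whole thing.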
This term might make the topic feel more mysterious or magical to some than it otherwise would, which is usually something to avoid when developing terminology. In this case, though, the problem runs the other way: people have been treating something mysterious as though it were not mysterious.
I wasn’t keen on this, but your justification updated me a bit. I think the most important distinction is indeed ‘grown/evolved/trained/found, not crafted’, and ‘brainware’ didn’t immediately evoke that for me. But you’re right: brains are inherently grown; they’re very diverse; we can probe them but don’t always (ever?) grok them (yet); their structure is somewhat visible, somewhat opaque; they fit into a larger computational chassis but adapt somewhat to their harness; properties and abilities can be elicited by unexpected inputs; they exhibit various kinds of learning on various timescales; …
Incidentally, I noticed Yudkowsky uses ‘brainware’ in a few places (e.g. in conversation with Paul Christiano). But it looks like that usage refers to something more analogous to ‘architecture and learning algorithms’, which I’d put more in the ‘software’ camp when it comes to the taxonomy I’m pointing at (the ‘outer designer’ is writing it deliberately).