Would it be useful to have a term, analogous to ‘hardware’, ‘software’, ‘wetware’, ‘vaporware’, etc.[1], which could be used to distinguish learned/discovered components of software, like gradient-trained DNNs, prompt-hacked LLMs, and so on?
EDIT 2024-01-04: my current favourites are ‘ML-ware’ (HT Shankar), ‘fuzzware’ (me), and ‘hunchware’ (Claude), in that order; LW votes concur with ‘ML-ware’.
In a lot of conversations with nonexperts, I find that the general notion of AI as being ‘programmed’ apparently still has a surprisingly strong grip, even after the rise of ML and DL made it even clearer that this is an unhelpful anchor to have. Thane recently expressed a similar view, quite strongly.
David Manheim has a short take, ‘AI is not software’, which I think nicely encapsulates some of the important distinctions.
The important thing, for me, is that, in contrast to traditional software, nobody wrote it, the specification is informal at best, and we can’t (currently) explain why or how it works. Traditionally, software is ‘data you can run’, but traditionally this class of data was exclusively crafted (substantially) by human design.
A valid answer to this question is, ‘no, we do not need such a term, just say, “learned components of software” or similar’.
In practice, we probably wouldn’t apply this term to, say, a logistic regression, but maybe?
Some ideas, none of which I like enough yet:
netware (seems too NN-specific; also evokes networking which is the wrong concept)
dryware (like wetware but… dry)
neuroware (too NN-specific; also evokes bio neuro—maybe that’s fine)
infoware (sounds like just any data though)
learnedware/learnware
trainware
emergeware
adaptware
implicitware
paraware
foundware
guessware
fuzzware
noware
everyware
anyware
selfaware
please-beware
After a bit of back-and-forth, Claude managed to produce a few which I think are OK, but I’m not very sold on these either:
fogware
cloudware
enigware
blurware
darkware
specware
inferware
luckware
chanceware
hunchware
[1] For some illuminating compendia of -ware terms, see wiktionary, computerhope ware jargon, Everyware from rdrop, or gears’ shortlist of suggestions. Notably, almost all of these are really semantically <thing>-[soft]ware with the ‘soft’ elided, e.g. spyware really means spy-software.
Why not just “ML-ware”?
It’s not specific to neural networks, it corresponds closely to what most people would refer to as “AI” today, and yet it explicitly excludes handcrafted algorithms. The resemblance to “malware” is serendipitous.
This is simple but surprisingly good, for the reasons you said. It’s also easy to say and write. Along with fuzz- and hunch-, it’s among my favourite candidates so far.
Brainware.
Brains seem like the closest metaphor one could have for these. Lizards, insects, goldfish, and humans all have brains. We don’t know how they work. They can be intelligent, but are not necessarily so. They have opaque convoluted processes inside which are not random, but often have unexpected results. They are not built, they are grown.
They’re often quite effective at accomplishing something that would be difficult to do any other way. Their structure is based around neurons of some sort. Input, mystery processes, output. They’re “mushy” and don’t have clear lines, so much of their insides blur together.
AI companies are growing brainware at larger and larger scales, raising more powerful brainware. Want to understand why the chatbot did something? Try some new techniques for probing its brainware.
This term might make the topic feel more mysterious/magical to some than it otherwise would, which is usually something to avoid when developing terminology, but in this case, people have been treating something mysterious as not mysterious.
I wasn’t keen on this, but your justification updated me a bit. I think the most important distinction is indeed the ‘grown/evolved/trained/found, not crafted’ one, and ‘brainware’ didn’t immediately evoke that for me. But you’re right: brains are inherently grown, they’re very diverse, we can probe them but don’t always/ever grok them (yet), structure is somewhat visible, somewhat opaque, they fit into a larger computational chassis but adapt to their harness somewhat, properties and abilities can be elicited by unexpected inputs, they exhibit various kinds of learning on various timescales, …
Incidentally, I noticed Yudkowsky uses ‘brainware’ in a few places (e.g. in conversation with Paul Christiano). But it looks like that’s referring to something more analogous to ‘architecture and learning algorithms’, which I’d put more in the ‘software’ camp when it comes to the taxonomy I’m pointing at (the ‘outer designer’ is writing it deliberately).
“tensorware” sprang to mind
This one independently sprang to mind for me too.
This is nice in its way, and has something going for it, but to me it’s far too specific, while also missing the ‘how we got this thing’ aspect which (I think) is the main reason to emphasise the difference through terminology.
Because the goal here is to have a word that people skeptical of the “lifeyness” or “brainyness” of AI will accept as a way of understanding that it’s not normal software, I really like “moldware” and will be using it until something sticks better. It nicely describes the general nature of function approximators without getting into the weeds of why or how, or claiming function approximators have inherent lifeyness. It also feels like the right amount of decrease in “firmness” after software.
More candidates to pick (or reject) from, a few favorite picks from asking an LLM to dump many suggestions: fit-; contour-; match-; mirror-; conform-; mimic-; map-; cast-; imprint-.
Mold like fungus or mold like sculpt? I like this a bit, and I can imagine it might… grow on me. (yeuch)
Mold-as-in-sculpt has the benefit that it encompasses weirder stuff like prompt-wrangled and scaffolded things, and also kinda large-scale GOFAI-like things à la ‘MCTS’ and whatnot.
Groware/grownware? (Because it’s “grown”, as it’s now popular to say.)
Oozeware?
Gradientware? Seems verbose, and isn’t robust to other ML approaches to fitting data.
Datagenicware? Captures the core of what makes them like that, but it’s a mouthful.
Modelware? I don’t love it
Puttyware? Aims to capture the “takes the shape of its surroundings” aspect, might be too abstract though. Also implies that it will take the shape of its current surroundings, rather than the ones it was built with
Resinware? Maybe more evocative of the “was fit very closely to its particular surroundings” aspect, but still doesn’t seem to capture quite what I want.
I don’t really like any of those ideas. I think it’s really interesting that ‘aware’ is so closely related, though. I think the best bet would be based on ‘software’: something like deepsoftware, nextsoftware, nextgenerationsoftware, enhancedsoftware, etc.
I like “evolveware” myself.
It’s distinctly not evolved: gradient descent and selection-crossover-mutation are very different algorithms.
I agree in the narrow sense that it’s different from bio-evolution, but I think it captures something tonally correct anyway.
This has been an ongoing point of debate recently, and I think we can do much better than an incorrect analogy to evolution.
I hate to wheel this out again, but evolution-broadly-construed is actually a very close fit for gradient methods. Agreed, there’s a whole lot of specifics in biological natural selection, and a whole lot of specifics in gradient-methods-as-practiced, but they are quite akin really.
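To make ‘quite akin’ concrete, here’s a minimal sketch (my own illustration, with toy names and constants, not anything from the linked post): an evolution-strategies-style update, which perturbs parameters, scores the mutants, and recombines them weighted by fitness, behaves like a noisy estimate of the gradient step beside it.

```python
# Minimal sketch: plain gradient descent vs. an evolution-strategies-style
# update on a toy quadratic loss. The ES perturb-score-average step is a
# stochastic estimate of the gradient of the (Gaussian-smoothed) loss,
# which is one precise sense in which evolution-broadly-construed and
# gradient methods are akin. All names and constants here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    return np.sum((w - 3.0) ** 2)  # toy objective, minimised at w = 3

w_gd = np.zeros(3)  # gradient-descent parameters
w_es = np.zeros(3)  # evolution-strategies parameters
lr, sigma, pop = 0.05, 0.1, 200

for _ in range(200):
    # Gradient descent: exact analytic gradient of the quadratic.
    w_gd -= lr * 2.0 * (w_gd - 3.0)

    # Evolution strategies: sample a population of mutations, score each,
    # and average them weighted by centred fitness. This average is an
    # unbiased estimator of the smoothed-loss gradient.
    eps = rng.standard_normal((pop, 3))
    scores = np.array([loss(w_es + sigma * e) for e in eps])
    grad_est = (eps * (scores - scores.mean())[:, None]).mean(axis=0) / sigma
    w_es -= lr * grad_est

print(w_gd, w_es)  # both end up near [3. 3. 3.]
```

The contested part of the analogy is everything this sketch leaves out: crossover, discrete genomes, and selection over a reproducing population rather than a single parameter vector.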
Please wheel such things out every time they seem relevant, until such time as someone finds a strong argument not to; people underrecommend sturdy work, imo. In this case, I think the top comment on that post raises some issues that I’d like to see resolved before I could rely on it as a sturdy generalization. But I appreciate the attempt.
Separately, I’m not a fan of ‘evolveware’ or ‘evoware’ in particular, though I can’t put my finger on exactly why. Possibly it’s because of a connotation of ongoing evolution, which is sorta true in some cases but could be misleading as a signifier. Though the same criticism could be levelled against ‘ML-ware’, which I like more.