Someone who wasn’t familiar with computers might refuse to recognize sorting algorithms as real sorting, as opposed to mere “artificial sorting”. After all, a human sorting her bookshelf intends to put the books in order, whereas the computer is just an automaton following instructions, and doesn’t intend anything at all—a zombie sorter!
The opposite case is more easily made: if the sorting isn’t done by a computer, how do you know it’s been done right? (Humans make mistakes; their means of operation and reasoning are fallible. Computers are so logical; how could they fail at any task?)
The benefit of sorting by hand, by a human, is that machines can break down, or fail in unfamiliar and unpredictable ways. A trolley’s brakes may fail and send it running wild, where a human would simply collapse instead.
the most natural concept of sorting includes computers performing quicksort, merge sort, &c., despite the lack of intent.
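For concreteness, here is what such an intent-free sorter looks like; a minimal quicksort sketch (my illustration, not the post’s):

```python
def quicksort(xs):
    """Recursively partition around a pivot until everything is ordered.

    No goals, no awareness -- just a rule that, followed blindly,
    lands the list in the state we call 'sorted'.
    """
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```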
There’s a difference between ‘natural concepts’ and ‘mathematical concepts’. By the purely mathematical token, “shuffling” is a sort: it, too, outputs a permutation of its input; it just doesn’t aim at the ordering we care about.
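A sketch of the point, assuming the purely behavioral reading (my example, not the post’s): a shuffle and a sort both output permutations of the input, and only the target ordering distinguishes them.

```python
import random
from collections import Counter

xs = [3, 1, 4, 1, 5]
shuffled = random.sample(xs, len(xs))  # a random permutation
ordered = sorted(xs)                   # the permutation satisfying <=

# Both outputs are permutations of the input...
assert Counter(shuffled) == Counter(xs) == Counter(ordered)
# ...what makes only one of them a 'sort' is the ordering it steers toward.
assert all(a <= b for a, b in zip(ordered, ordered[1:]))
```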
Sorting by subject is the more natural concept; it is one computers are widely used for, and do impressively well, up to a point.
Note that this is a “behaviorist”, “third person” perspective: we’re not talking about some subjective feeling of intending something, just systems that systematically steer reality into otherwise-improbable states that rank high with respect to some preference ordering.
So ‘algorithmic intent’ isn’t a good argument against zombies. A zombie, in this sense, is an optimizer which doesn’t experience:
pain
emotion
etc.
The fact that a zombie was made by, or is being used by, something that isn’t a zombie (i.e., the existence of necromancers) doesn’t rule out the possibility of zombies.
The Hansonian Generalized Anti-Zombie Principle says that when someone “loses control” and makes angry threats, it’s not because they’re a zombie that coincidentally happens to do so when being nice isn’t getting them what they want.
it would in no way aid our understanding to insist that the system isn’t really “learning”
Sticking with the faces example: suppose such an ‘AI’ is somehow induced to generate an entire character, and the result doesn’t make sense at all: a bunch of heads everywhere, or a collage of smaller portrait headshots.
It is intuitive that there is a difference between a program that ‘generates’ only images already in its training dataset and one that generates original images. There may be real and important differences between what these programs do and what people do.
any more than it would to insist that quicksort isn’t really sorting.
1. Aside from robustness to perturbation (mentioned in the post), how does quicksort perform on an already-sorted list? (In a world full of already- or partially-sorted collections, quicksort may be lacking in greed: it takes no advantage of the order already present. See the sketch after this list.)
2. Sort a set of works from best to worst.
3. Can a program learn to sort by a new alphabet, the way humans can? Perhaps one could be built, or GPT may already have the capability. But a program with that ability is different from one without it, and failing to recognize that a program, or any entity, does (or doesn’t) have the capacity to learn a thing affects predictions of its future behavior. (The sketch below separates ‘being handed the new ordering’ from ‘learning it’.)
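On (1) and (3), a minimal sketch (my illustration, not the post’s; the reversed alphabet is hypothetical). A textbook quicksort that always pivots on the first element degrades to quadratic time, and maximally deep recursion, precisely on already-sorted input; and sorting by a new alphabet is trivial when the ordering is handed over as data, which is a different capability from inferring the ordering from examples.

```python
import sys

def quicksort_first_pivot(xs):
    """Naive quicksort, always pivoting on the first element."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort_first_pivot([x for x in rest if x < pivot])
            + [pivot]
            + quicksort_first_pivot([x for x in rest if x >= pivot]))

# (1) On sorted input every partition is maximally lopsided: n levels
# of recursion and O(n^2) comparisons, versus O(n log n) on average.
sys.setrecursionlimit(10_000)
quicksort_first_pivot(list(range(2_000)))  # noticeably slow for its size

# (3) Sorting by a "new alphabet" is easy *if the ordering is given*:
reversed_alphabet = "zyxwvutsrqponmlkjihgfedcba"  # hypothetical ordering
rank = {ch: i for i, ch in enumerate(reversed_alphabet)}
print(sorted(["abc", "cab", "bca"], key=lambda w: [rank[c] for c in w]))
# -> ['cab', 'bca', 'abc']; learning the ordering from examples, as a
# human might, is the harder capability the question is about.
```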
Truth is important.* That argument is also ridiculous…
The sentence is in the wrong tense/voice. (You can do anything.)
Would the court care that the companies that use GPT-5, or the people who work at them, ‘aren’t allowed’ to care about anything but profit?
An ‘isolated demand for rigor’ may be ridiculous, but is it more ridiculous than an ‘isolated presence of rigor’, or than the choice of topic?
This is a choice. (Presumably no one else is forcing you to conform to this standard?)
Unfortunately, as an aspiring epistemic rationalist, I’m not allowed to care whether some descriptions might be socially harmful for a human community to adopt; I’m only allowed to care about what descriptions shorten the length of the message needed to describe my observations.
I’d guess you don’t actually have this property; that is, you do care.
It would be weirdly pedantic to say this post isn’t short enough, but that standard doesn’t seem to be what is actually being optimized for. (If what you mean is honesty or accuracy, ‘shortening the message’ seems like an imperfect description.)
*Holding “Truth” in high regard makes sense, but that particular relation doesn’t. (And who has been banned from this site for falsehood or deceit?)
Returning to the Hansonian principle above: ‘hidden motives’, or ‘evolved deception’, is a shorter, better name.