“Ultimately, most objects, man-made or not are ‘black boxes.’”
OK, I see what you’re getting at.
Four questions about black boxes:
1) Does the input have to be fully known/observable to constitute a black box? When investigating a population of neurons, we can give stimulus to these cells, but we cannot be sure that we are aware of all the inputs they are receiving. So we effectively do not entirely understand the input being given.
2) Does the output have to be fully known/observable to constitute a black box? When we measure the output of a population of neurons, we also cannot be sure of the totality of information being sent out, due to experimental limitations.
3) If one does not understand a system one uses, does that fact alone make that system a black box? In that case there are absolute black boxes, like the human mind, about which complete information is not known, and relative black boxes, like the car or TCP/IP, about which complete information is not known to the current user.
4) What degree of understanding is sufficient for something not to be called a black box?
How we answer these questions will determine whether "black box" comes to mean:
1) Anything that is identifiable as a ‘part’, whose input and output is known but whose intermediate working/processing is not understood.
2) Anything that is identifiable as a ‘part’ whose input, output and/or processing is not understood.
3) Any ‘part’ that is not completely understood (i.e. presuming access to all information)
4) Anything that is not understood by the user at the time
5) Anything that is not FULLY understood by the user at the time.
If we take the latter definitions, we will quickly be in the realm where anything and everything on earth counts as a black box. So how can this word/metaphor be most profitably wielded?
I like this style of reasoning.
Rather than taking some arbitrary definition of "black box" and then arguing about whether it applies, you've recognised that a phrase can be understood in many ways, and that we should use the word in whatever way most helps us in this discussion. That's exactly the sort of rationality technique we should be learning.
A different way of thinking about it though, is that we can remove the confusing term altogether. Rather than defining the term “black box”, we can try to remember why it was originally used, and look for another way to express the intended concept.
In this case, I’d say the point was:
“Sometimes, we will use a tool expecting to get one result, and instead we will get a completely different, unexpected result. Often we can explain these results later. They may even have been predictable in advance, and yet they weren’t predicted.”
Computer programming is especially prone to this. The computer will faithfully execute the instructions that you gave it, but those instructions might not have the net result that you wanted.
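A minimal sketch of this in Python (the function `add_item` is just a hypothetical illustration): the interpreter does exactly what the code says, and the surprising result is explainable afterwards, even though the author didn't predict it.

```python
# The author intends each call to build a fresh list, but the default
# list is created once, when the function is defined - not per call.
def add_item(item, items=[]):
    items.append(item)
    return items

first = add_item("a")
second = add_item("b")   # intended: a new list containing only "b"
print(second)            # actually prints ['a', 'b'] - both calls share one list
print(first is second)   # prints True - it is literally the same object
```

The instructions were executed faithfully; the mismatch was between the program written and the program intended, and once you look inside, the behaviour is perfectly predictable.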