Perhaps. But humans will lie, embezzle, and rationalize regardless of who programmed them. Besides, would the internals of a computer lie to each other? Does RAM lie to a processor? And yet humans (being the subcomponents of an organization) routinely lie to each other. No system of rules I can devise will guarantee that doesn’t happen without some very serious side effects.
All of which are subject to the humans’ interpretation and use. You can set up an organizational culture, but that won’t stop the humans from mucking it up, as they routinely do in organizations across the globe. You can write process documents, but that doesn’t mean they’ll follow them at all. If you specify a great deal of process, the failures may not even be intentional; people may simply forget. With a computer, such a lapse would be an error, but a controllable one. With a human? People can’t just decide to remember arbitrary amounts of arbitrary information for arbitrary lengths of time and pull it off reliably.
So: on the one hand, I have a system being built where the underlying hardware is reliable and under my control, and generally does not create errors or disobey. On the other hand, I have a network of unreliable and forgetful intelligences that may be highly irrational and may even be working at cross purposes with each other or with the organization itself. One requires extremely strict instructions; the other is capable of interpretation and judgment from context without an algorithm being specified in great detail. There are similarities between the two, but there are also great practical differences.
As you will see by things like my Angelic Foundations essay, I do appreciate the virtues of working with machines.
However, at the moment there are also advantages to a man-machine symbiosis: robotics is still far behind the evolved molecular nanotechnology of animals in many respects, and computers still lag far behind brains in many critical areas. A man-machine symbiosis will thus beat machines alone in many areas until machines reach the level of a typical human in most work-related physical and mental feats; until then, machine-only solutions will simply lose. So we will be working with organizations for a while yet, during a pretty important period in history.
I just think it’s a related but different field. Actually, solving these problems is something I want to apply some AI to (more accurate mapping of human behavior, allowing massive batch testing of different forms of organization under outside pressures, in order to discover possible failure modes and approaches for dealing with them), but that’s a different conversation.
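The batch-testing idea gestured at above can be made concrete with even a toy agent-based model. The sketch below is purely illustrative: the forgetting rate, the chain depths, and the "task relay" structure are all assumptions chosen for the example, not claims about real organizations or about any particular modeling method the author has in mind.

```python
import random

def run_org(n_agents, p_forget, n_trials, seed):
    """Toy model: each trial, a task is relayed down a chain of agents,
    and each agent independently forgets with probability p_forget.
    Returns the fraction of tasks that survive the whole chain."""
    rng = random.Random(seed)
    completed = 0
    for _ in range(n_trials):
        if all(rng.random() >= p_forget for _ in range(n_agents)):
            completed += 1
    return completed / n_trials

# Batch-test two hypothetical organizational forms under the same
# "outside pressure" (here, a fixed per-agent forgetting rate).
deep_chain = run_org(n_agents=8, p_forget=0.05, n_trials=10_000, seed=1)
flat_chain = run_org(n_agents=2, p_forget=0.05, n_trials=10_000, seed=1)
```

Even this crude model exposes one failure mode automatically: with a 5% per-agent forgetting rate, an eight-link chain loses roughly a third of its tasks while a two-link chain loses under a tenth, which is the sort of structural comparison a massive batch of such simulations could run across many organizational forms at once.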