I disagree strongly with the idea that organizations have sub-human intelligence. Organizations are significantly smarter than people; indeed, people can only afford specialized knowledge because of their organizational membership.
Organizations don’t have values in the ways that humans do, and so value creep is a giant problem; but value creep and intelligence loss are very different things.
I think it might be productive to taboo “intelligence” here. It’s pretty clear that any reasonably large organization has more raw computational power at its disposal than any individual—but you need to make some nontrivial assumptions to say that organizations are better on average at allocating that power, or at making and acting on predictions.
There are any number of organization-specific failures of rationality—groupthink, the Peter Principle, etc. It’s not immediately clear to me under what circumstances these would outweigh the corresponding benefits for the classes of problem that are being discussed, although I suspect organizations would still outperform individuals most of the time (with some substantial caveats).
What class of problems is being discussed? The OP seemed pretty open-ended, and it seems to me that for any problem, a well-designed organization will outperform a well-chosen individual. I agree that organizations have failures, but so do individuals.
Indeed, it seems we’re more likely to get a recursively self-improving AI (with or without the G) through organizational design than by approaching the problem directly.
It’s not entirely clear, but I get the impression that the OP is mainly concerned with how efficiently organizational effort satisfies our long-term preferences. If I’m right, then specifying the goal precisely would amount to solving the “meaning of right” problem, which is probably why the post seems a little muddled.
As to organizational vs. individual rationality, I broadly AWYC—but with the caveats that the optimal organizational design is not identical for all problems, and that I don’t have anything approaching a proof.
I entirely agree with the part about organizations being smarter than people. In terms of actually being able to steer the future into more favorable regions, I’d say that organizations are smarter than the vast majority of humans.
To use a specific example, my robotics team is immensely better at building robots than I am (or anyone else on the team is) on my own. Even if it messes up really, really badly, it’s still better at building a robot within the given constraints (budget, a 6-week time span) than I am.
I can see an argument, though, that organizations don’t turn raw computational power into optimization very efficiently.
I’m going to split intelligence into an optimizer (being able to more effectively reach a specified goal) and an inferencer (being able to process information to produce accurate models of reality).
The team has weaknesses as an inferencer. It seems (largely) unable to remember things beyond what the individuals involved with it remember, and all connecting and integrating of information is ultimately done by individuals.
Though the organization does facilitate specialization of knowledge, and conversations produce better ideas than an individual working alone would, I don’t think it’s a fundamental shift. It seems more to be augmenting a human inferencer, and combining those results for better optimization, rather than a fundamental change in cognitive capability.
To illustrate, breakthroughs in science are certainly helped by universities, but I doubt that you can make breakthroughs significantly faster by combining all of the world’s universities into one well-organized super-university. There’s a limit to how brilliant people can be.
That being said, I’m pretty sure that the rate of incremental change could be drastically improved by a combination like that.
My 2 cents.
Though the organization does facilitate specialization of knowledge, and conversations produce better ideas than an individual working alone would, I don’t think it’s a fundamental shift.
Maybe, if you discount the organization of “modern civilization.” There’s certainly a fundamental shift between self-reliant generalists and trading specialists. But is the difference between programmers in a small company and programmers working alone “fundamental”? Possibly not, though I’d probably call it that.
To illustrate, breakthroughs in science are certainly helped by universities, but I doubt that you can make breakthroughs significantly faster by combining all of the world’s universities into one well-organized super-university. There’s a limit to how brilliant people can be.
Actually, there’s some question about this. Having one flagship university where you put all the best people seems significantly better than spreading them out across the country/world. All the work that gets done at conferences could be the kind of work that gets done at a department’s weekly meeting. Part 4 of this essay suggests something similar.
Now, would it be best to have one super-university? Probably not; one of the main benefits of top universities is their selectivity. If you’re at Harvard, you probably have a much higher opinion of your colleagues than if you’re at a community college. It seems there are additional benefits to be gained from clustering, but there are diminishing returns to clustering (which eventually become negative).
Maybe, if you discount the organization of “modern civilization.” There’s certainly a fundamental shift between self-reliant generalists and trading specialists. But is the difference between programmers in a small company and programmers working alone “fundamental”? Possibly not, though I’d probably call it that.
I avoided this example because I don’t have a particularly good goalset for modern civilization to cohesively work towards, so discussing optimization is sort of difficult.
In terms of optimization to my standards, I agree that it’s a huge shift. I can get things from across the world shipped to my door, preprocessed for my consumption, at a ridiculously low cost.
But in terms of information-processing ability, I feel like it’s not that gigantic a deal. It processes way more information, but it can’t do much to individual pieces of data beyond the capabilities of its constituent people (like eyeball a least-squares regression line, or properly calculate a posterior probability without using an outside algorithm to do so).
(Note: lots of fuzzy linguistic constructions in the previous two sentences. I notice some confusion.)
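For concreteness, here’s a minimal sketch of the two “outside algorithms” mentioned: a closed-form least-squares fit and a posterior via Bayes’ theorem. The data points and probabilities are made up purely for illustration.

```python
# Minimal sketch of the two "outside algorithms" mentioned above.
# All numbers here are made up for illustration.

def least_squares(xs, ys):
    """Closed-form slope/intercept for y ~ slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H | E) for a binary hypothesis."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

slope, intercept = least_squares([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)            # 2.0 1.0
print(posterior(0.01, 0.9, 0.1))   # ~0.083: weak prior, strong evidence
```

Neither computation is hard, but neither is something a group can reliably “eyeball” without writing the algorithm down, which is the point being made.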
Now, would it be best to have one super-university? Probably not; one of the main benefits of top universities is their selectivity. If you’re at Harvard, you probably have a much higher opinion of your colleagues than if you’re at a community college. It seems there are additional benefits to be gained from clustering, but there are diminishing returns to clustering (which eventually become negative).
I wasn’t being clear, sorry.
Concentrating your best people does help, but I don’t think you can get the equivalent of the best people by just clustering together enough people, no matter how good your structure is.
Concentrating your best people does help, but you can’t get the equivalent of the best people by just clustering together enough people.
Not sure about this either. It seems like a few good people can be as effective as a great person, and a few great people as effective as a fantastic person, especially when you’re looking for things that are broader (design an airplane) rather than deeper (design general relativity). It’s very possible we’ve hit saturation and so this isn’t as noticeable; the few great people aren’t competing against one fantastic person, but a few fantastic people.
I worded that too strongly.
You get diminishing returns on possible breakthroughs after a certain point. You get more effective smartness, but it’s not drastically better to the extent that a GAI would be.
The OP seemed pretty open-ended, and it seems to me that for any problem, a well-designed organization will outperform a well-chosen individual.
Counterexample: Very few great works of literature were written by committee. The King James Bible may be the one example. (Note that I’m explicitly excluding “literature” meant to be performed by actors instead of read from a page.)
Counterexample: Very few great works of literature were written by committee.
Counter-counterexample: very few great works of literature were created by hermits. And of those that were, the attribution is usually legendary, as with the Tao Te Ching.
It’s true that in any organization, there’s a level where individuals dominate. Organizations are built up of individuals, so that level must exist and it must be significant. But whenever you go up a step, you often find an organization that helps those individuals accomplish a task. Form follows function, and so for creative works, those organizations tend to be loose groups of mutually inspiring people. Good design happens in chunks.
The case for literature seems a bit worse than the case for painting, but my feeling is still that the best literature comes out of an organization of great and good people, not from fantastic people working alone. Particularly if you’re only interested in the best literature you’ve heard of, not the best literature that didn’t sell a thousand copies.
I would dispute that (based on the opinions and information I held when I was religious). The King James Bible is overrated.
Well, yeah, there’s a lot of stuff in the Bible that, when considered as literature, is actually pretty bad by any standard, such as the extensive listing of the laws of ancient Israel. And there have been a lot of changes in the English language since the King James Bible was written, so it’s hard to judge it as a translation by reading it today; but it’s supposed to have had some really good poetry in it—for a translation, at least.
And there have been a lot of changes in the English language since the King James Bible was written
That’s a good point. I’m also comparing it to other translations done by committees: better committees, with more education, superior technology, and more early manuscripts to work with. That doesn’t leave much to be impressed with.
Indeed, it seems we’re more likely to get a recursively self-improving AI (with or without the G) through organizational design than by approaching the problem directly.
Precisely the point I was trying to make with “well-knit human-computer team.”
Take a theoretical human being who has most of the actionable knowledge that a corporation possesses at one moment. My point is that the decision she makes will be better, for the bottom line, than the one most corporations make today. Today’s business intelligence systems try to achieve this, but they are only proxies.
Take a theoretical human being who has most of the actionable knowledge that a corporation possesses at one moment.
So, this “theoretical” human has Google’s data centres between her ears? One has to wonder what the point of such “theoretical” considerations is—if they start out this absurd.
My issue is that, depending on what you mean by “actionable knowledge” and “bottom line,” this is either impossible or what already happens. I’m only interested in realistic theories.
The actionable knowledge a corporation possesses is all the actionable knowledge possessed by all of its members. It may not aggregate that knowledge very well for decision-making purposes, but the limitations it faces are generally human ones (mostly on time and memory). When workarounds are adopted, it’s not “bring a smarter person on board” but organizational tricks to lessen the knowledge cost involved. Several retail outlets decide what products to stock by polling their employees (who are strongly representative of their customer base), and I believe a few even have it set up as a prediction market, which is an organizational trick that replaced having a clever person do research and tell them what their customer base wants.
The alternative is that you mean “if I gave the CEO’s reports to someone cleverer, that cleverer person would make a better decision.” Possibly, but anyone you have in mind would probably be able to get the CEO job normally.
By “bottom line” you can mean either corporate profit or total social benefit. If it’s the first, corporations are at least as good as individuals at doing that (often better, because it’s easier to insulate decision-makers from the emotional consequences of their actions); if it’s the second, we again run into the knowledge-cost problem. How can a single person hold in their head all the information necessary to determine the total social impact of an action?
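As an aside, the comment above doesn’t say how such an internal prediction market is run; one standard mechanism is Hanson’s logarithmic market scoring rule (LMSR). Here is a minimal sketch of a binary LMSR market, with a hypothetical stocking question and made-up trade sizes.

```python
import math

# Minimal LMSR sketch for a binary question such as
# "will this product sell well if we stock it?".
# The liquidity parameter and trade sizes below are made up.

class LMSRMarket:
    """Binary prediction market; the 'yes' price is the implied probability."""

    def __init__(self, b=100.0):
        self.b = b        # liquidity parameter: higher b = prices move slower
        self.q_yes = 0.0  # outstanding 'yes' shares
        self.q_no = 0.0   # outstanding 'no' shares

    def _cost(self, q_yes, q_no):
        # LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) +
                                 math.exp(q_no / self.b))

    def price_yes(self):
        ey = math.exp(self.q_yes / self.b)
        en = math.exp(self.q_no / self.b)
        return ey / (ey + en)

    def buy_yes(self, shares):
        """Buy 'yes' shares; returns the cost the trader pays."""
        before = self._cost(self.q_yes, self.q_no)
        self.q_yes += shares
        return self._cost(self.q_yes, self.q_no) - before

m = LMSRMarket(b=100.0)
print(m.price_yes())  # 0.5 before any trades
m.buy_yes(50)         # an employee bets "yes, stock it"
print(m.price_yes())  # ~0.62 after the buy
```

The organizational trick is visible in the mechanism: each employee’s trade moves the price toward their private estimate, so the final price aggregates dispersed knowledge without any single clever person doing the research.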