My interpretation of the previous several posts is: alignment of organizations is hard, and if you’re even a little bit misaligned, the mazeys will exploit that misalignment to the hilt. Allow any divergence between measures of performance and actual performance, and a whole bureaucracy will soon arise, living off of that divergence and expanding it whenever possible.
My interpretation of this post is: let’s solve it by making a fund which pays people to always be aligned! The only hard part is figuring out how to verify that they are, in fact, aligned.
… Which was the whole problem to begin with.
The underlying problem is that alignment is hard. If we had a better way to align organizations, then organizations which use that method would already be outperforming everyone else. The technique would already be used. Invent that technology (possibly a social technology), and it will spread. The mazes will fight it, and the mazes will die. But absent some sort of alignment technology, there is not much else which will help.
This is a problem which fundamentally cannot be fixed by throwing money at it. Setting up a fund to pay people for being aligned will result in people trying to look aligned. Without some way of making one’s measure of alignment match actual alignment, this will not do any good at all.
I was in no way trying to disguise that the problem of people faking alignment with the stack in order to extract resources is the biggest problem with the project, if someone were to actually try to implement it. If I get feedback that this wasn’t clear enough I will edit to make it more clear. And certainly one does not simply throw money at the problem.
So that far, fair enough.
However, this also incorporates a number of assumptions, and a general view of how things function, that I do not share.
First, the idea that alignment is a singular problem, or that it either does or does not have a solution. That seems very wrong to me. Alignment has varying known solutions depending on the situation, which prices you are willing to pay, and how much you care, and it varies based on what alignment you are trying to verify. You can also attempt to structure the deal such that people who are non-aligned (e.g. with the maze nature, or even not very into being opposed to it) do not want what you are offering.
I don’t think there are cheap solutions. And yes, eventually you will fail and have to start over, but I do think this is tractable for long enough to make a big difference.
Second, the idea that if there was a solution then it would be implemented because it outcompetes others just doesn’t match my model on multiple levels. I don’t think it would be worth paying the kind of prices the stack would be willing to pay, in order to align a generic corporation. It’s not even clear that this level of anti-maze would be an advantage in that spot, given the general reaction to such a thing on many levels and the need for deep interaction with mazes. And it’s often the case that there are big wins, and people just don’t know about them, or they know about them but for some reason don’t take them. I’ve stopped finding such things weird.
You can also do it backwards-only if you’re too scared of this—award it to people who you already are confident in now, and don’t extend it later to avoid corruption. It would be a good start on many goals.
In any case, yes, I have thought a lot about the practical problems, most of which such people already face much worse in other forms, and have many many thoughts about them, and the problem is hard. But not ‘give up this doesn’t actually help’ kinds of hard.
Not going to go deeper than that here. If I decide to expand on the problem I’ll do it with more posts (which are not currently planned).
If you want to go further, “Reinventing Organizations” by Frederic Laloux is basically a book on “creating full alternative stacks”.
He compiles examples of organizations working with this mindset, and tries to build intuition on why they are successful and how to reproduce their success. The book goes into practical details of internal processes and tools adapted to this new way of working.
My take is that it’s hard. But probably worth trying, because there aren’t any better alternatives.
If you’re seriously interested in finding a way out of bad equilibria, this book clarifies a surprising number of options you may not have considered. I highly recommend it.