My own personal experience following this post: I don’t have enough training data for most of the people I’d like to emulate. When I think of the people I know irl that I’d like to learn from, I’ve spent about ten hours 1-on-1 with each of them; not enough to build a solid mental model of what advice they might give. At the same time, part of why I value their advice is that I can’t predict it; they have wisdom and experience that I don’t. Often, I’ll ask them for advice and be surprised by their answer. When I tried to create a shoulder advisor of one of them, it didn’t work; I just didn’t know enough about them to accurately model what they would be thinking in a given situation.
Still a great post, though; just didn’t work for a specific use case of mine.
FWIW, I have been genuinely surprised by advice from shoulder advisors that I could not predict; in a very real sense, that’s the primary claim of the post. You don’t have to have the same wisdom and experience to spin up a mental chatbot that will sometimes be able to mimic the pattern well enough to produce something novel and useful. If having a good shoulder advisor required being as wise and experienced as the real person, I don’t think I would have felt this post was worth writing.
That makes sense.
I (weakly) predict that building a shoulder advisor or two out of less-useful-but-more-emulable people might be worth it, via building your emulation skill up to the available max? Such that, finding emulation in general a little easier and more familiar, you might be able to try again with the actual higher-value targets?
That makes sense, I’ll give it a try.