I think more leaders of orgs should be trying to shape their organizations' incentives and cultures around the challenges of "crunch time". Examples of this include:
What does pay look like in a world where cognitive labor is automated in the next 5 to 15 years? Are there incentive structures (impact equity, actual equity, bespoke deals for specific scenarios) that can help team members survive, thrive, and stay on target?
What cultural norms should the team have around AI-assisted work? On the one hand, it seems necessary for accelerating safety progress; on the other, I expect many applications are in fact trojan horses designed to automate people out of jobs (looking at you, MSFT rewind). Are there credible deals to be made that can provide trust?
Does the organization expect to rapidly adapt to new events in AI (and if so, how will sensemaking happen), or does it expect to make its high-conviction bet early on and stay the course through distractions? Do team members know which?
I have more questions than answers, but the background level of stress and disorientation for employees and managers will keep rising, especially in AI safety orgs, and starting to come up with contextually true answers (I doubt there's a universal answer) will be important.
Some kind of payment for training data from applications like MSFT rewind does seem fair. I wonder whether there will be a lot of growth in jobs where your main task is providing or annotating training data.
I've seen Reddit ads from multiple companies offering freelance work doing annotation / high-quality text data generation.