I wonder if it even economically pays to break the data-wall. Like, let’s say an AI company could pivot their focus to breaking the data-wall rather than focusing on productivizing their AIs. That means they’re gonna lag behind in users, which they’d have to hope they could make up for by having a better AI. But once they break the data-wall, competitors are presumably gonna copy their method.
Is the assumption here that corporate espionage is efficient enough in the AI space that inventing entirely novel methods of training doesn’t give much of a competitive advantage?
Hm, I guess this was kind of a cached belief from how it’s gone down historically, but they can increase their secrecy (and have), so I should probably invalidate the cache.