Review of the Challenge
All of this AI stuff is a misguided sideshow.
The disconnect between what Machine Learning represents and the desired or “hyped” abilities is very real. The flashy, headline-grabbing results of the past decade are certainly a sideshow, but rather than hiding how far these systems are from actual cognition (and especially from being sentient), what they obscure is the simple nature of what the xNN models represent: a vast probability table with a search function. The impressive outputs are based on the impressive inputs that went into the training process. There is no active mind in the middle. There is no form of cognition taking place at the time of training.
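As a deliberately simplified caricature of that “probability table with a search function” view (a toy sketch, not a claim about any real architecture), consider a bigram model: record which token followed which during training, then “generate” by searching the table for the most frequent continuation.

```python
from collections import defaultdict

# Toy caricature: a bigram frequency table built from training text,
# plus a "search" that looks up the most frequent continuation.
def train(tokens):
    table = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev][nxt] += 1
    return table

def generate(table, start, length=5):
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        # The "search function": pick the highest-count continuation seen in training.
        out.append(max(followers, key=followers.get))
    return out

corpus = "the cat sat on the mat the cat ran".split()
print(generate(train(corpus), "the"))  # -> ['the', 'cat', 'sat', 'on', 'the', 'cat']
```

The outputs can only recombine what went in; there is nothing in the middle doing cognition.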
We should be even more focused on AI.
When focus is based on the latest arXiv paper, or this week’s SOTA, general sight has already been lost. Rather than follow behind the circus animals, effort should be made to stop and orient oneself. Where is AI today? Where was it 60 years ago? What direction should it be going? Is the circus, the sideshow, really where all time and effort should be invested? What else is out there? If AI research is a journey from Los Angeles to New York, ML is Las Vegas. Sentient-animal research might be Albuquerque or Denver. Early childhood development might be Chicago.
Different aspects of the problem.
My original post attempted to address this very point. Current efforts to predict arrival times or plot a development path based on compute scaling laws are a game of Russian Roulette. You only get it right after finding the magic bullet, and by then, it’s too late.
Take a leap of faith and assume advances in Machine Learning are specific only to ML. Looking for the right kind of markers of progress toward science-fiction levels of AI, the kind that are not just incrementally better than the current year’s examples, requires understanding AI itself. Not what is current. Not what came before. An AI that understood basic math would make zero errors on a math test. That current systems score some percentage below that tells us they don’t REALLY understand.
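To make that marker concrete, here is a minimal sketch of an exact-match arithmetic check; `ask_model` is a hypothetical stand-in for whatever system is being evaluated, not any particular product or API.

```python
import random

# A system that truly understood basic arithmetic should score exactly 1.0
# on questions like these; anything less suggests pattern-matching.
def ask_model(question: str) -> str:
    # Hypothetical stand-in: here a perfect oracle, for demonstration only.
    a, b = (int(x) for x in question.rstrip(" =").split(" + "))
    return str(a + b)

def exact_match_accuracy(n_questions: int = 100) -> float:
    correct = 0
    for _ in range(n_questions):
        a, b = random.randint(0, 999), random.randint(0, 999)
        if ask_model(f"{a} + {b} =") == str(a + b):
            correct += 1
    return correct / n_questions

print(exact_match_accuracy())  # 1.0 only if every single answer is right
```

The point is the pass/fail threshold, not the benchmark: partial credit on trivial questions is itself the evidence.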
It’s beyond the scope of EA efforts, but I need to add that current systems try to mimic examples of high-level thinking. Actual research doesn’t start with language translation or solving mathematical conjectures. As with the human mind, most of the effort goes into building a structure that supports abstract logic, and solving this “critical mass” problem is likely to leave a very tiny footprint.
Scrutiny is mutiny.
While the premise is to have the community’s assumptions “exposed to external scrutiny,” there is a strong correlation between “popular” posts and those that support existing assumptions. I don’t think selection bias is going to improve anything.
Do you really want your mind changed? If the AI challenge is solved in a manner you didn’t plan for, then tens of millions spent on ‘wrong method’ alignment will have been wasted. It seems like you can literally afford to keep your options open.