I’d rather call it proto-science, not pseudo-science. Right now it’s alchemy before chemistry was a thing.
There is a real field somewhere adjacent to the discussions being led here, and people are actively searching for it. AGI is coming; you can argue about the timeline, but not the event (well, unless humanity destroys itself with something else first). And the artificial systems we now have often show unexpected and difficult-to-predict properties. So the task “how can we increase the complexity and capabilities of AI systems, possibly to the point of AGI, while simultaneously decreasing unpredictable and unexpected side effects” is perfectly reasonable.
The problem is that the current understanding of these systems, and the entire framework around them, is at the level of Ptolemaic astronomy. A lot of what is being discussed right now will be discarded, but some grains of gold will become new science.
TBH I have a lot of MAJOR questions about the current discourse; it’s plagued by misunderstandings of what is possible in artificial intelligence systems and how. But I don’t think the work should stop. The only way we can find the solution is by working on it, even if 99% of the work turns out to be meaningless in the end.