Is it reasonable to expect that every future technology must be comprehensible to the minds of human beings alive today, and that anything beyond our comprehension is therefore impossible? I realize this sounds awfully convenient, even magic-like, but is there not a long track record in technological development of feats once believed impossible becoming possible as our understanding improves? A famous example is the advent of spectrometry, which made it possible to determine the composition of stars and the atmospheres of distant planets:
“In his 1842 book The Positive Philosophy, the French philosopher Auguste Comte wrote of the stars: “We can never learn their internal constitution, nor, in regard to some of them, how heat is absorbed by their atmosphere.” In a similar vein, he said of the planets: “We can never know anything of their chemical or mineralogical structure; and, much less, that of organized beings living on their surface.”
Comte’s argument was that the stars and planets are so far away as to be beyond the limits of everything but our sense of sight and geometry. He reasoned that, while we could work out their distance, their motion and their mass, nothing more could realistically be discerned. There was certainly no way to chemically analyse them.
Ironically, the discovery that would prove Comte wrong had already been made. In the early 19th century, William Hyde Wollaston and Joseph von Fraunhofer independently discovered that the spectrum of the Sun contained a great many dark lines.
By 1859 these had been shown to be atomic absorption lines. Each chemical element present in the Sun could be identified by analysing this pattern of lines, making it possible to discover just what a star is made of.”
https://www.newscientist.com/article/dn13556-10-impossibilities-conquered-by-science/
No, but it’s also not reasonable to privilege a hypothesis.
This feels like an issue of framing. It is not contentious on this site to propose that an AI which exceeds human intelligence will be able to produce technologies beyond our understanding and beyond our ability to develop on our own, even though that proposal expresses the same meaning.
Then why limit things to light cones?
Conservatism, just not absolute.