Before I ask these questions, I’d like to say that my computer knowledge is limited to “if it’s not working, turn it off and turn it on again” and the math I intuitively grasp is at roughly a middle-school level, except for statistics, which I’m pretty talented at. So, uh… don’t assume I know anything, okay? :)
How do we know that an artificial intelligence is even possible? I understand that, in theory, assuming that consciousness is completely naturalistic (which seems reasonable), it should be possible to make a computer do the things neurons do to be conscious and thus be conscious. But neurons work differently than computers do: how do we know that it won’t take an unfeasibly high amount of computer-form computing power to do what brain-form computing power does?
I’ve seen some mentions of an AI “bootstrapping” itself up to super-intelligence. What does that mean, exactly? Something about altering its own source code, right? How does it know what bits to change to make itself more intelligent? (I get the feeling this is a tremendously stupid question, along the lines of “if people evolved from apes then why are there still apes?”)
Finally, why is SIAI the best place for artificial intelligence? What exactly is it doing differently than other places trying to develop AI? Certainly the emphasis on Friendliness is important, but is that the only unique thing they’re doing?
Consciousness isn’t the point. A machine need not be conscious, or “alive,” or “sentient,” or have “real understanding” to destroy the world. The point is efficient cross-domain optimization. It seems bizarre to think that meat is the only substrate capable of efficient cross-domain optimization. Computers already surpass our abilities in many narrow domains; why not technology design or general reasoning, too?
Neurons work differently than computers only at certain levels of organization, which is true for every two systems you might compare. You can write a computer program that functionally reproduces what happens when neurons fire, as long as you include enough of the details of what neurons do when they fire. But I doubt that replicating neural computation is the easiest way to build a machine with a human-level capacity for efficient cross-domain optimization.
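(To make “functionally reproduces what happens when neurons fire” concrete, here is a minimal sketch of the standard leaky integrate-and-fire neuron model in Python. The parameter values are arbitrary, and real neural simulation tracks far more detail; this only shows that “what a neuron does when it fires” can be written down as ordinary code.)

```python
# Toy leaky integrate-and-fire neuron: a minimal example of functionally
# reproducing "what happens when a neuron fires". Parameter values are arbitrary.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    """Return the membrane voltage trace and the times at which spikes occur."""
    v = v_rest
    voltages, spike_times = [], []
    for step, current in enumerate(input_current):
        # Membrane voltage decays toward rest and is pushed up by input current.
        dv = (-(v - v_rest) + resistance * current) * (dt / tau)
        v += dv
        if v >= v_threshold:          # threshold crossed: the neuron "fires"
            spike_times.append(step * dt)
            v = v_reset               # reset after the spike
        voltages.append(v)
    return voltages, spike_times

# Constant input current of 2.0 (arbitrary units) for 100 ms of simulated time:
_, spikes = simulate_lif([2.0] * 1000)
print(f"The model neuron fired {len(spikes)} times.")
```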
How does it know what bits to change to make itself more intelligent?
There is an entire field called “metaheuristics” devoted to this kind of self-improvement, but nothing in it comes close to improving general abilities at efficient cross-domain optimization. I won’t say more about this at the moment because I’m writing some articles about it, but Chalmers’ article analyzes the logical structure of intelligence explosion in some detail.
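(For readers wondering what “metaheuristics” looks like in miniature, here is a toy sketch with a made-up objective function: random-restart hill climbing, i.e. an algorithm that improves a candidate solution by trial and error. Nothing like this touches general intelligence; it just shows the flavor of “search for a better version of what you have.”)

```python
import random

# Toy metaheuristic: hill climbing with random restarts on an arbitrary
# objective function. This is "an algorithm improving a candidate solution",
# not an AI improving its own general abilities.

def objective(x):
    # A made-up, bumpy function with several local optima.
    return -(x - 3) ** 2 + 5 * (x % 2)

def hill_climb(start, step=0.1, iterations=1000):
    best = start
    for _ in range(iterations):
        candidate = best + random.uniform(-step, step)
        if objective(candidate) > objective(best):
            best = candidate          # keep the change only if it helps
    return best

# Random restarts help escape local optima.
solutions = [hill_climb(random.uniform(-10, 10)) for _ in range(20)]
print(max(solutions, key=objective))
```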
Finally, why is SIAI the best place for artificial intelligence? What exactly is it doing differently than other places trying to develop AI?
The emphasis on Friendliness is the key thing that distinguishes SIAI and FHI from other AI-interested organizations, and is really the whole point. To develop full-blown AI without Friendliness is to develop world-destroying unfriendly AI.
Thank you for the link to the Chalmers article: it was quite interesting and I think I now have a much firmer grasp on why exactly there would be an intelligence explosion.
Consciousness isn’t the point. A machine need not be conscious, or “alive,” or “sentient,” or have “real understanding” to destroy the world.
(I see what you mean, but technically speaking your second sentence is somewhat contentious and I don’t think it’s necessary for your point to go through. Sorry for nitpicking.)
(Slepnev’s “narrow AI argument” seems to be related. A “narrow AI” that can win world-optimization would arguably lack person-like properties, at least at the stage where it’s still a “narrow AI”.)
How do we know that an artificial intelligence is even possible? I understand that, in theory, assuming that consciousness is completely naturalistic (which seems reasonable), it should be possible to make a computer do the things neurons do to be conscious and thus be conscious. But neurons work differently than computers do: how do we know that it won’t take an unfeasibly high amount of computer-form computing power to do what brain-form computing power does?
A couple of things come to mind, but I’ve only been studying the surrounding material for around eight months, so I can’t guarantee a wholly accurate overview of this. Also, even if it’s accurate, I can’t guarantee that you’ll take to my explanation.
Anyway, the first thing is that brain-form computing probably isn’t a necessary or likely approach to artificial general intelligence (AGI) unless the first AGI is an upload. There doesn’t seem to be a good reason to build an AGI in a manner similar to a human brain, and in fact, doing so seems like a terrible idea. The issues with opacity of the code would be nightmarish (I can’t just look at a massive network of trained neural networks and point to the problem when the code doesn’t do what I thought it would).
The second is that consciousness is not necessarily even related to the issue of AGI; the AGI certainly doesn’t need any code that tries to mimic human thought. As far as I can tell, all it really needs (and even this might be more constraints than are necessary) is code that allows it to adapt to general environments (transferability) that have nice computable approximations it can build using the data it gets through its sensory modalities (these can be anything from something familiar, like a pair of cameras, to something less so, like a Geiger counter or some kind of direct feed from thousands of sources at once).
It also needs a utility function that maps certain input patterns to certain utilities, and some [black box] statistical hierarchical feature extraction [/black box] so it can sort out the useful/important features in its environment that it can exploit. Researchers in machine learning and reinforcement learning are working on all of this sort of stuff; it’s fairly mainstream.
As far as computing power goes: the computing power of the human brain can be estimated, so we can do a pretty straightforward analysis of how much more is possible. In terms of raw computing power, I think we’re actually getting quite close to the level of the human brain, but I can’t seem to find a nice source for this. There are also interesting “neuromorphic” technologies geared toward stepping up massively parallel processing (many things being processed at once) and scaling down hardware size by a pretty nice factor (I can’t recall if it was 10 or 100), such as the SyNAPSE project. In addition, with things like cloud/distributed computing, I don’t think that getting enough computing power together is likely to be much of an issue.
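(For a rough sense of scale, here is the usual back-of-envelope estimate in code. All three inputs are order-of-magnitude figures that are commonly cited and commonly disputed, so treat the output as very approximate.)

```python
# Very rough back-of-envelope estimate of the brain's raw computing power.
# All three numbers are order-of-magnitude guesses; change them and the
# answer moves accordingly.

neurons = 1e11                 # ~100 billion neurons
synapses_per_neuron = 1e4      # ~10,000 synapses each
avg_firing_rate_hz = 10        # average spikes per second (very rough)

synaptic_ops_per_second = neurons * synapses_per_neuron * avg_firing_rate_hz
print(f"~{synaptic_ops_per_second:.0e} synaptic operations per second")
# ~1e16, which is in the range of today's large supercomputers (in FLOPS),
# though a synaptic operation and a floating-point operation are not the same thing.
```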
Bootstrapping is a metaphor referring to the ability of a process to proceed on its own. So a bootstrapping AI is one that is able to self-improve along a stable gradient until it reaches superintelligence. As far as “how does it know what bits to change”, I’m going to interpret that as “how does it know how to improve itself”. That’s tough. :) We have to program it to improve automatically by using the utility function as a guide. In limited domains, this is easy and has already been done. It’s called reinforcement learning. The machine reads off its environment after taking an action and updates its “policy” (the function it uses to pick its actions) after getting feedback (positive, negative, or no utility).
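(If it helps to see that loop written down, here is a minimal tabular Q-learning sketch on a made-up three-state toy problem; the “policy” is just a table of action values that gets nudged after every step.)

```python
import random

# Minimal tabular Q-learning: act, observe reward and next state, update the
# "policy" (here, a table of action values). The environment is a made-up toy.

states, actions = range(3), range(2)
q = {(s, a): 0.0 for s in states for a in actions}   # the learned policy
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: action 1 moves toward state 2, which pays off."""
    next_state = min(state + action, 2)
    reward = 1.0 if next_state == 2 else 0.0
    return next_state, reward

state = 0
for _ in range(5000):
    # Mostly exploit the current policy, occasionally explore at random.
    action = (random.choice(list(actions)) if random.random() < epsilon
              else max(actions, key=lambda a: q[(state, a)]))
    next_state, reward = step(state, action)
    # Update toward reward plus the discounted value of the best next action.
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state if next_state != 2 else 0     # restart after reaching the goal

print({k: round(v, 2) for k, v in q.items()})
```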
The tricky part is having a machine that can self-improve not just by reinforcement in a single domain, but in general, both by learning and by adjusting its own code to be more efficient, all while keeping its utility function intact—so it doesn’t start behaving dangerously.
As far as SIAI goes, I would say that Friendliness is the driving factor. Not merely because they’re concerned about Friendliness, but because (as far as I know) they were the first group to be seriously concerned with it, and they’re one of the only groups (the other two being headed by Nick Bostrom and having ties to SIAI) working on Friendly AI.
Of course, the issue is that we think developing a generally intelligent machine is probable, and if it happens to be able to self-improve to a sufficient level, it will be incredibly dangerous unless someone puts some serious, serious effort into thinking about how it could go wrong and solving all of the problems necessary to safeguard against that. If you think about it, the more powerful the AGI is, the more needs to be considered. An AGI that has access to massive computing power, can self-improve, and can get as much information (from the internet and other sources) as it wants could easily be a global threat.
This is, effectively, because the utility function has to take into account everything the machine can affect in order to guarantee we avoid catastrophe. An AGI that can affect things at a global scale needs to take everyone into consideration; otherwise it might, say, drain all electricity from the Eastern seaboard (including hospitals and emergency facilities) in order to solve a math problem. It won’t “know” not to do that unless it’s programmed to (by properly defining its utility function to make it take those things into consideration). Otherwise it will just do everything it can to solve the math problem and pay no attention to anything else. This is why keeping the utility function intact is extremely important.
Since only a few groups (SIAI, Oxford’s FHI, and the Oxford Martin Programme on the Impacts of Future Technologies) seem to be working on this, and it’s an incredibly difficult problem, I would much rather have SIAI develop the first AGI than anywhere else I can think of.
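(A toy illustration of the math-problem point, with made-up numbers: the same planner chooses very differently depending on whether side effects are part of its utility function.)

```python
# Toy illustration of why the utility function has to cover side effects.
# The plans, numbers, and names are made up.

# Each candidate plan: (progress on the math problem, harm done to everyone else)
plans = {
    "use own datacenter":         (5.0, 0.0),
    "drain the Eastern seaboard": (9.0, 1e6),
}

def naive_utility(plan):
    progress, _harm = plans[plan]
    return progress                      # only "solve the math problem" counts

def careful_utility(plan):
    progress, harm = plans[plan]
    return progress - harm               # side effects are part of the objective

print(max(plans, key=naive_utility))     # -> "drain the Eastern seaboard"
print(max(plans, key=careful_utility))   # -> "use own datacenter"
```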
Hopefully that helps without getting too mired in details :)
The second is that consciousness is not necessarily even related to the issue of AGI; the AGI certainly doesn’t need any code that tries to mimic human thought. As far as I can tell, all it really needs (and even this might be more constraints than are necessary) is code that allows it to adapt to general environments (transferability) that have nice computable approximations it can build using the data it gets through its sensory modalities (these can be anything from something familiar, like a pair of cameras, to something less so, like a Geiger counter or some kind of direct feed from thousands of sources at once).
It also needs a utility function that maps certain input patterns to certain utilities, and some [black box] statistical hierarchical feature extraction [/black box] so it can sort out the useful/important features in its environment that it can exploit. Researchers in machine learning and reinforcement learning are working on all of this sort of stuff; it’s fairly mainstream.
I am not entirely sure I understood what was meant by those two paragraphs. Is a rough approximation of what you’re saying “an AI doesn’t need to be conscious, an AI needs code that will allow it to adapt to new environments and understand data coming in from its sensory modules, along with a utility function that will tell it what to do”?
Yeah, I’d say that’s a fair approximation. The AI needs a way to compress lots of input data into a hierarchy of functional categories. It needs a way to recognize a cluster of information as, say, a hammer. It also needs to recognize similarities between a hammer and a stick or a crowbar or even a chair leg, in order to queue up various policies for using that hammer (if you’ve read Hofstadter, think of analogies). Very roughly, the utility function guides what it “wants” done, and the statistical inference guides how it does it (how it figures out what actions will accomplish its goals). That seems to be more or less what we need for a machine to do quite a bit.
If you’re just looking to build any AGI, the hard part of those two seems to be getting a nice, working method for extracting statistical features from its environment in real time. The (significantly) harder of the two for a Friendly AI is getting the utility function right.
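(Here is a toy sketch of the “sort inputs into categories” part: k-means clustering of made-up 2-D sense data. Real systems learn many stacked layers of features rather than two flat clusters, but the flavor is similar.)

```python
import random

# Toy "feature extraction": k-means clustering of made-up 2-D sense data into
# categories. Real systems learn hierarchies of features, not just two clusters.

def make_point(center):
    return (center[0] + random.gauss(0, 0.3), center[1] + random.gauss(0, 0.3))

data = [make_point((0, 0)) for _ in range(50)] + [make_point((5, 5)) for _ in range(50)]

def kmeans(points, k=2, iterations=20):
    centers = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center.
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2 + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centers[i] for i, c in enumerate(clusters)]
    return centers

print(kmeans(data))   # typically near (0, 0) and (5, 5): two discovered "categories"
```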
An AGI that has access to massive computing power, can self-improve, and can get as much information (from the internet and other sources) as it wants could easily be a global threat.
Interestingly, hypothetical UFAI (value drift) risk is like other existential risks in its counterintuitive impact, but more so: compared to some other risks, there are many steps where you can fail that don’t appear dangerous beforehand (because nothing like that has ever happened), that might also fail to appear dangerous after the fact, and that therefore don’t appear dangerous as properties of imagined scenarios where they’re allowed to happen. The grave implications aren’t easy to spot. Assuming soft takeoff, a prototype AGI escapes to the Internet—would that be seen as a big deal if it didn’t get enough computational power to become too disruptive? In 10 years it has grown up to become a major player, and in 50 years it controls the whole future…
Even without assuming intelligence explosion or other extraordinary effects, the danger of any misstep is absolute, and yet arguments against these assumptions are taken as arguments against the risk.
How do we know that an artificial intelligence is even possible? I understand that, in theory, assuming that consciousness is completely naturalistic (which seems reasonable), it should be possible to make a computer do the things neurons do to be conscious and thus be conscious. But neurons work differently than computers do: how do we know that it won’t take an unfeasibly high amount of computer-form computing power to do what brain-form computing power does?
As far as we know, it easily could require an insanely high amount of computing power. The thing is, there are things out there that have as much computing power as human brains—namely, human brains themselves. So if we ever become capable of building computers out of the same sort of stuff that human brains are built out of (namely, really tiny machines that use chemicals and stuff), we’ll certainly be able to create computers with the same amount of raw power as the human brain.
How hard will it be to create intelligent software to run on these machines? Well, creating intelligent beings is hard enough that humans haven’t managed to do it in a few decades of trying, but easy enough that evolution has done it in three billion years. I don’t think we know much else about how hard it is.
I’ve seen some mentions of an AI “bootstrapping” itself up to super-intelligence. What does that mean, exactly? Something about altering its own source code, right?
Well, “bootstrapping” is the idea of AI “pulling itself up by its own bootstraps”, or, in this case, “making itself more intelligent using its own intelligence”. The idea is that every time the AI makes itself more intelligent, it will be able to use its newfound intelligence to find even more ways to make itself more intelligent.
Is it possible that the AI will eventually “hit a wall”, and stop finding ways to improve itself? In a word, yes.
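(A toy way to picture both outcomes: if each round of self-improvement yields a gain that shrinks, ability plateaus; if each round makes the next one easier, ability runs away. The update rules below are made up purely for illustration.)

```python
# Toy model of recursive self-improvement. The update rules are made up; they
# only illustrate the difference between "hitting a wall" and a runaway process.

def run(gain_rule, rounds=100):
    ability, gain = 1.0, 0.5
    for _ in range(rounds):
        ability += gain
        gain = gain_rule(gain, ability)
    return ability

# Each improvement is harder to find than the last: gains shrink, ability plateaus.
plateau = run(lambda gain, ability: gain * 0.9)
# Each improvement makes the next one easier to find: gains grow with ability.
runaway = run(lambda gain, ability: 0.1 * ability)

print(f"plateau: {plateau:.3g}")   # about 6: the process walls out
print(f"runaway: {runaway:.3g}")   # about 1.9e4 and growing exponentially
```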
How does it know what bits to change to make itself more intelligent?
There’s no easy way. If it knows the purpose of each of its parts, then it might be able to look at a part, and come up with a new part that does the same thing better. Maybe it could look at the reasoning that went into designing itself, and think to itself something like, “What they thought here was adequate, but the system would work better if they had known this fact.” Then it could change the design, and so change itself.
How do we know that an artificial intelligence is even possible? I understand that, in theory, assuming that consciousness is completely naturalistic (which seems reasonable), it should be possible to make a computer do the things neurons do to be conscious and thus be conscious. But neurons work differently than computers do
The highlighted portion of your sentence is not obvious. What exactly do you mean by work differently? There’s a thought experiment (that you’ve probably heard before) about replacing your neurons, one by one, with circuits that behave identically to each replaced neuron. The point of the hypo is to ask when, if ever, you draw the line and say that it isn’t you anymore. Justifying any particular answer is hard (since it is axiomatically true that the circuit reacts the way that the neuron would). I’m not sure that circuit-neuron replacement is possible, but I certainly couldn’t begin to justify (in physics terms) why I think that. That is, the counter-argument to my position is that neurons are physical things and thus should obey the laws of physics. If the neuron was built once (and it was, since it exists in your brain), what law of physics says that it is impossible to build a duplicate?
how do we know that it won’t take an unfeasibly high amount of computer-form computing power to do what brain-form computing power does?
I’m not a physicist, so I don’t know whether it is feasible (or understand the science well enough to have an intelligent answer). That said, it is clearly feasible with biological parts (again, neurons actually exist).
I’ve seen some mentions of an AI “bootstrapping” itself up to super-intelligence. What does that mean, exactly? Something about altering its own source code, right? How does it know what bits to change to make itself more intelligent? (I get the feeling this is a tremendously stupid question, along the lines of “if people evolved from apes then why are there still apes?”)
By hypothesis, the AI is running a deterministic process to make decisions. Let’s say that the module responsible for deciding Newcomb problems is originally coded to two-box. Further, some other part of the AI decides that this isn’t the best choice for achieving the AI’s goals. So, the Newcomb module is changed so that it decides to one-box. Presumably, doing this type of improvement repeatedly will make the AI better and better at achieving its goals. Especially if the self-improvement checker can itself be improved somehow.
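(Here is a hedged toy version of that kind of module swap, with the “modules” reduced to trivial Python functions and a made-up benchmark; real self-modification of decision procedures would be nothing like this simple.)

```python
# Toy "self-improvement by module swap": the system scores candidate versions of
# one of its own decision modules and keeps whichever scores best. The modules
# and the benchmark are made up for illustration.

def decide_v1(problem):
    return problem["safe_payoff"]                    # always take the safe option

def decide_v2(problem):
    return max(problem["safe_payoff"], problem["risky_payoff"] * problem["risky_odds"])

BENCHMARK = [
    {"safe_payoff": 1.0, "risky_payoff": 10.0, "risky_odds": 0.5},
    {"safe_payoff": 2.0, "risky_payoff": 1.0,  "risky_odds": 0.9},
]

def score(module):
    return sum(module(problem) for problem in BENCHMARK)

class Agent:
    def __init__(self):
        self.decide = decide_v1                      # current decision module

    def self_improve(self, candidates):
        # Swap in a candidate module only if it scores better on the benchmark.
        self.decide = max(candidates + [self.decide], key=score)

agent = Agent()
agent.self_improve([decide_v2])
print(agent.decide.__name__)   # decide_v2: the better-scoring module was adopted
```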
It’s not obvious to me that this leads to superintelligence (i.e. Straumli-perversion-level intelligence, if you’ve read [EDIT] A Fire Upon the Deep), even with massively faster thinking. But that’s what the community seems to mean by “recursive self-improvement.”
This is wrong in a boring way; you’re supposed to be wrong in interesting ways. :-)
What prevents you from making a meat-based AI?
Obligatory link.
(A Fire Upon the Deep)
ETA: Oops! Deepness in the Sky is a prequel, didn’t know and didn’t google.
(Also, added to reading queue.)
Thanks, edited.