The theories actually used in particle physics can generally be obtained by starting with some classical field theory and then “quantizing” it. You go from something described by straightforward differential equations (the classical theory) to a quantum theory on the configuration space of the classical theory, with an uncertainty principle, probability amplitudes, and so forth. There is a formal procedure in which you take the classical differential equations and reinterpret them as “operator equations” which describe relationships between the elements of the Schrodinger equation of the resulting quantum field theory.
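For concreteness, here is a minimal sketch of what that procedure amounts to in the simplest (canonical, single-particle) setting; the field-theory version replaces $q$ and $p$ with field operators, but the pattern is the same:

$$\{q, p\} = 1 \;\longrightarrow\; [\hat q, \hat p] = i\hbar, \qquad \frac{dA}{dt} = \{A, H\} \;\longrightarrow\; \frac{d\hat A}{dt} = \frac{i}{\hbar}[\hat H, \hat A].$$

The classical equations of motion reappear as Heisenberg-picture operator equations, and the Schrodinger equation $i\hbar\,\partial_t \psi = \hat H \psi$ governs the state on the classical configuration space.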
Many-worlds, being a theory which says that the universal wavefunction is the fundamental reality, starts with a quantum perspective and then tries to find the observable quasi-classical reality somewhere within it. However, given the fact that the quantum theories we actually use have not just a historical but a logical relationship to corresponding classical theories, you can start at the other end and try to understand quantum theory in basically classical terms, only with something extra added. This is what Hadley is doing. His hypothesis is that the rigmarole of quantization is nothing but the modification to probability theory required when you have a classical field theory coupled to general relativity, because microscopic time-loops (“closed timelike curves”) introduce certain constraints on the possible behavior of quantities which are otherwise causally disjoint (“spacelike separated”). To reduce it all to a slogan: Hadley’s theory is that quantum mechanics = classical mechanics + loops in time.
There are lots of people out there who want to answer big questions in a simple way. Usually you can see where they go wrong. In Hadley’s case I can’t, nor has anyone else rebutted the proposal. Superficially it makes sense, but he really needs to re-derive the Schrodinger equation exactly, somehow, and maybe he can’t do that without a much better understanding (than anyone currently possesses) of “non-orientable 4-manifolds”. To put it yet another way: he’s saying that the Schrodinger equation is the appropriate approximate framework for describing the propagation of particles and fields on such manifolds.
Hadley’s theory is one member of a whole class of theories according to which complex numbers show up in quantum theory because you’re conditioning on the future as well as on the past. I am not aware of any logical proof that complex-valued probabilities are the appropriate formalism for such a situation. But there is an intriguing formal similarity between quantum field theory in N space dimensions and statistical mechanics in N+1 dimensions. It is as if, when you think about initial and final states of an evolving wavefunction, you should think about events in the intermediate space-time volume as having local classically-probabilistic dependencies both forwards and backwards in time—and these add up to chained dependencies in the space-like direction, as you move infinitesimally forward along one light-cone and then infinitesimally backward along another—and the initial and final wavefunctions are boundary conditions on this chunk of space-time, with two components (real and imaginary) everywhere corresponding to forward-in-time and backward-in-time dependencies.
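The formal similarity is easiest to see in the path-integral form of the transition amplitude, where the initial and final wavefunctions really do enter only as boundary data on a chunk of space-time. Schematically, for a single field $\phi$ evolving from time $0$ to time $T$,

$$\langle \psi_f |\, e^{-i\hat H T/\hbar} \,| \psi_i \rangle \;=\; \int d\phi_f\, d\phi_i\;\; \psi_f^*(\phi_f)\, \psi_i(\phi_i) \int_{\phi(0)=\phi_i}^{\phi(T)=\phi_f} \mathcal{D}\phi\;\; e^{iS[\phi]/\hbar},$$

and everything between the two time slices is summed over, weighted by a phase rather than by a classical probability.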
This sort of idea has haunted physics for decades—it’s in “Wheeler-Feynman absorber theory”, in Aharonov’s time-symmetric quantum mechanics (where you have two state vectors, one evolving forwards and one evolving backwards)… and to date it has neither been vindicated nor debunked, as a possible fundamental explanation of quantum theory.
Turning now to your final questions: perhaps it is a little clearer now that you do not need magic to not have many-worlds at the macro level; you need only an interpretation of micro-level superposition which does not involve two-things-in-the-one-place. Thus, according to these zigzag-in-time theories, micro-level superposition is a manifestation of a weave of causal/probabilistic dependencies oriented in two time directions, into the past and into the future. Like ordinary probability, it’s mere epistemic uncertainty, expressed in an unusual formalism, and in actuality the quantum object is only ever in one state or the other.
Now let’s consider Bohm’s theory. How does a quantum computer work according to Bohm? As normally understood, Bohm’s theory says you have a universal wavefunction and a classical world, whose evolution is guided by said wavefunction. So a Bohmian quantum computer works because the wavefunction is part of the theory. However, the conceptually interesting reformulation of Bohm’s theory is one where the wavefunction is just treated as a law of motion, rather than as a thing in itself. The Bohmian law of motion for the classical world is that it follows the gradient of the complex phase in configuration space. But if you calculate that through, for a particular universal wavefunction, what you get is the classically local potential exhibited by the classical theory from which your quantum theory was mathematically derived, plus an extra nonlocal potential. The point is that Bohmians do not strictly need to posit wavefunctions at all—they can just talk about the form of that nonlocal potential. So, though no one has done it, there is going to be a neo-Bohmian explanation for how a quantum computer works in which qubits don’t actually go into superposition and the nonlocal dynamics somehow (paging Dr Aaronson...) gives you that extra power.
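To make the “gradient of the complex phase” and the local/nonlocal split concrete, here is the standard textbook decomposition (sketched for particles of mass $m_k$): write the wavefunction in polar form and read off the guidance law,

$$\psi = R\, e^{iS/\hbar}, \qquad \dot{\mathbf q}_k = \frac{\nabla_k S}{m_k}.$$

Substituting this form into the Schrodinger equation gives a Hamilton-Jacobi-type equation in which the particles feel the classical potential $V$ plus an extra “quantum potential”

$$Q = -\sum_k \frac{\hbar^2}{2 m_k} \frac{\nabla_k^2 R}{R},$$

which depends on the whole configuration at once; that $Q$ is the nonlocal potential referred to above.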
To round this out, I want to say that my personally preferred interpretation is none of the above. I’d prefer something like this so I can have my neo-monads. In a quasi-classical, space-time-based one-world interpretation, like Hadley’s theory or neo-Bohmian theory, Hilbert space is not fundamental. But if we’re just thinking about what looks promising as a mathematical theory of physics, then I think those options have to be mentioned. And maybe consideration of them will inspire hybrid or intermediate new theories.
I hope this all makes clear that there is a mountain of undigested complexity in the theoretical situation. Experiment has not validated many-worlds; it has validated quantum mechanics, and many-worlds is just one interpretation thereof. If the aim is to “think like reality”, the epistemic reality is that we’re still thinking it through and do not know which interpretation, if any, is correct.
What happens when I measure an entangled particle at A after choosing an orientation, you measure it at B, and we’re a light-year apart, moving at different speeds, and each measuring “first” in our frame of reference?
Why do these so-called “probabilities” resolve into probabilities when I measure something, but not when they’re just being microscopic? When exactly do they resolve? How do you know?
Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?
These are all questions that must be faced by any attempted single-world theory. Without specific evidence pointing to a single world, they are not only lethal for the single-world theory but lethal for anyone claiming that we have good reason to think about it.
Answering from within a zigzag interpretation:
Something self-consistent. And nothing different from what quantum theory predicts. It’s just that there aren’t any actual superpositions; only one history actually happens.
Quantum amplitudes are (by our hypothesis) the appropriate formal framework for when you have causal loops in time. The less physically relevant such loops are, the more you revert to classical probability theory.
A quantum computation is a self-consistent standing wave of past-directed and future-directed causal chains. The extra power of quantum computation comes from this self-consistency constraint plus the programmer’s ability to set the boundary conditions. A quantum computer’s wavefunction evolution is just the ensemble of its possible histories along with a nonclassical probability measure. Intelligences (or anything real) can show up “in a wavefunction” in the sense of featuring in a possible history.
(Note for clarity: I am not specifically advocating a zigzag interpretation. I was just answering in a zigzag persona.)
Well, we know there’s at least one world. What’s the evidence that there’s more than one? Basically it’s the constructive and destructive interference of quantum probabilities (both illustrated in the double-slit experiment). The relative frequencies of the quantum events observed in this world show artefacts of the way that the quantum measure is spread across the many worlds of configuration space. Or something. But single-world explanations of the features of quantum probability do exist—see above.
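For reference, the interference in question is just the cross term you get when amplitudes, rather than probabilities, are added. In a two-slit setup with amplitudes $\psi_1$ and $\psi_2$ for the two paths,

$$P = |\psi_1 + \psi_2|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}(\psi_1^* \psi_2),$$

and the last term, which can be negative, is what any single-world, classical-probability story has to account for.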
Gonna be pretty hard to square that with both Special Relativity and the Markov requirement on Pearl causal graphs (no correlated sources of background uncertainty once you’ve factored reality using the graph).
I only just noticed this reply. I’m not sure what the relevance of the Markov condition is. You seem to be saying “I have a formalism which does not allow me to reason about loops in time, therefore there shall be no loops in time.”
The Markov requirement is a problem for saying, “A does not cause B, B does not cause A, they have no common cause, yet they are correlated.” That’s what you have to do to claim that no causal influence travels between spacelike separated points under single-world quantum entanglement. You can’t give it a consistent causal model.
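To put a number on that claim: for a polarization-entangled photon pair, the observed correlations violate the CHSH bound that any common-cause model must satisfy. Here is a minimal sketch (the helper function and angle choices are illustrative, not anyone’s published code):

```python
# CHSH combination for the entangled state |Phi+> = (|HH> + |VV>)/sqrt(2).
# Any model in which both outcomes are fixed by a shared variable at the
# source obeys |S| <= 2; the quantum prediction below is 2*sqrt(2) ~ 2.828.
import numpy as np

def polarizer(theta):
    # +/-1-valued observable ("passes" / "absorbed") for a filter at angle theta:
    # cos(2*theta)*sigma_z + sin(2*theta)*sigma_x
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def E(a, b):
    # Correlation <A(a) (x) B(b)> in |Phi+>; works out to cos(2*(a - b)).
    return phi_plus @ np.kron(polarizer(a), polarizer(b)) @ phi_plus

a1, a2 = 0.0, np.pi / 4            # filter settings at the A end
b1, b2 = np.pi / 8, 3 * np.pi / 8  # filter settings at the B end
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)  # ~ 2.828, exceeding the common-cause bound of 2
```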
Consider a single run of a two-photon EPR experiment. Two photons are created in an entangled state, they fly off at light speed in opposite directions, and eventually they each encounter a polarizing filter, and are either absorbed or not absorbed. Considered together, their worldlines (from point of creation to point of interaction) form a big V in space-time, with the two upper tips of the V being spacelike separated.
In these zigzag interpretations, you have locally mediated correlations extending down one arm of the V and up the other. The only tricky part is at the bottom of the V. In Mark Hadley’s theory, there’s a little non-orientable region in spacetime there, which can reverse the temporal orientation of a timelike chain of events with respect to its environment without interrupting the internal sequence of the chain. In John Cramer’s transactional interpretation, each arm of the V is a four-dimensional standing wave (between the atoms of the emitter and the atoms of the detector) containing advanced and retarded components, and it would be the fact that it’s the same emitter at the base of two such standing waves which compels the standing waves to be mutually consistent and not just internally consistent. There may be still other ways to work out the details, but I think the intuitive picture is straightforward.
Does the A measurement and result happen first, or does the B measurement and result happen first, or does some other thing happen first that is the common cause of both results? If you say “No” to all 3 questions then you have an unexplained correlation. If you say “Yes” to either of the first two questions you have a global space of simultaneity. If you say “Yes” to the third question you’re introducing some whole other kind of causality that has no ordinary embedding in the space and time we know, and you shall need to say a bit more about it before I know exactly how much complexity to penalize your theory for.
The physics we have is at least formally time-symmetric. It is actually noncommittal as to whether the past causes the present or the future causes the present. But this doesn’t raise the problems that these zigzag interpretations do, because timelike orientations are always maintained, and so whichever convention is adopted, it’s maintained everywhere.
The situation in a zigzag theory (assuming it can be made to work; I emphasize that I have not seen a Born derivation here either, though Hadley in effect says he’s done it) is the same, except that timelike orientations can be reversed “at the bottom of the V”. In both cases you have causal chains where either end can be treated as the beginning. In one case the chain is (temporally) I-shaped; in the other, it’s V-shaped.
So I’m not sure how to think about it. But maybe it’s best to view the whole of space-time as “simultaneous”, to think of local consistency (perhaps probabilistic) rather than local causality, and to treat the whole thing as a matter of global consistency.
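For reference, the target that any such “Born derivation” has to hit is the Born rule itself: the observed relative frequency of outcome $i$ is

$$\Pr(i) = |\langle i \,|\, \psi \rangle|^2,$$

with $|\psi\rangle$ the quantum state and $|i\rangle$ the state associated with that outcome. A single-world reconstruction succeeds or fails by whether this measure drops out of its ordinary probabilities.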
The Novikov self-consistency principle for classical wormhole space-times seems like it might pose similar challenges.
By the way, can’t I ask you, as a many-worlder, precisely the same question—does A happen first, or does B happen first?
My understanding was that Eliezer is more taking time out of the equation than worrying about which “happen[ed] first.”
His questions make no sense to me from a timeless perspective. They seem remarkably unsophisticated for him.
“What happens when I measure an entangled particle at A after choosing an orientation, you measure it at B, and we’re a light-year apart, moving at different speeds, and each measuring “first” in our frame of reference?
Why do these so-called “probabilities” resolve into probabilities when I measure something, but not when they’re just being microscopic? When exactly do they resolve? How do you know?
Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?
These are all questions that must be faced by any attempted single-world theory. Without specific evidence pointing to a single world, they are not only lethal for the single-world theory but lethal for anyone claiming that we have good reason to think about it.”
No, No, No and No....
Until we have both a unifying theory of physics and conclusive proof of wave function collapse one way or the other, the single-world vs. multi-world debate will still be relevant.
“Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?”
Not the right question. Being charitable here, I will assume you’re asking about the objective reality of the wavefunction. But this has nothing to do with intelligence or anything of the sort.
It is really nauseating to watch a bunch of non-physicists being convinced by their own non-technical arguments on a topic where the technical detail is the only detail that counts.
The best thing you guys can do for yourselves is learn some physics or stop talking about it. I am trying to help you guys save face.
Just in case anyone is interested in responding: don’t bother. I don’t have enough respect for anyone here to care what you have to say.
This is definitely an area where I wouldn’t presume to have my own opinion.
Still, I’m pretty convinced that Porter and Yudkowsky have really learned something about quantum physics.
Could you give a couple of keywords/entry points/references for the zig-zag thingie?
John Cramer and transactional interpretation for by far the most prominent example. Wheeler-Feynman absorber theory was the historical precursor; also see “Feynman checkerboard”. Mark Hadley I mentioned. Aharonov-Vaidman for the “two state vector” version of QM, which is in the same territory. Costa de Beauregard was another physicist with ideas in this direction.
This paper is just one place where a potentially significant fact is mentioned, namely that quantum field theory with an imaginary time coordinate (also called “Euclidean field theory” because the metric thereby becomes Euclidean rather than Riemannian) resembles the statistical mechanics of a classical field theory in one higher dimension. See the remark about how “the quantum mechanical amplitude” takes on “the form of a Boltzmann probability weight”. A number of calculations in quantum field theory and quantum gravity actually use Euclideanized metrics, but just because the integrals are easier to solve there; then you do an analytic continuation back to Minkowski space and real-valued time. The holy grail for this interpretation, as far as I am concerned, would be to start with Boltzmann and derive quantum amplitudes, because it would mean that you really had justified quantum mechanics as an odd specialization of standard probability theory. But this hasn’t been done and perhaps it can’t be done.
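The fact in question, written out schematically: continuing to imaginary time, $t = -i\tau$, turns the oscillating weight of the path integral into a damped one,

$$e^{iS[\phi]/\hbar} \;\longrightarrow\; e^{-S_E[\phi]/\hbar},$$

where $S_E$ is the Euclidean action; this has exactly the form of a Boltzmann weight $e^{-\beta H}$ for a classical field configuration in one higher spatial dimension. The open question, as stated above, is whether the correspondence can be run in the other direction, obtaining quantum amplitudes from an underlying Boltzmann-type distribution.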
I think that you mean Euclidean rather than Minkowskian. Euclidean vs Riemannian has to do with whether spacetime is curved (Euclidean no, Riemannian yes), while Euclidean vs Minkowskian has to do with whether the metric treats the time coordinate differently (Euclidean no, Minkowskian yes). (And then the spacetime of classical general relativity, which answers both questions yes, is Lorentzian.)
Also, the book by Huw Price (Time’s Arrow and Archimedes’ Point).
That is an excellent book even if one ignores the QM part. (In fact, I found that part the weakest, although perhaps I would understand it better now.)