This Web page is devoted to proofs that there are few if any mathematical differences between classical physics and quantum physics, especially given the lack of a well-formed semantics for physics. These proofs force the conclusion that there are mathematical systems for which the mathematics is equivalent for applied classical physics and applied quantum physics. A subset of this conclusion includes the applied physics that is computing. And thus, there are mathematical formalisms for which classical computing and quantum computing are equivalent. The challenge then, addressed on other parts of this Web site, is deriving mathematical formalisms that express these equivalences, formalisms that can be engineered in software to completely enable universal quantum algorithm execution on 'classical computers' (i.e., without physical/optical qubits).
That classical physics and quantum physics have much in common, much overlap, dates back to the alternative formalism of quantum mechanics due to Eugene Wigner, who in a 1932 paper [Wigner1932] introduced a density matrix basis for quantum mechanics. One consequence of his formalism is that, to quote [Matulis2019] "quantum symbols and equations are quite similar to their classical counterparts" (discussed below).
For a long time, Wigner's formalism was treated as a second-class formalism, as opposed to the wavefunction and matrix formalisms of Heisenberg and Schrödinger, because of one bizarre consequence of Wigner's formalism - the appearance of negative probabilities, a concept that even today lies outside the mainstream of probability theory, and is barely mentioned in undergraduate and graduate physics programs.
Even Wigner himself did not mention these negative probabilities much after his 1932 paper, and for decades they were referred to as 'pseudo-probabilities': useful mathematical tools, but not a part of reality, because they weren't being measured. But starting in 1996, negative probabilities were measured in the laboratory, and more recently they have been engineered for practical uses (and are thus much more a part of reality than strings). This lends support, starting from Wigner's formalism, to the argument that there are few if any differences between classical and quantum physics (much like the few differences between American English and British English :-)
What follows are excerpts from papers of recent years arguing that there are few if any differences between classical physics and quantum physics. Which forces the conclusion that there should be few differences between classical computing and quantum computing. (Note: this is Appendix C of a comprehensive review of negative probabilities.)
Note: this Web page can be equivalently titled "The Corruption of the Teaching of Quantum Mechanics and Quantum Computing", given that none of the following is taught in the introductory courses, even though the following makes it much easier to transition from the classical to the quantum.
In 1915, L. Fejér published a paper on polynomials, “Ueber trigonometrische polynome” (J. reine u. angew. Math., v146, 53-82), one implication of which is that there are no fundamental differences between classical and quantum mechanical probabilities, and that much of quantum mechanics can be derived from classical mechanics. I quote from a 1998 paper by F.H. Fröhner, “Missing link between probability theory and quantum mechanics: the Riesz-Fejér theorem” [Fröhner1998], that excellently explores the implications of Fejér’s work with regard to the near mathematical equivalence of classical and quantum physics (implications pretty much ignored by everyone afterwards, and not recognized by Fejér and Riesz themselves):Abstract: ... The superposition principle is found to be a consequence of an apparently little-known mathematical theorem for non-negative Fourier polynomials published by Fejér in 1915 that implies wave-mechanical interference already for classical probabilities. Combined with the classical Hamiltonian equations for free and accelerated motion, gauge invariance and particle indistinguishability, it yields all basic quantum features - wave-particle duality, operator calculus, uncertainty relations, Schrödinger equation, CPT invariance and even the spin-statistics relationship - which demystifies quantum mechanics to quite some extent.
Page 651: The final conclusion is (1) traditional probability theory can be extended by means of the Riesz-Fejér superposition theorem, without change of the basic sum and product rules from which it unfolds, hence without violation of Cox's consistency conditions; (2) the resulting probability wave theory turns out to be essentially the formalism of quantum mechanics inferred by physicists with great effort from the observation of atomic phenomena.
Page 652: From this viewpoint quantum mechanics looks much like an error propagation (or rather information transmittal) formalism for uncertainty-afflicted physical systems that obey the classical equations of motion.
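Fröhner's point can be seen in a few lines of numerical experimentation. The sketch below (with illustrative coefficients of my own choosing, not taken from the paper) builds a non-negative trigonometric polynomial as the squared magnitude of a complex Fourier polynomial, in the spirit of the Riesz-Fejér factorization, and shows that superposing two such "amplitudes" before squaring produces a wave-style interference cross-term while the result remains a legitimate non-negative density:

```python
import numpy as np

# Riesz-Fejér, illustrated numerically: a non-negative trigonometric
# polynomial can be written as |q(e^{i theta})|^2 for a Fourier
# polynomial q -- i.e., a classical density already carries an
# underlying "amplitude" whose superposition produces interference.
theta = np.linspace(0, 2 * np.pi, 1000)
z = np.exp(1j * theta)

# Two "amplitudes" (Fourier polynomials in z), like two wave components.
q1 = 0.8 + 0.3 * z
q2 = 0.5 * z**2

# Superpose amplitudes first, then square -- the quantum-style rule.
p_super = np.abs(q1 + q2) ** 2

# Sum of individual intensities -- the naive classical rule.
p_sum = np.abs(q1) ** 2 + np.abs(q2) ** 2

# The difference is exactly the interference (cross) term 2*Re(q1* q2),
# present even though everything here is classical Fourier analysis.
cross = 2 * np.real(np.conj(q1) * q2)

assert np.allclose(p_super, p_sum + cross)
assert np.all(p_super >= 0)          # |q|^2 is automatically non-negative
assert np.max(np.abs(cross)) > 0.1   # the interference term is not negligible
print("max interference term:", np.max(np.abs(cross)))
```

The non-negativity here is automatic, not imposed: any squared magnitude of a superposed amplitude is a valid classical density, which is the content of the theorem Fröhner leans on.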
The near equivalence of classical and quantum mechanics can be traced back as far as Wigner’s 1932 paper [Wigner1932], where he introduced his density matrix formalism of quantum mechanics (which led to the appearance of negative probabilities). [Matulis2019], Section 8, has a nice summary of the implicit equivalence in Wigner’s paper:[The] main advantage [of the Wigner formalism] is that in this representation quantum symbols and equations are quite similar to their classical counterparts. For instance, the coordinate and momentum operators convert themselves just to the numbers similar to the coordinate x and the momentum v used in classical mechanics. Their mean values are expressed in terms of simple integrals where they are multiplied by the Wigner function. The density of the particles as a function of space and time is just the integral of the Wigner function. … In the case of simple systems (that of free particles) the density matrix in the Wigner representation satisfies the classical Liouville equation, and the quantum effects may reveal themselves only due to additional restrictions such as the boundary or initial conditions to be satisfied by the density matrix.(The Liouville equation describes the time evolution of the phase space distribution function of classical mechanics; the Liouville theorem asserts that the phase space distribution function is constant along the trajectories of a Hamiltonian dynamical system.)
Sadly, Wigner died before negative probabilities were first measured in laboratory experiments. Had he lived long enough, he would have rewritten his 1932 paper by deleting the following paragraph (where P() is the Wigner function for both pure and mixed states):Of course, P(x1, . . . , xn; p1, . . ., pn) cannot be really interpreted as the simultaneous probability for coordinates and momenta, as is clear from the fact, that it may take negative values. But of course this must not hinder the use of it in calculations as an auxiliary function which obeys many relations we would expect from such a possibility.
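The negativity Wigner apologized for is easy to exhibit numerically. The sketch below (a standard textbook computation, not from any cited paper; hbar = 1, and the grid parameters are my own choices) evaluates Wigner's integral for the first excited state of the harmonic oscillator and finds W(0,0) = -1/π, a genuinely negative quasiprobability value:

```python
import numpy as np

# Wigner's quasiprobability for the first excited harmonic-oscillator
# state (hbar = 1). Grid extent and resolution are illustrative choices.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / np.sqrt(np.pi)) * x * np.exp(-x**2 / 2)  # normalized n=1 state
assert abs(np.sum(psi**2) * dx - 1.0) < 1e-6                 # normalization check

def wigner(x0, p0):
    """W(x0, p0) = (1/pi) * integral of psi(x0+y) psi(x0-y) e^{2 i p0 y} dy."""
    psi_plus = np.interp(x0 + x, x, psi, left=0.0, right=0.0)
    psi_minus = np.interp(x0 - x, x, psi, left=0.0, right=0.0)
    return float(np.real(np.sum(psi_plus * psi_minus * np.exp(2j * p0 * x)) * dx) / np.pi)

w00 = wigner(0.0, 0.0)
print(w00)                            # approximately -1/pi = -0.3183...
assert w00 < 0                        # negative at the origin
assert abs(w00 + 1.0 / np.pi) < 1e-4
```

Because this state's wavefunction is odd, the integrand at the origin is -psi(y)^2, so the negativity is exact, not a numerical artifact.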
In 1949, José Enrique Moyal, in the paper "Quantum Mechanics as a Statistical Theory" [Moyal1949], independently derives Wigner's distributions, and recognizes that they are quantum moment-generating functionals, and thus the basis of an elegant encoding of all quantum expectation values, and hence quantum mechanics, in classical phase space (from the Wikipedia page).
Cosmas Zachos, at Argonne National Laboratory, in his paper "Deformation quantization: quantum mechanics lives and works in phase space" [Zachos2002], provides an excellent summary of how Wigner's formalism dispenses with much of the machinery of quantum mechanics - wavefunctions, Hilbert space, operators, etc.:[The third formalism, after the Schrödinger/Heisenberg Hilbert space formalism and Feynman's path integrals,] is the phase space formalism, based on Wigner's (1932) quasi-distribution function and Weyl's (1927) correspondence between quantum mechanical operators and ordinary c-number phase-space functions. The crucial composition structure of these functions, which relies on the *-product, was fully understood by Groenewold (1946), who, together with Moyal (1949), pulled the entire formulation together. ...
[Wigner's distribution function] is a special representation of the density matrix (in the Weyl correspondence). Alternatively, it is a generating function for all spatial autocorrelation functions of a given quantum mechanical wavefunction.
... the central conceit of this review is that the above input wavefunctions may ultimately be forfeited, since the Wigner functions are determined, in principle, as the solutions of suitable functional equations. Connections to the Hilbert space formalism of quantum mechanics may thus be ignored, ...
It is not only wavefunctions that are missing in this formulation. Beyond an all-important (noncommutative, associative, pseudodifferential) operation, the *-product, which encodes the entire quantum mechanical action, there are no operators. Observables and transition amplitudes are phase-space integrals of c-number functions (which compose through the *-product), weighted by the Wigner function, as in statistical mechanics. ... the computation of observables and the associated concepts are evocative of classical probability theory.
[Wigner's] formulation of quantum mechanics is useful in describing quantum transport processes in phase space, of importance in quantum optics, nuclear physics, condensed matter, and the study of semiclassical limits of mesoscopic systems and the transition to classical statistical mechanics. It is the natural language to study quantum chaos and decoherence, and provides intuition in quantum mechanical interference, probability flows as negative probability backflows, and measurements of atomic systems. ...
As a significant aside, the Wigner function has extensive practical applications in signal processing and engineering (time-frequency analysis), since time and energy (frequency) constitute a pair of Fourier-conjugate variables just like the x and p of phase space.
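That engineering use can be made concrete with a generic discrete Wigner-Ville sketch (my own minimal implementation, not from any cited paper; the signal parameters are arbitrary): for a linear chirp, the distribution concentrates its energy along the instantaneous-frequency line, which is what makes it attractive for time-frequency analysis:

```python
import numpy as np

# Discrete Wigner-Ville distribution of a linear chirp: energy
# concentrates along the instantaneous-frequency line f0 + rate*t.
N = 256
t = np.arange(N)
f0, rate = 0.05, 0.15 / N          # start frequency and chirp rate (cycles/sample)
s = np.exp(2j * np.pi * (f0 * t + 0.5 * rate * t**2))

def wvd_slice(s, n):
    """One time slice W(n, f) of the discrete Wigner-Ville distribution."""
    N = len(s)
    kmax = min(n, N - 1 - n)
    k = np.arange(-kmax, kmax + 1)
    kernel = s[n + k] * np.conj(s[n - k])    # Wigner's symmetric autocorrelation
    m = np.arange(N)                         # probed frequencies f = m/(2N)
    return np.real(np.exp(-2j * np.pi * np.outer(m / N, k)) @ kernel)

n = N // 2
slice_ = wvd_slice(s, n)
f_peak = np.argmax(slice_) / (2 * N)     # factor 2: WVD frequency axis
f_inst = f0 + rate * n                   # instantaneous frequency of the chirp
print(f_peak, f_inst)
assert abs(f_peak - f_inst) < 0.01       # the WVD peaks at the chirp frequency
```

The same symmetric autocorrelation kernel s(t+τ)s*(t-τ) is, term for term, the signal-processing twin of Wigner's ψ(x+y)ψ*(x-y).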
Claeys and Polkovnikov, in their paper "Quantum eigenstates from classical Gibbs distributions" [Claeys2020], show how complete the Wigner formalism is (sometimes referred to as the Wigner-Moyal or Wigner-Weyl formalism, reflecting the parallel work of Moyal and Weyl):Abstract: ... We discuss how the language of wave functions (state vectors) and associated non-commuting Hermitian operators naturally emerges from classical mechanics by applying the inverse Wigner-Weyl transform to the phase space probability distribution and observables. In this language, the Schrödinger equation follows from the Liouville equation, with h-bar now a free parameter. ... We illustrate this correspondence by showing that some paradigmatic examples such as tunneling, band structures, and quantum eigenstates in chaotic potentials can be reproduced to a surprising precision from a classical Gibbs ensemble, without any reference to quantum mechanics and with all parameters (including h-bar) on the order of unity.Note: what is of additional interest in [Claeys2020] is that their extension of classical mechanics also gives rise to negative probabilities. "Interestingly, it is now classical mechanics which allows for apparent negative probabilities to occupy eigenstates, dual to the negative probabilities in Wigner's quasiprobability distribution."
A.J. Bracken, in his paper “Quantum mechanics as an approximation to classical mechanics in Hilbert space” [Bracken2008], uses Wigner’s function in another way to create a near classical/quantum equivalence:Classical mechanics is formulated in complex Hilbert space with the introduction of a commutative product of operators, an antisymmetric bracket, and a quasi-density operator. These are analogues of the star product, the Moyal bracket, and the Wigner function in the phase space formulation of quantum mechanics. Classical mechanics can now be viewed as a deformation of quantum mechanics. The forms of semi-quantum approximations to classical mechanics are indicated.
Jean-André Ville, in the paper “Theory and Applications of the Notion of a Complex Signal” [Ville1958], independently derives Wigner's quasiprobability functions from a purely signal theory point of view (one year later, Moyal also independently derives Wigner's function, but back again in the world of quantum mechanics). In Part Three, "Distribution of Energy in the Time Frequency Domain", equations (6), (8), and (10) on page 22 of the article are basically Wigner's function. While he does not discuss how his function goes negative, he does tie his analysis back to quantum mechanics (page 11):We treat this question in Part III, according to the following principles: a signal may be considered as being a certain amount of energy, whose distribution in time (given by the form of the signal) and in frequency (given by the spectrum) is known. If the signal extends through an interval of time T and an interval of frequencies Ω, we have a distribution of energy in a rectangle TΩ. We know the projections of this distribution upon the sides of the rectangle, but we do not know the distribution in the rectangle itself. If we try to determine the distribution within the rectangle, we run into the following difficulty: if we cut up the signal on the time scale, we display frequencies; if we cut up on the frequency scale, we display the times. The distribution cannot be determined by successive measures. A simultaneous determination must be sought, which has only a theoretical significance. Therefore, we must operate either on the signal or on the spectrum. But for the signal where, for example, time is a variable, frequency is properly speaking an operator (the operator (1/2πj)d/dt, for frequencies in cps). We have determined the simultaneous distribution of t and of (1/2πj)d/dt, by methods of calculus of probabilities, which easily leads to the instantaneous spectrum (and just as easily to the distribution in time of the energy associated with one frequency).
It is seen that the formal character of the method of calculation used is imposed by the difficulty encountered, which is analogous to that which occurs in quantum mechanics when non-permutable operators must be composed.
It may be more than “analogous”.
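Ville's remark that frequency is "properly speaking an operator" can be checked directly (a generic numerical sketch of my own; the tone frequency and grid are arbitrary choices). Applied to a pure tone, the operator (1/2πj)d/dt returns the frequency times the signal, and it fails to commute with multiplication by t, with the same constant-commutator structure as x and p in quantum mechanics:

```python
import numpy as np

# Frequency as the operator (1/(2*pi*j)) d/dt, checked numerically
# on a pure tone e^{2*pi*i*f*t}.
f = 3.0
t = np.linspace(0, 1, 20001)
s = np.exp(2j * np.pi * f * t)

def freq_op(sig, t):
    """Apply (1/(2*pi*j)) d/dt via finite differences."""
    return np.gradient(sig, t) / (2j * np.pi)

# eigenvalue check: the operator returns f times the signal
assert np.allclose(freq_op(s, t)[100:-100], f * s[100:-100], atol=1e-3)

# non-commutation with multiplication by t:
# (F T - T F) s = (1/(2*pi*i)) s, a constant "hbar"-like term
comm = freq_op(t * s, t) - t * freq_op(s, t)
expected = s / (2j * np.pi)
assert np.allclose(comm[100:-100], expected[100:-100], atol=1e-3)
print("commutator matches (1/(2*pi*i)) times the signal")
```

The interior slices `[100:-100]` simply avoid the less accurate one-sided differences at the grid edges.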
We thus see that over 70 years ago, in the classical physics and mathematics of Fejér, Wigner, Moyal and Ville, one could already express much of quantum mechanics without any 'spooky' assumptions, but instead just with the mathematics of classical mechanics. As others argue below, if you add the constraint that classical measurements can't be infinitely precise, you pretty much get all of quantum mechanics from classical mechanics. That is, the mathematics of the classical is much the same as the mathematics of the quantum. Which means that one subset of this latter statement, applied mathematics, allows us to state that the applied mathematics of the classical is much the same as the applied mathematics of the quantum. With computing being applied mathematics, you get that classical computing should be much the same as quantum computing. You just need to find the right mathematics. Since we engineers axiomatically accept that you can't measure anything with infinite precision (even with the best Hewlett-Packard equipment), we engineers accept the challenge of finding this mathematics.
And that all of this is barely discussed in standard quantum mechanics (starting with the fact that it is never taught), nor in standard quantum computing, is a sad statement of intellectual corruption.
Christian Baumgarten, in an interesting paper, “How to (Un)-Quantum Mechanics” [Baumgart2018], argues quite bluntly that “the real difference between Classical Mechanics and Quantum Mechanics is not mathematical”:If the metaphysical assumptions ascribed to classical mechanics are dropped, then there exists a presentation in which little of the purported difference between quantum and classical mechanics remains. This presentation allows us to derive the mathematics of relativistic quantum mechanics on the basis of a purely classical Hamiltonian phase space picture. It is shown that a spatio-temporal description is not a condition for but a consequence of objectivity. It requires no postulates. This is achieved by evading spatial notions and assuming nothing but time translation invariance. ... that the real difference between Classical Mechanics and Quantum Mechanics is not mathematical.He couples this with a reference to much debunking of the mysteries:Many, maybe most, of the alleged mysteries of QM have been debunked before, or their non-classicality has been critically reviewed. We went beyond a mere critique of the standard approach: as we have shown there is little in the mathematical formalism of quantum theory that cannot be obtained from classical Hamiltonian mechanics.With regard to non-locality, he writes:The intrinsic non-locality of un-quantum mechanics explains why it makes only limited sense to ask where an electron or photon “really” is in space: the electron is not located at some specific position in space at all. Because physical ontology is not primarily defined by spatial notions, it is meaningless to ask if it can simultaneously “be” at different positions. Surely it can, since projected onto space-time, the electron has no definite location, but “is” a wave.He concludes:The math [of CM and QM] is literally the same. The only victim of our presentation is the metaphysical presupposition that space is fundamental.
This however is in agreement with the experimental tests of Bell’s theorem: it is a price we have to pay anyhow.In other papers, he shows how Schrödinger’s equation [Baumgart2020] and Dirac’s equation can be simply derived by abandoning the illogic of ‘point’ particles:As Rohrlich’s analysis reveals, the alleged intuitiveness and logic of the notion of the point particle fails, on closer inspection, to provide a physically and logically consistent classical picture. If we dispense with this notion, Schrödinger’s equation can be easily derived and might be regarded as a kind of regularization that allows us to circumvent the problematic infinities of the ‘classical’ point-particle idealization. Our presentation demonstrates that the “Born rule”, which states that ψ⋆ψ is a density (and also that the “probability density” is positive semidefinite), can be made the initial assumption of the theory rather than its interpretation. However, as is well known, Schrödinger’s equation is not the most fundamental equation, but is derived from the Dirac equation. Only for the Lorentz covariant Dirac equation can we expect full compatibility with electromagnetic theory. We have shown elsewhere how the Dirac equation can be derived from ‘first’ (logical) principles [14-16]. The derivation automatically yields the Lorentz transformations, the Lorentz force law [17-19] and even Maxwell’s equations in a single coherent framework.Again, if the mathematics of classical and quantum mechanics are mostly the same, then there will be much overlap between the mathematics of classical and quantum computing, allowing the small gap to be filled in by negative probabilities, eliminating the need for quantum computing hardware (other than your cellphone or personal computer).
Diederik Aerts and Thomas Durt, in the paper "Quantum, classical and intermediate: a measurement model" [Aerts1994] (Free University of Brussels, 1994), present a measurement system that transitions smoothly and continuously (with the correct math) from classical, to a hybrid/intermediate state, to quantum.“The limit of zero fluctuations is classical and the limit of maximal fluctuations is quantum.”
“If we consider the structure of the intermediate case, we can show that the Hilbert space axioms of quantum mechanics are no longer valid, but are replaced by a more general structure, and this explains why it is not possible to have a continuous transition between quantum and classical within the orthodox Hilbert space quantum mechanics. We shall show that for the intermediate case, the probability model is not quantum (representable by a Hilbertian probability model). Both results indicate that the fundamental difficulty of describing the measurement process might be due to a structural shortcoming of the available physical theories (quantum mechanics and classical mechanics).”
Ghenadie Mardari and James Greenwood argue that one interpretation allows quantum superposition to be a classical process, in their paper “Classical sources of non-classical physics: the case of linear superposition” [Mardari2004]:Classical linear wave superposition produces the appearance of interference. This observation can be interpreted in two equivalent ways: one can assume that interference is an illusion because input components remain unperturbed, or that interference is real and input components undergo energy redistribution. Both interpretations entail the same observable consequences at the macroscopic level, but the first approach is considerably more popular. This preference was established before the emergence of quantum mechanics. Unfortunately, it requires a non-classical underlying mechanism and fails to explain well-known microscopic observations. Classical physics appears to collapse at the quantum level. On the other hand, quantum superposition can be described as a classical process if the second alternative is adopted. The gap between classical mechanics and quantum mechanics is an interpretive problem.
Gerard ‘t Hooft argues that there are few differences between the classical and the quantum, in his paper “Quantum mechanics from classical logic” [Hooft2012]; from the Abstract:Although quantum mechanics is generally considered to be fundamentally incompatible with classical logic, it is argued here that the gap is not as great as it seems. Any classical, discrete, time reversible system can be naturally described using a quantum Hilbert space, operators, and a Schrödinger equation. The quantum states generated this way resemble the ones in the real world so much that one wonders why this could not be used to interpret all of quantum mechanics this way. Indeed, such an interpretation leads to the most natural explanation as to why a wave function appears to “collapse” when a measurement is made, and why probabilities obey the Born rule. Because it is real quantum mechanics that we generate, Bell’s inequalities should not be an obstacle.Three years later, 't Hooft writes a 250-page paper on how to view quantum mechanics as nothing more than a tool, not a theory, for analyzing classical systems, using cellular automaton techniques, in his paper "The Cellular Automaton Interpretation of Quantum Mechanics" [tHooft2015]. He writes:Abstract: ... Quantum mechanics is looked upon as a tool, not as a theory. Examples are displayed of models that are classical in essence, but can be analyzed by the use of quantum techniques, and we argue that even the Standard Model, together with gravitational interactions, might be viewed as a quantum mechanical approach to analyze a system that could be classical at its core. We explain how such thoughts can conceivably be reconciled with Bell’s theorem, and how the usual objections voiced against the notion of ‘superdeterminism’ can be overcome, at least in principle. Our proposal would eradicate the collapse problem and the measurement problem.
Even the existence of an “arrow of time” can perhaps be explained in a more elegant way than usual.Six years later, 't Hooft shows how any quantum mechanical model can be modeled by a sufficiently complex classical system of equations, in his paper "Explicit construction of Local Hidden Variables for any quantum theory up to any desired accuracy" [tHooft2021], while also arguing that we don't need the un-real real numbers of Cantor and Dedekind. He writes:Abstract: The machinery of quantum mechanics is fully capable of describing a single ontological world. Here we discuss the converse: in spite of appearances, and indeed numerous claims to the contrary, any quantum mechanical model can be mimicked, up to any required accuracy, by a completely classical system of equations. An implication of this observation is that Bell’s theorem cannot hold in many cases. This is explained by scrutinising Bell’s assumptions concerning causality, retrocausality, statistical (in-)dependence, and his fear of ‘conspiracy’ (there is no conspiracy in our constructions).
Conclusion: What one can notice from the results of this paper is that we cannot have just any set of real numbers for these interaction parameters [of the Standard Model]. Finiteness of the lattice of fast fluctuating parameters would suggest that, if only we could guess exactly what the fast moving variables are, we should be able to derive all interactions in terms of simple, rational coefficients. Thus, a prudent prediction might be made:All interaction parameters for the fundamental particles are calculable in terms of simple, rational coefficients.
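't Hooft's basic move, describing a deterministic, reversible, discrete system in quantum language, can be sketched in a few lines of linear algebra (my own miniature illustration of the idea, not code from [tHooft2015]; the system size is an arbitrary choice). A cyclic shift on four "ontological" states is a unitary operator, and a Hermitian Hamiltonian generating it follows from H = i log U:

```python
import numpy as np

# A deterministic reversible update (cyclic shift on 4 basis states),
# written as an operator on a Hilbert space, in the cellular-automaton spirit.
N = 4
U = np.roll(np.eye(N), 1, axis=0)              # classical rule: state k -> k+1 mod N

assert np.allclose(U @ U.conj().T, np.eye(N))  # the rule is unitary, as in QM

# its eigenvalues lie on the unit circle, like any quantum evolution operator
vals, vecs = np.linalg.eig(U)
assert np.allclose(np.abs(vals), 1.0)

# a Hermitian "Hamiltonian" generating one time step via U = e^{-iH}:
# H = i log U, built from the eigen-decomposition
h_eigs = 1j * np.log(vals)                     # real numbers (phases of vals)
H = vecs @ np.diag(h_eigs) @ np.linalg.inv(vecs)
assert np.allclose(H, H.conj().T, atol=1e-10)  # Hermitian, a legitimate Hamiltonian

# reconstruct the classical rule from Schrodinger-style evolution
U_rec = vecs @ np.diag(np.exp(-1j * h_eigs)) @ np.linalg.inv(vecs)
assert np.allclose(U_rec, U, atol=1e-10)
print("classical shift recovered from e^{-iH}")
```

Nothing quantum was put in: the Hilbert space, the Hamiltonian, and the Schrödinger-style evolution all emerge from the bookkeeping of a deterministic rule, which is the content of the [Hooft2012] abstract quoted above.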
Manaka Okuyama and Masayuki Ohzeki, in their paper, "Quantum speed limit is not quantum" [Okuyama2017], show yet another non-distinction between quantum mechanics and classical mechanics:Abstract: The quantum speed limit (QSL), or the energy-time uncertainty relation, describes the fundamental maximum rate for quantum time evolution and has been regarded as being unique in quantum mechanics. In this study, we obtain a classical speed limit corresponding to the QSL using the Hilbert space for the classical Liouville equation. Thus, classical mechanics has a fundamental speed limit, and QSL is not a purely quantum phenomenon but a universal dynamical property of the Hilbert space. Furthermore, we obtain similar speed limits for the imaginary-time Schrödinger equations such as the master equation.
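For reference, the quantum side of the limit they start from, the Mandelstam-Tamm bound τ ≥ πħ/(2ΔE) on the time to reach an orthogonal state, is easy to verify numerically (a textbook two-level example of my own, with ħ = 1; the coupling strength is an arbitrary choice):

```python
import numpy as np

# Mandelstam-Tamm speed limit tau >= pi/(2*DeltaE) (hbar = 1),
# saturated exactly by a qubit evolving under H = a * sigma_x.
a = 0.7
sx = np.array([[0, 1], [1, 0]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)
H = a * sx

# energy uncertainty of psi0: <H> = 0, <H^2> = a^2, so DeltaE = a
mean = np.real(psi0.conj() @ H @ psi0)
mean2 = np.real(psi0.conj() @ (H @ H) @ psi0)
dE = np.sqrt(mean2 - mean**2)
assert np.isclose(dE, a)

def overlap(tt):
    """|<psi0| e^{-iHt} |psi0>|, using e^{-i a sx t} = cos(a t) I - i sin(a t) sx."""
    U = np.cos(a * tt) * np.eye(2) - 1j * np.sin(a * tt) * sx
    return abs(psi0.conj() @ (U @ psi0))

tau_bound = np.pi / (2 * dE)
assert overlap(tau_bound) < 1e-12          # orthogonal exactly at the bound
assert overlap(0.99 * tau_bound) > 0.01    # and not a moment before
print("bound saturated at tau =", tau_bound)
```

Okuyama and Ohzeki's point is that the same inequality, with the same Hilbert-space proof, constrains the classical Liouville dynamics; the example above only fixes the quantum statement being generalized.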
Flavio Del Santo and Nicolas Gisin, in their paper "Physics without determinism: alternative interpretations of classical physics" [Santo2019], argue that another supposed difference between classical and quantum mechanics does not hold and is an arbitrary distinction (from the Conclusion):However, it seems clear that the empirical results of both classical and quantum mechanics can fit either in a deterministic or indeterministic framework. Furthermore, there are compelling arguments to support the view that the same conclusion can be reached for any given physical theory - a trivial way to make an indeterminate theory fully determined is to "complete" the theory with the results of every possible experiment that can be performed.They continue these arguments in a later paper, "The relativity of indeterminacy" [Santo2021]: "... in this paper, we note that upholding reasonable principles of finiteness of information hints at a picture of the physical world that should be both relativistic and indeterministic".
Del Santo argues much the same in another paper, "Indeterminism, causality and information: has physics ever been deterministic?" [Santo2020], concluding:"... compelling arguments [of Suppes and Werndl] show that every physical theory, including classical and quantum mechanics, can be interpreted either deterministically or indeterministically and no experiment will ultimately discriminate between these two opposite worldviews."Suggesting that there is some classical computing theory with little difference from quantum computing.
Alexey Kryukov, in a series of papers in the last few years, argues much the same as Baumgarten, for example, in his paper "Mathematics of the classical and the quantum" [Kryukov2020] (from the Abstract):Newtonian dynamics is shown to be the Schrödinger dynamics of states constrained to a sub-manifold of the space of states, identified with the classical phase space of the system. Quantum observables are identified with vector fields on the space of states. ... Under the embedding, the normal distribution of measurement results associated with a classical measurement implies the Born rule for the probability of transition of quantum states.
Ashida, Gong and Ueda at the University of Tokyo, in their paper "Non-Hermitian physics" [Ashida2020], review another similarity of classical and quantum physics - that in the real world, many classical and quantum processes violate one of the key postulates of quantum mechanics, Hermiticity (which "ensures the conservation of probability in an isolated system, and guarantees the real-valuedness of an expectation value of energy with respect to a quantum state"). But since few real-world systems are isolated, Hermiticity is less useful for distinguishing classical and quantum mechanics. They have a table of this commonality (page 3):
System/Process | Physical origin of non-Hermiticity | Theoretical methods
Photonics | Gain/loss of photons | Maxwell equations
Mechanics | Friction | Newton equations
Electrical circuits | Joule heating | Circuit equations
Stochastic processes | Nonreciprocity of state transitions | Fokker-Planck equation
Soft matter/fluid | Nonlinear instability | Linear hydrodynamics
Nuclear reactions | Radiative decays | Projection methods
Mesoscopic systems | Finite lifetimes of resonances | Scattering theory
Open quantum systems | Dissipation | Master equation
Quantum measurement | Measurement backaction | Quantum trajectories
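The "Mechanics / Friction / Newton equations" row of that table is easy to make concrete (a generic damped-oscillator sketch of my own; the damping and frequency values are arbitrary): writing x'' + γx' + ω0²x = 0 as a first-order system gives a non-Hermitian generator whose complex eigenvalues combine an oscillation frequency with a decay rate, just as in non-Hermitian quantum models:

```python
import numpy as np

# Friction makes classical mechanics "non-Hermitian": the damped
# oscillator x'' + g x' + w0^2 x = 0 as a first-order system z' = A z.
g, w0 = 0.2, 1.0
A = np.array([[0.0, 1.0], [-w0**2, -g]])

assert not np.allclose(A, A.conj().T)   # the generator is non-Hermitian

lam = np.linalg.eigvals(A)
# eigenvalues -g/2 +- i*sqrt(w0^2 - g^2/4): a decay rate plus a shifted frequency,
# the classical twin of a complex "resonance energy"
assert np.allclose(lam.real, -g / 2)
assert np.allclose(np.sort(lam.imag),
                   np.sort(np.array([-1.0, 1.0]) * np.sqrt(w0**2 - g**2 / 4)))
print("eigenvalues:", lam)
```

A Hermitian generator would force the eigenvalues to be purely imaginary (pure oscillation, no decay); friction is exactly what breaks that.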
In a different approach to deriving quantum mechanics from (relativistic) classical mechanics, Andrzej Dragan and Artur Ekert [Dragan2020] derive quantum mechanics with one assumption – extending the Lorentz transformation into the superluminal region:We show that the full mathematical structure of the Lorentz transformation, the one which includes the superluminal part, implies the emergence of non-deterministic dynamics, together with complex probability amplitudes and multiple trajectories. ... Here we show that if we retain the superluminal terms, and take the resulting mathematics of the Lorentz transformation seriously, then the notion of a particle moving along a single path must be abandoned, and replaced by a propagation along many paths, exactly like in quantum theory.
Starting in the 1960s, G R Allcock studied the idea of 'quantum probability backflow', an interference effect involving the wave-aspects of quantum particles. Bracken and Melloy [Bracken2014, discussed below] tie this backflow to negative probabilities: "Negative probability moving to the right has the same effect on the total probabilities in the left and right quadrants as positive probability moving to the left, thus giving rise to the backflow phenomenon."
More interestingly, a paper by Arseni Goussev at the University of Portsmouth [Goussev2020] argues that by using the Wigner representation of the wave packet, one can show that the negative flow of probability seen in the quantum world is rooted in classical mechanics. And nicely, shortly after Goussev’s paper, a paper by Matulis and Acus [Matulis2020] proposes a classical system of a chain of masses interconnected by springs that, structured in a certain way, exhibits in a classical mechanical wave the negative flow of energy first seen in quantum systems. Both papers are discussed below at [Goussev2020]. Earlier, in “Classical analog to the Airy wave packet” [Matulis2019], Matulis and Acus offer a solution of the Liouville equation for an ensemble of free particles that is a classical analog of the non-dispersive accelerating quantum Airy wave packet.
Peter Morgan, at Yale University, in his paper "An algebraic approach to Koopman classical mechanics" [Morgan2020] writes:Abstract: ... In this form [a variant of the Koopman-von Neumann approach], the measurement theory for unary classical mechanics can be the same as and inform that for quantum mechanics, expanding classical mechanics to include non-commutative operators so that it is close to quantum mechanics, ... The measurement problem as it appears in unary classical mechanics suggests a classical signal analysis approach that can also be successfully applied to the measurement problem of quantum mechanics.
Gabriele Carcassi and Christine Aidala, at the University of Michigan, in their paper "The fundamental connections between Hamiltonian mechanics, quantum mechanics and information entropy" [Carcassi2020], discuss one difference between classical and quantum physics that doesn't impact their engineering: "Abstract: We show that the main difference between classical and quantum systems can be understood in terms of information entropy. ... As information entropy can be used to characterize how much of the state of the whole system identifies the state of its parts, classical systems can have arbitrarily small information entropy while quantum systems cannot."
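The entropy asymmetry Carcassi and Aidala point to can be illustrated with a toy calculation (my own sketch, not their formalism): for a two-qubit product state the part is fully determined by the whole and carries zero entropy, while for an entangled Bell state the part is forced up to a full bit of entropy.

```python
import numpy as np

def subsystem_entropy(psi):
    """Von Neumann entropy (in bits) of qubit A for a 2-qubit pure state."""
    psi = psi / np.linalg.norm(psi)
    m = psi.reshape(2, 2)              # amplitudes indexed as [A, B]
    rho_a = m @ m.conj().T             # reduced density matrix of qubit A
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]       # drop zero eigenvalues (0*log 0 = 0)
    return max(0.0, float(-np.sum(evals * np.log2(evals))))

product = np.array([1, 0, 0, 0], dtype=complex)   # |0>|0>: 'classical-like'
bell = np.array([1, 0, 0, 1], dtype=complex)      # (|00> + |11>)/sqrt(2)

print(subsystem_entropy(product))   # essentially 0: the part is determined
print(subsystem_entropy(bell))      # essentially 1 bit: maximal for a qubit
```

In classical mechanics the state of the whole always fixes the states of the parts (subsystem entropy zero); entangled quantum states are exactly the states where it cannot.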
Some years earlier, Johannes Kofler and Časlav Brukner [Kofler2007] similarly argued that, as long as measurement precision is restricted (echoing the unattainable "arbitrarily small" entropy condition of [Carcassi2020]), classical and quantum descriptions largely overlap. They write: "Conceptually different from the decoherence program, we present a novel theoretical approach to macroscopic realism and classical physics within quantum theory. It focuses on the limits of observability of quantum effects of macroscopic objects, i.e., on the required precision of our measurement apparatuses such that quantum phenomena can still be observed. First, we demonstrate that for unrestricted measurement accuracy no classical description is possible for arbitrarily large systems. Then we show for a certain time evolution that under coarse-grained measurements not only macrorealism but even the classical Newtonian laws emerge out of the Schrödinger equation and the projection postulate."
Michael Feldman, in his paper "Information-theoretic interpretation of quantum formalism" [Feldman2020], offers another reduction of quantum information processing: "Abstract: We present an information-theoretic interpretation of quantum formalism based on a Bayesian framework and devoid of any extra axiom or principle. Quantum information is merely construed as a technique of Bayesian inference for analyzing a logical system subject to classical constraints, while still enabling the use of all relevant Boolean variable batches. ... In the end, our major conclusion is that quantum information is nothing but classical information processed by Bayesian inference techniques and as such, consubstantial with Aristotelian logic."
David Ellerman (Univ. of Ljubljana), in his paper "Probability theory with superposition events: a classical generalization in the direction of quantum mechanics", shows that one can recreate much of quantum mechanics with an extension to basic probability theory: "Abstract: In finite probability theory, events are subsets of the outcome set. Subsets can be represented by 1-dimensional column vectors. By extending the representation of events to two-dimensional matrices, we can introduce “superposition events”. Probabilities are introduced for classical events, superposition events, and their mixtures by using density matrices. Then probabilities for experiments or ‘measurements’ of all these events can be determined in a manner exactly like in quantum mechanics (QM) using density matrices. Moreover, the transformation of the density matrices induced by the experiments or ‘measurements’ is the Lüders mixture operation as in QM. And finally, by moving the machinery into the n-dimensional vector space over Z2, different basis sets become different outcome sets. That ‘non-commutative’ extension of finite probability theory yields the pedagogical model of quantum mechanics over Z2 that can model many characteristic non-classical results of QM."
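A small numerical sketch in the spirit of Ellerman's construction (the specific vectors below are my own toy choices, not taken from the paper): event probabilities computed from density matrices, with the post-'measurement' state given by the Lüders mixture operation.

```python
import numpy as np

# Outcome set {0, 1}. A classical event is a subset, e.g. {0} -> vector [1, 0].
# A "superposition event" blends outcomes; here, an even blend of {0} and {1},
# represented by a density matrix rather than a subset.
v = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(v, v)                 # density matrix of the superposition event

# 'Measurement' in the classical outcome basis: projectors onto {0} and {1}.
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# Born-rule-style probabilities: p_i = tr(P_i rho), exactly as in QM.
probs = [float(np.trace(Pi @ rho)) for Pi in P]
print("probabilities:", probs)       # ~0.5 each

# Lüders mixture operation: the post-measurement density matrix.
rho_post = sum(Pi @ rho @ Pi for Pi in P)
print("post-measurement state:\n", rho_post)   # off-diagonals wiped out
```

The 'measurement' kills the off-diagonal terms of the density matrix, turning the superposition event into an ordinary classical mixture, which is the classical shadow of quantum state collapse.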
Fabio Anza and James Crutchfield, at the University of California, Davis, in their paper "Geometric quantum thermodynamics" [Anza2020], discuss similarities between classical and quantum thermodynamics: "Abstract: Building on parallels between geometric quantum mechanics and classical mechanics, we explore an alternative basis for quantum thermodynamics that exploits the differential geometry of the underlying state space. ..."
Michael Vaugon, in his paper "A mathematician’s view of geometrical unification of general relativity and quantum mechanics" [Vaugon2020], uses pseudo-Riemannian geometry to describe both classical and quantum physics: "Abstract: This document contains a description of physics entirely based on a geometric presentation: all of the theory is described giving only a pseudo-Riemannian manifold (M,g) of dimension n>5 for which the tensor g is, in studied domains, almost everywhere of signature (-,-,+, ..., +). No object is added in this space-time, no general principle is assumed. The properties we demand to some domains of (M,g) are only simple geometric constraints, essentially based on the concept of "curvature". These geometric properties allow to define, depending on considered cases, some objects (frequently depicted by tensors) that are similar to the classical physics ones, they are however built here only from the tensor g. The links between these objects, coming from their natural definitions, give, applying standard theorems from the pseudo-Riemannian geometry, all equations governing physical phenomena usually described by classical theories, including general relativity and quantum physics. The purely geometric approach introduced here on quantum phenomena is profoundly different from the standard one. Neither Lagrangian nor Hamiltonian is used. This document ends with a presentation of our approach of complex quantum phenomena usually studied by quantum field theory."
John Klauder, in his paper "A unified combination of classical and quantum systems" [Klauder2020], writes: "Abstract: ... In the final sections we illustrate how alternative quantization procedures, e.g., spin and affine quantizations, can also have smooth paths between classical and quantum stories, and with a few brief remarks, can also lead to similar stories for non-renormalizable covariant scalar fields as well as quantum gravity."
Xinyu Song and others at Shanghai University, in their paper "Statistical analysis of quantum annealing" [Song2021], show how, up to a certain scale, classical and quantum annealing are equivalent. [Their paper shows that, for fewer than 1,000 qubits, classical annealing (simulated annealing) and quantum annealing are mathematically identical]: "We show that if the classical and quantum annealing are characterized by equivalent Ising models, then solving an optimization problem, i.e., finding the minimal energy of each Ising model, by the two annealing procedures, are mathematically identical."
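To make the classical side of that equivalence concrete, here is a toy simulated-annealing run on a small Ising model (my own sketch; the couplings, schedule, and system size are illustrative and not taken from [Song2021]):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# A tiny Ising model: E(s) = -sum_{i<j} J[i,j] * s[i] * s[j], s[i] in {-1,+1}.
n = 6
J = np.triu(rng.normal(size=(n, n)), k=1)

def energy(s):
    return -s @ J @ s

# Exact ground-state energy by brute force (2^6 = 64 configurations).
exact = min(energy(np.array(s)) for s in product([-1, 1], repeat=n))

# Simulated annealing: single-spin flips, geometric cooling schedule.
s = rng.choice([-1, 1], size=n)
best = energy(s)
T = 2.0
for step in range(4000):
    i = rng.integers(n)
    s2 = s.copy(); s2[i] = -s2[i]
    dE = energy(s2) - energy(s)
    if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
        s = s2
    best = min(best, energy(s))
    T *= 0.999

# Finish with greedy single-flip descent to the nearest local minimum.
improved = True
while improved:
    improved = False
    for i in range(n):
        s2 = s.copy(); s2[i] = -s2[i]
        if energy(s2) < energy(s):
            s, improved = s2, True
best = min(best, energy(s))

print("exact ground energy:   ", exact)
print("annealed best energy:  ", best)
```

On a 6-spin instance this tiny schedule routinely reaches the exact ground state; the point of [Song2021] is that, below a certain qubit count, the quantum procedure is solving the mathematically identical Ising minimization.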