Quantum logic
Von Neumann first proposed a quantum logic in his 1932 treatise
Mathematical Foundations of Quantum Mechanics, where he noted that projections on a
Hilbert space can be viewed as propositions about physical observables. The field of quantum logic was subsequently inaugurated in a famous 1936 paper by von Neumann and Garrett Birkhoff, the first work ever to introduce quantum logics,
[94] in which they proved that quantum mechanics requires a
propositional calculus substantially different from all classical logics, and rigorously isolated a new algebraic structure for quantum logics. The concept of creating a propositional calculus for quantum logic had been outlined in a short section of von Neumann's 1932 work, but in 1936 the need for the new propositional calculus was demonstrated through several proofs. For example, photons cannot pass through two successive filters that are polarized perpendicularly (e.g., one horizontally and the other vertically), and therefore, a fortiori, they cannot pass if a third filter polarized diagonally is added to the other two, either before or after them in the succession; but if the third filter is added in between the other two, the photons will, indeed, pass through. This experimental fact is translatable into logic as the
non-commutativity of conjunction, (A ∧ B) ≠ (B ∧ A). It was also demonstrated that the laws of distribution of classical logic, P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R) and P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R), are not valid for quantum theory.
[95]
The reason for this is that a quantum disjunction, unlike a classical disjunction, can be true even when both of its disjuncts are false; this, in turn, is attributable to the fact that in quantum mechanics it is frequently the case that a pair of alternatives is semantically determinate while each of its members is necessarily indeterminate. This latter property can be illustrated by a simple example. Suppose we are dealing with particles (such as electrons) of semi-integral spin (spin angular momentum), for which there are only two possible values: positive or negative. Then a principle of indetermination establishes that the spin relative to two different directions (e.g.,
x and
y) forms a pair of incompatible quantities. Suppose that the state
ɸ of a certain electron verifies the proposition "the spin of the electron in the
x direction is positive." By the principle of indeterminacy, the value of the spin in the direction
y will be completely indeterminate for
ɸ. Hence,
ɸ can verify neither the proposition "the spin in the direction of
y is positive" nor the proposition "the spin in the direction of
y is negative." Nevertheless, the disjunction "the spin in the direction of
y is positive or the spin in the direction of
y is negative" must be true for
ɸ. In the case of distribution, it is therefore possible to have a situation in which
A ∧ (B ∨ C) = A ∧ 1 = A, while (A ∧ B) ∨ (A ∧ C) = 0 ∨ 0 = 0.
[95]
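This failure of distributivity can be checked directly in the lattice of subspaces of the plane, where meet is intersection and join is the span of the union. A minimal sketch in Python, restricted to subspaces of ℝ² (the representation and the three chosen lines are illustrative, not from the original):

```python
# Subspaces of R^2: "zero" (dim 0), a line through the origin given by a
# direction vector, or "plane" (all of R^2).
ZERO, PLANE = "zero", "plane"

def parallel(u, v):
    # Two direction vectors span the same line iff they are proportional.
    return u[0] * v[1] - u[1] * v[0] == 0

def meet(s, t):
    # Meet = intersection of subspaces.
    if s == ZERO or t == ZERO:
        return ZERO
    if s == PLANE:
        return t
    if t == PLANE:
        return s
    return s if parallel(s, t) else ZERO

def join(s, t):
    # Join = span of the union (smallest subspace containing both).
    if s == PLANE or t == PLANE:
        return PLANE
    if s == ZERO:
        return t
    if t == ZERO:
        return s
    return s if parallel(s, t) else PLANE

# Three distinct lines standing in for quantum propositions:
A = (1, 0)   # the x-axis
B = (1, 1)   # the line y = x
C = (1, -1)  # the line y = -x

lhs = meet(A, join(B, C))           # A ∧ (B ∨ C) = A ∧ R^2 = A
rhs = join(meet(A, B), meet(A, C))  # (A ∧ B) ∨ (A ∧ C) = 0 ∨ 0 = 0
print(lhs, rhs)  # the line A versus the zero subspace
```

Here the join B ∨ C is the whole plane even though neither line contains any given vector of A, which is exactly the behavior of the quantum disjunction described above.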
As
Hilary Putnam writes, von Neumann replaced classical logic with a logic constructed in
orthomodular lattices (isomorphic to the lattice of subspaces of the Hilbert space of a given physical system).
[96]
Game theory
Von Neumann founded the field of
game theory as a mathematical discipline.
[97] Von Neumann proved his
minimax theorem in 1928. This theorem establishes that in
zero-sum games with
perfect information (i.e., in which players know at each time all moves that have taken place so far), there exists a pair of
strategies that allows each player to minimize his maximum loss, hence the name minimax: in examining every possible strategy, a player must consider all the possible responses of his adversary, and then play the strategy whose maximum loss is smallest.
[98]
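When a game has a saddle point, these guaranteed values can be read off by comparing row minima with column maxima. A small sketch (the payoff matrix is an invented example):

```python
# Payoff matrix of a two-player zero-sum game: entry M[i][j] is what the
# column player pays the row player when row i meets column j.
M = [
    [4, 5, 6],
    [3, 2, 1],
    [5, 7, 6],
]

# The row player can guarantee the best of the worst-case row payoffs.
maximin = max(min(row) for row in M)
# The column player can guarantee paying no more than the worst of the
# best-case column payoffs.
minimax = min(max(col) for col in zip(*M))

print(maximin, minimax)  # both 5: a saddle point at row 2, column 0
```

In general the two values coincide only once randomized (mixed) strategies are allowed; that equality for arbitrary zero-sum matrix games is the content of von Neumann's 1928 theorem.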
Such strategies, which minimize the maximum loss for each player, are called optimal. Von Neumann showed that their minimaxes are equal (in absolute value) and contrary (in sign). Von Neumann improved and extended the minimax theorem to include games involving imperfect information and games with more than two players, publishing this result in his 1944
Theory of Games and Economic Behavior (written with
Oskar Morgenstern). Morgenstern had written a paper on game theory and showed it to von Neumann because of the latter's interest in the subject. After reading it, von Neumann told Morgenstern that he should put more in it. This was repeated a couple of times; von Neumann then became a coauthor, the paper grew to 100 pages, and eventually it became a book. The public interest in this work was such that
The New York Times ran a front-page story.[
citation needed] In this book, von Neumann declared that economic theory needed to use
functional analytic methods, especially
convex sets and
topological fixed-point theorems, rather than the traditional differential calculus, because the
maximum operator did not preserve differentiable functions.
[97]
Independently,
Leonid Kantorovich's functional analytic work on mathematical economics also focused attention on optimization theory, non-differentiability, and
vector lattices. Von Neumann's functional-analytic techniques—the use of
duality pairings of real
vector spaces to represent prices and quantities, the use of
supporting and
separating hyperplanes and convex sets, and fixed-point theory—have been the primary tools of mathematical economics ever since.
[99]
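The separating-hyperplane idea mentioned above can be checked numerically: for two disjoint convex sets there is a normal ("price") vector whose inner product tells the sets apart. A toy sketch (the point sets and the vector are invented for illustration):

```python
# Two finite point sets in R^2 whose convex hulls are disjoint:
left = [(-2, 0), (-1, 1), (-1, -1)]   # hull lies in the half-plane x <= -1
right = [(1, 0), (2, 1), (2, -1)]     # hull lies in the half-plane x >= 1

p = (1, 0)  # normal vector of the separating hyperplane {x : p . x = 0}

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Every point of `left` scores strictly below 0 and every point of `right`
# strictly above, and by linearity the same holds on the convex hulls.
worst_left = max(dot(p, u) for u in left)
best_right = min(dot(p, v) for v in right)
print(worst_left, best_right)  # -1 and 1, on opposite sides of 0
```

Interpreting p as a price vector, the hyperplane plays exactly the role it does in von Neumann's economic arguments: it values one set of commodity bundles strictly below the other.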
Mathematical economics
Von Neumann raised the intellectual and mathematical level of economics in several influential publications. For his model of an expanding economy, von Neumann proved the existence and uniqueness of an equilibrium using his generalization of the
Brouwer fixed-point theorem.
[97] Von Neumann's model of an expanding economy considered the
matrix pencil A − λB with nonnegative matrices
A and
B; von Neumann sought
probability vectors p and
q and a positive number
λ that would solve the
complementarity equation
pᵀ (A − λB) q = 0
along with two inequality systems expressing economic efficiency. In this model, the (
transposed) probability vector
p represents the prices of the goods while the probability vector q represents the "intensity" at which the production process would run. The unique solution
λ represents the growth factor which is 1 plus the
rate of growth of the economy; the rate of growth equals the
interest rate.
[100][101]
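In the degenerate case of one good and one process, the complementarity equation can be solved by hand. A toy illustration (the numbers are invented, and the scalar case makes the two inequality systems trivial):

```python
from fractions import Fraction

# One process turns b units of the good at the start of a period into a
# units at the end; the matrices A and B collapse to scalars a and b.
a, b = Fraction(3), Fraction(2)

# The probability "vectors" p and q are just 1, so the complementarity
# equation p^T (A - lam*B) q = 0 forces lam = a / b.
p = q = Fraction(1)
lam = a / b

residual = p * (a - lam * b) * q
growth_rate = lam - 1  # growth factor lam = 1 + rate of growth

print(lam, residual, growth_rate)  # 3/2, 0, 1/2
```

With these numbers the unique growth factor is 3/2, i.e., the toy economy expands by 50% per period, and under the model's interpretation the interest rate equals that growth rate.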
Von Neumann's results have been viewed as a special case of
linear programming, where von Neumann's model uses only nonnegative matrices. The study of von Neumann's model of an expanding economy continues to interest mathematical economists with interests in computational economics.
[102][103][104] This paper has been called the greatest paper in mathematical economics by several authors, who recognized its introduction of fixed-point theorems,
linear inequalities,
complementary slackness, and
saddlepoint duality. In the proceedings of a conference on von Neumann's growth model, Paul Samuelson said that many mathematicians had developed methods useful to economists, but that von Neumann was unique in having made significant contributions to economic theory itself.
[105]
Von Neumann's famous 9-page paper started life as a talk at Princeton and then became a paper in German, which was eventually translated into English. His interest in economics, which led to that paper, began while he was lecturing at Berlin in 1928 and 1929: he spent his summers back home in Budapest, as did the economist
Nicholas Kaldor, and the two hit it off. Kaldor recommended that von Neumann read a book by the mathematical economist
Léon Walras. Von Neumann noticed that Walras'
General Equilibrium Theory and
Walras' Law, which led to systems of simultaneous linear equations, could produce the absurd result that profit could be maximized by producing and selling a negative quantity of a product. He corrected this by replacing the equations with inequalities, introduced dynamic equilibria, among other things, and eventually produced the paper.
[106]
Linear programming
Building on his results on matrix games and on his model of an expanding economy, von Neumann invented the theory of duality in linear programming when
George Dantzig described his work to him in a few minutes and an impatient von Neumann asked him to get to the point. Dantzig then listened dumbfounded while von Neumann delivered an hour-long lecture on convex sets, fixed-point theory, and duality, conjecturing the equivalence between matrix games and linear programming.
[107]
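The duality von Neumann conjectured pairs every linear program with a dual program whose optimal value matches it. A minimal numerical check (the particular program and its candidate solutions are invented for illustration):

```python
# Primal:  maximize 3*x1 + 2*x2  subject to  x1 + x2 <= 4,  x1 <= 2,  x >= 0
# Dual:    minimize 4*y1 + 2*y2  subject to  y1 + y2 >= 3,  y1 >= 2,  y >= 0

x = (2, 2)  # primal-feasible: 2 + 2 <= 4 and 2 <= 2
y = (2, 1)  # dual-feasible:   2 + 1 >= 3 and 2 >= 2

primal_value = 3 * x[0] + 2 * x[1]  # objective of the primal candidate
dual_value = 4 * y[0] + 2 * y[1]    # objective of the dual candidate

# Weak duality: every dual-feasible value bounds every primal-feasible value
# from above. Here the two coincide, certifying both candidates as optimal.
print(primal_value, dual_value)  # 10 10
```

The equivalence with matrix games runs both ways: a zero-sum game can be solved as a linear program, and a linear program and its dual can be packaged as a single symmetric game.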
Later, von Neumann suggested a new method of
linear programming, using the homogeneous linear system of
Paul Gordan (1873), which was later popularized by
Karmarkar's algorithm. Von Neumann's method used a pivoting algorithm between simplices, with the pivoting decision determined by a nonnegative
least squares subproblem with a convexity constraint (
projecting the zero-vector onto the
convex hull of the active
simplex). Von Neumann's algorithm was the first
interior point method of linear programming.
[107]
Mathematical statistics
Von Neumann made fundamental contributions to
mathematical statistics. In 1941, he derived the exact distribution of the ratio of the mean square of successive differences to the sample variance for independent and identically
normally distributed variables.
[108] This ratio was applied to the residuals from regression models and is commonly known as the
Durbin–Watson statistic[109] for testing the null hypothesis that the errors are serially independent against the alternative that they follow a stationary first order
autoregression.
[109]
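The statistic itself is simple to compute from a residual series. A sketch (the residual values are invented; under serial independence the statistic is expected to lie near 2, while strong negative serial correlation pushes it toward its maximum of 4):

```python
def durbin_watson(e):
    # Ratio of the sum of squared successive differences of the residuals
    # to their sum of squares, the form of von Neumann's 1941 ratio used
    # by Durbin and Watson.
    num = sum((e[i + 1] - e[i]) ** 2 for i in range(len(e) - 1))
    den = sum(r ** 2 for r in e)
    return num / den

# A perfectly alternating series is the extreme case of negative serial
# correlation: 9 successive differences of magnitude 2 over 10 residuals.
residuals = [1, -1] * 5
d = durbin_watson(residuals)
print(d)  # 36 / 10 = 3.6 for this 10-point series
```

For positively autocorrelated residuals the successive differences shrink and the statistic falls toward 0, which is the direction of the alternative hypothesis described above.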
Subsequently,
Denis Sargan and
Alok Bhargava extended the results for testing if the errors on a regression model follow a Gaussian
random walk (
i.e., possess a
unit root) against the alternative that they are a stationary first order autoregression.
[110]