Introducing the divinity of the Universe
to pave the way for scientifically credible theology


Chapter 19: Quantization: the mathematical theory of communication

Synopsis

The Universe comprises many particles ranging in size from elementary particles to galaxies. From a communication point of view these particles are sources, entities capable of carrying, sending and receiving messages. The unity of the Universe is maintained by communication between these sources in visible classical networks organized by size (see Chapter 22: Network, cooperation and bonding). The stability of the system requires error free communication. Claude Shannon found that errors could be eliminated by structuring messages into packets or quanta. History shows that the progress of human science and technology depends upon rediscovering systems that the Universe has already discovered (often many times, like vision) through evolution. It is not therefore surprising to see that all communication in the Universe is quantized.

Contents
19.1: Communication

19.2: The Shannon-Nyquist theorem for the digitization of continuous signals

19.3: Shannon's prescription for overcoming noise

19.4: Application of Shannon's work

19.5: Entropy and information

19.6: Superposition and quantum representation of information

19.1: Communication

The standard model of quantum mechanics comes in two parts. The first deals with the evolution through time of undisturbed quantum systems. We imagine that this situation exists in the initial singularity, in isolated particles created by the initial singularity and in the Universe as a whole. This process is essentially invisible, so our knowledge of it is speculative.

The second describes the evolution of systems when they are disturbed by measurement, that is by an interaction with another quantum system. Here quantum theory differs radically from classical theory. Classical physics assumes that it is possible to observe a natural system without changing it in any way. In contrast, quantum theory imagines that an act of communication unites particles in a common Hilbert space where they evolve together, at least temporarily. Both seeing and being seen require action. We will explore this second mode in Chapter 20: Measurement: the interface between Hilbert and Minkowski spaces. This chapter deals with ordinary classical communications in Minkowski space like phone calls and the internet. Here 'source' embraces sender, message and receiver alike.

Classical communication copies information from one point in spacetime to another. This may or may not involve the deletion of the initial information. The causal structure of classical Minkowski space-time, illustrated by the light cone, means that messages emitted from a particular source can only be received in the forward light cone of that source. Conversely, a source can only receive messages from its past light cone.

This structure is determined by the velocity of light, the maximum rate that physical information can be transmitted through Minkowski space by particles travelling on null geodesics.

This is true from the point of view of particles and observers in Minkowski space, but in this book we are assuming that quantum mechanics describes a process underlying Minkowski space, so that quantum mechanics operates in a world prior to the advent of space-time in the emerging Universe.

Kinetic quantum processes prior to the emergence of spacetime can partly avoid this classical constraint through entanglement but this process cannot transmit specific information. Here we are necessarily looking at Hilbert space from our position in Minkowski space.

We assume that the structure of the Universe is maintained by communications between its various components. If the Universe is to be stable, we further assume that these communications must be at least to some degree error free. The mathematical theory of communication developed by Shannon establishes that quantization and error prevention are very closely related.

On the basis of the discussion in Chapter 2: The theology of the Trinity and Chapter 11: The axioms of abstract Hilbert space, I assume that the primordial Hilbert space grows as a sequence of processions within the initial singularity and the children of that singularity.

We assume that interactions between real physical particles in Minkowski space are one-to-one and that the relevant Hilbert space during the real time interaction of such particles is the tensor product of their individual Hilbert spaces. Tensor product - Wikipedia

The so called measurement problem which arises from this situation is the subject of Chapter 20: Measurement: the interface between Hilbert and Minkowski spaces.

19.2: The Shannon-Nyquist theorem for the digitization of continuous signals

Continuous modulated electromagnetic signals with limited bandwidth can be accurately represented as digital signals if they are sampled at twice their highest frequency. Since the upper limit of human hearing is about 20 000 Hz, digital audio is commonly sampled at 44.1 kHz. Nyquist-Shannon sampling theorem - Wikipedia
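
A small numerical sketch (my illustration, not from the chapter) of why the sampling rate matters: a tone below the Nyquist limit keeps its frequency, while a tone above it folds back into the band as an alias.

```python
def alias_frequency(f_signal, f_sample):
    """Apparent frequency, in Hz, of a pure tone after sampling.

    Tones above the Nyquist limit (f_sample / 2) fold back into the
    band [0, f_sample / 2] -- the aliasing that the Shannon-Nyquist
    condition (sample at twice the highest frequency) prevents.
    """
    f = f_signal % f_sample
    return min(f, f_sample - f)

print(alias_frequency(20_000, 44_100))  # 20000: within the audio band, preserved
print(alias_frequency(30_000, 44_100))  # 14100: above Nyquist, folded back
```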

In practical engineering terms, this enables every message and every signal to be identified as a point in a rational function space. Shannon provides an example of the size of the space necessary by estimating that one hour of a 5 MHz television signal would be represented by a point in a space of 3.6 × 10^10 dimensions. Claude Shannon (1949): Communication in the presence of noise
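
Shannon's figure follows directly from the sampling theorem: a signal of bandwidth W observed for T seconds needs 2WT samples, one coordinate per dimension. A one-line check:

```python
W = 5e6    # bandwidth of the television signal, Hz
T = 3600   # duration, seconds (one hour)
dimensions = 2 * W * T  # Nyquist rate of 2W samples/s, sustained for T seconds
print(dimensions)  # 36000000000.0, i.e. 3.6 x 10^10
```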

19.3: Shannon's prescription for overcoming noise

In communication terms, the message to be sent is a point in message space and the signal transmitted is a point in signal space. The role of the transmitter is to encode or map the message into the signal. The receiver does the opposite. The computational procedure that does this work is called a coder-decoder or codec. The error control is embodied in this mapping. Codec - Wikipedia

The key to the distinction of signals is:

. . . two signals can be reliably distinguished if they differ by only a small amount, provided this difference is sustained over a long period of time. Each sample of the received signal then gives a small amount of statistical information concerning the transmitted signal; in combination, these statistical indications result in near certainty.

The technique is to package or quantize the message and the signal to produce units extended in time which are clearly distinguishable:

The transmitter will take long sequences of binary digits and represent this entire sequence by a particular signal function of long duration, that is a vector. The delay is required because the transmitter must wait for the full sequence before the signal is determined. Similarly, the receiver must wait for the full signal function before decoding into binary digits.
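The effect of lengthening the codeword can be sketched with the simplest block code, majority-vote repetition. This is my illustration, not Shannon's construction (repetition drives the rate toward zero, which his ideal codes avoid), but it shows how longer packets buy lower error rates at the price of delay:

```python
from math import comb

def majority_error(p, n):
    """Probability that an n-copy repetition code, decoded by majority
    vote, delivers the wrong bit when each copy is flipped
    independently with probability p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With a 10% raw error rate, longer codewords suppress decoded errors:
for n in (1, 3, 9):
    print(n, majority_error(0.1, n))
```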

The power of this approach is implicit in Shannon's Theorem 2:

Let P be the average transmitter power, and suppose the noise is white thermal noise of power N in the band W. By sufficiently complicated encoding systems it is possible to transmit binary digits at a rate:

C = W log2((P + N) / N)

with as small a frequency of errors as desired. It is not possible by any encoding method to send at a higher rate and have an arbitrarily low frequency of errors.
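
The limiting rate is easy to evaluate. A minimal sketch (the channel parameters below are illustrative, not Shannon's):

```python
from math import log2

def capacity(W, P, N):
    """Shannon's limiting rate C = W * log2((P + N) / N), in bits per
    second, for bandwidth W (Hz), signal power P and noise power N."""
    return W * log2((P + N) / N)

# e.g. a 3 kHz telephone-grade channel with signal-to-noise ratio 1000:
print(capacity(3000, 1000, 1))  # roughly 30,000 bits per second
```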

He proves this theorem using geometric methods in rational function space. He concludes with a summary of the properties of a system that transmits without error at the limiting rate C, an ideal system. Some features of an ideal system are implicit in quantum mechanics, particularly quantization:

1. To avoid error there must be no overlap between signals representing different messages. They must, in other words, be orthogonal, as with the vectors of a Hilbert space. Orthogonality - Wikipedia

2. The basis signals or letters of the source alphabet may be chosen at random in the signal space, provided only that they are orthogonal. The same message may be encoded into any satisfactory basis provided that the transformations (the codec) used by the transmitter to encode the message into the signal and receiver to decode the signal back to the message are inverses of one another. Quantum processes are cyclic or reversible in time in the sense that the unitary evolution of an isolated quantum system acts as though it is processed by a lossless codec. Unitary operator - Wikipedia
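
Point 2 can be illustrated with the simplest possible codec: encode a two-component message in a rotated orthonormal basis and decode with the inverse rotation. A sketch of my own, not Shannon's construction:

```python
import math

theta = 0.7  # any angle will do: the basis may be "chosen at random"
encode = [[math.cos(theta), -math.sin(theta)],
          [math.sin(theta),  math.cos(theta)]]
# For an orthogonal matrix the inverse is simply the transpose:
decode = [list(row) for row in zip(*encode)]

def apply(matrix, vec):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [sum(matrix[i][j] * vec[j] for j in range(2)) for i in range(2)]

message = [3.0, 4.0]
signal = apply(encode, message)    # encoded into the rotated basis
recovered = apply(decode, signal)  # decoded: a lossless codec
print(recovered)  # approximately [3.0, 4.0]
```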

3. The signals transmitted by an ideal system have maximum entropy and so are indistinguishable from random noise. The fact that a set of physical observations looks like a random sequence is not therefore evidence for meaninglessness. Until the algorithms used to encode and decode such a sequence are known, little can be said about its significance. Many codecs are designed to hide the contents of messages from sources that do not possess the decoding algorithm. A widely used derivative of this approach is public key cryptography. Tamper evident quantum methods are also used for key distribution. Andrey Kolmogorov (1956): Foundations of the Theory of Probability

4. Only in the simplest cases are the mappings used to encode and decode messages linear and topological. For practical purposes, however, they must all be computable with available machines. How this applies in quantum theory is closely related to the measurement problem and the so called collapse of the wave function (see Chapter 20: Measurement—the interface between Hilbert and Minkowski space).

5. As a system approaches the ideal, the length of the transmitted packets, the delay at the transmitter while it takes in a chunk of message for encoding, and the corresponding delay at the receiver while the message is decoded, increase indefinitely.

19.4: Application of Shannon's work

The difficulty with Shannon's theory lies in the phrase 'by sufficiently complicated encoding systems'. In the seventy years since Shannon wrote, a large number of ingenious codecs have been developed which have established the internet as an all-purpose transporter of all forms of information, open and secret.

The advent of powerful digital computers capable of executing non-linear and non-topological algorithms has taken care of the sufficiently complicated aspect of the problem. In the error free world of formal mathematics, noise is not a problem. In the physical world, however, things may not be so clear cut.

19.5: Entropy and information

In his Mathematical Theory of Communication Shannon framed the communication problem in terms of entropy, a concept derived from thermodynamics. Entropy - Wikipedia, Claude E Shannon (1948): A Mathematical Theory of Communication

Shannon's paper begins with a definition of the entropy of a communication source. The entropy of a source A capable of emitting different symbols or letters ai with probabilities pi such that Σi pi = 1 is:

H = −Σi pi log2 pi
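
The minus sign makes H non-negative, since each log2 pi is negative or zero. A minimal sketch of the computation:

```python
from math import log2

def entropy(probabilities):
    """Shannon entropy H = -sum(p_i * log2(p_i)), in bits per symbol."""
    assert abs(sum(probabilities) - 1) < 1e-9  # a complete system of events
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))  # 1.0: a fair coin carries one bit per toss
print(entropy([0.25] * 4))  # 2.0: four equiprobable letters carry two bits
```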

Quantum systems described by algorithms such as the Schrödinger equation are constrained to evolve through time in a unitary and reversible manner. The Schrödinger equation defines an error free communication channel which is nevertheless invisible to us.

This process is interrupted when systems interact, just as computers in a network are interrupted when they are required to deal with an incoming message and the message and the computer begin a joint procedure.

An important role of unitarity in quantum mechanics is to constrain the potential outcomes of quantum measurements so that they are a complete system of events identical to the output of a communication source as defined by the entropy equation above. Collectively exhaustive events - Wikipedia
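
The connection can be made concrete: a unitary transformation preserves the norm of the state vector, so the outcome probabilities always remain a complete system summing to 1. A sketch with a simple two-state rotation (my example, not from the chapter):

```python
import cmath, math

def rotate(psi, theta):
    """A simple unitary evolution: a real rotation acting on a pair
    of complex amplitudes."""
    a, b = psi
    return [math.cos(theta) * a - math.sin(theta) * b,
            math.sin(theta) * a + math.cos(theta) * b]

psi = [0.6 * cmath.exp(0.4j), 0.8]         # |a|^2 + |b|^2 = 1
evolved = rotate(psi, 1.1)
total = sum(abs(z) ** 2 for z in evolved)  # still a complete system of events
print(round(total, 12))  # 1.0
```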

19.6: Superposition and quantum representation of information

Hilbert space is a vector space. All vectors are normalized so that when observed they retain the normalization of communication sources represented by the equation Σi pi = 1 above. The addition (superposition) of vectors produces a new vector whose direction depends on the directions of its components and which, after normalization, again has length 1.

Information in Hilbert space is therefore carried by the direction of vectors. Since Hilbert space is linear, we can easily change the basis in which vectors are represented. This means that the directions of individual vectors are not significant but the angles between vectors are.

The eigenvector of an operator is a vector whose direction is unchanged by the operator. The corresponding eigenvalue is observable, consistent with the idea that we can only observe stationary entities. The Born Rule extracts the inner products between eigenvectors, say 1 and 2. These inner products are complex probability amplitudes (see Chapter 13: The emergence of quantum mechanics). The squared modulus of the amplitude tells us the probability of seeing eigenvector 1 if we look with eigenvector 2. Born rule - Wikipedia
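
A sketch of the Born Rule with hypothetical numbers: the probability of each outcome is the squared modulus of the inner product between the state and the corresponding eigenvector.

```python
def inner(u, v):
    """Hilbert-space inner product, conjugate-linear in the first slot."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

psi = [0.6 + 0j, 0.8j]  # a normalized state: 0.36 + 0.64 = 1
e1 = [1 + 0j, 0j]       # eigenvectors of some observable
e2 = [0j, 1 + 0j]

p1 = abs(inner(e1, psi)) ** 2  # probability of outcome 1, about 0.36
p2 = abs(inner(e2, psi)) ** 2  # probability of outcome 2, about 0.64
print(p1 + p2)                 # the outcomes form a complete system
```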

These probabilities associated with an interaction are discrete and precise. Their sum, 1, represents the outcomes of a complete set of events. There is no uncertainty associated with these values. The measurement process is uncertain insofar as it does not determine which particular eigenvalue will appear in the observation. We only have normalized probabilities to guide us. Nor is the precise timing of a particular event determined. Nature, like us, pronounces its words with precision but uses them in random sequences.

By tradition this feature of quantum measurement has come to be called the 'collapse' of the wave function, but a more reasonable explanation is simply that the measurement operation is an information source consistent with the demands of the theory of communication. This theory assumes that a communication source yields only one character of its alphabet at a time and that the sum of the probabilities of observing these characters is normalized.

Quantum mechanics, like the theory of communication, is not concerned with specific messages but rather the constraints on error free communication of all possible messages established by the statistics of a particular source. We assume that every quantum communication involves a quantum of action, analogous to a packet of data on a communication link.

We return to this story in Chapter 20: Measurement—the interface between Hilbert and Minkowski spaces.

Copyright:

You may copy this material freely provided only that you quote fairly and provide a link (or reference) to your source.

Notes and references

Further reading

Books

Andrey Kolmogorov (1956), Foundations of the Theory of Probability, translated by Nathan Morrison, with an added bibliography by A T Bharucha-Reid, Chelsea. Preface: 'The purpose of this monograph is to give an axiomatic foundation for the theory of probability. . . . This task would have been a rather hopeless one before the introduction of Lebesgue's theories of measure and integration. However, after Lebesgue's publication of his investigations, the analogies between measure of a set and mathematical expectation of a random variable became apparent. These analogies allowed of further extensions; thus, for example, various properties of independent random variables were seen to be in complete analogy with the corresponding properties of orthogonal functions . . .' back

Links

Born rule - Wikipedia, Born rule - Wikipedia, the free encyclopedia, ' The Born rule (also called the Born law, Born's rule, or Born's law) is a law of quantum mechanics which gives the probability that a measurement on a quantum system will yield a given result. It is named after its originator, the physicist Max Born. The Born rule is one of the key principles of the Copenhagen interpretation of quantum mechanics. There have been many attempts to derive the Born rule from the other assumptions of quantum mechanics, with inconclusive results. . . . The Born rule states that if an observable corresponding to a Hermitian operator A with discrete spectrum is measured in a system with normalized wave function (see bra-ket notation), then the measured result will be one of the eigenvalues λ of A, and the probability of measuring a given eigenvalue λi will equal <ψ|Pi|ψ> where Pi is the projection onto the eigenspace of A corresponding to λi'.' back

Claude E Shannon (1948), A Mathematical Theory of Communication, ' The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.' back

Claude Shannon (1949), Communication in the Presence of Noise, 'A method is developed for representing any communication system geometrically. Messages and the corresponding signals are points in two “function spaces,” and the modulation process is a mapping of one space into the other. Using this representation, a number of results in communication theory are deduced concerning expansion and compression of bandwidth and the threshold effect. Formulas are found for the maximum rate of transmission of binary digits over a system when the signal is perturbed by various types of noise. Some of the properties of “ideal” systems which transmit at this maximum rate are discussed. The equivalent number of binary digits per second for certain information sources is calculated.' [C. E. Shannon , “Communication in the presence of noise,” Proc. IRE, vol. 37, pp. 10–21, Jan. 1949.] back

Codec - Wikipedia, Codec - Wikipedia, the free encyclopedia, 'A codec is a device or computer program that encodes or decodes a data stream or signal. Codec is a portmanteau of coder/decoder. . . . A coder or encoder encodes a data stream or a signal for transmission or storage, possibly in encrypted form, and the decoder function reverses the encoding for playback or editing. Codecs are used in videoconferencing, streaming media, and video editing applications. In the mid-20th century, a codec was a device that coded analog signals into digital form using pulse-code modulation (PCM). Later, the name was also applied to software for converting between digital signal formats, including companding functions.' back

Collectively exhaustive events - Wikipedia, Collectively exhaustive events - Wikipedia, the free encyclopedia, ' In probability theory and logic, a set of events is jointly or collectively exhaustive if at least one of the events must occur. For example, when rolling a six-sided die, the events 1, 2, 3, 4, 5, and 6 balls of a single outcome are collectively exhaustive, because they encompass the entire range of possible outcomes.' back

Entropy - Wikipedia, Entropy - Wikipedia, the free encyclopedia, 'In statistical mechanics, entropy is an extensive property of a thermodynamic system. It is closely related to the number Ω of microscopic configurations (known as microstates) that are consistent with the macroscopic quantities that characterize the system (such as its volume, pressure and temperature). Under the assumption that each microstate is equally probable, the entropy S is the natural logarithm of the number of microstates, multiplied by the Boltzmann constant kB. Formally (assuming equiprobable microstates), S = k B ln ⁡ Ω . ' back

Nyquist-Shannon sampling theorem - Wikipedia, Nyquist-Shannon sampling theorem - Wikipedia, the free encyclopedia, ' In the field of digital signal processing, the sampling theorem is a fundamental bridge between continuous-time signals (often called "analog signals") and discrete-time signals (often called "digital signals"). It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal of finite bandwidth.' back

Orthogonality - Wikipedia, Orthogonality - Wikipedia, the free encyclopedia, Orthogonality occurs when two things can vary independently, they are uncorrelated, or they are perpendicular. back

Tensor product - Wikipedia, Tensor product - Wikipedia, the free encyclopedia, ' In mathematics, the tensor product V ⊗ W of two vector spaces V and W (over the same field) is itself a vector space, endowed with the operation of bilinear composition, denoted by ⊗, from ordered pairs in the Cartesian product V × W to V ⊗ W in a way that generalizes the outer product. Essentially the difference between a tensor product of two vectors and an ordered pair of vectors is that if one vector is multiplied by a nonzero scalar and the other is multiplied by the reciprocal of that scalar, the result is a different ordered pair of vectors, but the same tensor product of two vectors. The tensor product of V and W is the vector space generated by the symbols v ⊗ w, with v ∈ V and w ∈ W, in which the relations of bilinearity are imposed for the product operation ⊗, and no other relations are assumed to hold. The tensor product space is thus the "freest" (or most general) such vector space, in the sense of having the fewest constraints.' back

Unitary operator - Wikipedia, Unitary operator - Wikipedia, the free encyclopedia, ' In functional analysis, a branch of mathematics, a unitary operator . . . is a bounded linear operator U : H → H on a Hilbert space H satisfying UU* = U*U = I where U* is the adjoint of U, and I : H → H is the identity operator. This property is equivalent to the following: 1. U preserves the inner product ( , ) of the Hilbert space, ie for all vectors x and y in the Hilbert space, (Ux, Uy) = (x, y) and
2. U is surjective.' back


https://www.cognitivecosmology.com is maintained by The Theology Company Proprietary Limited ACN 097 887 075 ABN 74 097 887 075 Copyright 2000-2024 © Jeffrey Nicholls