“Nobody really knows what entropy really is.”—John von Neumann.
Contradictory opinions among practicing physicists on entropy:
“The theory of probability (has nothing to do with / is the basis of) statistical mechanics.”
“The entropy of an ideal classical gas of distinguishable particles (is not / is) extensive.”
“The properties of macroscopic classical systems with distinguishable and indistinguishable particles are (different / same).”
“The entropy of a classical ideal gas of distinguishable particles (is not / is) additive.”
“Boltzmann (defined / did not define) the entropy of a classical system by the logarithm of a volume in phase space.”
“The symbol W in the equation S = k log W, which is inscribed on Boltzmann’s tombstone, refers to (a volume in phase space / the German word “Wahrscheinlichkeit” (probability)).”
“The entropy should be defined in terms of the properties of (an isolated / a composite) system.”
“Thermodynamics is only valid (for finite systems / in the “thermodynamic limit,” that is, in the limit of infinite system size).”
“Extensivity (is / is not) essential to thermodynamics.”
Robert H. Swendsen (Carnegie Mellon University, Pennsylvania, USA), in “How physicists disagree on the meaning of entropy,” American Journal of Physics 79: 342–348, April 2011, “put forward 12 principles based on the concept of macroscopic measurements that advocate the use of Boltzmann’s 1877 definition of the entropy over other definitions that are often found in textbooks.”
Principle 1: Probability theory is necessary for a theoretical description of macroscopic behavior. A macroscopic system contains a large number of particles. A macroscopic measurement of a quantity of interest in thermodynamics exhibits a relative statistical fluctuation that is generally inversely proportional to the square root of the number of particles. Probability theory provides the basis for obtaining a description of a macroscopic system from the microscopic laws of motion. We cannot determine the microscopic state experimentally from macroscopic measurements. Bayesian probability theory requires a prior, or model, probability distribution in phase space: the simplest choice is a uniform distribution for isolated classical systems and, correspondingly, a uniform distribution over the microscopic states of isolated quantum systems. Such probability distributions are known to lead to predictions that agree with experiment.
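The 1/√N scaling of relative fluctuations can be checked with a toy model (my illustration, not from the paper): count how many of N ideal-gas particles sit in the left half of a box.

```python
import math
import random

def relative_fluctuation(n_particles, n_trials=1000):
    """Relative standard deviation of a 'macroscopic' observable: the
    number of particles found in the left half of a box, each particle
    independently left or right with probability 1/2."""
    counts = [sum(random.random() < 0.5 for _ in range(n_particles))
              for _ in range(n_trials)]
    mean = sum(counts) / n_trials
    var = sum((c - mean) ** 2 for c in counts) / n_trials
    return math.sqrt(var) / mean

random.seed(0)
for n in (100, 2500):
    # Expect roughly 1/sqrt(n): about 0.1 for n = 100, 0.02 for n = 2500.
    print(n, round(relative_fluctuation(n), 3))
```

Because the relative fluctuation shrinks as 1/√N, measurements on ~10²³ particles appear perfectly sharp at any macroscopic resolution.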
Principle 2: Probability theory is sufficient for a theoretical description of macroscopic states. The introduction of probability distributions very nearly completes the theory of many-body systems. We could calculate anything and everything about the behavior of macroscopic systems without mentioning the concepts of entropy, free energy, etc. The definitions of these concepts must be consistent with the predictions of probability theory if they are to have the properties required by thermodynamics.
Principle 3: Statistical mechanics and thermodynamics must predict the properties of composite systems. A simple isolated system in equilibrium does not do anything macroscopically measurable. Composite systems play a leading role in the development of statistical mechanics and thermodynamics, although some textbooks define thermodynamic functions for isolated systems and only much later consider equilibrium in composite systems.
Principle 4: The values of extensive parameters that maximize the probability predict the results of measurements of those parameters for composite systems in equilibrium. This principle provides the key link between statistical mechanics and thermodynamic measurements. In each case the probability distribution is very narrow, so that the fluctuations cannot be observed by macroscopic measurements. The extremely small relative fluctuations of macroscopic observables are so universal that, in the 19th century, many of Boltzmann’s opponents didn’t believe in their existence.
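How narrow the probability distribution becomes can be computed exactly for a minimal composite system (an assumed illustration, not an example from the paper): N particles free to move between two equal subvolumes, so the occupation of the left half is binomially distributed.

```python
from math import comb

def split_probabilities(N):
    """Probability that n of N independent particles are in the left
    half of a box (two equal subvolumes): binomial with p = 1/2."""
    total = 2 ** N
    return [comb(N, n) / total for n in range(N + 1)]

def fraction_within(N, delta):
    """Total probability that the occupied fraction n/N lies within
    +/- delta of the most probable value 1/2."""
    probs = split_probabilities(N)
    return sum(p for n, p in enumerate(probs)
               if abs(n / N - 0.5) <= delta)

# As N grows, essentially all probability concentrates near n/N = 1/2.
for N in (10, 100, 1000):
    print(N, round(fraction_within(N, 0.05), 4))
```

Already at N = 1000, nearly all of the probability lies within a few percent of the most probable value; at N ~ 10²³ the fluctuations fall far below macroscopic resolution, which is why they went unobserved in the 19th century.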
Principle 5: A macroscopic equilibrium state is defined by two properties: the probability of macroscopically observable changes is extremely small, and there is no macroscopically observable flux of energy or particles. There is a substantial literature in statistical mechanics that makes the fundamental assertion that equilibrium is defined by a particular “equilibrium probability distribution” in phase space (or Hilbert space). In my opinion, such a view is a serious error, primarily because the probability distribution of the microscopic states is not macroscopically observable. We use probability theory because we cannot discern microscopic states; we certainly cannot measure the relative frequency with which they occur.
Principle 6: The predictions of statistical mechanics and thermodynamics are representations or descriptions of a system based on the extent of our knowledge. This principle again reflects the distinction between reality and our knowledge of reality. The energy, the entropy, and the associated free energies are thermodynamic descriptions rather than real properties of a macroscopic system. The distinction between real properties of a system and our knowledge of the system might seem philosophical and a bit pedantic, but it greatly clarifies some issues that might otherwise be rather puzzling.
Principle 7: The primary property of the entropy is that it is maximized in equilibrium. Because the macroscopically observable behavior of an isolated system in equilibrium does not change with time, the maximization of the entropy cannot be applied to a simple system, only to a composite system. Principle 7 leads directly to the second law of thermodynamics: if the entropy is always maximized in equilibrium for a composite system, then the change in entropy after a constraint is released cannot be negative. The location of the maximum of the entropy always coincides with the location of the maximum of the probability distribution. The automatic agreement of the predictions of Boltzmann’s definition of the entropy with the correct equilibrium values of macroscopic observables makes it the natural choice.
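A standard textbook illustration of Principle 7 (not spelled out in this summary): for two subsystems exchanging energy at fixed total E = E₁ + E₂, maximizing the total entropy over the partition of the energy reproduces the familiar equilibrium condition of equal temperatures:

```latex
\frac{\partial}{\partial E_1}\left[ S_1(E_1) + S_2(E - E_1) \right] = 0
\quad\Longrightarrow\quad
\frac{\partial S_1}{\partial E_1} = \frac{\partial S_2}{\partial E_2},
\qquad \text{i.e.}\quad \frac{1}{T_1} = \frac{1}{T_2}.
```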
Principle 8: Additivity is essential to any consistent definition of the entropy of a system with short-ranged interactions between its particles. In thermodynamics it is generally assumed that the entropy of a composite system is given by the sum of the entropies of the subsystems.
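For weakly coupled subsystems the number of accessible microstates factorizes, W₁₂ = W₁W₂, and the logarithm in S = k log W then makes the entropy additive; a one-line sketch:

```latex
S_{12} = k \ln\left( W_1 W_2 \right) = k \ln W_1 + k \ln W_2 = S_1 + S_2 .
```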
Principle 9: The thermodynamic limit is not required for the validity of thermodynamics. The thermodynamic limit is defined as the infinite-size limit of the ratios of extensive quantities—ratios such as the energy per particle U/N or the particle density N/V. The thermodynamic limit is not essential to the foundations of thermodynamics. It cannot be essential if we are to apply thermodynamics to real systems, which are necessarily finite. We never do experiments on infinite systems. If thermodynamics worked only for infinite systems, it might still be interesting as mathematics, but it would be irrelevant as science.
Principle 10: “Indistinguishability” is a property of microscopic states. It does not depend on experimental resolution. The definitions of distinguishability and indistinguishability are simple: (1) If the exchange of two particles in a system results in a different microscopic state, the particles are distinguishable. (2) If the exchange of two particles in a system results in the original microscopic state, the particles are indistinguishable. Unfortunately, “distinguishable” is sometimes confused with what might be called “observably different.” Two particles are observably different if exchanging them alters the properties of the system in a way that is observable. The microscopic state of a classical system of indistinguishable particles would be described by the N! points in phase space found from the set of all permutations of the particles. The trajectory (or trajectories) of the set of N! points is clearly unaffected by the exchange of any two particles at any point in time. Indistinguishability is not a classical concept, but if it is to be imposed on a classical system, this representation seems the most reasonable way of doing so. The macroscopic properties of a classical system are exactly the same whether the particles are distinguishable or indistinguishable.
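The N!-points representation described above can be made concrete in a few lines (a toy sketch with my own labeling of phase-space points, not code from the paper):

```python
from itertools import permutations

# A microstate of N = 3 labeled particles: particle i occupies
# phase-space point state[i] (the points are just labels here).
state = ('a', 'b', 'c')

# Indistinguishable-particle representation: the set of all N! = 6
# permuted points in phase space.
cloud = set(permutations(state))

# Exchanging two particles permutes the labels but leaves the set of
# N! points -- and hence the microstate -- unchanged.
swapped = ('b', 'a', 'c')
assert set(permutations(swapped)) == cloud
print(len(cloud))  # 3! = 6
```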
Principle 11: Systems with identical macroscopic properties should be described by the same entropy. Boltzmann’s 1877 definition of the entropy gives the same expression for the entropy for classical systems with either distinguishable or indistinguishable particles. The worst failings of the traditional definition of the entropy for a system of distinguishable particles are that it violates the second law of thermodynamics and predicts that the entropy of an ideal gas is not extensive.
Principle 12: Extensivity is not essential to thermodynamics. Extensivity is the property that the macroscopic observables of a system are all directly proportional to its size. This property implies that ratios, such as U/N, V/N, and S/N, are all independent of the size of the system. In many textbooks, extensivity is taken to be a fundamental postulate of thermodynamics. But statistical mechanics and thermodynamics must be applicable to nonextensive systems. The common textbook definition of the entropy as the logarithm of a volume in phase space gives an answer that is not extensive (and incorrect for other reasons). Although extensivity is a useful assumption when analyzing the properties of a material, it is not essential and should not be included as part of the definition of entropy.
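The non-extensivity of the phase-space-volume definition can be checked numerically (my sketch, not code from the paper; the constant C stands in for the thermal momentum-integral terms, which are proportional to N and cancel out of the comparison):

```python
from math import lgamma, log

C = 1.5  # placeholder for the thermal (momentum-integral) terms, ~ N

def S_phase_volume(N, V):
    """Dimensionless entropy S/k from the textbook 'volume in phase
    space' definition for an ideal gas: S/k = N ln V + C N."""
    return N * log(V) + C * N

def S_boltzmann(N, V):
    """Same with the ln N! term supplied by Boltzmann's 1877 probability
    definition (Sackur-Tetrode form): S/k = N ln V - ln N! + C N."""
    return N * log(V) - lgamma(N + 1) + C * N

N, V = 10_000, 1.0
for S in (S_phase_volume, S_boltzmann):
    # Doubling the system should exactly double the entropy if S is
    # extensive; the defect below measures the failure to do so.
    defect = S(2 * N, 2 * V) - 2 * S(N, V)
    print(S.__name__, round(defect, 2))
```

The phase-space-volume definition leaves a defect of 2N ln 2 (the Gibbs paradox), while the ln N! term reduces it to a logarithmically small remainder that vanishes per particle.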
These 12 principles led Swendsen to the conclusion that Boltzmann’s 1877 definition of the entropy, as the logarithm of the probability of macroscopic states of composite systems, is superior to any other proposed definition. In particular, it is superior to the definition in terms of a volume in phase space that is often found in textbooks on classical statistical mechanics, which leads to the Gibbs paradox: the entropy is not extensive, which in turn implies a violation of the second law of thermodynamics. That is a problem! Boltzmann’s 1877 definition in terms of the logarithm of the probability of a composite system does not have this problem.