Fundamental Chaos; Stable World

Before continuing to bring my QM-related tweets here, I think I need to give you some way (even if a very cartoonish one) to help visualize the kind of physics I have in mind for my approach to QM. But before I can do that, I need to introduce (at least some of) you to my way of thinking on these matters. An important consideration in this regard is what I will cover in this post, so that eventually (maybe after one more post or so) we can come back to my tweets on QM proper.


Chaos in fundamental theory vs. stability evident in world:

If physical reality at its deepest level—i.e., at the quantum mechanical level—shows, following my new approach [^], a nonlinear dynamics, i.e., catastrophic changes that can be characterized as chaotic, then how come the actual physical world remains so stable? Why is it so boringly familiar-looking a place? … A piece of metal like silver kept in a cupboard stays just that way for years together, even for centuries. It stays the same old familiar object; it doesn’t change into something unrecognizable. How come?

The answer in essence is: Because of the actual magnitudes of the various quantities that are involved in the dynamics, that’s why.

To help understand the answer, it is best, IMHO, to appeal not directly to quantum-mechanical calculations or even simulations, but to molecular dynamics simulations.


QM calculations are often impossible and simulations are very hard:

Quantum-mechanical calculations can’t handle the nonlinear QM (of the kind proposed by me), full stop.

For that matter, even if you follow the mainstream QM (as given in textbooks), closed-form calculations are impossible even for the simplest possible systems, like just two interacting electrons in an infinite potential box. No wonder even a single helium atom taken in isolation poses a tough challenge for it [^]. With all due respect (because it was as far back as 1929), all that even Hylleraas [^] could manage by way of manual calculations (with a mechanical calculator) was a determination of only the ground-state energy of the He atom, not a direct expression for its wavefunction—including, of course, the latter’s time-dependence.

In fact, even simulations today have to jump through a lot of hoops—the methods to simulate QM are rather indirect. They don’t even aim to get wavefunctions out as the result.

Even density functional theory (DFT), a computational technique that got its inventors a chemistry Nobel [^], introduces and deals with only the electron density, not wavefunctions proper—in case you never noticed it. Personally, in my limited searches, I haven’t found anyone giving even an approximate expression for the helium wavefunction itself. [If someone can point me to the literature, please do; thanks in advance!]

To conclude, quantum-mechanical calculations are so tough that a direct expression is not available for even so simple a wavefunction as that of the helium atom. And mind you, helium is the second simplest atom in the universe, and it’s just an atom—not a molecule, a nano-structure, or a photo-multiplier tube.


Molecular dynamics as a practically accessible technique:

However, for developing some understanding of how systems are put together and how they work at the atomic scale, we can make use of the advances made over the past 60–70 years of computational modeling. In particular, we can derive some very useful insights by using molecular dynamics [^] (MD, for short) simulations.

It is a fairly well-established fact that MD simulations, although classical in nature, are sufficiently close to ab initio quantum-mechanical calculations to be practically quite useful. They have proved their utility for at least some “simpler” systems and for certain practical purposes, especially in condensed matter physics (even if not for more ambitious goals like automated drug discovery).

See the place of MD among the various approaches involved in multi-scale modeling here [^]. Note that MD sits right next to QM.

So, we can use MD simulations in order to gain insights into our above question, viz. the in-principle nonlinearity at the most basic level of QM vs. the obvious stability of the real world.


Some details of the molecular dynamics technique:

In molecular dynamics, what you essentially have are atomic nuclei, regarded as classical point-particles, which interact with each other via a classical “springy” potential. The potential goes through a minimum, i.e., a point of equilibrium (where the forces are zero).

Imagine two hard steel balls connected via a mechanical spring that has some neutral length. If you compress the balls together, the spring develops forces which try to push them apart. If you stretch the balls apart, the spring develops forces of the opposite kind, which tend to pull them back together. Due to their inertia, when the balls are released from an initially stretched or compressed position, they can overshoot the intermediate position of neutrality, which sets up oscillations.

The MD technique is somewhat similar. In the simple balls + mechanical spring system discussed above, the relation of force vs. separation is quite simple: it’s linear, F = -k(x - x_0). In contrast, the inter-nucleus potential used in molecular dynamics is more complicated: it is nonlinear. However, it still has this feature of a potential valley, which implies a neutral position. See the graph of the most often used potential, viz., the Lennard-Jones potential [^].
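
To see the contrast concretely, here is a minimal sketch in Python. The function names and all parameter values are mine, chosen only for illustration, in reduced units:

```python
import numpy as np

def spring_force(x, k=1.0, x0=1.0):
    """Linear spring: force proportional to the departure from the neutral length x0."""
    return -k * (x - x0)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Force from the Lennard-Jones potential
    V(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6), i.e., F(r) = -dV/dr.
    A positive value means repulsion, a negative value means attraction."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r

# The LJ force is zero at the equilibrium separation r = 2**(1/6) * sigma,
# nearly linear (spring-like) close to it, and strongly nonlinear away from it.
for r in np.linspace(1.0, 2.0, 6):
    print(f"r = {r:.3f}   F_LJ = {lj_force(r):+.4f}")
```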

In conducting an MD simulation, you begin with a large number of such nuclei, taken as classical point-particles of definite mass. Although in terms of the original idea these point-particles represent the nuclei of atoms (with the inter-nuclear potential field playing the role of the \Psi wavefunction), the literature freely uses the term “atoms” for them.

The atoms in an MD simulation are given some initial positions (which do not necessarily lie at the equilibrium separations) and some initial velocities (typically random in magnitude and direction, but falling within a well-chosen band of values). The simulation consists of following Newton’s second law, F = ma. Time is discretized, typically in steps of uniform duration. Forces on the atoms are calculated from their instantaneous separations. These forces (accelerations) are integrated over a time-step to obtain velocities, which are in turn integrated to obtain the changes in the atomic positions over that time-step. The changed positions imply changes in the instantaneous forces, and it is this feedback that makes the technique nonlinear. The entire loop is repeated at each time-step.
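
Here is a minimal sketch of this loop, for just three “atoms” in 2D, interacting via the Lennard-Jones force and stepped forward with the velocity Verlet scheme (a standard MD integrator). All numbers are arbitrary reduced units, for illustration only:

```python
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces on N atoms; pos is an (N, 2) array."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]              # vector from atom j to atom i
            r = np.linalg.norm(rij)
            sr6 = (sigma / r) ** 6
            fmag = 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r**2
            forces[i] += fmag * rij            # repulsive when fmag > 0
            forces[j] -= fmag * rij            # Newton's third law
    return forces

rng = np.random.default_rng(seed=0)
pos = np.array([[0.0, 0.0], [1.1, 0.0], [0.55, 0.95]])  # near, not at, equilibrium
vel = 0.05 * rng.standard_normal(pos.shape)             # small random initial velocities
mass, dt = 1.0, 0.005

forces = lj_forces(pos)
for step in range(1000):
    # velocity Verlet: new positions, forces at the new positions, new velocities
    pos += vel * dt + 0.5 * (forces / mass) * dt**2
    new_forces = lj_forces(pos)
    vel += 0.5 * (forces + new_forces) / mass * dt
    forces = new_forces
```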

As the moving atoms (nuclei) change their positions, the system of their overall collection changes its configuration.

If the numerical ranges of the forces / accelerations / velocities / displacements are small enough, then even as the nuclei undergo changes in their positions over the course of a simulation, their overall configuration still remains more or less the same. The initially neighbouring atoms remain in each other’s neighbourhood, even if, individually, they might be jiggling a little here and there around their equilibrium positions. Such a dynamical state in the simulation corresponds to the solid state.

If you arrange for a gradual increase in the velocities (say by effecting an increase in an atom’s momentum when it bumps against the boundaries of the simulation volume, or even simply by adding a random value drawn from a distribution to the velocities of all the atoms at regular time intervals), then, statistically, it is the same as increasing the temperature of the system.
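
In code, the crudest way to realize such “heating” is to rescale all the velocities until the kinetic temperature hits a target value. A sketch, again in reduced units with kB set to 1; the function names here are my own, not from any particular MD package:

```python
import numpy as np

def temperature(vel, mass=1.0, kB=1.0):
    """Kinetic temperature via equipartition: KE = (dof/2) * kB * T,
    with dof taken simply as the total number of velocity components
    (constraints such as fixed total momentum are ignored here)."""
    ke = 0.5 * mass * np.sum(vel**2)
    return 2.0 * ke / (vel.size * kB)

def heat_to(vel, T_target, mass=1.0, kB=1.0):
    """Rescale all velocities so the kinetic temperature equals T_target."""
    return vel * np.sqrt(T_target / temperature(vel, mass, kB))

# Inside the time-stepping loop, e.g. every 100 steps, nudge the temperature up:
#     if step % 100 == 0:
#         vel = heat_to(vel, T_target=temperature(vel) + 0.01)
```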

When the operating velocities become large enough (i.e., when the “temperature” becomes high enough), the configuration becomes such that the moving atoms can now slip past their previous neighbours and form a new neighbourhood around a new set of atoms. However, their velocities are still small enough that their overall assembly does not “explode”; the assembly continues to occupy roughly the same volume, though it may change its shape. Such a dynamic state corresponds to the liquid state.

Upon further increasing the temperature (i.e., the velocities), the atoms now begin to dash around with such high speeds that they can overcome the pull of their neighbouring atoms and begin to escape into the empty space outside the assemblage. The assembly taken as a whole ceases to occupy a definite sub-region inside the simulation chamber; instead, the atoms now shoot across the entire chamber. This, of course, is nothing but the gaseous state. Impermeable boundaries have to be assumed to keep the atoms inside the finite region of the simulation. (Actually, the same holds even for the liquid state.) The motion of the atoms in the gaseous phase looks quite chaotic, even though, in a statistical sense, certain macro-level properties like pressure are maintained lawfully. The total energy, in particular, stays constant for an isolated system (within the numerical errors), and so the kinetic energy too stays steady on average, even in the gaseous state.
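
Incidentally, a standard diagnostic for telling these three regimes apart in an MD run is the mean-square displacement of the atoms from their initial positions. A small sketch, assuming you have stacked the positions saved at every time-step into a single NumPy array:

```python
import numpy as np

def mean_square_displacement(traj):
    """traj: array of shape (steps, N, dims) of positions saved during a run.
    MSD flat at a small plateau  -> solid (atoms jiggle about fixed sites);
    MSD growing linearly in time -> liquid/gas (atoms diffuse past neighbours)."""
    disp = traj - traj[0]                     # displacement from initial positions
    return (disp**2).sum(axis=-1).mean(axis=-1)
```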

While there are tons of resources on the ‘net for MD, here is one particularly simple but sufficiently accurate Python code, with good explanations [^]. I especially liked it because, unlike so many other “basic” MD codes, this one even shows the trick of shifting the potential so as to cut it off at a finite radius in a smooth manner—many introductory MD codes do it rather crudely, by directly truncating the potential (thereby leaving a vertical-jump discontinuity in the potential field). [In theory, the potential extends out to infinite range (it decays toward zero but never exactly vanishes); in practice, you need to cut it off at some finite radius just to keep the computational costs down.]
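
The shifting trick amounts simply to subtracting the potential’s value at the cutoff, so that the curve reaches zero continuously there. A minimal sketch (note: this shifts only the potential; the force still has a small jump at the cutoff, which the fancier “shifted-force” variants remove too):

```python
import numpy as np

def lj_potential(r, epsilon=1.0, sigma=1.0):
    """The full Lennard-Jones potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)

def lj_truncated_shifted(r, r_cut=2.5, epsilon=1.0, sigma=1.0):
    """Cut the potential off at r_cut and shift the whole curve up by
    -V(r_cut), so that it reaches zero continuously at the cutoff
    instead of jumping there."""
    v = lj_potential(r, epsilon, sigma) - lj_potential(r_cut, epsilon, sigma)
    return np.where(r < r_cut, v, 0.0)
```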

Here is a short video showing an MD simulation of the melting of ice (and, I guess, also of evaporation) [^]. It takes the dipole nature of the water molecules into account too.

The entire sequence can of course be reversed. You can always simulate a gas-to-liquid transition, and further, you can also simulate solidification. Here is a video showing the reverse phase-change for water: [^]


The points to gather from the MD simulations, for our purposes:

MD simulations of even gases retain a certain orderliness. Despite all the random-looking motions, they still are completely lawful, and the laws are completely deterministic. The liquid state certainly shows a much better degree of orderliness than the gaseous state. The solid state shows some remnants of the random-looking motion, but these motions are now very highly localized, and so the assemblage as a whole looks very orderly, stable. Not only does it preserve its volume; it wouldn’t even show any “flowing” if you were to program a tilting of the simulation chamber into the simulation.

Now the one big point I want you to note is this.

Even in the very stable, orderly-looking simulation of the solid state, the equations governing the dynamics of all the individual atoms still are nonlinear [^], i.e., chaotic in the rigorous sense of showing a sensitive dependence on initial conditions. It is the chaotic equations which produce the very orderly solid state.
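
You can probe this sensitive dependence directly with the same three-atom toy system as in the earlier sketch: run two copies of it, nudge one copy in the ninth decimal place, and see how far apart the trajectories end up. (How fast they diverge depends on the kinetic energy; this is only a quick, crude probe, not a proper Lyapunov-exponent calculation.)

```python
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Same pairwise Lennard-Jones forces as in the earlier sketch."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            sr6 = (sigma / r) ** 6
            fmag = 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r**2
            forces[i] += fmag * rij
            forces[j] -= fmag * rij
    return forces

def run(pos, vel, dt=0.005, steps=10_000, mass=1.0):
    """Integrate with velocity Verlet and return the final positions."""
    pos, vel = pos.copy(), vel.copy()
    forces = lj_forces(pos)
    for _ in range(steps):
        pos += vel * dt + 0.5 * (forces / mass) * dt**2
        new_forces = lj_forces(pos)
        vel += 0.5 * (forces + new_forces) / mass * dt
        forces = new_forces
    return pos

pos0 = np.array([[0.0, 0.0], [1.1, 0.0], [0.55, 0.95]])
vel0 = np.array([[0.5, 0.0], [-0.25, 0.3], [-0.25, -0.3]])  # a "warm" but bound cluster
pos_a = run(pos0, vel0)
pos_b = run(pos0 + 1e-9, vel0)       # nudge every coordinate in the 9th decimal place
print(np.max(np.abs(pos_a - pos_b)))  # typically many orders of magnitude above 1e-9
```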

MD simulations would not be capable of simulating phase transitions (from solid to liquid to gas etc.) using just identical balls interacting via simple pair-wise interactions, if the basic equations didn’t have a nonlinearity built into them. [I made a tweet to this effect last month, on 02 August 2019.]

So, even the very stable-looking solid state is maintained by the assemblage only by following the same nonlinear dynamical laws that allow the phase transitions to occur and that produce the evident randomness of the gaseous state.

It’s just that, when parameters like velocity and acceleration (the latter determined by the potential) fall into certain ranges of small enough values, then, even though the governing equation remains nonlinear, the dynamical state automatically gets confined to the regime of the highly orderly and stable solid state.

So, the lesson is:

Even if the dynamical nonlinearity in the governing laws does imply instabilities in principle, what matters is the magnitudes of the parameters (here, prominently, the velocities, i.e., the “temperature” of the simulation). The operative numerical magnitudes of the parameters directly determine the regime of the dynamical state. And that regime can correspond to a very highly ordered state too.

Ditto, for the actual world.

In fact, something stronger can be said: if the MD equations were not nonlinear, i.e., not chaotic, then they would fail to reproduce not only the phase transitions themselves (solid to liquid, etc.), but also such utterly orderly behaviour as the exact constancy of the temperature during a phase change. Stronger still: the numerical values of the parameters don’t even have to equal some precise critical values. They can vary a lot; so long as they fall within a certain range, the solution regime (the qualitative behaviour of the solution) remains unchanged, stable!

Ditto, for quantum mechanical simulations using my new approach. (Though I haven’t done such a simulation yet, the equations show a similar kind of nonlinearity.)

In my approach, quantum mechanical instability is ever-present in each part of the physical world. However, the universe that we live in simply exists (“has been put together”) in such a way that the numerical values of the actually operative parameters are such that the real world shows the same feature of stability as seen in the MD simulations.


The example of a metal piece left alone for a while:

If you polish and buff, or chemically (or ultrasonically) clean, a piece of metal to a shining bright state, and then leave it alone for a while, it turns dull over a period of time. That’s because of corrosion.

Corrosion is, chemically, a process of oxidation. Oxygen atoms from the air react with those pure metal atoms which are exposed at a freshly polished surface. This reaction is, ultimately, completely governed by the laws of quantum mechanics—which, in my approach, carry a nonlinearity of a specific kind. Certain numerical parameters control the speed with which the quantum-mechanical rearrangements of the wavefunction governing the oxygen and metal atoms proceed.

The world we live in happens to carry such values for these parameters that corrosion turns out to be a slow process. Also, it turns out to be a process that mostly affects only the surface of a metal (in the solid state). The dynamical equations of quantum mechanics, even if nonlinear, are such that corrosion cannot penetrate very deep into the metal—the values of the governing parameters are such that oxygen atoms cannot diffuse into the interior regions all that easily. If the metal were in a liquid state, it would be an altogether different matter—again, just another range of the nonlinear parameters, that’s all.

So, even if it’s a nonlinear (“chaotic” or even “randomness-possessing”) quantum-mechanical evolution, the solid metal piece does not corrode all the way through.

That’s why, you can always polish your family silver, or the copper utensils used in “poojaa” (worship), and use them again and again.

The world is a remarkably stable place to be in.


Concluding…:

So what’s the ultimate physics lesson we can draw from this story?

It’s this:

In the end, the qualitative nature of the solutions to physical problems is not determined solely by the kind of mathematical laws which govern the phenomenon; the nature of the constituents of the system (what kind of objects there are in it), the particulars of their configurations, and the numerical ranges of the parameters operative during their interactions also matter just as much—if not even more!

Don’t fall for those philosophers, sociologists, and humanities folks in general (and even some folks employed as “scientists”) who snatch some bit of nonlinear dynamics or chaos theory out of its proper context and begin to spin a truly chaotic yarn of utter gibberish. They simply don’t know any better. That’s another lesson to be had from this story. There is a huge difference between the cultural overtones associated with the word “chaos” and the meaning the same term has in absolutely rigorous studies in physics. Hope I will never have to remind you of this one point.


A song I like:

(Marathi) “shraavaNaata ghana niLaa barasalaa…”
Lyrics: Mangesh Padgaonkar
Music: Shreenivas Khale
Singer: Lata Mangeshkar
[Credits listed, happily, in a random order.]
