The singularities closest to you

A Special note for the Potential Employers from the Data Science field:

Recently, in April 2020, I achieved a World Rank # 5 on the MNIST problem. The initial announcement can be found here [^], and a further status update, here [^].

All my data science-related posts can always be found here [^].


0. Preamble/Preface/Prologue/Preliminaries/Whatever Pr… (but neither probability nor public relations):

Natalie Wolchover writes an article in the Quanta Magazine: “Why gravity is not like the other forces” [^].

Motl mentions this piece in his, err.. “text” [^], and asks right in the first para.:

“…the first question should be whether gravity is different, not why [it] is different”

Great point, Lubos, err… Luboš!

Having said that, I haven’t studied relativity, and so, I only cursorily went through the rest of both these pieces.

But I want to add. (Hey, what else is a blog for?)


1. Singularities in classical mechanics:

1.1 Newtonian mechanics:

A singularity is present even in Newtonian mechanics. If you consider the differential equation for gravity in Newtonian mechanics, it basically applies to point-particles, and so, there is a singularity in this 300+ year-old theory too.
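
A tiny numerical sketch of this point (the masses are purely illustrative; only the inverse-square law itself is being exercised): Newton's force between two point-particles grows without bound as the separation r shrinks to zero.

```python
# Hypothetical, purely illustrative masses; the point is only the divergence
# of Newton's inverse-square law as the separation r -> 0.
G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def newton_gravity(m1, m2, r):
    """Magnitude of the Newtonian gravitational force between two point masses."""
    return G * m1 * m2 / r ** 2

m1 = m2 = 1.0  # kg
for r in (1.0, 1e-3, 1e-6, 1e-9):
    print(f"r = {r:.0e} m  ->  F = {newton_gravity(m1, m2, r):.3e} N")
```

Halving the separation quadruples the force, with no cutoff anywhere: the point-particle idealization carries the singularity with it.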

It’s a different matter that Newton got rid of the singularities by integrating gravity forces inside massive spheres (finite objects), using his shells-based argument. A very ingenious argument that never ceases to impress me. Anyway, this procedure, invented by Newton, is the reason why we tend to think that there were no singularities in his theory.
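
Newton's shells-based argument can even be checked numerically. Here is a crude Monte Carlo illustration (not a proof; arbitrary units with G = 1, unit shell mass and radius; the sample count and seed are arbitrary choices): the pull of a uniform spherical shell nearly cancels at an interior point, while at an exterior point it matches the pull of an equal point mass at the centre.

```python
import math
import random

# A crude Monte Carlo check -- an illustration, not a proof -- of the shell
# argument, in arbitrary units (G = 1, total shell mass = 1, shell radius = 1).
def shell_force(test_point, radius=1.0, n=200_000, seed=0):
    """Net gravitational force magnitude of a uniform shell on a test point."""
    rng = random.Random(seed)
    fx = fy = fz = 0.0
    for _ in range(n):
        # a uniformly distributed point on the sphere, via a normalized
        # Gaussian triple
        gx, gy, gz = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        s = radius / math.sqrt(gx * gx + gy * gy + gz * gz)
        dx = gx * s - test_point[0]
        dy = gy * s - test_point[1]
        dz = gz * s - test_point[2]
        d = math.sqrt(dx * dx + dy * dy + dz * dz)
        w = 1.0 / (n * d ** 3)  # mass element 1/n, inverse-square attraction
        fx += w * dx; fy += w * dy; fz += w * dz
    return math.sqrt(fx * fx + fy * fy + fz * fz)

inside = shell_force((0.5, 0.0, 0.0))   # interior point: contributions cancel
outside = shell_force((2.0, 0.0, 0.0))  # exterior point: ~ point-mass value 1/2^2
print(inside, outside)
```

The exterior value comes out close to 1/4, the point-mass prediction at distance 2; the interior value hovers near zero, up to the sampling noise.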

1.2 Electrostatics and electrodynamics:

Coulomb et al. couldn’t get rid of the point-ness of the point-charges the way Newton could, for gravity. No electrical phenomenon was found that changed the behaviour at experimentally accessible small enough separations between two charges. In electrostatics, the inverse-square law holds through and through—on the scales on which experiments have been performed. Naturally, the mathematical manner to capture this behaviour is to not be afraid of singularities, and to go ahead, incorporate them in the mathematical formulations of the physical theory. Remember, differential laws themselves are arrived at after applying suitable limiting processes.

So, electrostatics has point singularities in the electrostatic fields.

Ditto, for classical electrodynamics (i.e. the Maxwellian EM, as recast by Hendrik A. Lorentz, the second Nobel laureate in physics).

Singularities exist in all of classical EM, at the locations of the point-charges—i.e., at the points where the electric potential energy diverges.

Lesson: Singularities aren’t specific to general relativity. Singularities predate relativity by decades if not by centuries.


2. Singularities in quantum mechanics:

2.1 Non-relativistic quantum mechanics:

You might think that non-relativistic QM has no singularities, because the \Psi field must be at least C^0 continuous everywhere, and also not infinite anywhere even within a finite domain—else, it wouldn’t be square-normalizable. (It’s worth remembering that even in infinite domains, Sommerfeld’s radiation condition still applies, and Dirac’s delta distribution most extremely violates this condition.)

Since wavefunctions cannot be infinite anywhere, you might think that any singularities present in the physics have been burnished off due to the use of the wavefunction formalism of quantum mechanics. But of course, you would be wrong!

What the super-smart MSQM folks never tell you is this part (and they don’t take care to highlight it to their own students either): The only way to calculate the \Psi fields is by specifying a potential energy field (if you want to escape the trivial solution that all wavefunctions are zero everywhere), and crucially, in a fundamental quantum-mechanical description, the PE field to specify has to be that produced by the fundamental electric charges, first and foremost. (Any other description, even if it involves complex-valued wavefunctions, isn’t fundamental QM; it’s merely a workable approximation to the basic reality. For example, even models like the PIB and the quantum harmonic oscillator aren’t fundamental descriptions. The easiest fundamentally correct model is the hydrogen atom.)

Since the fundamental electric charges remain point-particles, the non-relativistic QM has not actually managed to get rid of the underlying electrical singularities.

It’s something like this. I sell you a piece of land with a deep well in it. I have covered the entire field with a big sheet of green paper. I show you the photograph and claim that there is no well. Would you buy it—my argument?

The super-smart MSQM folks don’t actually make such a claim. They merely highlight the green paper so much that any mention of the well must get drowned out. That’s their trick.

2.2 OK, how about the relativistic QM?

No one agrees on what a theory of GR (General Relativity) + QM (Quantum Mechanics) looks like. Nothing is settled about this issue. In this piece let’s try to restrict ourselves to the settled science—things we know to be true.

So, what we can talk about is only this much: SR (Special Relativity) + QM. But before setting out to marry them off, let’s look at the character of SR. (We already saw the character of QM above.)


3. Special relativity—its origins, scope, and nature:

3.1 SR is a mathematically repackaged classical EM:

SR is a mathematical reformulation of the classical EM, full-stop. Nothing more, nothing less—actually, something less. Let me explain. But before going to how SR is a bit “less” than classical EM, let me emphasize this point:

Just because SR begins to get taught in your Modern Physics courses, it doesn’t mean that by way of its actual roots, it’s a non-classical theory. Every bit of SR is fully rooted in the classical EM.

3.2 Classical EM has been formulated at two different levels: Fundamental, and Homogenized:

The laws of classical EM, at the most fundamental level, describe reality in terms of the fundamental massive charges. These are point-particles.

Then, classical EM also says that a very similar-looking set of differential equations applies to the “everyday” charges—you know, pieces of paper crowding near a charged comb, or paper-clips sticking to your fridge-door magnets, etc. This latter version of EM is not the most fundamental. It comes equipped with a lot of fudges, most of them having to do with the material (constitutive) properties.

3.3 Enter super-smart people:

Some smart people took this latter version of the classical EM laws—let’s call it the homogenized continuum-based theory—and recast them to bring out certain mathematical properties which they exhibited. In particular, the Lorentz invariance.

Some super-smart people took the invariance-related implications of this (“homogenized continuum-based”) theory as the most distinguished character exhibited by… not the fudges-based theory, but by physical reality itself.

In short, they not only identified a certain validity (which is there) for a logical inversion which treats an implication (viz. the invariance) as the primary; they blithely also asserted that such an inverted conceptual view was to be regarded as more fundamental. Why? Because it was mathematically convenient.

These super-smart people were not concerned about the complex line of empirical and conceptual reasoning which was built patiently and integrated together into a coherent theory. They were not concerned with the physical roots. The EM theory had its roots in the early experiments on electricity, whose piece-by-piece conclusions finally came together in Maxwell’s mathematical synthesis thereof. The line culminated with Lorentz’s effecting a reduction in the entire cognitive load by reducing the number of sub-equations.

The relativists didn’t care for these roots. Indeed, sometimes, it appears as if many of them were gloating over cutting the maths off from its physical grounding. It’s these super-smart people who put forth the arbitrary assertion that the relativistic viewpoint is more fundamental than the inductive base from which it was deduced.

3.4 What is implied when you assert fundamentality to the relativistic viewpoint?

To assert fundamentality to a relativistic description is to say that the following two premises hold true:

(i) The EM of homogenized continua (and not the EM of the fundamental point-particles) is the simplest and hence the most fundamental theory.

(ii) One logical way of putting it—in terms of invariance—is superior to the other logical way of putting it, which was: a presentation of the same set of facts via inductive reasoning.

The first premise is clearly a blatant violation of the method of science. As people who have done work in multi-scale physics would know, you don’t grant greater fundamentality to a theory of a grossed-out effect. Why?

Well, a description in terms of grossed-out quantities might be fine, in the sense that the theory often becomes exponentially simpler to use (without a commensurate loss of accuracy). Who would advocate abandoning Hooke’s law in the linear formulation of elasticity, and insist on computing the motions of 10^23 atoms instead?

However, a good multi-scaling engineer / physicist also has the sense to keep in mind that elasticity is not the final word; that there are layers and layers of rich phenomenology lying underneath it: at the meso-scale, micro-scale, nano-scale, and then, even at the atomic (or sub-atomic) scales. Schrödinger’s equation is more fundamental than Hooke’s law. Hooke’s law, projected back to the fine-grained scale, does not hold.

The situation is somewhat like this: Your 100 \times 100 photograph does not show all the features of your face the way they come out in the original 4096 \times 4096 image. The finer features remain lost even if you magnify the 100 \times 100 image to the 4096 \times 4096 size, and save it at that size. The fine-grained features remain lost. However, this does not mean that 100 \times 100 is useless. A 28 \times 28 pixels image is enough for the MNIST benchmark problem.
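
The pixel analogy can be made concrete with a toy computation (pure Python; the 8 \times 8 checkerboard “image” is of course made up): once fine detail has been averaged away by downsampling, upsampling cannot restore it.

```python
# A tiny numerical analogue of the photograph argument: downsampling averages
# fine detail away, and upsampling cannot bring it back.
def downsample(img, k):
    """Average k x k blocks of a square image (list of lists of floats)."""
    n = len(img) // k
    return [[sum(img[i*k + a][j*k + b] for a in range(k) for b in range(k)) / k**2
             for j in range(n)] for i in range(n)]

def upsample(img, k):
    """Blow each pixel up into a k x k block (no new information is added)."""
    return [[img[i // k][j // k] for j in range(len(img) * k)]
            for i in range(len(img) * k)]

# An 8 x 8 checkerboard: maximal fine-scale contrast.
fine = [[(i + j) % 2 for j in range(8)] for i in range(8)]
coarse = downsample(fine, 2)          # every 2 x 2 block averages to 0.5
restored = upsample(coarse, 2)        # uniformly gray: the pattern is gone
print(coarse[0][0], restored == fine) # -> 0.5 False
```

The restored image is a featureless gray even though it has the “right” number of pixels—which is the whole point about grossed-out descriptions.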

So, what is the intermediate conclusion? A “fudged” (homogenized) theory cannot be as fundamental as—let alone more fundamental than—the finer theory from which it was homogenized.

Poincaré must have thought otherwise. The available evidence anyway says that he said, wrote, and preached to the effect that a logical inversion of a homogenized theory was not only acceptable as an intellectually satisfying exercise, but that it must be seen as being a more fundamental description of physical reality.

Einstein, initially hesitant, later on bought this view hook, line and sinker. (Later on, he also became a superposition of an Isaac Asimov of the relativity theory, a Marilyn Monroe of the popular press, and a collage of the early 20th century Western intellectuals’ notions of an ancient sage. But this issue, seen in any basis—component-wise or in a new basis in which the superposition itself is a basis—takes us away from the issues at hand.)

The view promulgated by these super-smart people, however, cannot qualify to be called the most fundamental description.

3.5 Why is the usual idea of having to formulate a relativistic quantum mechanics theory a basic error?

It is an error to expect that the potential energy fields in the Schrödinger equation ought to obey the (special) relativistic limits.

The expectation rests on treating the magnetic field on a par with the static electric field.

However, there are no monopoles in the classical EM, and so, the electric charges enjoy a place of greater fundamentality. If you have kept your working epistemology untarnished by corrupt forms of methods and content, you should have no trouble seeing this point. It’s very simple.

It’s the electrons which produce the electric fields; every electric field that can at all exist in reality can always be expressed as a linear superposition of elementary fields each of which has a singularity in it—the point identified for the classical position of the electron.
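
This superposition claim is easy to state in code. A 2-D sketch follows (Gaussian-style units with the Coulomb constant set to 1; the dipole configuration is made up for illustration): the total field is just the vector sum of elementary point-charge fields, each of which is singular at its own charge’s position.

```python
import math

# Any classical electrostatic field as a linear superposition of elementary
# point-charge (Coulomb) fields. Units: Coulomb constant k = 1, for brevity.
def coulomb_field(charge_pos, q, point):
    """Electric field vector at `point` due to a point charge q at `charge_pos`."""
    dx = point[0] - charge_pos[0]
    dy = point[1] - charge_pos[1]
    r = math.hypot(dx, dy)
    if r == 0.0:
        raise ZeroDivisionError("field is singular at the charge itself")
    return (q * dx / r**3, q * dy / r**3)   # q * r_hat / r^2

def total_field(charges, point):
    """Superpose the elementary fields of all (position, q) charges."""
    ex = ey = 0.0
    for pos, q in charges:
        fx, fy = coulomb_field(pos, q, point)
        ex += fx; ey += fy
    return (ex, ey)

dipole = [((-1.0, 0.0), +1.0), ((+1.0, 0.0), -1.0)]
print(total_field(dipole, (0.0, 0.0)))   # both elementary fields point in +x here
```

Note that the elementary field itself refuses to be evaluated at the charge’s own position—the singularity is baked into the very building block.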

We compress this complex line of thought by simply saying:

Point-particles of electrons produce electric fields, and this is the only way any electric field can at all be produced.

Naturally, electric fields don’t change anywhere at all, unless the electrons themselves move.

The only way a magnetic field can be had at any point in physical space is if the electric field at that point changes in time. Why do we say “the only way”? Because, there are no magnetic monopoles to create these magnetic fields.

So, the burden of creating any and every magnetic field completely rests on the motions of the electrons.

And, the electrons, being point particles, have singularities in them.

So, you see, in the most fundamental description, EM of finite objects is a multi-scaled theory of EM of point-charges. And, EM of finite objects was, historically, first formulated before people could plain grab the achievement, recast it into an alternative form (having a different look but the same physical scope), and then run naked in the streets shouting “Relativity!”, “Relativity!!”.

Another way to look at the conceptual hierarchy is this:

Answer this question:

If you solve the problem of an electron in a magnetic field quantum mechanically, did you use the most basic QM? Or was it a multi-scale-wise grossed out (and approximate) QM description that you used?

Hint: The only way a magnetic field can at all come into existence is when some or the other electron is accelerating somewhere or the other in the universe.

For the layman: The situation here is like this: A man has a son. The son plays with another man—say, the boy’s uncle. Can you now say that, because there is an interaction between the nephew and the uncle, they are all that matters? That the man responsible for creating this relationship in the first place, namely the boy’s father, cannot ever enter any fundamental or basic description?

Of course, this viewpoint also means that the only fundamentally valid relativistic QM would be one which is completely couched in terms of the electric fields only. No magnetic fields.

3.6 How to incorporate the magnetic fields in the most fundamental QM description?

I don’t know. (Neither do I much care—it’s not my research field.) But sure, I can put forth a hypothetical way of looking at it.

Think of the magnetic field as a quantum mechanical effect. That is to say, the electrostatic fields (which implies, the positions of electrons’ respective singularities) and the wavefunctions produced in the aether in correspondence with these electrostatic fields, together form a complete description. (Here, the wavefunction includes the spin.)

You can then abstractly encapsulate certain kinds of changes in these fundamental entities, and call the abstraction by the name of magnetic field.

You can then realize that the changes in the magnetic and electric fields imply the constant c, and then trace back the origins of c as being rooted in the kind of changes in the electrostatic fields (PE) and wavefunction fields (KE) which give rise to the higher-level phenomenon of c.

But in no case can you have the hodge-podge favored by Einstein (and millions of his devotees).

To the layman: This hodge-podge consists of regarding the play (“interactions”) between the boy and the uncle as primary, without bothering about the father. You would avoid this kind of a hodge-podge if what you wanted was a basic consistency.

3.7 Singularities and the kind of relativistic QM which is needed:

So, you see, what is supposed to be the relativistic QM itself has to be reformulated. Then it would be easy to see that:

There are singularities of electric point-charges even in the relativistic QM.

Today’s formulation of relativistic QM takes SR as if SR itself were the most basic ground truth (without looking into the conceptual bases of SR in the classical EM), and so it takes an extra special effort for you to realize that the most fundamental singularity in the relativistic QM is that of the electrons—and not of any relativistic spacetime contortions.


4. A word about putting quantum mechanics and gravity together:

Now, a word about QM and gravity—Wolchover’s concern for her abovementioned report. (Also, arguably, one of the concerns of the physicists she interviewed.)

Before we get going, a clarification is necessary—the one which concerns the mass of the electron.

4.1 Is charge a point-property in the classical EM? How about mass?

It might come as a surprise to you, but it’s a fact that in the fundamental classical EM, it does not matter whether you ascribe a specific location to the attribute of the electric charge, or not.

In particular, you may take the position (1) that the electric charge exists at the same point where the singularity in the electron’s field is. Or, alternatively, you may adopt the position (2) that the charge is actually distributed all over space, wherever the electric field exists.

Realize that whether you take the first position or the second, it makes no difference whatsoever, either to the concepts at the root of the EM laws, or to the calculation procedures associated with them.

However, we may consider the fact that the singularity indeed is a very distinguished point. There is only one such point associated with the interaction of a given electron with another given electron. Each electron sees one and only one singular point in the field produced by the other electron.

Each electron also has just one charge, which remains constant at all times. An electron or a proton does not possess two charges. They do not possess complex-valued charges.

So, based on this extraneous consideration (it’s not mandated by the basic concepts or laws), we may think of simplifying the matters, and say that

the charge of an electron (or the other fundamental particle, viz., proton) exists only at the singular point, and nowhere else.

All in all, we might adopt the position that the charge is where the singularity is—even if there is no positive evidence for the position.

Then, continuing on this conceptually alluring but not empirically necessitated viewpoint, we could also say that the electron’s mass is where its electrostatic singularity is.

Now, a relatively minor consideration here is that ascribing the mass only to the point of singularity also suggests an easy analogue to the Newtonian particle-mechanics. I am not sure how advantageous this analogue is. Even if there is some advantage, it would still be a minor one. The reason is, the two theories (NM and EM) are, hierarchically, at highly unequal levels—and it is this fact which is far more important.

All in all, we can perhaps adopt this position:

With all the if’s and but’s kept in the context, the mass and the charge may be regarded as not just multipliers in the field equations; they may be regarded as having a distinguished location in space too—the charge and the mass exist at one point and no other.

We could say that. There is no experiment which mandates that we adopt this viewpoint, but there also is no experiment—or conceptual consideration—which goes against it. And, it seems to be a bit easier on the mind.

4.2 How quantum gravity becomes ridiculously simple:

If we thus adopt the viewpoint that the mass is where the electrostatic singularity is, then the issue of quantum gravity becomes ridiculously simple… assuming that you have developed a theory to multi-scale-wise gross out classical magnetism from the more basic QM formalism, in the first place.

Why would it make the quantum gravity simple?

Gravity is just a force between two point-particles of electrons (or protons), and you could directly include it in your QM if your computer’s floating point arithmetic allows you to deal with it.

As an engineer, I wouldn’t bother.

But, basically, that’s the only physics-wise relevance of quantum gravity.

4.3 What is the real relevance of quantum gravity?

The real reason behind the attempts to build a theory of quantum gravity (by following the track of the usual kind of relativistic QM theory) is not based in physics or the nature of reality. The reasons are, say, “social”.

The socially important reason to pursue quantum gravity is that it keeps physicists in employment.

Naturally, once they are employed, they talk. They publish papers. Give interviews to the media.

All this can be fine, so long as you bear in your mind the real reason at all times. A field such as quantum gravity was invented (i.e. not discovered) only in order to keep some physicists out of unemployment. There is no other reason.

Neither Wolchover nor Motl would tell you this part, but it is true.


5. So, what can we finally say regarding singularities?:

Simply this much:

Next time you run into the word “singularity,” think of those small pieces of paper and a plastic comb.

Don’t think of those advanced graphics depicting some interstellar space-ship orbiting around a black-hole, with a lot of gooey stuff going round and round around a half-risen sun or something like that. Don’t think of that.

Singularities are far more commonplace than you’ve been led to think.

Your laptop or cell-phone has on the order of 10^23 singularities, all happily running around mostly within that small volume, and acting together, effectively giving your laptop its shape, its solidity, its form. These singularities are also what gives your laptop the ability to brighten its pixels, and that’s what ultimately allows you to read this post.

Finally, remember the definition of singularity:

A singularity is a distinguished point in an otherwise finite field where the field-strength approaches (positive or negative) infinity.

This is a mathematical characterization. Given that infinities are involved, physics can in principle have no characterization of any singularity. It’s a point which “falls out of”, i.e., is in principle excluded from, the integrated body of knowledge that is physics. Singularity is defined not on the basis of its own positive merits, but by negation of what we know to be true. Physics deals only with that which is true.

It might turn out that there is perhaps nothing interesting to be eventually found at some point of some singularity in some physics theory—classical or quantum. Or, it could also turn out that the physics at some singularity is only very mildly interesting. There is no reason—not yet—to believe that there must be something fascinating going on at every point which is mathematically described by a singularity. Remember: Singularities exist only in the abstract (limiting processes-based) mathematical characterizations, and that these abstractions arise from the known physics of the situation around the so distinguished point.

We do know a fantastically great deal of physics that is implied by the physics theories which do have singularities. But we don’t know the physics at the singularity. We also know that so long as the concept involves infinities, it is not a piece of valid physics. The moment the physics of some kind of singularities is figured out, the field strengths there would be found to be not infinities.

So, what’s singularity? It’s those pieces of paper and the comb.

Even better:

You—your body—itself carries singularities. Approximately 100 \times 10^23 of them, at the least. You don’t have to go looking elsewhere for them. This is an established fact of physics.

Remember that bit.


6. To physics experts:

Yes, there can be a valid theory of non-relativistic quantum mechanics that incorporates gravity too.

It is known that such a theory would obviously give erroneous predictions. However, the point isn’t that. The point is simply this:

Gravity is not basically wedded to, let alone an effect of, electromagnetism. That’s why it simply cannot be an effect of the relativistic reformulations of the multi-scaled, grossed-out view of what actually is the fundamental theory of electromagnetism.

Gravity is basically an effect shown by massive objects.

Inasmuch as electrons have the property of mass, and inasmuch as mass can be thought of as existing at the distinguished point of electrostatic singularities, even a non-relativistic theory of quantum gravity is possible. It would be as simple as adding the Newtonian gravitational potential energy into the Hamiltonian for the non-relativistic quantum mechanics.
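
A sketch of the arithmetic behind this suggestion (standard SI constants; the electron–proton pair of the hydrogen atom taken as the test case): the Newtonian gravitational PE term simply adds to the Coulomb PE term in the Hamiltonian, and the numbers show why no experiment would notice the addition.

```python
# SI constants; the hydrogen atom (electron + proton) as the test case.
k_e = 8.988e9        # Coulomb constant, N m^2 / C^2
G   = 6.674e-11      # gravitational constant, N m^2 / kg^2
e   = 1.602e-19      # elementary charge, C
m_e = 9.109e-31      # electron mass, kg
m_p = 1.673e-27      # proton mass, kg

def potential_energy(r):
    """Coulomb + Newtonian-gravity PE of an electron-proton pair at separation r."""
    coulomb = -k_e * e**2 / r
    gravity = -G * m_e * m_p / r
    return coulomb + gravity

ratio = (G * m_e * m_p) / (k_e * e**2)     # size of the gravity term vs Coulomb
print(f"gravity / Coulomb ~ {ratio:.1e}")  # ~ 4.4e-40, independent of r
```

Since both terms fall off as 1/r, the gravitational correction is a fixed ~40 orders of magnitude below the Coulomb term—well below what double-precision floating point can even resolve in a sum, which is the engineering reason not to bother.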

You are not impressed, I know. Doesn’t matter. My primary concern never was what you think; it always was (and is): what the truth is, and hence, also, what kind of valid conceptual structures there at all can be. This has not always been a concern common to both of us. Which fact does leave a bit of an impression about you in my mind, although it is negative-valued.

Be advised.


A song I like:

(Hindi) ओ मेरे दिल के चैन (“O mere, dil ke chain”)
Singer: Lata Mangeshkar
Music: R. D. Burman
Lyrics: Majrooh Sultanpuri

[

I think I have run the original version by Kishore Kumar here in this section before. This time, it’s time for Lata’s version.

Lata’s version came as a big surprise to me; I “discovered” it only a month ago. I had heard other young girls’ versions on the YouTube, I think. But never Lata’s—even if, I now gather, it’s been around for some two decades by now. Shame on me!

To the n-th order approximation, I can’t tell whether I like Kishore’s version better or Lata’s, where n can, of course, only be a finite number though it already is the case that n > 5.

… BTW, any time in the past (i.e., not just in my youth) I could have very easily betted a very good amount of money that no other singer would ever be able to sing this song. A female singer, in particular, wouldn’t be able to even begin singing this song. I would have been right. When it comes to the other singers, I don’t even complete their, err, renderings. For a popular case in point, take the link provided after this sentence, but don’t bother to return if you stay with it for more than, like, 30 seconds [^].

Earlier, I would’ve expected that even Lata is going to fail at the try.

But after listening to her version, I… I don’t know what to think, any more. Maybe it’s the aforementioned uncertainty which makes all thought cease! And thusly, I now (shamelessly and purely) enjoy Lata’s version, too. Suggestion: If you came back from the above link within 30 seconds, you follow me, too.

]

 

 

Fundamental Chaos; Stable World

Before continuing to bring my QM-related tweets here, I think I need to give you some means (even if a very cartoonish one) of visualizing the kind of physics I have in mind for my approach to QM. But before I am able to do that, I need to introduce (at least some of) you to my way of thinking on these matters. An important consideration in this regard is what I will cover in this post, so that eventually (maybe after one more post or so) we can come back to my tweets on QM proper.


Chaos in fundamental theory vs. stability evident in world:

If the physical reality at its deepest level—i.e. at the quantum mechanical level—shows, following my new approach [^], a nonlinear dynamics, i.e. catastrophic changes that can be characterized as being chaotic, then how come the actual physical world remains so stable? Why is it so boringly familiar-looking a place? … A piece of metal like silver kept in a cupboard stays just that way for years together, even for centuries. It stays the same old familiar object; it doesn’t change into something unrecognizable. How come?

The answer in essence is: Because of the actual magnitudes of the various quantities that are involved in the dynamics, that’s why.

To help understand the answer, it is best to make appeal not directly to quantum-mechanical calculations or even simulations, but, IMHO, to the molecular dynamics simulations.


QM calculations are often impossible and simulations are very hard:

Quantum-mechanical calculations can’t handle the nonlinear QM (of the kind proposed by me), full stop.

For that matter, even if you follow the mainstream QM (as given in textbooks), calculations are impossible for even the simplest possible systems, like just two interacting electrons in an infinite potential box. No wonder even a single helium atom taken in isolation poses a tough challenge for it [^]. With all due respect (because it was as far back as 1929), all that even Hylleraas [^] could manage by way of manual calculations (with a mechanical calculator) was a determination of only the energy of the ground-state of the He atom—not a direct expression for its wave-function, including, of course, the latter’s time-dependence.

In fact, even simulations today have to go through a lot of hoops—the methods to simulate QM are rather indirect. They don’t even aim to get out wavefunctions as the result.

Even the density functional theory (DFT), a computational technique that got its inventors a physics Nobel [^], introduces and deals with only the electron density, not wavefunctions proper—in case you never noticed it. Personally, in my limited searches, I haven’t found anyone giving even an approximate expression for the Helium wavefunction itself. [If someone can point me to the literature, please do; thanks in advance!]

To conclude, quantum-mechanical calculations are so tough that direct simulations are not available for even such simplest wavefunctions as that of the helium atom. And mind you, helium is the second simplest atom in the universe, and it’s just an atom—not a molecule, a nano-structure, or a photo-multiplier tube.


Molecular dynamics as a practically accessible technique:

However, for developing some understanding of how systems are put together and work at the atomic scale, we can make use of the advances made over the past 60–70 years of computational modeling. In particular, we can derive some very useful insights by using the molecular dynamics simulations [^] (MD for short).

It is a fairly well-established fact that MD simulations, although classical in nature, are sufficiently close to ab initio quantum-mechanical calculations as to be practically quite useful. They have proved their utility for at least some “simpler” systems and for certain practical purposes, especially in condensed matter physics (even if not for more ambitious goals like automated drug discovery).

See the place of MD in the various approaches involved in the multi-scale modeling, here. [^]. Note that MD is right next to QM.

So, we can use MD simulations in order to gain insights into our above question, viz. the in-principle nonlinearity at the most basic level of QM vs. the obvious stability of the real world.


Some details of the molecular dynamics technique:

In molecular dynamics, what you essentially have are atomic nuclei, regarded as classical point-particles, which interact with each other via a classical “springy” potential. The potential goes through a minimum, i.e., a point of equilibrium (where the forces are zero).

Imagine two hard steel balls connected via a mechanical spring that has some neutral length. If you compress the balls together, the spring develops forces which try to push the balls apart. If you stretch the balls apart, the spring develops opposite kind of forces which tend to pull the balls back together. Due to their inertia, when the balls are released from an initial position of a stretch/compression, the balls can overshoot the intermediate position of neutrality, which introduces oscillations.

The MD technique is somewhat similar. In the simple balls + mechanical spring system discussed above, the relation of force vs. separation is quite simple: it’s linear, F = -k(x - x_0). In contrast, the inter-nucleus potential used in molecular dynamics is more complicated. It is nonlinear. However, it still has this feature of a potential valley, which implies a neutral position. See the graph of the most often used potential, viz., the Lennard-Jones potential [^].
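
For concreteness, here is the standard 12-6 Lennard-Jones pair in reduced units (\epsilon = \sigma = 1); nothing beyond the textbook formula and its derivative is used. The bottom of the valley—the zero-force, “neutral” separation—sits at r = 2^{1/6} \sigma, roughly 1.12 \sigma.

```python
# The 12-6 Lennard-Jones potential and its force, in reduced units
# (epsilon = sigma = 1). The valley bottom (zero force) sits at r = 2^(1/6).
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """F(r) = -dV/dr = (24*eps/r) * (2*(sigma/r)^12 - (sigma/r)^6); > 0 repels."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r

r_min = 2.0 ** (1.0 / 6.0)      # equilibrium (neutral) separation
print(lj_potential(r_min))      # depth of the valley is -epsilon, i.e. ~ -1.0
print(lj_force(0.9) > 0, lj_force(2.0) < 0)  # repulsion inside, attraction outside
```

The steep r^{-12} wall plays the role of the compressed spring, and the gentler r^{-6} tail plays the role of the stretched one—except that, unlike the spring, the relation is strongly nonlinear.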

In conducting the MD simulation, you begin with a large number of such nuclei, taken as the classical point-particles of definite mass. Although in terms of the original idea, these point-particles represent the nuclei of atoms (with the inter-nuclear potential field playing the role of the \Psi wavefunction), the literature freely uses the term “atoms” for them.

The atoms in an MD simulation are given some initial positions (which do not necessarily lie at the equilibrium separations), and some initial velocities (which are typically random in magnitude and direction, but fall within a well-chosen band of values). The simulation consists of following Newton’s second law: F = ma. Time is discretized, typically in steps of uniform duration. Forces on atoms are calculated from their instantaneous separations. These forces (i.e. accelerations) are integrated over a time-step to update the velocities, which are in turn integrated over the time-step to update the atomic positions. The changed positions imply changed instantaneous forces, which is what makes the technique nonlinear. The entire loop is repeated at every time-step.
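A bare-bones version of that loop, for just two atoms on a line, can be sketched as below. (This is my own illustration in reduced units, using the Lennard-Jones force and a simple semi-implicit Euler integrator; production MD codes use better integrators, such as velocity Verlet.)

```python
import numpy as np

def lj_force(r, epsilon=1.0, sigma=1.0):
    # Nonlinear Lennard-Jones force F(r) = -dV/dr; positive = repulsive.
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 ** 2 - sr6) / r

x = np.array([0.0, 1.5])   # initial positions (equilibrium separation: 2**(1/6) ~ 1.12)
v = np.array([0.0, 0.0])   # initial velocities
dt, mass = 0.005, 1.0      # time-step duration and (unit) atomic mass

for step in range(2000):
    r = x[1] - x[0]
    f = lj_force(r)                  # force on atom 1; atom 0 feels the opposite
    a = np.array([-f, f]) / mass     # F = ma, hence a = F/m
    v += a * dt                      # integrate acceleration over dt -> velocity
    x += v * dt                      # integrate velocity over dt -> position
    # the changed positions change the force at the next step: nonlinearity

# The pair just oscillates around the equilibrium separation.
print(x[1] - x[0])
```

Released from a stretched initial separation, the two atoms simply oscillate about the potential valley, exactly like the balls-and-spring picture, only anharmonically.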

As the moving atoms (nuclei) change their positions, the system of their overall collection changes its configuration.

If the numerical ranges of the forces / accelerations / velocities / displacements are small enough, then even as the nuclei change their positions over the course of a simulation, their overall configuration still remains more or less the same. The initially neighbouring atoms remain in each other’s neighbourhood, even if, individually, they might be jiggling a little here and there around their equilibrium positions. Such a dynamical state in the simulation corresponds to the solid state.

If you arrange for a gradual increase in the velocities (say, by boosting an atom’s momentum when it bumps against the boundaries of the simulation volume, or even by plainly adding a random value drawn from a distribution to the velocities of all the atoms at regular time intervals), then, statistically, it is the same as increasing the temperature of the system.
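The random-kick version of this trick can be sketched in a few lines (again my own illustration; the kick size and kB = 1 are arbitrary reduced-unit choices). The “temperature” read off from the velocities is just the equipartition estimate:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def temperature(v, mass=1.0, kB=1.0):
    """Kinetic temperature via equipartition (1D): <m v^2>/2 = kB*T/2."""
    return mass * np.mean(v ** 2) / kB

def heat(v, kick=0.05):
    """Crude 'heating': add a small random velocity kick to every atom."""
    return v + rng.normal(0.0, kick, size=v.shape)

v = np.zeros(1000)          # 1000 atoms, initially at rest (T = 0)
for _ in range(20):
    v = heat(v)             # repeated kicks raise the kinetic temperature
print(temperature(v))       # noticeably above zero now
```

Each round of kicks adds variance to the velocity distribution, and since the kinetic temperature is proportional to the mean squared velocity, the system heats up step by step.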

When the operating velocities become large enough (i.e. when the “temperature” becomes high enough), the configuration becomes such that the moving atoms can now slip past their previous neighbours, and form a new neighbourhood around a new set of atoms. However, their velocities are still small enough that their overall assembly does not “explode;” the assembly continues to occupy roughly the same volume, though it may change its shape. Such a dynamic state corresponds to the liquid state.

Upon further increasing the temperature (i.e. the velocities), the atoms begin to dash around with such high speeds that they can overcome the pull of their neighbouring atoms, and begin to escape into the empty space outside the assemblage. The assembly taken as a whole ceases to occupy a definite sub-region inside the simulation chamber; instead, the atoms now shoot across the entire chamber. This, of course, is nothing but the gaseous state. Impermeable boundaries have to be assumed to keep the atoms inside the finite region of the simulation. (Actually, a similar assumption is needed even for the liquid state.) The motion of the atoms in the gaseous phase looks quite chaotic, even though, in a statistical sense, certain macro-level properties like pressure are maintained lawfully. The total energy, in particular, stays constant for an isolated system (within the numerical errors) even in the gaseous state.

While there are tons of resources on the ‘net for MD, here is one particularly simple but accurate enough Python code, with good explanations [^]. I especially liked it because, unlike so many other “basic” MD codes, this one even shows the trick of shifting the potential so as to cut it off at a finite radius in a smooth manner. Many introductory MD codes do it rather crudely, by directly truncating the potential (thereby leaving a vertical-jump discontinuity in the potential field). [In theory, the potential has an infinite range; it decays to zero only asymptotically. In practice, you need to cut it off just to keep the computational costs down.]
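For reference, the trick in question, i.e., truncating the potential at a cutoff radius and shifting it by its value there so that no vertical jump is left behind, looks something like this (a sketch of my own in reduced units; the cutoff of 2.5 sigma is just the conventional choice):

```python
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Plain Lennard-Jones potential V(r) = 4*eps*((s/r)**12 - (s/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def lj_truncated_shifted(r, r_cut=2.5):
    """Truncated *and shifted* potential: beyond r_cut the potential is
    exactly zero, and subtracting V(r_cut) inside the cutoff removes the
    vertical-jump discontinuity that a plain truncation would leave."""
    if r >= r_cut:
        return 0.0
    return lj_potential(r) - lj_potential(r_cut)

# No jump at the cutoff:
print(lj_truncated_shifted(2.5 - 1e-9))  # ~0.0
print(lj_truncated_shifted(3.0))         # 0.0
```

The shift removes the jump in the potential itself; still smoother schemes also shift the force so that it, too, goes to zero continuously at the cutoff.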

Here is a short video showing an MD simulation of melting of ice (and I guess, also of evaporation) [^].  It has taken into account the dipole nature of the water molecules too.

The entire sequence can of course be reversed. You can always simulate a gas-to-liquid transition, and further, you can also simulate solidification. Here is a video showing the reverse phase-change for water: [^]


The points to gather from the MD simulations, for our purposes:

MD simulations of even gases retain a certain orderliness. Despite all the random-looking motions, they still are completely lawful, and the laws are completely deterministic. The liquid state certainly shows a much better degree of orderliness as compared to the gaseous state. The solid state shows some remnants of the random-looking motion, but these motions are now very highly localized, and so the assemblage as a whole looks very orderly and stable. Not only does it preserve its volume, it wouldn’t even flow if you were to program a tilting of the simulation chamber into the simulation.

Now the one big point I want you to note is this.

Even in the very stable, orderly looking simulation of the solid state, the equations governing the dynamics of all the individual atoms still are nonlinear [^], i.e., still capable of chaos. It is these chaos-capable equations which produce the very orderly solid state.

MD simulations would not be capable of simulating phase transitions (from solid to liquid to gas etc.) using just identical balls interacting via simple pair-wise interactions, if the basic equations didn’t have a nonlinearity built into them. [I made a tweet to this effect last month, on 02 August 2019.]

So, even the very stable-looking solid state is maintained by the assemblage only by following the same nonlinear dynamical laws, the very laws which allow phase transitions to occur and which produce the evident randomness of the gaseous state.

It’s just that, when parameters like the velocity and the acceleration (the latter determined by the potential) fall into certain ranges of small enough values, then even though the governing equation remains nonlinear, the dynamical state automatically gets confined to the regime of the highly orderly and stable solid state.

So, the lesson is:

Even if the dynamical nonlinearity in the governing laws does imply instabilities in principle, what matters is the magnitudes of the parameters (here, prominently, the velocities, i.e. the “temperature” of the simulation). The operative numerical magnitudes of the parameters directly determine the regime of the dynamical state, and that regime can correspond to a very highly ordered state too.

Ditto, for the actual world.

In fact, something stronger can be said: If the MD equations were not nonlinear, if they were not capable of chaos, then they would fail to reproduce not only phase transitions (like solid to liquid, etc.), but also such utterly orderly behaviour as the constancy of the temperature during those phase transitions. Even stronger: The numerical values of the parameters don’t have to be exactly equal to some critical values. Even if the parameter values vary a lot, so long as they fall into a certain range, the solution regime (the qualitative behaviour of the solution) remains unchanged, i.e., stable!

Ditto, for the quantum mechanical simulations using my new approach. (Though I haven’t yet done a simulation, the equations show a similar kind of nonlinearity.)

In my approach, quantum mechanical instability is ever present in each part of the physical world. However, the universe that we live in simply exists (“has been put together”) in such a way that the numerical values of the parameters actually operative are such that the real world shows the same feature of stability, as in the MD simulations.


The example of a metal piece left alone for a while:

If you polish and buff, or chemically (or ultrasonically) clean, a piece of metal to a shining bright state, and then leave it alone for a while, it turns dull over a period of time. That’s because of corrosion.

Corrosion is, chemically, a process of oxidation. Oxygen atoms from the air react with those pure metal atoms which are exposed at a freshly polished surface. This reaction is, ultimately, completely governed by the laws of quantum mechanics, which, in my approach, carry a nonlinearity of a specific kind. Certain numerical parameters control the speed with which the quantum-mechanical rearrangements of the wavefunction governing the oxygen and metal atoms proceed.

The world we live in happens to carry such values for these parameters that corrosion turns out to be a slow process. Also, it turns out to be a process that mostly affects only the surface of a metal (in the solid state). The dynamical equations of quantum mechanics, even if nonlinear, are such that corrosion cannot penetrate very deep inside the metal: the values of the governing parameters are such that oxygen atoms cannot diffuse into the interior regions all that easily. If the metal were in a liquid state, it would be altogether a different matter; again, just another range of the nonlinear parameters, that’s all.

So, even if it’s a nonlinear (“chaotic” or even “randomness-possessing”) quantum-mechanical evolution, the solid metal piece does not corrode all the way through.

That’s why, you can always polish your family silver, or the copper utensils used in “poojaa” (worship), and use them again and again.

The world is a remarkably stable place to be in.


Concluding…:

So what’s the ultimate physics lesson we can draw from this story?

It’s this:

In the end, the qualitative nature of the solutions to physical problems is not determined solely by the kind of mathematical laws which govern a phenomenon. The nature of the constituents of a system (what kinds of objects there are in it), the particulars of their configurations, and the numerical ranges of the parameters operative during their interactions also matter equally well, if not more!

Don’t fall for those philosophers, sociologists, and humanities folks in general (and even some folks employed as “scientists”) who snatch some bit of nonlinear dynamics or chaos theory out of its proper context and begin to spin a truly chaotic yarn of utter gibberish. They simply don’t know any better. That’s another lesson to be had from this story. There is a huge difference between the cultural overtones associated with the word “chaos” and the meaning the same term has in absolutely rigorous studies in physics. Hope I will never have to remind you of this one point.


A song I like:

(Marathi) “shraavaNaata ghana niLaa barasalaa…”
Lyrics: Mangesh Padgaonkar
Music: Shreenivas Khale
Singer: Lata Mangeshkar
[Credits listed, happily, in a random order.]