“Simulating quantum ‘time travel’ disproves butterfly effect in quantum realm”—not!

A Special note for the Potential Employers from the Data Science field:

Recently, in April 2020, I achieved World Rank #5 on the MNIST problem. The initial announcement can be found here [^], and a further status update, here [^].

All my data science-related posts can always be found here [^].

This post is based on a series of tweets I made today. The original Twitter thread is here [^]. I have made quite a few changes while posting the same thoughts here. Further, I am also noting some addenda here (which were not there in the original thread).

Anyway, here we go!

1. The butterfly effect and QM: a new paper that (somehow) caught my fancy:

1.1. Why this news item interested me in the first place:

Nonlinearity in the wavefunction \Psi, as proposed by me, forms the crucial ingredient in my new approach to solving the QM measurement problem. So, when I spotted this news item [^] today, it engaged my attention immediately.

The opening line of the news item says:

Using a quantum computer to simulate time travel, researchers have demonstrated that, in the quantum realm, there is no “butterfly effect.”

[Emphasis in bold added by me.]

The press release by LANL itself is much better worded (PDF) [^]. In the meanwhile, I also tried to go through the arXiv version of the paper, here [^].

I don’t think I understand the paper in its entirety. (QC and all is not a topic of my main interests.) However, I do think that the following analogy applies:

1.2. A (way simpler) analogy to understand the situation described in the paper:

The whole thing has to do with your passport-size photo, called “P”.

Alice begins with “P”, which is given in the PNG/BMP format. [Should the usage be the Alice? I do tend to think so! Anyway…]

She first applies a 2D FFT to it, and saves the result, called “FFT-P”, in a folder called “QC” on her PC. Aside: FFT’ed photos look like dots that show a “+”-like visual structure. Note, Alice saves both the real and the imaginary parts of the FFT-ed image. This assumption is important.

She then applies a further sequence of linear, lossless, image transformations to “FFT-P”. Let’s call this ordered set of transformations “T”. Note, “T” is applied to “FFT-P”, not to “P” itself.

As a result of applying the “T” transformations, she obtains an image which she saves to a file called “SCR-FFT-P”. This image totally looks like random dots to the rest of us, because the “T” transformations are such that they scramble whatever image is fed to them. Hence the prefix “SCR”, short for “scrambled”, in the file-name.

But Alice knows! She can always apply the same sequence of transformations, but in the reverse direction. Let’s call this reverse transformation “T-inverse”.

Each step of “T” is reversible—that’s what a “lossless” transformation means! (In contrast, algorithms like “high-pass” or “low-pass” filtering, or operators like the gradient or the Laplacian, are linear but not lossless.)

Since “T” is reversible, starting with “SCR-FFT-P”, Alice can always apply “T-inverse”, and get back to the original 2D FFT representation, i.e., to “FFT-P”.

All this is the normal processing—whether in the forward direction or in the reverse direction.
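If you want to see this round trip concretely, here is a minimal NumPy sketch of my analogy (mine, of course, not the paper’s; I take “T” to be a fixed random permutation of the pixels, which is a linear and lossless scrambling):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64
p = rng.random((n, n))                # the photo "P" (pixel values as floats)

fft_p = np.fft.fft2(p)                # "FFT-P": both real and imaginary parts kept

# "T": a lossless, reversible scrambling -- here, a fixed random pixel permutation
perm = rng.permutation(n * n)
scr_fft_p = fft_p.ravel()[perm]       # "SCR-FFT-P": looks like random dots

# "T-inverse": undo the permutation, then undo the FFT
recovered_fft = np.empty_like(scr_fft_p)
recovered_fft[perm] = scr_fft_p       # back to "FFT-P", exactly
recovered_p = np.fft.ifft2(recovered_fft.reshape(n, n)).real

assert np.allclose(recovered_p, p)    # nothing was lost in the round trip
```

Since every step is linear and lossless, the recovered image agrees with “P” down to the floating-point noise.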

1.3. Enter [the [?]] Bob:

As is customary in the literature on the QC/entanglement, Bob enters the scene now! Alice and Bob work together.

Bob hates you. That’s because he believes in every claim made about QC, but you don’t. That’s why he experiences an irrepressible inner desire to do some damage to your photograph during its processing.

So, to, err…, “express” himself, Bob comes early to the office, gains access to Alice’s “QC” folder, and, completely unknown to her, modifies a single pixel of the “FFT-P” image stored there, and even saves it. Remember, this is the FFT-ed version of your original photo “P”.

Let’s call the tampered version: “B-FFT-P”. On the hard-disk, it still carries the name “FFT-P”. But its contents are modified, and so, we need another name to denote this change of the state of the image.

1.4. What happens during Alice’s further processing?

Alice comes to the office a bit later, and soon begins her planned work for the day, which consists of applying the “T” transformation to the “FFT-P” image. But since the image has been tampered with by Bob, what she ends up manipulating is actually the “B-FFT-P” image. As a result of applying the (reversible) scrambling operations of “T”, she obtains a new image, and saves it to the hard-disk as “SCR-B-FFT-P”.

But something is odd, she feels. So, just to be sure, she decides to check that everything is OK, before going further.

So, she applies the “T-inverse” operation to the “SCR-B-FFT-P” file, and obtains the “B-FFT-P” image back, which she saves to a file named “Recovered FFT-P”. Observe, contents-wise, it is exactly the same as “B-FFT-P”, though Alice still believes it is identical to “FFT-P”.

Now, on a spur of the moment, she decides also to apply the reverse-FFT operation to “Recovered FFT-P”, i.e., to the Bob-tampered version of the FFT-ed version of your original photo. She saves the fully reversed image as “Recovered P”.

Just to be sure, she then runs some command that does a binary bit-wise comparison between “Recovered P” and the original “P”.

We know that they are not the same. Alice discovers this fact, but only at this point of time.

1.5. The question that the paper looks into:

If I understand it right, what the paper now ponders over is this question:

How big or small is the difference between the two images: “Recovered P” and the original “P”?

The expected answer, of course, is:

Very little.

The reason to keep such an expectation is this: FFT distributes the original information of any locality over the entire domain in the FFT-ed image. Hence, during reverse processing, each single pixel in the FFT-ed image maps back to all the pixels in the original image. [Think “holographic” whatever.] Therefore, tampering with just one pixel of the FFT-ed representation does not have too much of an effect on the recovered image.

Hence, Alice is going to recover most of the look and feel of your utterly lovely, Official, passport-size visage! That is what is going to happen even though, in reality, she starts only from the scrambled tampered state “SCR-B-FFT-P”, and not from the scrambled un-tampered state “SCR-FFT-P”. You would still be very much recognizable.

In fact, due to the way FFT works, the difference between the original photo and the recovered photo goes on reducing as the sheer pixel size of the original image goes on increasing. That’s because, regardless of the image size, Bob always tampers only one pixel at a time. So, the percentage tampering goes on reducing with an increase in the resolution of the original image.
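This expectation is easy to check numerically. Here is a small sketch (again my own illustration, not anything from the paper); Bob’s tampering is modelled as adding a fixed complex amount to a single non-DC FFT coefficient:

```python
import numpy as np

def tamper_error(n, delta=1000.0, seed=42):
    """Relative change in the recovered image after a single
    (non-DC) FFT coefficient is perturbed by a fixed amount."""
    rng = np.random.default_rng(seed)
    p = rng.random((n, n))            # the original photo "P"
    b_fft_p = np.fft.fft2(p)          # "FFT-P" ...
    b_fft_p[1, 1] += delta            # ... made into "B-FFT-P" by Bob

    # Any linear, lossless "T" followed by "T-inverse" cancels out exactly,
    # so we can go straight to the inverse FFT, i.e. to "Recovered P".
    recovered_p = np.fft.ifft2(b_fft_p).real
    return np.linalg.norm(recovered_p - p) / np.linalg.norm(p)

err_64, err_256 = tamper_error(64), tamper_error(256)
print(err_64, err_256)                # the error shrinks as the image grows
assert err_256 < err_64
```

With the inverse FFT spreading the single-coefficient damage over all the pixels, the relative error comes out small, and it falls further as the image resolution goes up, exactly as argued above.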

1.6. The conclusion that the paper draws from the above:

Let’s collect the indisputable facts together:

  • There is very little difference between the recovered image, and the original image.
  • Whatever the difference, it goes on reducing as the size of the original image increases.

The paper now says, IMO quite properly, that Bob’s tampering of the single pixel is analogous to his making a QM measurement, and thereby causing a permanent change to the concerned (“central”) qubit.

But then, the paper draws the following conclusion:

The Butterfly Effect does not apply to QM as such; it applies only to classical mechanics.

Actually, the paper is a bit more technical than that. In fact, I didn’t go through it fully because even if I were to, I wouldn’t understand all of it. QC is not a topic of my primary research interests, and I have never studied it systematically.

But still, yes, I do think that the above is the sort of logic on which the paper relies, to draw the conclusion which it draws.

2. My take on the paper:

2.1. It’s an over-statement:

Based on what I know, and on my first take above, I do think that:

The paper makes an over-statement. The press release then highlights this “over” part. Finally, the news item fully blows up the same, “over” part.

Why do I think so? Here is my analysis:

If the Butterfly Effect produced due to nonlinearity is fully confined to making an irreversible (or at least exponentially divergent) change to just a single pixel in the FFT representation of the original image (or even, in an alternative analogy, if what Bob tampers with is the original photograph itself, while each subsequent processing step involves only an FFT-ed version), then any and all further steps of linear and reversible transformations wouldn’t magnify the said tampering.

Why not?

Because all the further steps are prescribed to be linear (and in fact even reversible), that’s why!

In other words, what the paper says boils down to a redundancy (or, a re-statement of the same facts):

A linear and reversible transformation is emphatically not a nonlinear and exponentially divergent one (as in the butterfly effect).

That’s what the whole point of the paper seems to be!

2.2. The actual processing described in the paper does not at all involve the butterfly effect:

Realize, the only place the butterfly effect can at all occur during the entire processing is as a mechanism by which Bob might tamper with that single pixel.

Now, of course, the paper doesn’t say so. The paper only says that there is a tampering of a qubit via a measurement effected on it (with all other qubits, constituting “the bath” being left alone).

But, yes, I have proposed this idea that the measurement process itself progresses, within the detector, via the butterfly effect. I identified it as such in my Outline document posted at iMechanica, here (PDF) [^].

Of course, I stand ready to be corrected, if I am wrong anywhere in the fundamentals of my analysis.

2.3. I didn’t say anything about the “time-travel” part:

That’s right. The reason is: there is no real time-travel here anyway!

Hmmm… Explaining why would unnecessarily consume my time. … Forget it! Just remember: There is no time-travel here, not even a time-reversal, for that matter. In the first half of the processing by Alice (and maybe with tampering by Bob), each step occurs some finite time after the completion of the previous step. In the second half of the processing, again, each step of the inverse-processing occurs some finite time after the completion of the previous step. What reverses is the sequence of operators, not time. Time always flows steadily in the forward direction.

Enough said.

2.4. Does my critique reflect on the paper taken as a whole?

I did manage to avoid Betteridge’s law [^] thus far, but can’t, any more!

The answer seems to be: “no”, or at least: “I didn’t mean that”.

The thing is this: This is a paper from the field of Quantum Computing/Quantum Information Science—which is not at all a field of my knowledge (let alone expertise). The paper reports on a simulation the authors conducted. I am unable to tell how valuable this particular simulation is in the overall framework of QInfoScience.

However, as a computational modelling and simulation engineer myself, I can tell this much: Sometimes, even a simple-looking (stupid-looking!) simulation is actually implemented merely in order to probe some aspect that no one else has thought of. The simulation is not an end in itself, but merely a step in furthering research. The idea is to explore a niche and to find / highlight some gap in knowledge. In topics that are quite complicated, the isolation of one aspect at a time, afforded by a simulation, can be of great help.

(I can cite an example of a very simple-looking simulation, actually a stupid-looking one, from my own PhD-time research: I had a conference paper on simulating a potential field using random walk and comparing its results with a self-implemented FEM solver. The rather colourful Gravatar icon which you see (the one which appears in the browser bar when you view my posts here) was actually one of the results I had reported in this preliminary exploration of what eventually became my PhD research.)

Coming back to this paper, it’s not just possible but quite likely that the authors are reporting something that has implications for much more “heavy-duty” topics, say topics like quantum error correction—where and when it is necessary, the minimum efficiency it must possess, in what kind of architecture/processing, and whatnot. I can’t tell, but this is the nature of simulations. Sometimes they look simple, but their implications can be quite profound. I am in no position to judge the merits of this paper from this viewpoint.

At the same time, I also think that probing this idea of measuring just one qubit and tracing its effects on the nearby “bath” of qubits can have good merits. (I vaguely recall the discussions, some time ago, of “pointer states” and all that.)

Yet, of course, I do have a critical comment to make regarding this paper. But my comment is entirely limited to what the paper says regarding the foundational aspects of QM and the relevance of chaos / nonlinear science in QM. With the kind of nonlinearity in \Psi which I have proposed [^], I can clearly see that you can’t say that just because the mainstream QM theory is linear, therefore everything about quantum phenomena has to be linear. No, this is an unwarranted assumption. It was from this viewpoint that I thought that the implication concerning the foundational aspects was not acceptable. That’s why (and how) I wrote the tweets-series and this post.

All in all, my critique is limited to saying that a nonlinearity in \Psi, and hence the butterfly effect, is not only possible in QM, but is crucial in addressing the measurement problem right. I don’t have any other critique to offer regarding any of the other aspects of the reported work.

Hope this clarifies.

And, to repeat: Of course, I stand ready to be corrected, if I have gone wrong anywhere in the fundamentals of my analysis regarding the foundational issues too.

3. An update on my own research:

3.1. My recent tweets:

Recently (on 23 July 2020), I also tweeted a series [^] regarding the ongoing progress in my new approach. Let me copy-paste the tweets (not because the wording is great, but because I have to finish writing this post, somehow!). I have deleted the tweet-continuation numbers, but otherwise kept the content as is:

Regarding my new approach to QM. I think I still have a lot of work to do. Roughly, these steps:

1. Satisfy myself that in simplest 1D toy systems (PIB, QHO), x-axis motion of the particle (charge-singularity) occurs as it should, i.e., such that operators for momentum, position, & energy have *some* direct physical meaning.

2. Using these ideas, model the H atom in a box with PBCs (i.e., an infinite lattice of repeating finite volumes/cells), and show that the energy of electron obtained in new approach is identical to that in the std. QM (which uses reduced mass, nucleus-relative x, no explicit particle positions, only electron’s energy).

3. Possibly, some other work/model.

4. Repeat 2. for modelling two *interacting* electrons (or the He atom) in a box with PBCs.

Turned out that I got stuck for the past one month+ right at step no. 1!

However, looks like I might have finally succeeded in putting things together right—with one e- in a box, at least.

In the process, found some errors in my post from the ontologies series, part 10.

Will post the corrections at my blog a bit later.

Tentatively, have decided to try and wrap up everything within 4–6 weeks.

So, either I report success with my new approach by, say, the 1st week of September or so, or I give up (i.e. stop work on QM for at least a few months).

But yes, there does seem to be something to it—to the modelling ideas I tried recently. Worth pursuing for a few weeks at least.

… At least I get energy and probability right, which in a way means also position. But I am not fully happy with momentum, even though I get the right numerical values for it, and so, the thinking required is rather on the conceptual–physical side … There *are* “small” issues like these.

But yes,

(1) I’m happy to have spotted definite errors in my own previous documentation—the Ontologies series, part 10, as also the Outline doc (PDF here [^]),
(2) I’m happy to have made definite progress in the modelling with the new approach.

Bottomline: I don’t have to give up my new approach. Not right away. Have to work on it for another month at least.

3.2. Some comments on the tweets:

I need to highlight the fact that I have spotted some definite errors, both in the Ontologies series (part 10), and in the Outline document.

In particular:

3.2.1. In the Ontologies series, part 10, I had put forth an argument that it’s a complex-valued energy that gets conserved. I am not so sure of that any more, and am going over the whole presentation once again. (The part covering the PIB modelling from that post is more or less OK.)

3.2.2. In the Outline document, I had said:

“The measurement process is nondestructive of the state of the System. It produces catastrophic changes only in the Instrument”

I now think that this description is partly wrong. Yes, a measurement has to produce catastrophic changes in the Instrument. But now, the view I am developing amounts to saying that the state of the System also undergoes a permanent change during measurement, though such a change is only partial.

3.3. Status of my research:

I am working through the necessary revision of all such points. I am also working through simulations and all. I hope to have another document and a small set of simulations (as spelt out in the immediately preceding Twitter thread) soon. The document would still be preliminary, but it is going to be more detailed.

In particular, I would be covering the topic of the differences between the tensor product states (e.g. two non-interacting electrons) in a box vs. the entangled states of two electrons. Alternatively, maybe, treating the proton as a quantum object (having its own wavefunction), and thus simulating only the hydrogen atom, but with my new approach. Realize, when you treat the proton quantum mechanically, allowing its singularity to move, it becomes a two-particle system.

So, a two-particle system is the minimum required for validation of my new approach.

For convenience of simulation, i.e. especially to counter the difficulties introduced by the boundaries (which come up due to the discretization of space via the FDM mesh), I am going to put the interacting pair of particles inside a box, but with periodic boundary conditions (PBCs for short). Ummm… This term, “PBCs”, has been used in two different senses: 1. To denote a single, representative, finite-sized unit cell from an infinitely repeated lattice of such cells, and 2. Ditto, but with the further physical imagination that this constitutes a “particle on a ring”, so that computations of the orbital angular momentum too must enter the simulation. Here, I am going to stick to PBCs in sense 1., not in sense 2.
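By the way, sense 1. boils down to just two small operations on the coordinates, which is all that my usage of the term implies here. A minimal 1D sketch (illustrative only; the function names and the cell size L are mine):

```python
L = 10.0   # side of the repeating unit cell (an illustrative value)

def wrap(x):
    """Map a position back into the unit cell [0, L): a particle exiting
    through one face re-enters through the opposite face."""
    return x % L

def min_image(dx):
    """Minimum-image convention: measure a separation to the nearest
    periodic copy of the other particle."""
    return dx - L * round(dx / L)

print(wrap(10.3))            # just over 0.3: back inside the cell
print(min_image(9.5 - 0.5))  # -1.0, not 9.0: the two particles are near neighbours
```

Note that nothing here mentions a ring or an angular momentum; that is the whole difference between sense 1. and sense 2.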

I hope to have something within a few weeks. Maybe 3–4 weeks, perhaps as early as 2–3 weeks. The trouble is, as I implement some simulation, some new conceptual aspects crop up, and by the time I finish ironing out the wrinkles in the conceptual framework, the current implementation turns out to be not very suitable to accommodate further changes, and so, I have to implement much of the whole thing afresh. Re-implementation is not a problem, at least not a very taxing one (though it can get tiring). The real problems are the conceptual ones.

For instance, it’s only recently that I’ve realized that there is actually a parallel in my approach to Feynman’s idea of an electron “smelling” its neighbourhood. In Feynman’s version, the electron not only “smells” but also “runs everywhere” at the same time, with the associated “amplitudes” cancelling out / reinforcing at various places. So, he had a picture of the electron that is not a localized particle and yet smells only a local neighbourhood at each point in the domain. He could not remove this contradiction.

I thought that I had fully removed such contradictions, only to realize, at this (relatively “late”) stage, that while “my electron” is a point-particle (in the sense that the singularity in the potential energy field is localized at a point), it still retains the sense of “smell”. The difference being, now it can smell the entire universe (instantaneous action at a distance, i.e. IAD). I knew that so long as I use the Fourier theory, the IAD would be there. But it was part-surprise and part-delight for me to notice that even “my” electron must have such a “nose”.

Another thing I learnt was that even though I am addressing only the spinless electron, it looks like my framework very easily and naturally incorporates the spin too, at least so long as I remain in 1D. I had just realized this, and soon (within days) came Dr. Woit’s post “What is “spin”?” [^]. I don’t understand it fully, but now that I see this way of putting things, that’s another detour for me.

All in all, working out the conceptual aspects is taking time. Further, simply due to the rich inter-connections among the concepts, I am afraid that even if I publish a document, it’s not going to be “complete”, in the sense that I wouldn’t be able to insert into it everything that I have understood by now. So, I am aiming to simply put out something new, rather than something comprehensive. (I am not even thinking of having anything “well polished” for months, even a year or so!)

Alright, so there. Maybe I won’t be blogging for a couple of weeks. But hopefully, I will have something to put out within a month’s time or so…

In the meanwhile, take care, and bye for now…

A song I like:

(Hindi) जादूगर तेरे नैना, दिल जायेगा बच के कहाँ (“jaadugar tere nainaa, dil jaayegaa…”)
Singers: Kishore Kumar, Lata Mangeshkar
Music: Laxmikant-Pyarelal
Lyrics: Rajinder Krishen

[Another song from my high-school days that somehow got thrown up during the recent lockdowns. … When there’s a lockdown in Pune, the streets (and the traffic) look (and “hear”) more like the small towns of my childhood. May be that’s why!]

[Some very minor editing may be effected, but I really don’t have much time—rather, any enthusiasm—for it! So, drop a line if you find something confusing… Take care and bye for now…]


Fundamental Chaos; Stable World

Before continuing to bring my QM-related tweets here, I think I need to give some way (even if very cartoonish) to help visualize the kind of physics I have in mind for my approach to QM. But before I am able to do that, I need to introduce (at least some of) you to my way of thinking on these matters. An important consideration in this regard is what I will cover in this post, so that eventually (may be after one more post or so) we could come back to my tweets on QM proper.

Chaos in fundamental theory vs. stability evident in world:

If the physical reality at its deepest level—i.e. at the quantum mechanical level—shows, following my new approach [^], a nonlinear dynamics, i.e. catastrophic changes that can be characterized as being chaotic, then how come the actual physical world remains so stable? Why is it so boringly familiar-looking a place? … A piece of metal like silver kept in a cupboard stays just that way for years together, even for centuries. It stays the same old familiar object; it doesn’t change into something unrecognizable. How come?

The answer in essence is: Because of the actual magnitudes of the various quantities that are involved in the dynamics, that’s why.

To help understand the answer, it is best to make an appeal not directly to quantum-mechanical calculations or even simulations, but, IMHO, to molecular dynamics simulations.

QM calculations are often impossible and simulations are very hard:

Quantum-mechanical calculations can’t handle the nonlinear QM (of the kind proposed by me), full stop.

For that matter, even if you follow the mainstream QM (as given in textbooks), calculations are impossible for even the simplest possible systems, like just two interacting electrons in an infinite potential box. No wonder even a single helium atom taken in isolation poses a tough challenge for it [^]. With all due respect (because it was as far back as 1929), all that even Hylleraas [^] could manage, by way of manual calculations (with a mechanical calculator), was a determination of the ground-state energy of the He atom, not an exact closed-form expression for its wavefunction—including, of course, the latter’s time-dependence.

In fact, even simulations today have to jump through a lot of hoops—the methods to simulate QM are rather indirect. They don’t even aim to get wavefunctions out as the result.

Even the density functional theory (DFT), a computational technique that got its inventors a physics Nobel [^], introduces and deals with only the electron density, not wavefunctions proper—in case you never noticed it. Personally, in my limited searches, I haven’t found anyone giving even an approximate expression for the Helium wavefunction itself. [If someone can point me to the literature, please do; thanks in advance!]

To conclude, quantum-mechanical calculations are so tough that direct simulations are not available for even such simplest wavefunctions as that of the helium atom. And mind you, helium is the second simplest atom in the universe, and it’s just an atom—not a molecule, a nano-structure, or a photo-multiplier tube.

Molecular dynamics as a practically accessible technique:

However, for developing some understanding of how systems are put together and work at the atomic scale, we can make use of the advances made over the past 60–70 years of computational modeling. In particular, we can derive some very useful insights by using the molecular dynamics simulations [^] (MD for short).

It is a fairly well-established fact that MD simulations, although classical in nature, are sufficiently close to ab initio quantum-mechanical calculations as to be practically quite useful. They have proved their utility for at least some “simpler” systems and for certain practical purposes, especially in condensed matter physics (even if not for more ambitious goals like automated drug discovery).

See the place of MD in the various approaches involved in the multi-scale modeling, here. [^]. Note that MD is right next to QM.

So, we can use MD simulations in order to gain insights into our above question, viz. the in-principle nonlinearity at the most basic level of QM vs. the obvious stability of the real world.

Some details of the molecular dynamics technique:

In molecular dynamics, what you essentially have are atomic nuclei, regarded as classical point-particles, which interact with each other via a classical “springy” potential. The potential goes through a minimum, i.e., a point of equilibrium (where the forces are zero).

Imagine two hard steel balls connected via a mechanical spring that has some neutral length. If you compress the balls together, the spring develops forces which try to push the balls apart. If you stretch the balls apart, the spring develops opposite kind of forces which tend to pull the balls back together. Due to their inertia, when the balls are released from an initial position of a stretch/compression, the balls can overshoot the intermediate position of neutrality, which introduces oscillations.

The MD technique is somewhat similar. In the simple balls + mechanical spring system discussed above, the relation of force vs. separation is quite simple: it’s linear, F = -k(x - x_0). In contrast, the inter-nucleus potential used in molecular dynamics is more complicated. It is nonlinear. However, it still has the feature of a potential valley, which implies a neutral position. See the graph of the most often used potential, viz., the Lennard-Jones potential [^].
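To make the contrast concrete, here is a small sketch comparing the two force laws (in reduced units, with illustrative values for the constants; the function names are mine):

```python
k, x0 = 1.0, 1.0            # spring constant and neutral length (illustrative)
sigma, eps = 1.0, 1.0       # Lennard-Jones parameters, reduced units

def spring_force(x):
    """Linear law: the force is directly proportional to the stretch."""
    return -k * (x - x0)

def lj_force(r):
    """Nonlinear law: F(r) = -dV/dr for V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    return 24.0 * eps * (2.0 * (sigma / r)**13 - (sigma / r)**7) / sigma

# Both laws have a neutral point where the force vanishes:
r_eq = 2.0**(1.0 / 6.0) * sigma      # the bottom of the LJ potential valley
print(abs(lj_force(r_eq)))           # essentially zero

# But only the linear law doubles its force when you double the stretch:
print(spring_force(1.2) / spring_force(1.1))        # 2.0 (up to float error)
print(lj_force(r_eq + 0.2) / lj_force(r_eq + 0.1))  # nowhere near 2.0
```

That failure of proportionality is all that “nonlinear” means here; everything else about the valley (a neutral point, restoring forces on both sides) carries over from the spring picture.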

In conducting the MD simulation, you begin with a large number of such nuclei, taken as the classical point-particles of definite mass. Although in terms of the original idea, these point-particles represent the nuclei of atoms (with the inter-nuclear potential field playing the role of the \Psi wavefunction), the literature freely uses the term “atoms” for them.

The atoms in an MD simulation are given some initial positions (which do not necessarily lie at the equilibrium separations), and some initial velocities (which are typically random in magnitude and direction, but fall into a well-chosen band of values). The simulation consists of following Newton’s second law: F = ma. Time is discretized, typically in steps of uniform duration. Forces on the atoms are calculated from their instantaneous separations. These forces (accelerations) are integrated over a time-step to obtain velocities, which are then integrated once again to obtain the changes in the atomic positions over the time-step. The changed positions imply changed instantaneous forces, and this is what makes the technique nonlinear. The entire loop is repeated at each time-step.
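The loop described above can be sketched in a toy 1D code. The following is only my own minimal illustration (velocity-Verlet integration, Lennard-Jones pair forces, reduced units, unit masses), not any production MD code:

```python
import numpy as np

def forces(x):
    """Total Lennard-Jones force on each atom from all pairs (1D, reduced units)."""
    f = np.zeros(len(x))
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            dx = x[i] - x[j]
            r = abs(dx)
            fij = 24.0 * (2.0 * r**-13 - r**-7) * np.sign(dx)  # force on atom i
            f[i] += fij
            f[j] -= fij              # Newton's third law
    return f

# A short 1D "crystal": atoms placed near the LJ equilibrium spacing,
# given small random velocities (i.e., a low "temperature").
rng = np.random.default_rng(0)
n_atoms, dt, n_steps = 8, 0.002, 2000
x = np.arange(n_atoms) * 2.0**(1.0 / 6.0)
v = 0.01 * rng.standard_normal(n_atoms)
x_init = x.copy()

f = forces(x)
for _ in range(n_steps):             # velocity-Verlet time-stepping
    v += 0.5 * dt * f                # half kick from the current forces
    x += dt * v                      # drift: positions change
    f = forces(x)                    # new positions => new forces (the nonlinearity)
    v += 0.5 * dt * f                # second half kick

# Solid-like state: each atom merely jiggles around its initial site.
print(np.max(np.abs(x - x_init)))    # small compared to the spacing (~1.12)
```

With the velocities kept small, the atoms stay put near their lattice sites, which is exactly the solid-state behaviour described in what follows.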

As the moving atoms (nuclei) change their positions, the system of their overall collection changes its configuration.

If the numerical ranges of the forces / accelerations / velocities / displacements are small enough, then even if the nuclei undergo changes in their positions with the passage of a simulation, their overall configuration still remains more or less the same. The initially neighbouring atoms remain in each other’s neighbourhood, even if individually, they might be jiggling a little here and there around their equilibrium positions. Such a dynamical state in the simulation corresponds to the solid state.

If you arrange for a gradual increase in the velocities (say by effecting an increase in an atom’s momentum when it bumps against the boundaries of the simulation volume, or even plain by just adding a random value from a distribution to the velocities of all the atoms at regular time intervals), then statistically, it is the same as increasing the temperature of the system.

When the operating velocities become large enough (i.e. when the “temperature” becomes high enough), the configuration becomes such that the moving atoms can now slip past their previous neighbours, and form a new neighbourhood around a new set of atoms. However, their velocities are still small enough that their overall assembly does not “explode;” the assembly continues to occupy roughly the same volume, though it may change its shape. Such a dynamic state corresponds to the liquid state.

Upon further increasing the temperature (i.e. velocities), the atoms now begin to dash around with such high speeds that they can overcome the pull of their neighbouring atoms, and begin to escape into the empty space outside the assemblage. The assembly taken as a whole ceases to occupy a definite sub-region inside the simulation chamber. Instead, the atoms now shoot across the entire chamber. Of course this is nothing but the gaseous state. Impermeable boundaries have to be assumed to keep the atoms inside the finite region of simulation. (Actually, something similar holds even for the liquid state.) The motion of the atoms in the gaseous phase looks quite chaotic, even if, in a statistical sense, certain macro-level properties like pressure are being maintained lawfully. The kinetic energy, in particular, stays constant for an isolated system (within the numerical errors) even in the gaseous state.

While there are tons of resources on the ‘net for MD, here is one particularly simple but accurate-enough Python code, with good explanations [^]. I especially liked it because, unlike so many other “basic” MD codes, this one even shows the trick of shifting the potential so as to effectively cut it off at a finite radius in a smooth manner—many introductory MD codes do it rather crudely, by directly cutting off the potential (thereby leaving a vertical-jump discontinuity in the potential field). [In theory, the potential extends out to infinity, but in practice, you need to cut it off just so as to keep the computational costs down.]
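In case the trick sounds obscure: all that the shifting does is subtract the constant value V(r_c) inside the cut-off radius, so that the potential falls to zero continuously at r_c instead of jumping. A sketch (my own, in reduced units):

```python
def lj(r):
    """Plain Lennard-Jones potential, reduced units (sigma = eps = 1)."""
    return 4.0 * (r**-12 - r**-6)

r_cut = 2.5   # a commonly used cut-off radius (in units of sigma)

def lj_truncated(r):
    """Crude cut-off: leaves a vertical jump of size |lj(r_cut)| at r = r_cut."""
    return lj(r) if r < r_cut else 0.0

def lj_shifted(r):
    """Shifted cut-off: subtract the constant lj(r_cut) inside the cut-off,
    so the potential reaches zero continuously at r = r_cut. (The force still
    has a small jump there; force-shifting would smooth that out too.)"""
    return lj(r) - lj(r_cut) if r < r_cut else 0.0

print(lj_truncated(r_cut - 1e-9))   # about -0.0163: the jump left by crude truncation
print(lj_shifted(r_cut - 1e-9))     # about 0.0: continuous at the cut-off
```

The jump may look small, but in a long simulation it keeps injecting or removing energy every time a pair crosses the cut-off, which is why the better codes bother with the shift.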

Here is a short video showing an MD simulation of the melting of ice (and, I guess, also of evaporation) [^]. It takes into account the dipole nature of the water molecules too.

The entire sequence can of course be reversed. You can always simulate a gas-to-liquid transition, and further, you can also simulate solidification. Here is a video showing the reverse phase-change for water: [^]

The points to gather from the MD simulations, for our purposes:

MD simulations of even gases retain a certain orderliness. Despite all the random-looking motions, they still are completely lawful, and the laws are completely deterministic. The liquid state certainly shows a much better degree of orderliness as compared to the gaseous state. The solid state shows some remnants of the random-looking motion, but these motions are now very highly localized, and so the assemblage as a whole looks very orderly and stable. Not only does it preserve its volume, it wouldn’t even “flow” if you were to program a tilting of the simulation chamber into the simulation.

Now the one big point I want you to note is this.

Even in the very stable, orderly looking simulation of the solid state, the equations governing the dynamics of all the individual atoms still are nonlinear [^], i.e., they still are chaotic. It is the chaotic equations which produce the very orderly solid state.

MD simulations would not be capable of simulating phase transitions (from solid to liquid to gas etc.) using just identical balls interacting via simple pair-wise interactions, if the basic equations didn’t have a nonlinearity built into them. [I made a tweet to this effect last month, on 02 August 2019.]
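The nonlinearity being talked about here is easy to check numerically (an illustrative snippet of my own): a linear force law, such as an ideal spring’s F = -kx, obeys superposition, while the Lennard-Jones pair force does not.

```python
# Superposition holds for a linear force law but fails for Lennard-Jones.
# Reduced units: eps = sigma = 1.

def lj_force(r, eps=1.0, sigma=1.0):
    """Magnitude of the Lennard-Jones pair force, -dU/dr."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

def linear_force(x, k=1.0):
    """A linear (Hooke's-law) force, for contrast."""
    return -k * x

r1, r2 = 1.1, 1.3
# superposition holds exactly for the linear law...
lin_residual = linear_force(r1 + r2) - (linear_force(r1) + linear_force(r2))
# ...but fails badly for the Lennard-Jones force
lj_residual = lj_force(r1 + r2) - (lj_force(r1) + lj_force(r2))
print(lin_residual, lj_residual)
```

Because the force is a nonlinear function of the configuration, the Newtonian equations of motion for the whole assembly are nonlinear ODEs, which is what makes chaotic trajectories possible in the first place.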

So, even the very stable-looking solid state is maintained by the assemblage only by following the same nonlinear dynamical laws that allow phase transitions to occur and that produce the evident randomness of the gaseous state.

It’s just that, when the parameters like velocity and acceleration (the latter determined by the potential) fall into certain ranges of small enough values, then even though the governing equation remains nonlinear, the dynamical state automatically gets confined to the regime of the highly orderly and stable solid state.

So, the lesson is:

Even if the dynamical nonlinearity in the governing laws does imply instabilities in principle, what matters is the magnitudes of the parameters (here, prominently, the velocities, i.e. the “temperature” of the simulation). The operative numerical magnitudes of the parameters directly determine the regime of the dynamical state. The regime can correspond to a very highly ordered state too.

Ditto, for the actual world.
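The same lesson, viz. that one and the same nonlinear law yields qualitatively different regimes depending purely on the magnitude of a parameter, can be seen even in the simplest textbook nonlinear system, the logistic map x → rx(1 − x). (An illustrative stand-in of my own, not an MD simulation; the parameter values 2.5 and 4.0 are the standard textbook choices.)

```python
# One nonlinear law, two regimes: for r = 2.5 the logistic map settles onto
# a stable fixed point; for r = 4.0 it wanders chaotically forever.

def orbit_tail(r, x0=0.2, transient=1000, keep=50):
    """Iterate the logistic map, discard the transient, return the tail."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        tail.append(x)
    return tail

orderly = orbit_tail(2.5)  # converges to the fixed point 1 - 1/r = 0.6
chaotic = orbit_tail(4.0)  # never settles down

spread_orderly = max(orderly) - min(orderly)
spread_chaotic = max(chaotic) - min(chaotic)
print(spread_orderly, spread_chaotic)  # tiny for r = 2.5, large for r = 4.0
```

The governing equation is literally identical in the two runs; only the numerical value of r puts the dynamics into the orderly regime or the chaotic one.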

In fact, something stronger can be said: If the MD equations were not nonlinear, if they were not chaotic, then they would fail to reproduce not only the phase transitions themselves (solid to liquid, etc.) but also such utterly orderly behaviour as the constancy of the temperature during these phase transitions. Stronger still: The numerical values of the parameters don’t have to be exactly equal to some critical values. Even if the parameter values vary a lot, so long as they fall within a certain range, the solution regime (the qualitative behaviour of the solution) remains unchanged, stable!

Ditto, for the quantum mechanical simulations using my new approach. (Though I haven’t done a simulation, the equations show a similar kind of nonlinearity.)

In my approach, quantum mechanical instability is ever present in each part of the physical world. However, the universe we live in simply exists (“has been put together”) in such a way that the actually operative numerical values of the parameters are such that the real world shows the same feature of stability as do the MD simulations.

The example of a metal piece left alone for a while:

If you polish and buff, or chemically (or ultrasonically) clean, a piece of metal to a shining bright state, and then leave it alone for a while, it turns dull over a period of time. That’s because of corrosion.

Corrosion is, chemically, a process of oxidation. Oxygen atoms from the air react with those pure metal atoms which are exposed at a freshly polished surface. This reaction is, ultimately, completely governed by the laws of quantum mechanics—which, in my approach, carry a nonlinearity of a specific kind. Certain numerical parameters control the speed with which the quantum-mechanical rearrangements of the wavefunction governing the oxygen and metal atoms proceed.

The world we live in happens to carry such values for these parameters that corrosion turns out to be a slow process. Also, it turns out to be a process that mostly affects only the surface of a metal (in the solid state). The dynamical equations of quantum mechanics, even if nonlinear, are such that the corrosion cannot penetrate very deep inside the metal—the values of the governing parameters are such that oxygen atoms cannot diffuse all that easily into the interior regions. If the metal were in a liquid state, it would be an altogether different matter—again, just another range of the nonlinear parameters, that’s all.

So, even if it’s a nonlinear (“chaotic” or even “randomness-possessing”) quantum-mechanical evolution, the solid metal piece does not corrode all the way through.

That’s why, you can always polish your family silver, or the copper utensils used in “poojaa” (worship), and use them again and again.

The world is a remarkably stable place to be in.


So what’s the ultimate physics lesson we can draw from this story?

It’s this:

In the end, the qualitative nature of the solutions to physical problems is not determined solely by the kind of mathematical laws that govern the phenomenon. The nature of the constituents of a system (what kind of objects there are in it), the particulars of their configurations, and the numerical ranges of the parameters operative during their interactions all matter equally—if not more!

Don’t fall for those philosophers, sociologists, and humanities folks in general (and even some folks employed as “scientists”) who snatch some bit of nonlinear dynamics or chaos theory out of its proper context and begin to spin a truly chaotic yarn of utter gibberish. They simply don’t know any better. That’s another lesson to be had from this story. There is a huge difference between the cultural overtones associated with the word “chaos” and the meaning the same term has in the absolutely rigorous studies in physics. Hope I will never have to remind you of this one point.

A song I like:

(Marathi) “shraavaNaata ghana niLaa barasalaa…”
Lyrics: Mangesh Padgaonkar
Music: Shreenivas Khale
Singer: Lata Mangeshkar
[Credits listed, happily, in a random order.]


A series of posts on a few series of tweets (by me) on (my research on foundations of) QM—2

OK, after a long hiatus (mainly due to viral fever and cough etc.), I am back into the game. This post of course continues from the previous one in this series, i.e., the very last one.

On 18 July 2019 I then posted the next two tweets, now about my new approach (as in the Outline document [^]):

3/4. My new approach is something like: Quanta as Discrete Sets for the States of Fields & Changes in Them. (Hard to form a neat short-form.) I’ve abandoned the idea of the spatially delimited quantum particles—whether photons (Einstein), or others too (Feynman).

4/4. Instead, I have singular potential (V) fields for protons and electrons. These fields are continuous at all points other than their instantaneous positions, where they are singular. I also have the ever-continuous \Psi as a physically existing attribute / characteristic of the background object. This is quite like how stress / strain are attributes of a continuum.