Now I am become Bohmianism

1. About the title of this post:

Just before this Diwali, I had tweeted that I had made a resolution. The tweets went like this:

Let me note the text portions of these tweets (just in case I delete them some time later).

3:29 PM, 13 Nov. 2020:

This year, Pune directly went from the monsoon air to the Diwali air. We seem to have tunnelled through the October heat!

3:55 PM, 13 Nov. 2020:

#Deepavali #Diwali #deepavali2020 #Diwali2020

[Diya lamp emoji, 3 times]

This is the *third* straight Diwali that I go jobless.

3:56 PM, 13 Nov. 2020:

My Diwali Resolution:

“Be a Bohmian” (https://www.google.com/search?q=Bohmian+mechanics)

[Yes, there are going to be the usual New Year’s Resolutions according to the Western calendar too!]

Alright.

We will come to the “tunnelling” part later, and also to the tweet about my joblessness. [If the Indian IT industry had any sense of shame left at all, they would have prevented this circumstance. But more on this, too, later.]

For the time being, I want to focus on the last tweet, and say that, accordingly:

Now I am become Bohmianism.

As to the quaint grammar used in the expression, first consult this Wired article [^], and also the Q&A at Quora [^].

As to why I use “Bohmianism” instead of “a Bohmian”: Well, to know that, you have to understand Sanskrit. If you do, then refer to the Gita, Chapter 11, verse 32, the compound phrase “कालोऽस्मि” (“kaalo smi”). I just tried to keep a similar grammatical form. … But let me hasten to add that I am not a Sanskrit expert, and so, going wrong is always a possibility. However, I also think that here I have not.

Hence the title of this post.

Now, going over to Bohmianism, i.e., the Bohmian mechanics proper…


2. Material on the Bohmian mechanics (BM):

The following is a partial list of papers and other material on BM that I have downloaded. I am giving you the list in a roughly chronological order. However, my reading isn’t going to be in any particular order. I have not read them all yet. In fact, I’ve just got going with some of them, as of now.

Also note, I expect that

  • Some of this material might have become outdated by now
  • I may run into some other related topics as my studies progress

Alright. On to the list…


2.1 Student theses:

Antony Valentini (1992) “On the pilot-wave theory of classical, quantum and subquantum physics,” Ph.D. Thesis, International School for Advanced Studies, Trieste.

Caroline Colijn (2003) “The de Broglie-Bohm causal interpretation of quantum mechanics and its application to some simple systems,” Ph.D. Thesis, University of Waterloo.

Paulo Machado (2007) “Computational approach to Bohm’s quantum mechanics,” Ph.D. Thesis, McMaster University.

Jeff Timko (2007) “Bohmian trajectories of the two-electron helium atom,” Master’s Thesis, University of Waterloo.

Leopold Kellers (2017) “Making use of quantum trajectories for numerical purposes,” Master’s Thesis, Technische Universität München.


2.2. Code:

Dane Odekirk (2012) “Python calculations of Bohmian trajectories,” GitHub, 12 December 2012. https://github.com/daneodekirk/bohm


2.3. Papers:

C. Philippidis, C. Dewdney and B. J. Hiley (1978) “Quantum interference and the quantum potential,” https://www.researchgate.net/publication/225228072

Berthold-Georg Englert, Marlan O. Scully, Georg Sussmann and Herbert Walther (1992) “Surrealistic Bohm trajectories,” Z. Naturforsch. 47 a, 1175–1186.

Robert E. Wyatt and Eric R. Bittner (2003) “Quantum mechanics with trajectories: quantum trajectories and adaptive grids,” arXiv:quant-ph/0302088v1 11 Feb 2003

Roderich Tumulka (2004) “Understanding Bohmian mechanics: A dialogue,” Am. J. Phys., vol. 72, no. 9, September 2004, pp. 1220–1226.

D.-A. Deckert, D. Dürr, P. Pickl (2007) “Quantum dynamics with Bohmian trajectories,” arXiv:quant-ph/0701190v2 13 May 2007

Guido Bacciagaluppi and Antony Valentini (2009) “Quantum theory at the crossroads: Reconsidering the 1927 Solvay conference,” Cambridge UP, ISBN: 9780521814218 arXiv:quant-ph/0609184v2 24 Oct 2009 [Note: This is actually a book.]

M. D. Towler and N. J. Russell (2011) “Timescales for dynamical relaxation to the Born rule,” arXiv:1103.1589v2 [quant-ph] 27 Sep 2011

Michael Esfeld, Dustin Lazarovici, Mario Hubert, Detlef Dürr (2012) “The ontology of Bohmian mechanics,” preprint, British Journal for the Philosophy of Science

Travis Norsen (2013) “The pilot-wave perspective on quantum scattering and tunneling,” Am. J. Phys., vol. 81, no. 4, April 2013, pp. 258–266. arXiv:1210.7265v2 [quant-ph] 9 Jan 2013

Travis Norsen (2013) “The pilot-wave perspective on spin,” arXiv:1305.1280v2 [quant-ph] 10 Sep 2013

Kurt Jung (2013) “Is the de Broglie-Bohm interpretation of quantum mechanics really plausible?,” Journal of Physics: Conference Series 442 (2013) 012060 doi:10.1088/1742-6596/442/1/012060

Samuel Colin and Antony Valentini (2014) “Instability of quantum equilibrium in Bohm’s dynamics,” Proc. R. Soc. A 470: 20140288. http://dx.doi.org/10.1098/rspa.2014.0288

W. B. Hodge, S. V. Migirditch and W. C. Kerr (2014) “Electron spin and probability current density in quantum mechanics,” Am. J. Phys., vol. 82, no. 7, July 2014, pp. 681–690

B. Zwiebach (2016) “Lecture 6,” Course Notes for MIT 8.04 Quantum Physics, Spring 2016.

Basil J. Hiley and Peter Van Reeth (2018) “Quantum trajectories: real or surreal?,” Entropy vol. 20, pp. 353 doi:10.3390/e20050353

Oliver Passon (2018) “On a common misconception regarding the de Broglie-Bohm theory,” Entropy vol. 20, no. 440. doi:10.3390/e20060440


2.4. Advanced papers:

Asher Yahalom (2018) “The fluid dynamics of spin,” Molecular Physics, April 2018, doi: 10.1080/00268976.2018.1457808. https://www.researchgate.net/publication/324512014, arXiv:1802.09331v1 [physics.flu-dyn] 3 Feb 2018

Siddhant Das and Detlef Dürr (2019) “Arrival time distributions of spin-1/2 particles,” Scientific Reports, https://doi.org/10.1038/s41598-018-38261-4

Siddhant Das, Markus Nöth, and Detlef Dürr (2019) “Exotic Bohmian arrival times of spin-1/2 particles I—An analytical treatment,” arXiv:1901.08672v1 [quant-ph] 24 Jan 2019


2.5. Nonlinearity in the Bohmian mechanics:

To my surprise, I found that a form of non-linearity comes up in the Bohmian mechanics too. I am sure this must have come as a surprise to many others as well. [I will comment on this aspect quite some time later. For the time being, let me list some of the papers/presentations I’ve found so far.]

Sheldon Goldstein (1999) “Absence of chaos in Bohmian dynamics,” arXiv:quant-ph/9901005v1 6 Jan 1999

S. Sengupta, A. Poddar and P. K. Chattaraj (2000) “Quantum manifestations of the classical chaos in an undamped Duffing oscillator in presence of an external field: A quantum theory of motion study,” Indian Journal of Chemistry, vol. 39A, Jan–March 2000, pp. 316–322

A. Benseny, G. Albareda, A. S. Sanz, J. Mompart, and X. Oriols (2014) “Applied Bohmian mechanics,” arXiv:1406.3151v1 [quant-ph] 12 Jun 2014

Athanasios C. Tzemos (2016) “The mechanism of chaos in 3-D Bohmian trajectories,” Poster Presentation, https://www.researchgate.net/publication/305317081

Athanasios C. Tzemos (2018) “3-d Bohmian chaos: a short review,” Presentation Slides, RCAAM, Academy Of Athens

Athanasios C. Tzemos (2019) “Quantum entanglement and Bohmian Mechanics,” Presentation Slides 17 July 2019, RCAAM of the Academy of Athens

Klaus von Bloh (2020) “Bohm trajectories for the noncentral Hartmann potential,” Wolfram demonstration projects, https://www.researchgate.net/publication/344171771 (August 2020)

G. Contopoulos and A. C. Tzemos (2020) “Chaos in Bohmian quantum mechanics: a short review,” arXiv:2009.05867v1 [quant-ph] 12 Sep 2020


3. What happens to my new approach?

It was only yesterday that a neat thing struck me. Pending verification via simulations, it has the potential to finally bring together almost all of my research on the spinless particles. I’ve noted this insight in the hand-written journal (i.e., the research notebook) that I maintain. I will be developing this idea further too. After all, Bohmians do study the mainstream quantum mechanics and other interpretations, don’t they?

Due to the RSI, the simulations, however, will have to wait further. (The status is more or less the same. If I type for 2–3 hours, it’s easily possible that I can’t do much of anything for the next 2–3 days.)

OK. Take care and bye for now.


A song I like:

(Hindi) देखा ना हाय रे सोचा ना (“dekhaa naa haay re sochaa naa”)
Singer: Kishore Kumar
Music: R. D. Burman
Lyrics: Rajinder Krishan

[Another song I used to love in my high-school days—who wouldn’t? … And, of course, I still do! A good quality audio I found is here [^]. I had not watched this movie until about a decade ago, on a CD (or maybe on TV). I’ve forgotten the movie by now. I don’t mind giving you the link for the video of this song; see here [^]. (In any case, it’s at least 3 orders of magnitude better than any so-called Lyrical Video Saregama has released for any song. The very idea of the Lyrical is, IMO, moronic.)]


“Simulating quantum ‘time travel’ disproves butterfly effect in quantum realm”—not!

A Special note for the Potential Employers from the Data Science field:

Recently, in April 2020, I achieved a World Rank # 5 on the MNIST problem. The initial announcement can be found here [^], and a further status update, here [^].

All my data science-related posts can always be found here [^].


This post is based on a series of tweets I made today. The original Twitter thread is here [^]. I have made quite a few changes while posting the same thoughts here. Further, I am also noting some addenda (which are not there in the original thread).

Anyway, here we go!


1. The butterfly effect and QM: a new paper that (somehow) caught my fancy:

1.1. Why this news item interested me in the first place:

Nonlinearity in the wavefunction \Psi, as proposed by me, forms the crucial ingredient in my new approach to solving the QM measurement problem. So, when I spotted this news item [^] today, it engaged my attention immediately.

The opening line of the news item says:

Using a quantum computer to simulate time travel, researchers have demonstrated that, in the quantum realm, there is no “butterfly effect.”

[Emphasis in bold added by me.]

The press release by LANL itself is much better worded (PDF) [^]. In the meanwhile, I also tried to go through the arXiv version of the paper, here [^].

I don’t think I understand the paper in its entirety. (QC and all is not a topic of my main interests.) However, I do think that the following analogy applies:

1.2. A (way simpler) analogy to understand the situation described in the paper:

The whole thing is to do with your passport-size photo, called “P”.

Alice begins with “P”, which is given in the PNG/BMP format. [Should the usage be the Alice? I do tend to think so! Anyway…]

She first applies a 2D FFT to it, and saves the result, called “FFT-P”, in a folder called “QC” on her PC. Aside: FFT’ed photos look like dots that show a “+”-like visual structure. Note, Alice saves both the real and the imaginary parts of the FFT-ed image. This assumption is important.

She then applies a further sequence of linear, lossless, image transformations to “FFT-P”. Let’s call this ordered set of transformations “T”. Note, “T” is applied to “FFT-P”, not to “P” itself.

As a result of applying the “T” transformations, she obtains an image which she saves to a file called “SCR-FFT-P”. This image totally looks like random dots to the rest of us, because the “T” transformations are such that they scramble whatever image is fed to them. Hence the prefix “SCR”, short for “scrambled”, in the file-name.

But Alice knows! She can always apply the same sequence of transformations, but in the reverse direction. Let’s call this reverse transformation “T-inverse”.

Each step of “T” is reversible—that’s what “linear” and “lossless” mean here! (In contrast, algorithms like “hi-pass” or “low-pass” filtering, or operators like the gradient or the Laplacian, are not lossless.)

Since “T” is reversible, starting with “SCR-FFT-P”, Alice can always apply “T-inverse”, and get back to the original 2D FFT representation, i.e., to “FFT-P”.

All this is the normal processing—whether in the forward direction or in the reverse direction.

1.3. Enter [the [?]] Bob:

As is customary in the literature on the QC/entanglement, Bob enters the scene now! Alice and Bob work together.

Bob hates you. That’s because he believes in every claim made about QC, but you don’t. That’s why he experiences an irrepressible inner desire to do some damage to your photograph during its processing.

So, to, err…, “express” himself, Bob comes early to office, gains access to Alice’s “QC” folder, and completely unknown to her, he modifies a single pixel of the “FFT-P” image stored there, and even saves it. Remember, this is the FFT-ed version of your original photo “P”.

Let’s call the tampered version: “B-FFT-P”. On the hard-disk, it still carries the name “FFT-P”. But its contents are modified, and so, we need another name to denote this change of the state of the image.

1.4. What happens during Alice’s further processing?

Alice comes to the office a bit later, and soon begins her planned work for the day, which consists of applying the “T” transformation to the “FFT-P” image. But since the image has been tampered with by Bob, what she ends up manipulating is actually the “B-FFT-P” image. As a result of applying the (reversible) scrambling operations of “T”, she obtains a new image, and saves it to the hard-disk as “SCR-B-FFT-P”.

But something is odd, she feels. So, just to be sure, she decides to check that everything is OK, before going further.

So, she applies “T-inverse” operation to the “SCR-B-FFT-P” file, and obtains the “B-FFT-P” image back, which she saves to a file of name “Recovered FFT-P”. Observe, contents-wise, it is exactly the same as “B-FFT-P”, though Alice still believes it is identical to “FFT-P”.

Now, on a spur of the moment, she decides also to apply the reverse-FFT operation to “Recovered FFT-P”, i.e., to the Bob-tampered version of the FFT-ed version of your original photo. She saves the fully reversed image as “Recovered P”.

Just to be sure, she then runs some command that does a binary bit-wise comparison between “Recovered P” and the original “P”.

We know that they are not the same. Alice discovers this fact, but only at this point of time.

1.5. The question that the paper looks into:

If I understand it right, what the paper now ponders over is this question:

How big or small is the difference between the two images: “Recovered P” and the original “P”?

The expected answer, of course, is:

Very little.

The reason to keep such an expectation is this: FFT distributes the original information of any locality over the entire domain in the FFT-ed image. Hence, during reverse processing, each single pixel in the FFT-ed image maps back to all the pixels in the original image. [Think “holographic” whatever.] Therefore, tampering with just one pixel of the FFT-ed representation does not have too much effect in the recovered original image.

Hence, Alice is going to recover most of the look and feel of your utterly lovely, Official, passport-size visage! That is what is going to happen even if she in reality starts only from the scrambled tampered state “SCR-B-FFT-P”, and not the scrambled un-tampered state “SCR-FFT-P”. You would still be very much recognizable.

In fact, due to the way FFT works, the difference between the original photo and the recovered photo goes on reducing as the sheer pixel size of the original image goes on increasing. That’s because, regardless of the image size, Bob always tampers only one pixel at a time. So, the percentage tampering goes on reducing with an increase in the resolution of the original image.
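If you want to check this for yourself, here is a minimal numerical sketch of the whole analogy (a sketch of my own; the random array stands in for the photo, and the tampered pixel location and amount are arbitrary):

```python
import numpy as np

# A stand-in for the photo "P": any real-valued 2D array will do.
rng = np.random.default_rng(0)
N = 256
P = rng.random((N, N))

# Alice: forward 2D FFT ("FFT-P"); both real and imaginary parts are kept.
FFT_P = np.fft.fft2(P)

# Bob: tamper with exactly one pixel of the FFT-ed representation ("B-FFT-P").
B_FFT_P = FFT_P.copy()
B_FFT_P[10, 17] += 100.0   # an arbitrary single-pixel corruption

# Alice: any further linear, lossless transform "T" and its inverse cancel
# out exactly, so we can skip straight to the inverse FFT ("Recovered P").
Recovered_P = np.fft.ifft2(B_FFT_P).real

# The single-pixel tampering is spread thinly over all N*N pixels:
print("max per-pixel error:", np.abs(Recovered_P - P).max())
```

Since the inverse FFT divides by N^2, the one corrupted coefficient contributes only about 100/N^2 to any given pixel; increase N, and the relative damage shrinks further, exactly as said above.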

1.6. The conclusion that the paper draws from the above:

Let’s collect the indisputable facts together:

  • There is very little difference between the recovered image and the original image.
  • Whatever the difference, it goes on reducing as the size of the original image increases.

The paper now says, IMO quite properly, that Bob’s tampering of the single pixel is analogous to his making a QM measurement, and thereby causing a permanent change to the concerned (“central”) qubit.

But then, the paper draws the following conclusion:

The Butterfly Effect does not apply to QM as such; it applies only to classical mechanics.

Actually, the paper is a bit more technical than that. In fact, I didn’t go through it fully, because even if I were to, I wouldn’t understand all of it. QC is not a topic of my primary research interests, and I have never studied it systematically.

But still, yes, I do think that the above is the sort of logic on which the paper relies, to draw the conclusion which it draws.


2. My take on the paper:

2.1 It’s an over-statement:

Based on what I know, and my above (first) take, I do think that:

The paper makes an over-statement. The press release then highlights this “over” part. Finally, the news item fully blows up the same, “over” part.

Why do I think so? Here is my analysis:

If the butterfly effect produced due to nonlinearity is fully confined to making an irreversible (or at least exponentially divergent) change to only a single pixel in the FFT representation of the original image (or, in an alternative analogy, to the original photograph itself, with every subsequent processing step still involving only an FFT-ed version), then any and all of the further steps of linear and reversible transformations wouldn’t magnify the said tampering.

Why not?

Because all the further steps are prescribed to be linear (and in fact even reversible), that’s why!

In other words, what the paper says boils down to a redundancy (or, a re-statement of the same facts):

A linear and reversible transformation is emphatically not a non-linear and exponentially divergent one (as in the butterfly effect).

That’s what the whole point of the paper seems to be!

2.2. The actual processing described in the paper does not at all involve the butterfly effect:

Realize, the only place the butterfly effect can at all occur during the entire processing is as a mechanism by which Bob might tamper with that single pixel.

Now, of course, the paper doesn’t say so. The paper only says that there is a tampering of a qubit via a measurement effected on it (with all other qubits, constituting “the bath” being left alone).

But, yes, I have proposed the idea that the measurement process itself progresses, within the detector, via the butterfly effect. I identified it as such in my Outline document posted at iMechanica, here (PDF) [^].

Of course, I stand ready to be corrected, if I am wrong anywhere in the fundamentals of my analysis.

2.3. I didn’t say anything about the “time-travel” part:

That’s right. The reason is: there is no real time-travel here anyway!

Hmmm… Explaining why would unnecessarily consume my time. … Forget it! Just remember: There is no time-travel here, not even a time-reversal, for that matter. In the first half of the processing by Alice (and maybe with the tampering by Bob), each step occurs some finite time after the completion of the previous step. In the second half of the processing, again, each step of the inverse processing occurs some finite time after the completion of the previous step. What reverses is the sequence of operators, not time. Time always flows steadily in the forward direction.

Enough said.

2.4. Does my critique reflect on the paper taken as a whole?

I did manage to avoid Betteridge’s law [^] thus far, but can’t, any more!

The answer seems to be: “no”, or at least: “I didn’t mean that”.

The thing is this: This is a paper from the field of Quantum Computing/Quantum Information Science—which is not at all a field of my knowledge (let alone expertise). The paper reports on a simulation the authors conducted. I am unable to tell how valuable this particular simulation is in the overall framework of QInfoScience.

However, as a computational modelling and simulation engineer myself, I can tell this much: Sometimes, even a simple (stupid!)-looking simulation is implemented merely in order to probe some aspect that no one else has thought of. The simulation is not an end in itself, but merely a step in furthering research. The idea is to explore a niche and to find / highlight some gap in knowledge. In topics that are quite complicated, the isolation of one aspect at a time, afforded by a simulation, can be of great help.

(I can cite an example of a very simple-looking (actually, stupid-looking) simulation from my own PhD-time research: I had a conference paper on simulating a potential field using random walks, and comparing its results with those from a self-implemented FEM solver. The rather colourful Gravatar icon which you see (the one which appears in the browser bar when you view my posts here) was actually one of the results I had reported in this preliminary exploration of what eventually became my PhD research.)

Coming back to this paper, it’s not just possible but quite likely that the authors are reporting something that has implications for much more “heavy-duty” topics, say quantum error correction: where and when it is necessary, the minimum efficiency it must possess, in what kind of architecture/processing, and whatnot. I can’t tell; but this is the nature of simulations. Sometimes, they look simple, but their implications can be quite profound. I am in no position to judge the merits of this paper from this viewpoint.

At the same time, I also think that probing this idea of measuring just one qubit and tracing its effects on the nearby “bath” of qubits can have good merits. (I vaguely recall the discussions, some time ago, of “pointer states” and all that.)

Yet, of course, I do have a critical comment to make regarding this paper. But my comment is entirely limited to what the paper says regarding the foundational aspects of QM, and the relevance of chaos / nonlinear science in QM. With the kind of nonlinearity in \Psi which I have proposed [^], I can clearly see that you can’t say that, just because the mainstream QM theory is linear, everything about quantum phenomena has to be linear. No, this is an unwarranted assumption. It was from this viewpoint that I thought that the implication concerning the foundational aspects was not acceptable. That’s why (and how) I wrote the tweet-series and this post.

All in all, my critique is limited to saying that a nonlinearity in \Psi, and hence the butterfly effect, is not only possible in QM, but is crucial to addressing the measurement problem right. I don’t have any other critique to offer regarding any of the other aspects of the reported work.

Hope this clarifies.

And, to repeat: Of course, I stand ready to be corrected, if I have gone wrong anywhere in the fundamentals of my analysis regarding the foundational issues too.


3. An update on my own research:

3.1. My recent tweets:

Recently (on 23 July 2020), I also tweeted a series [^] regarding the ongoing progress in my new approach. Let me copy-paste the tweets (not because the wording is great, but because I have to finish writing this post somehow!). I have deleted the tweet-continuation numbers, but otherwise kept the content as is:

Regarding my new approach to QM. I think I still have a lot of work to do. Roughly, these steps:

1. Satisfy myself that in simplest 1D toy systems (PIB, QHO), x-axis motion of the particle (charge-singularity) occurs as it should, i.e., such that operators for momentum, position, & energy have *some* direct physical meaning.

2. Using these ideas, model the H atom in a box with PBCs (i.e., an infinite lattice of repeating finite volumes/cells), and show that the energy of electron obtained in new approach is identical to that in the std. QM (which uses reduced mass, nucleus-relative x, no explicit particle positions, only electron’s energy).

3. Possibly, some other work/model.

4. Repeat 2. for modelling two *interacting* electrons (or the He atom) in a box with PBCs.

Turned out that I got stuck for the past one month+ right at step no. 1!

However, looks like I might have finally succeeded in putting things together right—with one e- in a box, at least.

In the process, found some errors in my post from the ontologies series, part 10.

Will post the corrections at my blog a bit later.

Tentatively, have decided to try and wrap up everything within 4–6 weeks.

So, either I report success with my new approach by, say 1st week of September or so, or I give up (i.e. stop work on QM for at least few months).

But yes, there does seem to be something to it—to the modelling ideas I tried recently. Worth pursuing for a few weeks at least.

… At least I get energy and probability right, which in a way means also position. But I am not fully happy with momentum, even though I get the right numerical values for it, and so, the thinking required is rather on the conceptual–physical side … There *are* “small” issues like these.

But yes,

(1) I’m happy to have spotted definite errors in my own previous documentation—Ontologies series, part 10, as also in the Outline doc. (PDF here [^]) ,
and,
(2) I’m happy to have made a definite progress in the modelling with the new approach.

Bottomline: I don’t have to give up my new approach. Not right away. Have to work on it for another month at least.

3.2. Some comments on the tweets:

I need to highlight the fact that I have spotted some definite errors, both in the Ontologies series (part 10), and in the Outline document.

In particular:

3.2.1. In the Ontologies series, part 10, I had put forth an argument that it’s a complex-valued energy that gets conserved. I am not so sure of that any more, and am going over the whole presentation once again. (The part covering the PIB modelling from that post is more or less OK.)

3.2.2. In the Outline document, I had said:

“The measurement process is nondestructive of the state of the System. It produces catastrophic changes only in the Instrument”

I now think that this description is partly wrong. Yes, a measurement has to produce catastrophic changes in the Instrument. But now, the view I am developing amounts to saying that the state of the System also undergoes a permanent change during measurement, though such a change is only partial.

3.3. Status of my research:

I am working through the necessary revision of all such points. I am also working through simulations and all. I hope to have another document and a small set of simulations (as spelt out in the immediately preceding Twitter thread) soon. The document would still be preliminary, but it is going to be more detailed.

In particular, I would be covering the topic of the differences between the tensor product states (e.g., two non-interacting electrons) in a box vs. the entangled states of two electrons. Alternatively, maybe, treating the proton as a quantum object (having its own wavefunction), and thus simulating only the hydrogen atom, but with my new approach. Realize, when you treat the proton quantum mechanically, allowing its singularity to move, the H atom becomes a two-particle system.

So, a two-particle system is the minimum required for validation of my new approach.
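As an aside, the difference between the two kinds of two-particle states is easy to see numerically. Here is a small illustrative sketch (mine; it is not taken from my simulation code):

```python
import numpy as np

# Two distinct single-particle states on a 1D grid:
x = np.linspace(0.0, 1.0, 100)
psi_a = np.sin(np.pi * x)
psi_b = np.sin(2.0 * np.pi * x)

# Tensor-product (non-interacting) state: psi(x1, x2) = psi_a(x1) * psi_b(x2)
product_state = np.outer(psi_a, psi_b)

# An entangled state: a superposition of two distinct product states
entangled = np.outer(psi_a, psi_b) + np.outer(psi_b, psi_a)

# A product state has matrix rank 1; an entangled state does not:
print(np.linalg.matrix_rank(product_state))   # -> 1
print(np.linalg.matrix_rank(entangled))       # -> 2
```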

For convenience of simulation, i.e., especially to counter the difficulties introduced by boundaries due to the discretization of space via an FDM mesh, I am going to put the interacting pair of particles inside a box, but with periodic boundary conditions (PBCs for short). Ummm… This term, “PBCs”, has been used in two different senses: 1. to denote a single, representative, finite-sized unit cell from an infinitely repeated lattice of such cells, and 2. ditto, but with the further physical imagination that this constitutes a “particle on a ring”, so that computations of the orbital angular momentum too must enter the simulation. Here, I am going to stick to PBCs in sense 1., and not in sense 2. A minimal sketch of what sense 1. means for an FDM operator appears below.
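Just to fix ideas, here is a minimal 1D sketch (my own, purely illustrative) of PBCs in sense 1.: the finite-difference stencil simply wraps around at the ends of the cell, so there is no boundary node at all.

```python
from scipy.sparse import lil_matrix

def laplacian_1d_pbc(n, dx):
    """Standard 3-point FD Laplacian on n nodes, with periodic wrap-around."""
    L = lil_matrix((n, n))
    for i in range(n):
        L[i, i] = -2.0
        L[i, (i - 1) % n] = 1.0   # the % n does the periodic wrap-around
        L[i, (i + 1) % n] = 1.0
    return L.tocsr() / dx**2

# Usage: node n-1 couples back to node 0, i.e., a repeating unit cell.
L = laplacian_1d_pbc(100, dx=0.01)
```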

I hope to have something within a few weeks. Maybe 3–4 weeks, perhaps as early as 2–3 weeks. The trouble is, as I implement some simulation, some new conceptual aspects crop up, and by the time I finish ironing out the wrinkles in the conceptual framework, the current implementation turns out to be not very suitable for accommodating further changes, and so, I have to implement much of the whole thing afresh. Re-implementation is not a problem, at least not a very taxing one (though it can get tiring). The real problems are the conceptual ones.

For instance, it’s only recently that I’ve realized that there is actually a parallel in my approach to Feynman’s idea of an electron “smelling” its neighbourhood around. In Feynman’s version, the electron not only “smells” but also “runs everywhere” at the same time, with the associated “amplitudes” cancelling out / reinforcing at various places. So, he had a picture of the electron that is not a localized particle and yet smells only a local neighbourhood at each point in the domain. He could not remove this contradiction.

I thought that I had fully removed such contradictions, but only to realize, at this (relatively “late”) stage that while “my electron” is a point-particle (in the sense that the singularity in the potential energy field is localized at a point), it still retains the sense of the “smell”. The difference being, now it can smell the entire universe (action at a distance, i.e. IAD). I knew that so long as I use the Fourier theory the IAD would be there. But it was part-surprise and part-delight for me to notice that even “my” electron must have such a “nose”.

Another thing I learnt was that even though I am addressing only the spinless electron, it looks like my framework very easily and naturally incorporates the spin too, at least so long as I remain in 1D. I had just realized it, and soon (within days) came Dr. Woit’s post “What is “spin”?” [^]. I don’t understand it fully, but now that I see this way of putting things, that’s another detour for me.

All in all, working out the conceptual aspects is taking time. Further, simply due to the rich inter-connections among concepts, I am afraid that even if I publish a document, it’s not going to be “complete”, in the sense that I wouldn’t be able to put into it everything that I have understood by now. So, I am aiming to simply put out something new, rather than something comprehensive. (I am not even thinking of having anything “well polished” for months, even a year or so!)

Alright, so there. Maybe I won’t be blogging for a couple of weeks. But hopefully, I will have something to put out within a month’s time or so…

In the meanwhile, take care, and bye for now…


A song I like:

(Hindi) जादूगर तेरे नैना, दिल जायेगा बच के कहाँ (“jaadugar tere nainaa, dil jaayegaa…”)
Singers: Kishore Kumar, Lata Mangeshkar
Music: Laxmikant-Pyarelal
Lyrics: Rajinder Krishan

[Another song from my high-school days that somehow got thrown up during the recent lockdowns. … When there’s a lockdown in Pune, the streets (and the traffic) look (and “hear”) more like the small towns of my childhood. May be that’s why!]


[Some very minor editing may be effected, but I really don’t have much time—rather, any enthusiasm—for it! So, drop a line if you find something confusing… Take care and bye for now…]


Python scripts for simulating QM, part 2: Vectorized code for the H atom in a 1D/2D/3D box. [Also, un-lockdown and covid in India.]



1. The first cut for the H atom in a 3D box:

The last time [^], I spoke of an enjoyable activity, namely, how to make the tea (and also how to have it).

Talking of other, equally enjoyable things, I have completed the Python code for simulating the H atom in a 3D box.

In the first cut for the 3D code (as also in the previous code in this series [^]), I used NumPy’s dense matrices, and the Python “for” loops. Running this preliminary code, I obtained the following colourful diagrams, and tweeted them:

[Figure: H atom in a 3D box of 1 angstrom sides. Ground state (i.e., 1s, eigenvalue index = 0). All contours taken together show a single stationary state. Contour surfaces plotted with wiremesh. Plotted with Mayavi’s mlab.contour3d().]

[Figure: H atom in a 3D box of 1 angstrom sides. A ‘p’ state (eigenvalue index = 2). All contours taken together show a single stationary state. Contour surfaces with the Gouraud interpolation. Plotted with Mayavi’s mlab.contour3d().]

[Figure: H atom in a 3D box of 1 angstrom sides. A ‘p’ state (eigenvalue index = 2). All contours taken together show a single stationary state. Contour surfaces with wiremesh. Plotted with Mayavi’s mlab.contour3d().]

[Figure: H atom in a 3D box of 1 angstrom sides. Another ‘p’ state (eigenvalue index = 3). All contours taken together show a single stationary state. Contour surfaces with the Gouraud interpolation. Plotted with Mayavi’s mlab.contour3d().]

OK, as far as many (most?) of you are concerned, the enjoyable part of this post is over. So, go read something else on the ‘net.


Coming back to my enjoyment…

2. Sparse matrices. Vectorization of code:

After getting to the above plots with dense matrices and Python “for” loops, I then completely rewrote the whole code using SciPy’s sparse matrices, and put vectorized code in place of the Python “for” loops. (As a matter of fact, in the process of rewriting, I seem to have deleted the plots too. So, today, I took the above plots from my Twitter account!)

2.1 My opinions about vectorizing code, other programmers, interviews, etc:

Vectorization was not really necessary in this problem (an eigenvalue problem), because even if you incrementally build the FD-discretized Hamiltonian matrix, doing so takes less than 1 percent of the total execution time; 99% of the execution time is spent in the SciPy library calls.

Python programmers have a habit of always looking down on the simple “for” loops—and hence, on anyone who writes them. So, I decided to write this special note.

The first thing you do about vectorization is not to begin wondering how best to implement it for a particular problem. The first thing you do is to ask yourself: Is vectorization really necessary here? Ditto, for lambda expressions. Ditto, for list comprehension. Ditto for itertools. Ditto for almost anything that is a favourite of the dumb interviewers (which means, most Indian Python programmers).

Vectorized codes might earn you brownie points among the Python programmers (including those who interview you for jobs). But such codes are more prone to bugs, harder to debug, and definitely much harder to understand, even by you, after a gap of time. Why?

That’s because, practically speaking, while writing in Python, you hardly ever define C-struct-like things. Python does have classes. But these are rather primitive classes. You are not expected to write code around classes, and then put objects into containers. Technically, you can do that, but it’s not at all efficient. So, practically speaking, you are almost always into using the NumPy ndarrays, or similar things (like Pandas, xarrays, dasks, etc.).

Now, once you have these array-like thingies, indexing becomes important. Why? Because, in Python, it is the design of a number of arrays, and the relationships among their indexing schemes, which together “define” the operative data structures. Python, for all its glory, has this un-removable flaw: the design of the data structures is always implicit, never directly visible through a language construct.

So, in Python, it’s the indexing scheme which plays the same part that classes, inheritance, and genericity play in C++. But it’s implicit. So, how you implement the indexing features becomes of paramount importance. A minimal, made-up illustration follows.
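To illustrate what I mean by implicit data structures (a made-up, minimal example of mine):

```python
import numpy as np

# In C you might write: struct Node { double x, y; int neighbour; };
# In idiomatic NumPy the "struct" is implicit: parallel arrays, tied together
# only by the convention that the same index refers to the same node.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 0.5, 1.0])
neighbour = np.array([1, 2, 0])   # an index into x/y: an integer "pointer"

# Fancy indexing then stands in for pointer dereference:
x_of_neighbour = x[neighbour]     # x-coordinate of each node's neighbour
```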

And here, in my informed opinion, the Python syntax for the slicing and indexing operations has been made unnecessarily intricate. I, for one, could easily design an equally powerful semantics that comes with a syntax that’s much easier on the eye.

In case some professionally employed data scientist (especially a young Indian one) takes an offence to my above claim: Yes, I do mean what I say above. And, I also know what I am talking about.

Though I no longer put it on my CV, once, in the late 1990s, I had implemented a Yacc-like tool to output table-driven parsers for LALR(1) languages (like Java and C++). It would take a language specification in the EBNF (Extended Backus-Naur Form) as the input file, and produce the tables for table-driven parsing of that language. I had implemented this thing completely on my own, looking just at the Dragon Book (Aho, Sethi, Ullman). I haven’t had a CS university education. So, I taught myself the compilers theory, and then began straight implementing it.

I looked at no previous code. And even if I had looked at something, it would have been horrible. These were ancient projects, written in C, not in C++, and written using arrays, no STL containers like “map”s. A lot of hard-coding, pre-proc macros, and all that. Eventually, I did take a look at the others’ code, but only in the verification stage. How did my code fare? Well, I didn’t have to change anything critical.

I had taken about 8 months for this exercise (done part time, on evenings, as a hobby). The closest effort was by some US mountain-time university group (consisting of a professor, one or two post-docs, and four-five graduate students). They had taken some 4 years to reach roughly the same place. To be fair, their code had many more features. But yes, both their code and mine addressed those languages which belonged to the same class of grammar specification, and hence, both would have had the same parsing complexity.

I mention it all here mainly in order to “assure” the Indian / American programmers (you know, those BE/BS CS guys, the ones who are right now itching to fail me in any interview, should their HR arrange one for me in the first place) that I do know a bit about what I am talking about when I distinguish the actual computing operations on one hand from the mere syntax for those operations on the other. There are a lot of highly paid Indian IT professionals who never do learn this difference (but who take care to point out that your degree isn’t from the CS field).

So, my conclusion is that despite all its greatness (and I do actually love Python), its syntax does have some serious weaknesses. Not just idiosyncrasies (which are fine) but actual weaknesses. The syntax for slicing and indexing is a prominent part of it.

Anyway, coming back to my present code (for the H atom in the 3D box, using finite difference method), if the execution time was so short, and if vectorization makes a code prone to bugs (and difficult to maintain), why did I bother implementing it?

Two reasons:

  1. I wanted to have a compact-looking code. I was writing this code mainly for myself, so maintenance wasn’t an issue.
  2. In case some programmer / manager interviewing me began acting over-smart, I wanted to have something which I could throw at his face. (Recently, I ran into a woman who easily let out: “What’s there in a PoC (proof of concept, in case you don’t know)? Anyone can do a PoC…” She ranted on a bit, but it was obvious that though she has been a senior manager and all, and lists managing innovations and all, she doesn’t know. There are a lot of people like her in the Indian IT industry: people who act over-smart. An already implemented vectorized code, especially one they find difficult to read, would be a nice projectile to have handy.)

2.2 Sparse SciPy matrices:

Coming back to the present code for the H atom: As I was saying, though vectorization was not necessary, I have anyway implemented the vectorization part.

I also started using sparse matrices.

In case you don’t know, SciPy’s and NumPy’s sparse matrix calls look identical, but they go through different underlying implementations.

From what I have gathered, it seems safe to conclude this much: As a general rule, if doing some serious work, use SciPy’s calls, not NumPy’s. (But note, I am still learning this part.)

With sparse matrices, now, I can easily go to a 50 \times 50 \times 50 domain. I haven’t empirically tested the upper limit on my laptop, though an even bigger mesh should be easily possible. In contrast, earlier, with dense matrices, I was stuck at a 25 \times 25 \times 25 mesh at most. The execution time, too, has reduced drastically.

In my code, I have used only the dok_matrix() calls to build the sparse matrices, and only the tocsr() and tocoo() calls for faster matrix computations in the SciPy eigenvalue calls. These are the only functions I’ve used—I haven’t tried all the pathways that SciPy opens up. However, I think that I have a pretty fast-running code, and that the execution time wouldn’t improve to any significant degree by using some other combination of calls. The kind of call pattern I mean is sketched below.
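Here is a minimal sketch of that call pattern (mine, purely illustrative; the actual Hamiltonian assembly is elided and replaced by a stand-in diagonal, just so the snippet runs):

```python
from scipy.sparse import dok_matrix
from scipy.sparse.linalg import eigsh

n = 20**3                  # a 20 x 20 x 20 mesh, flattened
H = dok_matrix((n, n))     # DOK: cheap to build entry by entry

# ... the FD-discretized Hamiltonian entries would be filled in here;
# as a stand-in, use a simple positive diagonal so that eigsh() runs:
for i in range(n):
    H[i, i] = i + 1.0

H = H.tocsr()              # convert once; CSR gives fast matrix-vector products
# The lowest few eigenpairs of the (Hermitian) Hamiltonian, via shift-invert:
vals, vecs = eigsh(H, k=4, sigma=0.0)
print(vals)                # -> [1. 2. 3. 4.] for the stand-in above
```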

2.3 A notable curiosity:

I also tried, and succeeded to a great degree, in having exactly identical code for all dimensions: 1D, 2D, 3D, and, in principle, even ND. That is to say, no “if–else” statements that lead to different execution paths depending on the dimensionality.

If you understand what I just stated, then you sure would want to have a look at my code, because nothing similar exists anywhere on the ‘net (i.e., within the first 10 pages thrown up by Google during several differently phrased searches covering many different domains).

However, eventually, I abandoned this approach, because it made things too complicated, especially while dealing with computing the Coulomb fields. The part dealing with the discretized Laplacian was, in contrast, easier to implement, and it did begin working fully well, which was when I decided to abandon this entire approach. In case you know a bit about this territory: I had to use numpy.newaxis liberally.

Eventually, I came to abandon this insistence on having only a single set of code lines regardless of the dimensionality, because my programmer’s/engineer’s instincts cried out against it. (Remember, I don’t even like the slicing syntax of Python?) And so, I scrapped it. (But yes, I do have a copy, just in case someone wants to have a look.) For the record, a sketch of one dimension-agnostic way of building the FD Laplacian follows.
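For the record, here is one well-known dimension-agnostic way of building the FD Laplacian, via Kronecker sums. (A sketch of my own; it is not necessarily how my abandoned code did it, which, as noted, leaned on numpy.newaxis.)

```python
from scipy.sparse import csr_matrix, diags, identity, kron

def laplacian_nd(shape, dx=1.0):
    """FD Laplacian for a 1D/2D/3D/... box, one code path for all dimensions."""
    L = csr_matrix((1, 1))   # a 1x1 zero matrix: the neutral seed
    for n in shape:
        D = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format='csr') / dx**2
        I_old = identity(L.shape[0], format='csr')
        # Kronecker sum: extend the old operator by one more dimension
        L = kron(L, identity(n, format='csr')) + kron(I_old, D)
    return L.tocsr()

# laplacian_nd((50,)), laplacian_nd((50, 50)), laplacian_nd((50, 50, 50)):
# the same lines of code handle every dimensionality, with no if-else.
```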

2.4 When to use the “for” loops and when to use slicing + vectorization: A good example:

I always try to lift code if a suitable piece is available ready-made. So, I did a lot of searching for Python/MATLAB code for such things.

As far as the FD implementations of the Laplacian go, IMO, the best piece of Python code I saw (for this kind of a project) was that by Prof. Christian Hill [^]. His code is available for free from the site for a book he wrote; see here [^] for an example involving the finite difference discretization of the Laplacian.

Yes, Prof. Hill has wisely chosen to use only the Python “for” loops when it comes to specifying the IC (the initial condition). Thus, he reserves the vectorization only for the time-stepping part of the code.

Of course, unlike Prof. Hill’s code (transient diffusion), my code involves only eigenvalue computations—no time-stepping. So, one would be even more amply justified in using only the “for” loops for building the Laplacian matrix. Yet, as I noted, I vectorized everything in my code, merely because I felt like doing so. It’s during vectorization that the problem of differing dimensionality came up, which I solved, and then abandoned.

2.5 Use of indexing matrices:

While writing my code, I figured out that a simple trick with using index matrices and arrays makes the vectorization part even more compact (and less susceptible to bugs). So, I implemented this approach—indexing matrices and arrays.

“Well, this is a very well known approach. What’s new?” you might ask. The new part is the use of matrices for indexing, not just arrays. Very well known, sure. But very few people use it anyway.

Again, I was cautious. I wrote the code, looked at it a couple of days later, and made sure that using the indexing matrices really made the code easier to understand—to me, of course. Only then did I decide to retain it.

By using the indexing matrices, the code indeed becomes very clean-looking. It certainly looks far better (i.e., its structure is easier to grasp) than the first lines of code in Prof. Hill’s “do_timestep” function [^]. A tiny illustration of the trick follows.
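In case the phrase “indexing matrix” is unclear, here is a tiny illustrative sketch (mine; it is not my actual code): an integer matrix maps each (i, j) grid node to its row in the flattened system, so neighbour couplings can be written without any arithmetic on flattened offsets.

```python
import numpy as np
from scipy.sparse import coo_matrix

N = 4
idx = np.arange(N * N).reshape(N, N)   # the "indexing matrix": (i, j) -> row

# East-neighbour couplings for every such pair, with no index arithmetic:
rows = idx[:, :-1].ravel()             # every node that has an east neighbour
cols = idx[:, 1:].ravel()              # ... and that neighbour itself
ones = np.ones_like(rows, dtype=float)

# The symmetric off-diagonal part of a 2D FD operator, assembled in one shot:
A = coo_matrix((np.concatenate([ones, ones]),
                (np.concatenate([rows, cols]),
                 np.concatenate([cols, rows]))),
               shape=(N * N, N * N)).tocsr()
```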

2.6 No code-drop:

During my numerous (if not exhaustive) searches, I found that no one posts 3D code for quantum simulation that also uses finite differences (i.e., the simplest numerical technique).

Note, people do post codes for 3D, but only for more complicated approaches like FDTD (finite difference time domain), FEM, (pseudo)spectral methods, etc. People also post code for FDM when the domain is 1D. But no one posts code that is both FDM and 2D/3D. People only post the maths for such a case. Some rare times, they also post the results of the simulations. But they don’t post the 3D FDM code. I don’t know the reason for this.

May be there is some money to be made if you keep some such tricks all to yourself?

Once this idea occurred to me, it was impossible for me not to take it seriously. … You know that I have been going jobless for two years by now. And, further, I did have to invest a definite amount of time and effort in getting those indexing matrices working right, so that the vectorization part becomes intuitive.

So, I too have decided not to post my 3D code anywhere on the ‘net for free. Not immediately, anyway. Let me think about it for a while before I go and post my code.


3. Covid in India:

The process of unlocking down has begun in India. However, the numbers simply aren’t right for anyone to relax (except for the entertainment sections of the Indian media, like the Times of India, Yahoo!, etc.).

In India, we are nowhere near turning the corner. The data about India are such that the time when the flattening might occur is not just hard to predict; with the current data, prediction is impossible.

Yes, I said impossible. I could forward reasoning grounded in sound logic and good mathematics (e.g., things like Shannon’s theorem, von Neumann’s error analysis, etc.), if you want. But I think that to anyone who really knows a fair amount of maths, it isn’t necessary. I think they will understand my point.

Let me repeat: The data about India are such that the time when the flattening might occur is not just hard to predict; with the current data, prediction is impossible.

India’s data pose a certain unique kind of challenge for the data scientist—and they definitely call for some serious apprehension by everyone concerned. The data themselves are such that predictions have to be made very carefully.

If anyone is telling you that India will cross (or has already crossed), say, more than 20 lakh cases, then know that he/she is not speaking from the data, the population size, the social structures, the particular diffusive dynamics of this country, etc. He/she is talking purely from imagination—or from very poor maths.

Ditto, if someone tells you that there are going to be so many cases in this city or that, by this date or that, if the date runs into, say, August.

Given the actual data, in India, projections about number of cases in the future are likely to remain very tentative (having very big error bands).

Of course, you may still make some predictions, like those based on the doubling rate. You would even be justified in using this measure, but only for a very short time-span into the future. The reason is that India’s data carry these two peculiarities:

  1. The growth rate has been, on a large enough scale, quite steady for a relatively long period of time. In India, there has been no exponential growth with a very large log-factor, not even initially (which I attribute to an early enough lockdown). There also has been no flattening (for whatever reasons; but see the next point).
  2. The number of cases per million population still remains small.

Because of 1., the doubling rate can serve as a good short-term estimator when it comes to activities like large-scale resource planning (but it remains valid only for the short term). You will have to continuously monitor the data, and be willing to adjust your plans. Yet, the fact is also that the doubling rate has remained steady long enough that it can certainly be used for short-term planning (including by corporates); the arithmetic is sketched just below.
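For concreteness, the short-term arithmetic I mean is just this (a sketch; the doubling period and the case count are assumed, illustrative numbers, not India’s actual figures):

```python
# If cases have lately been doubling every T_d days, a short-horizon
# projection is plain exponential extrapolation:
T_d = 21.0                 # assumed doubling period, in days (illustrative)
cases_today = 100_000      # assumed current case count (illustrative)
t = 14.0                   # planning horizon, in days

cases_then = cases_today * 2 ** (t / T_d)
print(round(cases_then))   # ~158,740 -- usable over weeks, not over months
```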

However, because of 2., everyone will have to revise their estimates starting from the third week of June, when the effects of the un-locking down begin to become visible (not just in the hospitals or the quarantine centres, but also in terms of aggregated numbers).

Finally, realize that 1. matters only to the policy-makers (whether in government or in corporate sectors).

What matters to the general public at large is this one single question: Have we turned the corner already? If not, when will we do that?

The short answers are: “No,” and “Can’t tell, as of today.”

In India’s case, the data themselves are such that no data scientist worth his salt would be able to predict the time of flattening with any good accuracy—as of today. Nothing clear has emerged in the data, even after 2.5 months. Since this sentence is very likely to be misinterpreted, let me explain.

I am not underestimating the efforts of the Indian doctors, nurses, support staff, police, and even other government agencies. If they were not to be in this fight, the data would’ve been far simpler to analyse—and far more deadly.

Given India’s population size, its poverty, its meagre medical resources, the absence of civic discipline, the illiteracy (which makes using symbols for political parties indispensable at the time of elections)… Given all such factors, the very fact that India’s data even today (after 2.5 months) still manage to remain hard to analyse suggests, to my mind, this conclusion:

There has been a very hard tussle going on between man and the virus so that no definitive trend could emerge either way.

There weren’t enough resources for the flattening to have occurred by now. If you kept that expectation to begin with, you were ignoring reality.

However, in India, the fight has been such that it must have been very tough on the virus too—else, the exponential function is too bad for us, and it is too easy for the virus.

The inability to project the date by which the flattening might be reached, must be seen in such a light.

The picture will become much clearer starting from two weeks in the future, because it would then begin reflecting the actual effects that the unlocking is producing right now.

So, if you are in India, take care even if the government has now allowed you to step out, go to office, and all that. But remember, you have to take even more care than you did during the lock-down, at least for the next one month or so, until the time that even if faint, some definitely discernible trends do begin to emerge, objectively speaking.

I sincerely hope that everyone takes precautions, so that we begin to see even just an approach towards the flattening. Realize, the number of cases and the number of deaths keep increasing until the flattening occurs. So, take extra care, now that the diffusivity of people has increased.

Good luck!


A song I like:

(Western, instrumental): Mozart, Piano concerto 21, k. 467, second movement (andante in F major).

Listen, e.g., at this [^] YouTube video.

[I am not too much into Western classical music, though I have listened to a fair deal of it. I would spend hours in UAB’s excellent music library listening to all sorts of songs, though mostly Western classical. I would also sometimes make on-the-fly requests to the classical music channel of UAB’s radio station (or was it a local radio station? I no longer remember). I didn’t always like what I listened to, but I continued listening a lot anyway.

Then, as I grew older, I began discovering that, as far as the Western classical music goes, very often, I actually don’t much appreciate even some pieces that are otherwise very highly regarded by others. Even with a great like Mozart, there often are places where I can’t continue to remain in the flow of the music. Unknowingly, I come out of the music, and begin wondering: Here, in this piece, was the composer overtaken by a concern to show off his technical virtuosity rather than being absorbed in the music? He does seem to have a very neat tune somewhere in the neighbourhood of what he is doing here. Why doesn’t he stop tinkling the piano or stretching the violin, stop, think, and resume? I mean, he was composing music, not just blogging, wasn’t he?

The greater the composer or the tune suggested by the piece, the greater is this kind of a disappointment on my part.

Then, at other times, these Western classical folks do the equivalent of analysis-paralysis. They get stuck into the same thing for seemingly forever. If composing music is difficult, composing good music in the Western classical style is, IMHO, exponentially more difficult. That’s the reason why despite showing a definite “cultured-ness,” purely numbers-wise, most Western classical music tends to be boring. … Most Indian classical music also tends to be very boring. But I will cover it on some other day. Actually, one day won’t be enough. But still, this post is already too big…

Coming to the Western classical, Mozart, and the song selected for this time: I think that if Mozart were to do something different with his piano concerto no. 20 (k. 466), then I might have actually liked it as much as k. 467, perhaps even better. (For a good YouTube video on k. 466, see here [^].)

But as things stand, it’s k. 467. It is one of the rarest Western (or Eastern) classical pieces that can auto ride on my mind at some unpredictable moments; also one of the rare pieces that never disappoint me when I play it. Maybe that’s because I don’t play it unless I am in the right mood. A mood that’s not at all bright; a mood that suggests as if someone were plaintively raising the question “why? But why?”. (Perhaps even: “But why me?”) It’s a question not asked to any one in particular. It’s a question raised in the midst of continuing to bear either some tragic something, or, may be, a question raised while in the midst of having to suffer the consequences of someone else’s stupidity or so. … In fact, it’s not even a question explicitly raised. It’s to do with some feeling which comes before you even become aware of it, let alone translate it into a verbal question.  I don’t know, the mood is something like that. … I don’t get in that kind of a mood very often. But sometimes, this kind of a mood is impossible to avoid. And then, if the outward expression of such a mood also is this great, sometimes, you even feel like listening to it… The thing here is, any ordinary composer can evoke pathos. But what Mozart does is in an entirely different class. He captures the process of forming that question clearly, alright. But he captures the whole process in such a subdued manner. Extraordinary clarity, and extraordinary subdued way of expressing it. That’s what appeals to me in this piece… How do I put it?… It’s the epistemological clarity or something like that—I don’t know. Whatever be the reason, I simply love this piece. Even if I play it only infrequently.

Coming back to the dynamic k. 466 vs. the quiet, sombre, even plaintive k. 467, I think, the makers of the “Elvira Madigan” movie were smart; they correctly picked up the k. 467, and only the second movement, not others. It’s the second movement that’s musically extraordinary. My opinion, anyway…

Bye for now.

]