My proposed paper on my new approach to QM was not accepted at the international conference where I had sent my abstract. (For context, see the post before the last, here [^] ).

“Thank God,” that’s what I actually felt when I received this piece of news, “I can now immediately proceed to procrastinate on writing the full-length paper, and also, simultaneously, un-procrastinate on writing some programs in Python.”

So far, I have written several small and simple code-snippets. All of these were for the usual (text-book) cases, and all of them in 1D. Here in this post, I will mention specifically which ones…

**Time-independent Schrodinger equation (TISE):**

Here, I’ve implemented a couple of scripts: one for finding the eigenvectors and eigenvalues for a particle in a box (with both zero and arbitrarily specified potentials), and another for the quantum simple harmonic oscillator.

These were written not with the shooting method (which is the method used in the article by Rhett Allain for Wired magazine [^]) but with the matrix method. … Yes, I have gone past the stage of writing all the numerical analysis algorithms from scratch, all by myself. These days, I directly use Python libraries wherever available, e.g., NumPy’s linalg routines. That’s why I preferred the matrix method. … My code was not written from scratch; it was based on Ian Cooper’s code “qp_se_matrix” (here [PDF ^]).
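To indicate the sort of thing the matrix method involves, here is a minimal sketch of my own (this is illustrative; it is not Cooper’s code, and the grid size, natural units $\hbar = m = 1$, and the box length are my assumptions): discretize the Hamiltonian on a grid, and hand the resulting symmetric matrix to NumPy’s eigen-solver.

```python
import numpy as np

# Matrix method for the TISE: discretize -(1/2) psi'' + V psi = E psi
# (natural units, hbar = m = 1) on a uniform grid; the Hamiltonian then
# becomes a symmetric tridiagonal matrix, and eigh() does the rest.

N = 500                                  # number of interior grid points
L = 1.0                                  # box length
x = np.linspace(0.0, L, N + 2)[1:-1]     # interior nodes; psi = 0 at the walls
dx = x[1] - x[0]

V = np.zeros(N)                          # particle in a box: zero potential inside

# kinetic part: second-order central difference for the Laplacian
main = 1.0 / dx**2 + V                   # diagonal
off = -0.5 / dx**2 * np.ones(N - 1)      # off-diagonals
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)               # eigenvalues ascending; eigenvectors in columns

# exact infinite-well levels for comparison: E_n = n^2 pi^2 / 2 (L = 1)
E_exact = np.array([n**2 * np.pi**2 / 2.0 for n in (1, 2, 3)])
```

An arbitrary potential goes in simply by filling the `V` array; the SHO case, for instance, only changes that one line.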

**Time-dependent Schrodinger equation (TDSE):**

Here, I tried out a couple of scripts.

The first one was more or less a straightforward port of Ian Cooper’s program “se_fdtd” [PDF ^] from the original MATLAB to Python. The second one was James Nagel’s Python program (written in 2007 (!) and hosted as a SciPy CookBook tutorial, here [^]). Both follow essentially the same scheme.

Initially, I found this scheme to be a bit odd to follow. Here is what it does.

It starts out by replacing the complex-valued Schrodinger equation with a pair of real-valued (time-dependent) equations. That was perfectly OK by me. It was their discretization which I found to be a bit peculiar. The discretization scheme here is second-order in *both* space and time, and yet it involves *explicit* time-stepping. That’s peculiar, so let me write a detailed note below (in part, for my own reference later on).
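For reference, writing $\psi = R + iI$ and separating $i\hbar\,\partial\psi/\partial t = -(\hbar^2/2m)\,\partial^2\psi/\partial x^2 + V\psi$ into its real and imaginary parts gives the coupled pair:

```latex
\frac{\partial R}{\partial t} = -\frac{\hbar}{2m}\,\frac{\partial^2 I}{\partial x^2} + \frac{V}{\hbar}\, I,
\qquad
\frac{\partial I}{\partial t} = +\frac{\hbar}{2m}\,\frac{\partial^2 R}{\partial x^2} - \frac{V}{\hbar}\, R.
```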

Also note: Though both Cooper and Nagel implement essentially the same method, Nagel’s program is written in Python, and so, it is easier to discuss (because the array-indexing is 0-based). For this reason, I might make a direct reference only to Nagel’s program even though it is to be understood that the same scheme is found implemented also by Cooper.

**A note on the method implemented by Nagel (and also by Cooper):**

What happens here is that like the usual Crank-Nicolson (CN) algorithm for the diffusion equation, this scheme too puts the half-integer time-steps to use (so as to have a *second*-order approximation for the *first*-order derivative, that of time). However, in the present scheme, the half-integer time-steps turn out to be not entirely *fictitious* (the way they are, in the usual CN method for the single real-valued diffusion equation). Instead, *all* of the half-integer instants are *fully* real here in the sense that they do enter the final discretized equations for the time-stepping.

The way *that* comes to happen is this: There are *two* real-valued equations to solve here, *coupled* to each other—one each for the real and imaginary parts. Since *both* the equations have to be solved at *each* time-step, what this method does is to take advantage of that already existing splitting of the time-step, and implements a scheme that is staggered in *time.* (Note, this scheme is *not* staggered in space, as in the usual CFD codes; it is staggered only in time.) Thus, since it is staggered and explicit, even the finite-difference quantities that are defined only at the *half-integer* time-steps, *also* get directly involved in the calculations. How precisely does *that* happen?

The scheme defines, allocates memory storage for, and computationally evaluates the equation for the *real* part, but this computation occurs only at the *full*-integer instants ($t = n\,\Delta t$). Similarly, this scheme also defines, allocates memory for, and computationally evaluates the equation for the *imaginary* part; however, this computation occurs only at the *half*-integer instants ($t = (n + 1/2)\,\Delta t$). The particulars are as follows:

The initial condition (IC) being specified is, in general, complex-valued. The real part of this IC is set into a space-wide array defined for the instant $t = 0$. Then, the imaginary part of the *same* IC is set into a separate array which is defined nominally for a *different* instant: $t = \Delta t/2$. Thus, even if both parts of the IC are specified at $t = 0$, the numerical procedure treats the imaginary part as if it was set into the system only at the instant $t = \Delta t/2$.

Given this initial set-up, the actual time-evolution proceeds as follows:

- The real part already available at $t = 0$ is used in evaluating the “future” imaginary part—the one at $t = \Delta t/2$.
- The imaginary part thus found at $t = \Delta t/2$ is used, in turn, for evaluating the “future” real part—the one at $t = \Delta t$.

At this point, you are allowed to say: lather, wash, repeat… Figure out exactly how. In particular, notice how the simulation must proceed in an integer number of *pairs* of computational steps, and how the imaginary part is only nominally (i.e., only computationally) distant in time from its corresponding real part.

Thus, overall, the discretization of the space part is pretty straight-forward here: the second-order derivative (the Laplacian) is replaced by the usual second-order finite difference approximation. However, for the time-part, what this scheme does is both similar to, and different from, the usual Crank-Nicolson scheme.

*Like* the CN scheme, the present scheme also uses the half-integer time-levels, and thus manages to become a second-order scheme for the time-axis too (not just space), even though the actual time interval for each time-step remains, exactly as in the CN, the full $\Delta t$, not $\Delta t/2$.
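The reason the half-integer levels buy second-order accuracy in time is the usual one: a forward difference over one step is, in fact, a *centered* difference about the half-step,

```latex
\frac{f^{\,n+1} - f^{\,n}}{\Delta t}
= \left.\frac{\partial f}{\partial t}\right|_{t^{\,n+1/2}} + O(\Delta t^2),
```

and the staggered layout makes the right-hand side available exactly at that half-step, with no fictitious interpolation needed.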

However, *unlike* the CN scheme, this scheme still remains *explicit*. That’s right. No matrix equation is being solved at any time-step. You just zip through the update equations.

Naturally, the zipping through comes with a “cost”: the very scheme itself comes equipped with a *stability criterion*; it is *not* unconditionally stable (the way CN is). In fact, the stability criterion now refers to *half* of the time-interval, not the full one, and is thus even a bit more restrictive as to how big the time-step ($\Delta t$) can be for a given granularity of the space-discretization ($\Delta x$). … I don’t know, but I guess that this is how they handle the first-order time derivatives in the FDTD (finite-difference time-domain) method. Maybe the physics of their problems is itself such that they can get away with coarser grids without being physically too inaccurate, who knows…
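Putting the above together, the staggered, explicit time-stepping can be sketched as follows. This is a minimal free-particle illustration in natural units ($\hbar = m = 1$), written in the spirit of Nagel’s and Cooper’s codes but copied from neither; the grid parameters and the Gaussian-cosine pulse values are my own picks for illustration.

```python
import numpy as np

# Staggered, explicit updates for the TDSE split into real/imaginary parts
# (hbar = m = 1):
#   dR/dt = -(1/2) d2I/dx2 + V I,    dI/dt = +(1/2) d2R/dx2 - V R
# R lives at the integer instants, I at the half-integer ones (leap-frog in time).

N, dx, dt = 400, 0.1, 0.004     # dt kept well inside the stability limit (~dx^2 here)
x = np.arange(N) * dx
V = np.zeros(N)                 # free particle; end-nodes stay pinned at zero

# IC: a cosine pulse under a Gaussian envelope; I is nominally half a step behind R
x0, sigma, k0 = x[N // 2], 1.0, 3.0
envelope = np.exp(-((x - x0) ** 2) / (2.0 * sigma**2))
R = envelope * np.cos(k0 * x)
I = envelope * np.sin(k0 * x)

c = 0.5 * dt / dx**2
norm0 = np.sum(R**2 + I**2) * dx    # should stay (approximately) constant

for _ in range(500):
    # I at the next half-integer instant, from R at the current integer instant
    I[1:-1] += c * (R[2:] - 2.0 * R[1:-1] + R[:-2]) - dt * V[1:-1] * R[1:-1]
    # R at the next integer instant, from the freshly updated I
    R[1:-1] += -c * (I[2:] - 2.0 * I[1:-1] + I[:-2]) + dt * V[1:-1] * I[1:-1]

norm = np.sum(R**2 + I**2) * dx
```

Note how each pass of the loop is one *pair* of updates, and how no matrix equation ever gets solved: the two in-place slice updates are the entire time-step.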

**Other aspects of the codes by Nagel and Cooper:**

For the initial condition, both Cooper and Nagel begin with a “pulse” of a cosine function modulated to have a Gaussian envelope. In both their codes, the pulse is placed in the middle of the domain, and they both *terminate* the simulation when the pulse reaches an end of the finite domain. I didn’t like this aspect of an arbitrary termination of the simulation.

However, I am still learning the ropes of numerically handling the complex-valued Schrodinger equation. In any case, I am not sure if I’ve got a good enough handle on the FDTD-like aspects of it. In particular, as of now, I am left wondering:

What if I have a second-order scheme for the first-order time derivative, but one that comes with only *fictitious* half-integer time-steps (the way it does in the usual Crank-Nicolson method for the real-valued diffusion equation)? In other words: What if I continue to have a second-order scheme for time, and yet my scheme *does not use leap-frogging*? In still other words: What if I define *both* the real and imaginary parts at the *same* integer time-steps, so that, in both cases, their values at the instant $t = n\,\Delta t$ are directly fed into both their values at $t = (n+1)\,\Delta t$?

In a way, this scheme seems simpler, in that no leap-frogging is involved. However, notice that it would also be an *implicit* scheme. I would have to solve two matrix-equations at each time-step. But then, I could perhaps get away with a larger time-step than what Nagel or Cooper use. What do you think? Is checker-board patterning (the main reason why we at all use staggered grids in CFD) an issue here—in *time* evolution? But isn’t the unconditional stability too good to leave aside without even trying? And isn’t the time-axis just one-way (unlike the space-axis that has BCs at both ends)? … I don’t know…
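For what it is worth, the implicit, non-staggered alternative I have in mind would look something like the following sketch: the complex-valued $\psi$ is kept whole, and the Crank-Nicolson (Cayley) form is solved at each step. The grid values and the use of a dense solve are my own choices for illustration; the point to note is that $\Delta t$ here is several times the explicit limit, and the Cayley form is unitary, so the norm holds exactly.

```python
import numpy as np

# Implicit, non-staggered CN for the complex TDSE (hbar = m = 1):
#   (1 + i dt/2 H) psi^{n+1} = (1 - i dt/2 H) psi^n   (Cayley form; unitary)

N, dx, dt = 200, 0.1, 0.05     # dt is 5x the explicit limit (~dx^2)
x = np.arange(N) * dx
V = np.zeros(N)

# dense Hamiltonian (a sparse/tridiagonal version would of course be better)
lap = (np.diag(-2.0 * np.ones(N))
       + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / dx**2
H = -0.5 * lap + np.diag(V)

A = np.eye(N) + 0.5j * dt * H   # implicit (future) side
B = np.eye(N) - 0.5j * dt * H   # explicit (present) side

# a Gaussian pulse with a plane-wave factor, normalized to 1
x0, sigma, k0 = x[N // 2], 1.0, 2.0
psi = np.exp(-((x - x0) ** 2) / (2.0 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

for _ in range(50):
    psi = np.linalg.solve(A, B @ psi)   # one matrix equation per time-step

norm = np.sum(np.abs(psi) ** 2) * dx    # the Cayley form preserves this exactly
```

Since both the real and imaginary parts now sit at the same integer instants, there is no leap-frogging anywhere; the cost is the solve at each step, the gain is the unconditional stability.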

**PBCs and ABCs:**

Even as I was (and am) still grappling with the above-mentioned issue, I also wanted to make some immediate progress on the front of not having to terminate the simulation (once the pulse reached one of the ends of the domain).

So, instead of working right from the beginning with a (literally) complex Schrodinger equation, I decided to first model the simple (real-valued) diffusion equation, and to implement the PBCs (periodic boundary conditions) for it. I did.

My code seems to work: the integral of the dependent variable (i.e., the total amount of the diffusing quantity present in the entire domain—one with the topology of a ring) does stay constant—as is promised by the Crank-Nicolson scheme. The integral stays “*numerically* the same” (within a small tolerance) even though, obviously, there now are fluxes at both the ends. (An initial condition of a symmetrical saw-tooth profile does come to asymptotically approach the horizontal straight line at the mean level. That is what happens at run-time, so obviously, the scheme seems to handle the fluxes right.)
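A minimal sketch of what such a CN-with-PBCs code amounts to (this is illustrative, not my actual script; the grid values and the saw-tooth amplitude are my own picks):

```python
import numpy as np

# Crank-Nicolson for u_t = D u_xx on a ring (periodic BCs). The wrap-around
# entries make the Laplacian cyclic; since each of its columns sums to zero,
# the total sum(u)*dx is conserved exactly by the CN update.

N, dx, dt, D = 100, 0.1, 0.05, 1.0
r = D * dt / (2.0 * dx**2)

lap = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
lap[0, -1] = lap[-1, 0] = 1.0        # the ring topology: node 0 touches node N-1

A = np.eye(N) - r * lap              # implicit side
B = np.eye(N) + r * lap              # explicit side

# a symmetrical saw-tooth initial profile
s = np.linspace(-1.0, 1.0, N, endpoint=False)
u = 1.0 - 2.0 * np.abs(s)
total0 = np.sum(u) * dx

for _ in range(200):
    u = np.linalg.solve(A, B @ u)

total = np.sum(u) * dx               # conserved, despite fluxes at both "ends"
flat = np.max(u) - np.min(u)         # shrinks as u relaxes to the flat mean level
```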

Anyway, I don’t always write everything from scratch; I am a great believer in lifting code already written by others (with attribution, of course :)). Thus, while searching on the ‘net for some already existing resources on numerically modeling the Schrodinger equation (preferably with code!), I also ran into some papers on the simulation of the SE using ABCs (i.e., absorbing boundary conditions). I was not sure, however, if I should implement the ABCs immediately…

As of today, I think that I am going to try and graduate from the transient diffusion equation (with the CN scheme and PBCs) to a trial of the implicit TDSE without leap-frogging, as outlined above. The only question is whether I should throw in the PBCs to go with it, or the ABCs. Or, maybe, *neither*, and just keep pinning the values at the end- and ghost-nodes down to $0$, thereby effectively putting the entire simulation inside an infinite box?

At this point of time, I am tempted to try out the last option. Thus, I think that I would rather first explore the staggering vs. non-staggering issue for a pulse in an infinite box, and understand it better, before proceeding to implement either the PBCs or the ABCs. Of course, I still have to think more about it… But hey, as I said, I am now in a mood of implementing, not of contemplating.

**Why not upload the programs right away?**

BTW, all these programs (TISE with matrix method, TDSE on the lines of Nagel/Cooper’s codes, transient DE with PBCs, etc.) are still in a fluid state, and so, I am not going to post them immediately here (though over a period of time, I sure would).

The reason for not posting the code runs something like this: Sometimes, I use Python range objects for indexing. (I saw this goodie in Nagel’s code.) At other times, I don’t. But even when I don’t use the range objects, I am anyway tempted to revise the code so as to have them (for better run-time efficiency).

Similarly, for the CN method, when it comes to solving the matrix equation at each time-step, I am still not using the TDMA (the Thomas algorithm), or even just sparse matrices. Instead, right now, I am allocating the full-sized dense matrices, and am directly using NumPy’s linalg solve() function on these biggies. No, the computational load doesn’t show up; after all, I anyway have to use a 0.1 second pause in between the rendering passes, and the biggest matrices I tried were only modest in size. (Remember, this is just a simulation.) Even then, I am tempted a bit to improve the efficiency. For these and similar reasons, some or the other tweaking is still going on in all the programs. That’s why, I won’t be uploading them right away.
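When I do get to the TDMA, it would look something like this sketch (my own illustrative code, not taken from any of the programs mentioned above; the system size and coefficients are arbitrary):

```python
import numpy as np

def thomas(lower, main, upper, d):
    """Solve a tridiagonal system in O(N) (the Thomas algorithm / TDMA).
    lower[i] couples row i to row i-1 (lower[0] unused); upper[i] couples
    row i to row i+1 (upper[-1] unused)."""
    n = len(main)
    c = np.empty(n)
    x = np.empty(n)
    # forward sweep: eliminate the sub-diagonal
    c[0] = upper[0] / main[0]
    x[0] = d[0] / main[0]
    for i in range(1, n):
        denom = main[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / denom
        x[i] = (d[i] - lower[i] * x[i - 1]) / denom
    # back substitution
    for i in range(n - 2, -1, -1):
        x[i] -= c[i] * x[i + 1]
    return x

# a CN-style tridiagonal system: (1 + 2r) on the diagonal, -r off it
N, r = 500, 0.4
main = (1.0 + 2.0 * r) * np.ones(N)
lower = -r * np.ones(N); lower[0] = 0.0
upper = -r * np.ones(N); upper[-1] = 0.0
d = np.sin(np.linspace(0.0, np.pi, N))

x_tdma = thomas(lower, main, upper, d)

# cross-check against the dense solve currently used in my scripts
A = (np.diag(main) + np.diag(-r * np.ones(N - 1), 1)
     + np.diag(-r * np.ones(N - 1), -1))
x_dense = np.linalg.solve(A, d)
```

The O(N) sweep versus the O(N^3) dense solve is the entire point; the answers agree to machine precision for a diagonally dominant system like this one.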

**Anything else about my new approach, like delivering a seminar or so? Any news from the Indian physicists?**

I had already contacted a couple of physics professors from India, both from Pune: one about 1.5 years ago, and another within the last 6 months. Both times, I offered to become a co-guide for some computational physics projects to be done by their PG/UG students. Both times (what else?), there was absolutely no reply to my emails. … Had they responded, we could have together progressed further on simulating my approach. … I have always been “open” about it.

The above-mentioned experience is precisely similar to how there have been no replies when I wrote to some *other* professors of physics, i.e., when I offered to conduct a seminar (covering my new approach) in their departments. This includes, particularly, the young IISER Pune professor to whom I had written. … Oh yes, BTW, there has been one more physicist whom I contacted recently for a seminar (within the last month). Once again, there has been no reply. (This professor is known to enjoy hospitality abroad as an Indian, and also to use taxpayers’ money for research while in India.)

No, the issue is not whether the emails I write using my Yahoo! account go into their spam folder—or something like that. That would be too innocuous a cause, and too easy to deal with—everyone has a mobile phone these days. But I know these (Indian) physicists. Their behaviour remains exactly the same even if I write my emails using a respectable academic email ID (my employer’s, complete with a .edu domain). This was my experience in 2016, and it repeated again in 2017.

The bottom-line is this: If you are an engineer and you write to these Indian physicists, there is almost a guarantee that your emails will go into a black-hole. They will not reply to you even if you yourself have a PhD, are a Full Professor of engineering (even if only on an ad-hoc basis), have studied and worked abroad, and even if your blog is followed internationally. So long as you are an engineer and mention QM, the Indian physicists simply shut themselves off.

However, there is a trick to get them to reply to you. Their behavior does temporarily change when you put some impressive guy in your cc-field (e.g., some professor friend of yours from some IIT). In this case, they sometimes do reply to your first email. However, soon after that initial shaking of hands, they somehow go back to their true core; they shut themselves off.

And this is what invariably happens with *all* of them—no matter what other Indian bloggers might have led you to believe.

There must be some systemic reasons for such behavior, you say? Here, I will offer a couple of relevant observations.

Systemically speaking, Indian physicists, taken as a group (and leaving any possible rarest-of-the-rare exceptions aside), all fall into one band: (i) The first commonality is that they all are government employees. (ii) The second commonality is that they all tend to be leftists (or heavily leftists). (iii) The third commonality, which they by and large share, is that they had lower (or far lower) competitive scores in the entrance examinations at gateway points like the XII, GATE/JAM, etc.

The first factor typically means that they know that no one is going to ask them why they didn’t reply (even to people with my background). The second factor typically means that they don’t want to give you any mileage, not even just plain academic respect, if you are not already one of “them”. The third factor typically means that they simply don’t have the intellectual means to understand or judge anything you say if it is original—i.e., if it is not based on some work by someone from abroad. In plain words: they *are* incompetent. (That, in part, is the reason why, whenever I run into a competent Indian physicist, it is both a surprise and a pleasure. To drop a couple of names: Prof. Kanhere (now retired) from UoP (now SPPU), and Prof. Waghmare of JNCASR. … But leaving aside this minuscule minority and coming to the rest of the herd: the less said, the better.)

In short, Indian physicists all fall into a band. And they all are *very* classical—no tunneling is possible. Not with these Indian physicists. (The trends, I guess, are similar all over the world. Yet, I definitely can say that Indians are worse, far worse, than people from the advanced, Western, countries.)

Anyway, as far as the path through the simulations goes, since no help is going to come from these government servants (regarded as physicists by foreigners), I have now realized that I have to get going about it—simulations for my new approach—entirely on my own. If necessary, from the most basic of the basics. … And that’s how I got going with these programs.

**Are these programs going to provide a peek into my new approach?**

No, none of these programs I talked about in this post is going to be very *directly* helpful for simulations related to my new approach. The programs I wrote thus far are all very, very standard (simplest UG text-book level) stuff. If resolving QM riddles were that easy, any number of people would have done it already.

… So, the programs I wrote over the last couple of weeks are nothing but just a beginning. I have to cover a lot of distance. It may take months, perhaps even a year or so. But I intend to keep working at it. At least in an off and on manner. I have no choice.

And, at least currently, I am going about it at a fairly good speed.

For the same reason, expect no further blogging for another 2–3 weeks or so.

But one thing is for certain. As soon as my paper on my new approach (to be written after running the simulations) gets written, I am going to quit QM. The field does not hold any further interest for me.

Coming to *you*: If you still wish to know more about my new approach before the paper gets written, then convince these Indian professors of physics to arrange for my seminar. Or, else…

…

…

…

…

…

…

…

…

…

…

… What else? Simple! You. Just. Wait.

[Or see me in person if you would be visiting India. As I said, I have always been “open” from my side, and I continue to remain so.]

**A song I like:**

(Hindi) “bheegee bheegee fizaa…”

Music: Hemant Kumar

Singer: Asha Bhosale

Lyrics: Kaifi Aazmi

*History:*

Originally published: 2018.11.26 18:12 IST

Extension and revision: 2018.11.27 19:29 IST