I. A general update regarding my on-going research work (on my new approach to QM):

1.1 How the development is actually proceeding:

I am working through my new approach to QM. These days, I write down something and/or implement some small and simple Python code snippets (< 100 LOC) every day. So, it’s almost on a daily basis that I am grasping something new.

The items of understanding are sometimes related to my own new approach to QM, and at other times, just about the mainstream QM itself. Yes, in the process of establishing a correspondence of my ideas with those of the mainstream QM, I am getting to learn the ideas and procedures from the mainstream QM too, to a better depth. … At other times, I learn something about the correspondence of both the mainstream QM and my approach, with the classical mechanics.

Yes, at times, I also spot some inconsistencies within my own framework! It too happens! I’ve spotted several “misconceptions” that I myself have had—regarding my own approach!

You see, when you are developing a new theory ab initio, it’s impossible to pursue the development very systematically. It’s impossible to be right about everything, right from the beginning. That’s because the very theory itself is not fully known to you while you are still developing it! The neatly worked out structure, its best possible presentations, the proper hierarchical relations… all of these emerge only some time later.

Yes, you do have some overall, “vaguish” idea(s) about the major themes that are expected to hold the new theory together. You do know many elements that must be definitely there.

In my case, such essential themes or theoretical elements go, for example, like: the energy conservation principle, the reality of some complex-valued field, the specific (natural) form of the non-linearity which I have proposed, my description of the measurement process and of Born’s postulate, the role that the Eulerian (fixed control volume-based) formulations play in my theorization, etc.

But all these are just elements. Even when tied together, they still amount to only an initial framework. Many of these elements may eventually turn out to play an over-arching role in the finished theory. But during the initial stages (including the stage I am in), you can’t even tell which element is going to play a greater role. All the elements are just loosely (or flexibly) held together in your mind. Such a loosely held set does not qualify to be called a theory. There are lots and lots (and lots) of details that you still don’t even know exist. You come to grasp these only on the fly, only as you are pursuing the “fleshing out” of the “details”.

Once the initial stage is over, and you are going through the fleshing-out stage, the development has a way of progressing on multiple threads of thought, simultaneously.

There are insights or minor developments, or simply new validations of some earlier threads, which occur almost on a daily basis. Each is a separate piece of a small little development; it makes sense to you; and all such small little pieces keep adding up—in your mind and in your notebooks.

Still, there is not much to share with others, simply because in the absence of a knowledge of all that’s going through your mind, any pieces you share are simply going to look as if they were very haphazard, even “random”.

1.2. At this stage, others can easily misunderstand what you mean:

Another thing. There is also a danger that someone may misread you.

For example, he may not be clear on many other points which you have not noted explicitly.

Or, maybe, you have noted your points somewhere, but he hasn’t yet gone through them. In my case, it is the entirety of my Ontologies series [^]. … Going by the patterns of hits at this blog, I doubt whether any single soul has ever read through them all—apart from me, that is. But this entire series is very much alive in my mind when I note something here or there, including on Twitter too.

Or, sometimes, there is a worse possibility too: The other person may read what you write quite alright, but what you wrote down itself was somewhat misleading, perhaps even wrong!

Indeed, recently, something of this sort happened when I had a tiny correspondence with someone. I had given a link to my Outline document [^]. He went through it, and then quoted from it in his reply to me. I had said, in the Outline document, that the electrons and protons are classical point-particles. His own position was that they can’t possibly be. … How could I possibly reply to him? I actually could not. So, I did not!

I distinctly remember that right when I was writing this point in the Outline document, I had very much hesitated precisely at it. I knew that the word “classical” was going to create a lot of confusion. People use it almost indiscriminately: (i) for the ontology of Newtonian particles, (ii) for the ontology of Newtonian gravity, (iii) for the ontology of the Fourier theory (though very few people think of this theory in the context of ontologies), (iv) for the ontology of EM as implied by Maxwell, (v) for the ontology of EM as Lorentz was striving to get at, and succeeded brilliantly in so many essential respects (but not all, IMO), etc.

However, if I were to spend time on getting this portion fully clarified (first to myself, and then for the Outline document), then I also ran the risk of missing out on noting many other important points which also were fairly nascent to me (in the sense, I had not noted them down in a LaTeX document). These points had to be noted on priority, right in the Outline document.

Some of these points were really crucial—the $V(x,t)$ field as being completely specified in reference to the elementary charges alone (i.e. no arbitrary PE fields), the non-linearity in $\Psi(x,t)$, the idea that it is the Instrument’s (or Detector’s) wavefunction which undergoes a catastrophic change—and not the wavefunction of the particle being measured, etc. A lot of such points. These had to be noted, without wasting my time on what precisely I meant when I used the word “classical” for the point-particle of the electron etc.

Yes, I did identify that the elementary particles were to be taken as conditions in the aether. I did choose the word “background object” merely in order to avoid any confusion with Maxwell’s idea of a mechanical aether. But I myself wasn’t fully clear on all aspects of all the ideas. For instance, I still was not familiar with the differences of Lorentz’s aether from Maxwell’s.

All in all, a document like the Outline document had to be an incomplete document; it had to come out in the nature of a hurried job. In fact, it was so. And I identified it as such.

I myself gained a fuller clarity on many of these issues only while writing the Ontologies series, which happened some 7 months after putting out the Outline document online. And it was even as recently as last month (i.e., about 1.5 years after the Outline document) that I was still further revising my ideas regarding the correspondence between QM and CM. … Indeed, this still remains a work in progress… I am maintaining handwritten notes and LaTeX files too (sort of like “journals” or “diaries”).

All in all, sharing a random snapshot of a work-in-progress always carries such a danger. If you share your ideas too early, while they still are being worked out, you might even end up spreading some wrong notions! And when it comes to theoretical work, there is no product-recall mechanism here—at all! Detrimental to your goals, after all!

1.3 How my blogging is going to go, in the next few weeks:

So, though I am passing through a very exciting phase of development these days, and though I do feel like sharing something or the other on an almost daily basis, when I sit down and think of writing a blog post, unfortunately, I find that there is very little that I can actually share.

For this very reason, my blogging is going to be sparse over the coming weeks.

However, in the meanwhile, I might post some brief entries, especially regarding papers/notes/etc. by others. As in this post.

OTOH, if you want something bigger to think about, see the Q&A answers from my last post here. That material is enough to keep you occupied for a couple of decades or more… I am not joking. That’s what’s happened to others; it has happened to me; and I can guarantee you that it would happen to you too, so long as you keep forgetting whatever you’ve read about my new approach. You could then very easily spend decades and decades (and decades)…

Anyway, coming back to some recent interesting pieces by others…

2.1. Luboš Motl on TerraPower, Inc.:

Dr. Luboš Motl wrote a blog-post of the title “Green scientific illiteracy enters small nuclear reactors, too” [^]. This piece is a comment on TerraPower’s proposal. In case you didn’t know, TerraPower is a pet project of Bill Gates’.

My little note (on the local HDD), upon reading this post, had said something like, “The critics of this idea are right, from an engineering/technological viewpoint.”

In particular, I have too many apprehensions about using liquid sodium. Further, given the risk involved in distributing the sensitive nuclear material over all those geographically dispersed plants, this idea does become, err…, stupid.

In the above post, Motl makes reference to another post of his, one from 2019, regarding renewable energies like solar and wind. The title of this earlier post reads: “Bill Gates: advocates of dominant wind & solar energy are imbeciles” [^]. Make sure to go through this one too. The calculation given in it is of a back-of-the-envelope kind, but it is impeccable. You can’t find fault with the calculation itself.

Of course, this does not mean that research on renewable energies should not be pursued. IMO, it should be!

It’s just that I want to point out a few things: (i) Motl chooses the city of Tokyo for his calculation, which IMO would be an extreme case. Tokyo is a very highly dense city—both population-wise and on the count of geographical density of industries (and hence, of industrial power consumption). There can easily be other places where the density of power consumption, and the availability of the natural renewable resources, are better placed together. (ii) Even then, calculations such as that performed by Motl must be included in all analyses—and, the cost of renewable energy must be calculated without factoring in the benefit of government subsidies. … Yes, research on renewable energy would still remain justified. (iii) Personally, I find the idea of converting the wind/solar electricity into hydrogen more attractive. See my 2018 post [^] which had mentioned the idea of using the hydrogen gas as a “flywheel” of sorts, in a distributed system of generation (i.e. without transporting the wind-generated hydrogen itself, over long distances).

2.2. Demonstrations on coupled oscillations and resonance at Harvard:

As to the relevance of this topic to my new approach to QM: The usual description of resonance proceeds by first stating a homogeneous differential equation, and then replacing the zero on the right hand-side with a term that stands for an oscillating driving force [^]. Thus, we specify a force-term for the driver, but the System under study is still being described with the separation vector (i.e. a displacement) as the primary unknown.

Now, just take the driver part of the equation, and think of it as a multi-scaled effect of a very big assemblage of particles whose motions themselves are fundamentally described using exactly the same kind of terms as those for the particles in the System, i.e., using displacements as the primary unknown. It is the multi-scaling procedure which transforms a fundamentally displacement-based description to a basically force-primary description. Got it? Hint below.

[Hint: In the resonance equation, it is assumed that the form of the driving force remains exactly the same at all times: with exactly the same $F_0$, $m$, and $\omega$. If you replace the driving part with particles and springs, none of the three parameters characterizing the driving force will remain constant, especially $\omega$. They all will become functions of time. But we want all the three parameters to stay constant in time. …Now, the real hint: Think of the exact sinusoidal driving force as an abstraction, and multi-scaling as a means of reaching that abstraction.]
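Since the driven oscillator is central here, a minimal numerical sketch may help fix the idea before the multi-scaling argument. The parameters ($m$, $c$, $k$, $F_0$) below are my own illustrative choices, not taken from any particular source; the snippet just verifies the textbook behaviour that the steady-state amplitude peaks when the driving frequency approaches the natural frequency:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Driven, lightly damped oscillator: m x'' + c x' + k x = F0 sin(omega t).
# Illustrative parameters (my own choice):
m, c, k, F0 = 1.0, 0.1, 1.0, 1.0
omega0 = np.sqrt(k / m)          # natural frequency

def rhs(t, y, omega):
    x, v = y
    return [v, (F0 * np.sin(omega * t) - c * v - k * x) / m]

def steady_amplitude(omega):
    """Integrate past the transient and return the steady-state amplitude."""
    sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], args=(omega,),
                    max_step=0.05, rtol=1e-8)
    tail = sol.y[0][sol.t > 150.0]   # discard the transient part
    return tail.max()

# The amplitude peaks as the driving frequency approaches omega0
# (analytically, F0 / sqrt((k - m w^2)^2 + (c w)^2), which is ~10 at w = omega0):
assert steady_amplitude(omega0) > steady_amplitude(0.5 * omega0)
assert steady_amplitude(omega0) > steady_amplitude(2.0 * omega0)
```

Replacing the `F0 * np.sin(omega * t)` term with an actual assemblage of masses and springs, and then recovering the constant-$F_0$, constant-$\omega$ form in some averaged limit, is exactly the multi-scaling step the hint above is pointing at.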

2.3 Visualization of physics at the University of St. Andrews:

Again, very neat [^]. The simulations here have a very simple GUI, but the design of the applets has been done thoughtfully. The scenarios are at a level more advanced than the QM simulations at PhET, University of Colorado [^].

2.4. The three-body problem:

The nonlinearity in $\Psi(x,t)$ which I have proposed is, in many essential ways, similar to the classical $N$-body problem.

The simplest classical $N$-body problem is the $3$-body problem. Rhett Allain says that the only way to solve the $3$-body problem is numerically [^]. But make sure to at least cursorily note the special solutions mentioned in the Wiki [^]. This Resonance article (.PDF) [^] seems quite comprehensive, though I haven’t gone through it completely. Related, with pictures: A recent report with simulations, for search on “choreographies” (which is a technical term; it refers to trajectories that repeat) [^].

Sure, there could be trajectories that repeat for some minuscule number of initial conditions. But the general rule is that the $3$-body problem already shows sensitive dependence on initial conditions. Search the ‘net for $4$-body, $5$-body problems. … In QM, we have $10^{23}$ particles. Cool, no?
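To make the sensitive dependence concrete, here is a small sketch (in the spirit of the < 100 LOC snippets mentioned above): Burrau’s classic “Pythagorean” three-body configuration, i.e., masses 3, 4, 5 starting at rest at the corners of a 3–4–5 right triangle (with $G = 1$), integrated twice with initial positions differing by one part in a billion:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Burrau's "Pythagorean" 3-body problem: masses 3, 4, 5 at rest at the
# corners of a 3-4-5 right triangle, G = 1. A standard chaotic test case.
masses = np.array([3.0, 4.0, 5.0])

def rhs(t, y):
    # State: 3 positions (x, y pairs) followed by 3 velocities.
    r, v = y[:6].reshape(3, 2), y[6:].reshape(3, 2)
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += masses[j] * d / np.linalg.norm(d) ** 3
    return np.concatenate([v.ravel(), a.ravel()])

y0 = np.array([1.0, 3.0, -2.0, -1.0, 1.0, -1.0,   # positions
               0.0, 0.0,  0.0,  0.0, 0.0,  0.0])  # all at rest

y1 = y0.copy()
y1[0] += 1e-9   # nudge one coordinate by a part in a billion

sol0 = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-9, atol=1e-9)
sol1 = solve_ivp(rhs, (0.0, 10.0), y1, rtol=1e-9, atol=1e-9)

gap = np.linalg.norm(sol0.y[:6, -1] - sol1.y[:6, -1])
print(f"position gap after t = 10: {gap:.3e}")  # typically orders of magnitude above 1e-9
```

The two runs track each other through the early close encounters and then drift apart, which is the whole point: no closed-form solution, and a numerical one that is only as good as your knowledge of the initial data.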

2.5.1: Max Born in IISc Bangalore:

Check out a blog post/article by Karthik Ramaswamy, of the title “When Raman brought Born to Bangalore” [^]. (H/t Luboš Motl [^].)

2.5.2: Academic culture in India in recent times—a personal experience involving the University of Pune, IIT Bombay, IIT Madras, and IISc Bangalore:

After going through the above story, may I suggest that you also go through my posts on the Mechanical vs. Metallurgy “Branch Jumping” issue. This issue decidedly came up in 2002 and 2003, when I went to IIT Bombay to try for admission to the PhD program in the Mechanical department. I tried multiple times. They remained adamant throughout 2002–2003. An associate professor from the Mechanical department was willing to become my guide. (We didn’t know each other beforehand.) He fought for me in the department meeting, but unsuccessfully. (Drop me a line to know who.) One professor from their CS department, too, sympathetically listened to me. He didn’t understand the Mechanical department’s logic. (Drop me a line to know who.)

Eventually, in 2003, three departments at IISc Bangalore showed definite willingness to admit me.

One was a verbal offer that the Chairman of the SERC made to me, right during the formal interview (after I had cleared their written tests on the spot—I didn’t know they were going to hold these). He even offered me a higher-than-normal stipend (in view of my past experience), but he said that the topic of research would have to be one from some 4–5 ongoing research projects. I declined on the spot. (He did show a willingness to wait for a little while, but I respectfully declined that too, because I knew I wanted to develop my own ideas.)

At IISc, there also was a definite willingness to admit me by both their Mechanical and Metallurgy departments. That is, during my official interviews with them (which once again happened after I competitively cleared their separate written tests, being short-listed to within 15 or 20 out of some 180 fresh young MTech’s in Mechanical branch from IISc and IITs—being in software, I had forgotten much of my core engineering). Again, it emerged during my personal interviews with the departmental committees, that I could be in (yes, even in their Mechanical department), provided that I was willing to work on a topic of their choice. I protested a bit, and indicated the loss of my interest right then and there, during both these interviews.

Finally, at around the same time (2003), at IIT Madras, the Metallurgical Engg. department also made an offer to me (after yet another written test—which I knew was going to be held—and an interview with a big committee). They gave me the nod. That is, they would let me pursue my own ideas for my PhD. … I was known to many of them because I had done my MTech right from the same department, some 15–17 years earlier. They recalled, on their own, the hard work which I had put in during my MTech project work. They were quite confident that I could deliver on my topic even if, at that time, they (and I!) had only a minimal idea about it.

However, soon enough, Prof. Kajale at COEP agreed to become my official guide at University of Pune. Since it would be convenient for me to remain in Pune (my mother was not keeping well, among other things), I decided to do my PhD from Pune, rather than broach the topic once again at SERC, or straight-away join the IIT Madras program.

Just thought of jotting down something about the more recent culture at these institutes (at IIT Bombay, IIT Madras, and IISc Bangalore), at COEP, and of course, at the University of Pune. I am sure it’s just a small slice of the culture, just one sample, but it still should be relevant…

Also relevant is this part: Right until I completely left academia for good a couple of years ago, COEP professors and the University of Pune (not to mention UGC and AICTE) continued barring me from becoming an approved professor of mechanical engineering. (It’s the same small set of professors who keep chairing interview processes in all the colleges, even universities. So, yes, the responsibility ultimately lies with a very small group of people from IIT Bombay’s Mechanical department—the Undisputed and Undisputable Leader—and with COEP and the University of Pune—the Faithful Followers of the former.)

2.5.3. Dirac in India:

BTW, in India, there used to be a monthly magazine called “Science Today.” I vaguely recall that my father used to have a subscription to it right since the early 1970s or so. We would eagerly wait for each new monthly issue, especially once I knew enough English (and physics) to be able to go through the contents more comfortably. (My schooling was in Marathi medium, in rural areas.) Of course, my general browsing of this magazine had begun much earlier. [“Science Today” used to be published by the Times of India group. Permanently gone are those days!]

I now vaguely remember that one of the issues of “Science Today” had Paul Dirac prominently featured in it. … I can’t remember much of anything about it any longer. But by any chance, was it the case that Prof. Dirac was visiting India, maybe TIFR Bombay, around that time—say in the mid or late 1970s, or early 1980s? … I tried searching for it on the ‘net, but could not find anything, not within the first couple of pages of a Google search. So, maybe, quite likely, I have confused things. But I would sure appreciate pointers to it…

PS: Yes, I found this much:

“During 1973 and 1975 Dirac lectured on the problems of cosmology in the Physical Engineering Institute in Leningrad. Dirac also visited India.” [^]

… Hmm… Somehow, for some odd reason, I get this feeling that the writer of this piece, someone at Vigyan Prasar, New Delhi, must have for long been associated with IIT Bombay (or equivalent thereof). Whaddaya think?

2.6. Jim Baggott’s new book: “Quantum Reality”:

I don’t have the money to buy any books, but if I were to, I would certainly buy three books by Jim Baggott: The present book of the title “Quantum Reality,” as well as a couple of his earlier books: the “40 moments” book and the “Quantum Cookbook.” I have read a lot of the pages available at Google Books for all of these three books (maybe almost all of the available pages), and from what I read, I am fully confident that buying these books would be money very well spent indeed.

Dr. Sabine Hossenfelder has reviewed this latest book by Baggott, “Quantum Reality,” at Nautil.us; see “Your guide to the many meanings of quantum mechanics,” here [^]. … I am impressed by it—I mean this review. To paraphrase Hossenfelder herself: “There is nothing funny going on here, in this review. It just, well, feels funny.”

Dr. Peter Woit, too, has reviewed “Quantum Reality” at his blog [^] though in a comparatively brief manner. Make sure to go through the comments after his post, especially the very first comment, the one which concerns classical mechanics, by Matt Grayson [^]. PS: Looks like Baggott himself is answering some of the comments too.

Sometime ago, I read a few blog posts by Baggott. It seemed to me that he is not very well trained in philosophy. It seems that he has read philosophy deeply, but not comprehensively. [I don’t know whether he has read the Objectivist metaphysics and epistemology or not; whether he has gone through the writings/lectures by Ayn Rand, Dr. Leonard Peikoff, Dr. Harry Binswanger and David Harriman or not. I think not. If so, I think that he would surely benefit by this material. As always, you don’t have to agree with the ideas. But yes, the material that I am pointing out is by all means neat enough that I can surely recommend it.]

Coming back to Baggott: I mean to say, he delivers handsomely when (i) he writes books, and (ii) sticks to the physics side of the topics. Or, when he is merely reporting on others’ philosophic positions. (He can condense down their positions in a very neat way.) But in his more leisurely blog posts/articles, and sometimes even in his comments, he does show a tendency to take some philosophic point in something of a wrong direction, and to belabour it unnecessarily. That is to say, he does show a certain tendency towards pedantry, as it were. But let me hasten to add: He seems to show this tendency only in some of his blog-pieces. Somehow, when it comes to writing books, he does not at all show this tendency—well, at least not in the three books I’ve mentioned above.

So, the bottomline is this:

If you have an interest in QM, and if you want a comprehensive coverage of all its interpretations, then this book (“Quantum Reality”) is for you. It is meant for the layman, and also for philosophers.

However, if what you want is a very essentialized account of almost all of the crucial moments in the development of QM (with a stress on physics, but with some philosophy also touched on, and with almost no maths), then go buy his “40 Moments” book.

Finally, if you have taken a university course in QM (or are currently taking one), then do make sure to buy his “Cookbook” (published in January this year). From what I have read, I can easily tell: You would be doing yourself a big favour by buying this book. I wish the Cookbook had been available to me at least in 2015, if not earlier. But the point is, even after developing my new approach, I am still going to buy it. It achieves a seemingly impossible combination: Something that makes for easy reading (if you already know QM), but which will also serve as a permanent reference, something you can look up any time later on. So, I am going to buy it, once I have the money. Also “Quantum Reality,” the present book for the layman. Indeed, all the three books I mentioned.

(But I am not interested in relativity theory, or QFT, standard model, etc. etc. etc., and so, I will not even look into any books on these topics, written by any one.)

OK then, let me turn back to my work… Maybe I will come back with some further links in the next post too, maybe after 10–15 days. Until then, take care, and bye for now…

A song I like:

(Marathi) घन घन माला नभी दाटल्या (“ghan ghan maalaa nabhee daaTalyaa”)
Singer: Manna Dey
Music: Vasant Pawar

[A classic Marathi song. Based on the (Sanskrit, Marathi) राग मल्हार (“raaga” called “Malhaara”). The best quality audio is here [^]. Sung by Manna Dey, a Bengali guy who was famous for his Hindi film songs. … BTW, it’s been a marvellous day today. Clear skies in the morning when I thought of doing a blog post today and was wondering if I should add this song or not. And, by the time I finish it, here are strong showers in all their glory! While my song selection still remains more or less fully random (on the spur of the moment), since I have run so many songs already, there has started coming in a bit of deliberation too—many songs that strike me have already been run!

Since I am going to be away from blogging for a while, and since many of the readers of this blog don’t have the background to appreciate Marathi songs, I may come back and add an additional song, a non-Marathi song, right in this post. If so, the addition would be done within the next two days or so. …Else, just wait until the next post, please! Done, see the song below]

(Hindi) बोल रे पपीहरा (“bol re papiharaa”)
Singer: Vani Jairam
Music: Vasant Desai
Lyrics: Gulzar

[I looked up on the ‘net to see if I can get some Hindi song that is based on the same “raaga”, i.e., “Malhaar” (in general). I found this one, among others. Comparing these two songs should give you some idea about what it means when two songs are said to share the same “raaga”. … As to this song, I should also add that the reason for selecting it had more to do with nostalgia, really speaking. … You can find a good quality audio here [^].

Another thing (that just struck me, on the fly): Somehow, I also thought of all those ladies and gentlemen from the AICTE New Delhi, UGC New Delhi, IIT Bombay’s Mechanical Engg. department, all the professors (like those on R&R committees) from the University of Pune (now called SPPU), and of course, the Mechanical engg. professors from COEP… Also, the Mechanical engineering professors from many other “universities” from the Pune/Mumbai region. … पपीहरा… (“papiharaa”) Aha!… How apt are words!… Excellence! Quality! Research! Innovation! …बोल रे, पपीहरा ऽऽऽ (“bol re papiharaa…”). … No jokes, I had gone jobless for 8+ years the last time I counted…

Anyway, see if you like the song… I do like this song, though, probably, it doesn’t make it to my topmost list. … It has more of a nostalgia value for me…

Anyway, let’s wrap up. Take care and bye for now… ]

History:
— First published: 2020.09.05 18:28 IST.
— Several significant additions and revisions till 2020.09.06 01:27 IST.
— Much editing. Added the second song. 2020.09.06 21:40 IST. (Now will leave this post in whatever shape it is in.)

# Further on QM, and on changing tracks over to Data Science

OK. As decided, I took a short trip to IIT Bombay, and saw a couple of professors of physics, for very brief face-to-face interactions on the 28th evening.

No chalk-work at the blackboard had to be done, because both of them were very busy—but also quick, really very quick, in getting to the meat of the matter.

As to the first professor I saw, I knew beforehand that he wouldn’t be very enthusiastic with any alternatives to anything in the mainstream QM.

He was already engrossed in a discussion with someone (who looked like a PhD student) when I knocked at the door of his cabin. The prof immediately mentioned that he had to finish (what looked like a few tons of) pending work items before going away on a month-long trip just a couple of days later! But, hey, as I said (in my last post), directly barging into a professor’s cabin has always done wonders for me! So, despite his heavy^{heavy} schedule, he still motioned me to sit down for a quick and short interaction.

The three of us (the prof, his student, and me) then immediately had a very highly compressed discussion for some 15-odd minutes. As expected, the discussion turned out to be not only very rapid but also quite uneven, because there were so many abrupt changes to the sub-topics and sub-issues, as they were being brought up and dispatched in quick succession. …

It was not an ideal time to introduce my new approach, and so, I didn’t. I did mention, however, that I was trying to develop some such thing. The professor was of the opinion that if you come up with a way to do faster simulations, it would always be welcome, but if you are going to argue against the well-established laws, then… [he just shook his head].

I told him that I was clear, very clear on one point. Suppose, I said, that I have a complex-valued field that is defined only over the physical 3D, and suppose further that my new approach (which involves such a 3D field) does work out. Then, suppose further that I get essentially the same results as the mainstream QM does.

In such a case, I said, I am going to say that here is a possibility of looking at it as a real physical mechanism underlying the QM theory.

And if people even then say that because it is in some way different from the established laws, therefore it is not to be taken seriously, then I am very clear that I am going to say: “You go your way and I will go mine.”

Of course, I further added that I still don’t know how the calculations are done in the mainstream QM for the interacting electrons—that is, without invoking simplifying approximations (such as the fixed nucleus). I wanted to see how these calculations are done using the computational modeling approach (not the perturbation theory).

It was at this point that the professor really got the sense of what I was trying to get at. He then remarked that variational formulations are capable enough, and proceeded to outline some of their features. To my query as to what kind of an ansatz they use, and what kind of parameters are involved in inducing the variations, he mentioned Chebyshev polynomials and a few other things. The student mentioned the Slater determinants. Then the professor remarked that the particulars of the ansatz and the particulars of the variational techniques were not so crucial because all these techniques ultimately boil down to just diagonalizing a matrix. Somehow, I instinctively got the idea that he hadn’t been very much into numerical simulations himself, which turned out to be the case. In fact he immediately said so himself: “I don’t do wavefunctions. [Someone else from the same department] does it.” I decided to see this other professor the next day, because it was already evening (approaching 6 PM or so).
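As an aside, the professor’s remark that such techniques “ultimately boil down to just diagonalizing a matrix” can be illustrated with a toy case. This is my own minimal sketch, using a plain finite-difference grid rather than the Chebyshev/Slater machinery he actually mentioned: the 1D particle in a box, whose discretized Hamiltonian is diagonalized to recover approximations to the exact eigenenergies $E_n = n^2 \pi^2 / 2$ (with $\hbar = m = L = 1$):

```python
import numpy as np

# 1D particle in a box of length L = 1, hbar = m = 1, V = 0 inside,
# hard walls (Dirichlet boundaries). Discretize -(1/2) d^2/dx^2 on a grid:
N = 500
L = 1.0
dx = L / (N + 1)

# Tridiagonal kinetic-energy matrix from the central-difference stencil.
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)   # the "diagonalization" step
exact = (np.arange(1, 4) ** 2) * np.pi**2 / 2

print(energies[:3])   # close to the exact [4.935, 19.739, 44.413]
```

Once a basis (or a grid) is fixed, everything reduces to assembling `H` in that basis and calling an eigensolver; the sophistication in real calculations lies in choosing the basis, not in the diagonalization itself.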

A few wonderful clarifications later, it was time for me to leave, and so I thanked the professor profusely for accommodating me. The poor fellow didn’t even have the time to notice my gratitude; he had already switched back to his interrupted discussion with the student.

But yes, the meeting was fruitful to me because the prof did get the “nerve” of the issue right, and in fact also gave me two very helpful papers to study, both of them being review articles. After coming home, I have now realized that while one of them is quite relevant to me, the other one is absolutely god-damn relevant!

Anyway, after coming out of the department on that evening, I was thinking of calling my friend to let him know that the purpose of the visit to the campus was over, and thus I was totally free. While thinking about calling him and walking through the parking lot, I just abruptly noticed a face that suddenly flashed something recognizable to me. It was this same second professor who “does wavefunctions!”

I had planned on seeing him the next day, but here he was, right in front of me, walking towards his car in a leisurely mood. Translated, it meant: he was very much free of all his students, and so was available for a chat with me! Right now!! Of course, I had never made his acquaintance in the past. I had only browsed through his home page once in recent times, and so could immediately recognize the face, that’s all. He was just about to open the door of his car when I approached him and introduced myself. There followed another intense bout of discussions, for another 10-odd minutes.

This second prof has done numerical simulations himself, and so, he was even faster in getting a sense of what kind of ideas I was toying with. Once again, I told him that I was trying for some new ideas but didn’t get any deeper into my approach, because I myself still don’t know whether my approach will produce the same results as the mainstream QM does or not. In any case, knowing the mainstream method of handling these things was crucial, I said.

I told him how, despite my extensive Internet searches, I had not found suitable material for doing calculations. He then said that he would give me the details of a book. I should study this book first, and if there were still some difficulties or some discussions to be had, then he would be available, but the discussion would then have to progress in reference to what is already given in that book. A neat idea, this one; perfectly fine by me. And it turns out that the book he suggested was indeed neat: absolutely, perfectly relevant to my needs, background, as well as preparation.

And with that ends this small story of this short visit to IIT Bombay. I went there with a purpose, and returned with a 50-page-long and very tightly written review paper, a second paper of some 20+ tightly written pages, and a reference to an entire PG-level book (about 500 pages). All of this material was absolutely unknown to me despite my searches, and, as it seems as of today, all of it is of the utmost relevance to me and my new ideas.

But I have to get into Data Science first. Else I cannot survive. (I have been borrowing money to fend off the credit card minimum due amounts every month.)

So, I have decided to take a rest for today, and from tomorrow onwards, or maybe a day later, i.e., starting from the “shubh muhurat” (auspicious time) of April Fool’s day, I will begin my full-time pursuit of Data Science, with all that new material on QM to be studied only on a part-time basis. For today, however, I am just going to be doing a bit of time-pass here and there. That’s how this post got written.

Take care, and I wish you the same kind of luck as I had in spotting that second prof just like that in the parking lot. … If my approach works, then I know whom to contact first with my results, for informal comments on them.

Work hard, and bye for now.

A song I like
(Marathi) “dhunda_ madhumati raat re, naath re…”
Music: Master Krishnarao
Singer: Lata Mangeshkar

[A Marathi classic. Credits are listed in a purely random order. A version that seems official (released by Rajshri Marathi) is here: [^]. However, somehow, the first stanza is not complete in it.

As to the sets shown in this (and all such) movies, right up to, say, the movie “Bajirao-Mastani,” I have, and always have had, an issue. The open wide spaces for the palaces they show in the movies are completely unrealistic, given the technology of those days (and the actual remains of the palaces, easily recalled by anyone). The ancients (whether here in India or at any other place) simply didn’t have the kind of technology which is needed in order to build such hugely wide internal (covered) spaces. Neither the so-called “Roman arch” (invented millennia earlier in India, I gather), nor the use of monolithic stones for girders, could possibly be enough to generate such huge spans. Idiots. If they can’t get even simple calculations right, that’s only to be expected, from them. But if they can’t even recall the visual details of the spans actually seen in the old palaces, that is simply inexcusable. Absolutely thorough morons, these movie-makers must be.]

# Python scripts for simulating QM, part 0: A general update

My proposed paper on my new approach to QM was not accepted at the international conference where I had sent my abstract. (For context, see the post before the last, here [^] ).

“Thank God,” that’s what I actually felt when I received this piece of news, “I can now immediately proceed to procrastinate on writing the full-length paper, and also, simultaneously, un-procrastinate on writing some programs in Python.”

So far, I have written several small and simple code-snippets. All of these were for the usual (text-book) cases; all in only $1D$. Here in this post, I will mention specifically which ones…

Time-independent Schrodinger equation (TISE):

Here, I’ve implemented a couple of scripts: one for finding the eigenvectors and eigenvalues for a particle in a box (with both zero and arbitrarily specified potentials), and another one for the quantum simple harmonic oscillator.

These were written not with the shooting method (which is the method used in the article by Rhett Allain for the Wired magazine [^]) but with the matrix method. … Yes, I have gone past the stage of writing all the numerical analysis algorithms from scratch, all by myself. These days, I directly use Python libraries wherever available, e.g., NumPy’s LinAlg methods. That’s why I preferred the matrix method. … My code was not written from scratch; it was based on Cooper’s code “qp_se_matrix”, here [PDF ^].
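To make the matrix method concrete, here is a minimal sketch in my own words. This is not Cooper’s actual code; the function name, the grid sizes, and the natural units ($\hbar = m = 1$) are all my assumptions. The idea: discretize the second derivative with the usual three-point stencil, assemble the tridiagonal Hamiltonian, and hand it over to NumPy’s eigensolver.

```python
import numpy as np

# A minimal sketch of the matrix method for the 1D TISE (not Cooper's code).
# With hbar = m = 1, discretizing -0.5 * psi'' + V * psi = E * psi on a grid
# turns the Hamiltonian into a real, symmetric, tridiagonal matrix whose
# eigenpairs approximate the stationary states.

def tise_eigen(V, L=1.0, N=500):
    """Eigenvalues/eigenvectors for a particle in [0, L] with potential V(x).

    Only interior grid points are kept; psi = 0 at both walls (infinite box).
    """
    x = np.linspace(0.0, L, N + 2)[1:-1]       # interior nodes
    dx = x[1] - x[0]
    diag = 1.0 / dx**2 + V(x)                  # kinetic + potential
    off = -0.5 / dx**2 * np.ones(N - 1)        # from the three-point stencil
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    E, psi = np.linalg.eigh(H)                 # eigh: H is real symmetric
    return x, E, psi

# Zero potential: energies should approach n^2 * pi^2 / 2 (n = 1, 2, 3, ...).
x, E, psi = tise_eigen(lambda x: np.zeros_like(x))
print(E[:3])   # close to [4.935, 19.74, 44.41]
```

For the quantum simple harmonic oscillator, only the potential changes (e.g., `V = lambda x: 0.5 * (x - 0.5 * L)**2`, with a wide enough box).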

Time-dependent Schrodinger equation (TDSE):

Here, I tried out a couple of scripts.

The first one was more or less a straightforward porting of Ian Cooper’s program “se_fdtd” [PDF ^] from the original MATLAB to Python. The second one was James Nagel’s Python program (written in 2007 (!) and hosted as a SciPy CookBook tutorial, here [^]). Both follow essentially the same scheme.

Initially, I found this scheme to be a bit odd to follow. Here is what it does.

It starts out by replacing the complex-valued Schrodinger equation with a pair of real-valued (time-dependent) equations. That was perfectly OK by me. It was their discretization which I found to be a bit peculiar. The discretization scheme here is second-order in both space and time, and yet it involves explicit time-stepping. That’s peculiar, so let me write a detailed note below (in part, for my own reference later on).

Also note: Though both Cooper and Nagel implement essentially the same method, Nagel’s program is written in Python, and so, it is easier to discuss (because the array-indexing is 0-based). For this reason, I might make a direct reference only to Nagel’s program, even though it is to be understood that the same scheme is implemented by Cooper as well.

A note on the method implemented by Nagel (and also by Cooper):

What happens here is that like the usual Crank-Nicolson (CN) algorithm for the diffusion equation, this scheme too puts the half-integer time-steps to use (so as to have a second-order approximation for the first-order derivative, that of time). However, in the present scheme, the half-integer time-steps turn out to be not entirely fictitious (the way they are, in the usual CN method for the single real-valued diffusion equation). Instead, all of the half-integer instants are fully real here in the sense that they do enter the final discretized equations for the time-stepping.

The way that comes to happen is this: There are two real-valued equations to solve here, coupled to each other—one each for the real and imaginary parts. Since both the equations have to be solved at each time-step, what this method does is to take advantage of that already existing splitting of the time-step, and implements a scheme that is staggered in time. (Note, this scheme is not staggered in space, as in the usual CFD codes; it is staggered only in time.) Thus, since it is staggered and explicit, even the finite-difference quantities that are defined only at the half-integer time-steps, also get directly involved in the calculations. How precisely does that happen?

The scheme defines, allocates memory storage for, and computationally evaluates the equation for the real part, but this computation occurs only at the full-integer instants ($n = 0, 1, 2, \dots$). Similarly, this scheme also defines, allocates memory for, and computationally evaluates the equation for the imaginary part; however, this computation occurs only at the half-integer instants ($n = 1/2, 1+1/2, 2+1/2, \dots$). The particulars are as follows:

The initial condition (IC) being specified is, in general, complex-valued. The real part of this IC is set into a space-wide array defined for the instant $n$; here, $n = 0$. Then, the imaginary part of the same IC is set into a separate array which is defined nominally for a different instant: $n+1/2$. Thus, even if both parts of the IC are specified at $t = 0$, the numerical procedure treats the imaginary part as if it was set into the system only at the instant $n = 1/2$.

Given this initial set-up, the actual time-evolution proceeds as follows:

• The real part already available at $n$ is used in evaluating the “future” imaginary part, i.e., the one at $n+1/2$.
• The imaginary part thus found at $n+1/2$ is used, in turn, for evaluating the “future” real part, i.e., the one at $n+1$.

At this point, you are allowed to say: lather, rinse, repeat… Figure out exactly how. In particular, notice how the simulation must proceed in an integer number of pairs of computational steps, and how the imaginary part is only nominally (i.e., only computationally) distant in time from its corresponding real part.
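The pair of steps just described can be sketched in Python as follows. This is my own minimal rendering of the scheme, not Nagel’s (or Cooper’s) actual code; natural units ($\hbar = m = 1$) and hard walls ($\Psi = 0$ at both ends) are my simplifying assumptions.

```python
import numpy as np

# psi = R + i*I.  Splitting the TDSE (hbar = m = 1) into two real equations:
#     dR/dt = -0.5 * I'' + V * I
#     dI/dt = +0.5 * R'' - V * R
# R lives at the integer instants n; I lives at the half-integer instants.

def step_pair(R, I, V, dx, dt):
    """Advance (R at n, I at n - 1/2) to (R at n + 1, I at n + 1/2).

    Explicit and staggered in time; the endpoints stay at zero provided
    the initial condition vanishes there (hard walls).
    """
    lap = np.zeros_like(R)
    # Imaginary part at n + 1/2, from the real part at n:
    lap[1:-1] = (R[2:] - 2.0 * R[1:-1] + R[:-2]) / dx**2
    I = I + dt * (0.5 * lap - V * R)
    # Real part at n + 1, from the imaginary part at n + 1/2:
    lap[1:-1] = (I[2:] - 2.0 * I[1:-1] + I[:-2]) / dx**2
    R = R + dt * (-0.5 * lap + V * I)
    return R, I
```

Note how each call advances the solution by exactly one pair of half-updates, i.e., by one full $\Delta t$, and how no matrix equation is solved anywhere.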

Thus, overall, the discretization of the space part is pretty straightforward here: the second-order derivative (the Laplacian) is replaced by the usual second-order finite difference approximation. However, for the time part, what this scheme does is both similar to, and different from, the usual Crank-Nicolson scheme.

Like the CN scheme, the present scheme also uses the half-integer time-levels, and thus manages to become a second-order scheme for the time-axis too (not just space), even if the actual time interval for each time-step remains, exactly as in the CN, only $\Delta t$, not $2\Delta t$.

However, unlike the CN scheme, this scheme still remains explicit. That’s right. No matrix equation is being solved at any time-step. You just zip through the update equations.

Naturally, the zipping through comes with a “cost”: the very scheme itself comes equipped with a stability criterion; it is not unconditionally stable (the way CN is). In fact, the stability criterion now refers to half of the time-interval, not the full one, and is thus even a bit more restrictive as to how big the time-step ($\Delta t$) can be, given a certain granularity of the space-discretization ($\Delta x$). … I don’t know, but I guess that this is how they handle the first-order time derivatives in the FDTD method (finite difference time domain). Maybe the physics of their problems itself is such that they can get away with coarser grids without being physically too inaccurate, who knows…

Other aspects of the codes by Nagel and Cooper:

For the initial condition, both Cooper and Nagel begin with a “pulse” of a cosine function that is modulated to have the envelope of a Gaussian. In both their codes, the pulse is placed in the middle, and they both terminate the simulation when the pulse reaches an end of the finite domain. I didn’t like this aspect of an arbitrary termination of the simulation.

However, I am still learning the ropes of numerically handling the complex-valued Schrodinger equation. In any case, I am not sure if I’ve got a good enough handle on the FDTD-like aspects of it. In particular, as of now, I am left wondering:

What if I have a second-order scheme for the first-order derivative of time, but if it comes with only fictitious half-integer time-steps (the way it does, in the usual Crank-Nicolson method for the real-valued diffusion equation)? In other words: What if I continue to have a second-order scheme for time, and yet, my scheme does not use leap-frogging? In still other words: What if I define both the real and imaginary parts at the same integer time-steps $n = 0, 1, 2, 3, \dots$ so that, in both cases, their values at the instant $n$ are directly fed into both their values at $n+1$?

In a way, this scheme seems simpler, in that no leap-frogging is involved. However, notice that it would also be an implicit scheme. I would have to solve two matrix-equations at each time-step. But then, I could perhaps get away with a larger time-step than what Nagel or Cooper use. What do you think? Is checker-board patterning (the main reason why we at all use staggered grids in CFD) an issue here—in time evolution? But isn’t the unconditional stability too good to leave aside without even trying? And isn’t the time-axis just one-way (unlike the space-axis that has BCs at both ends)? … I don’t know…
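Just to fix the idea for myself: if the two real-valued equations are folded back into a single complex-valued array, the non-staggered, implicit scheme I have in mind reduces to the standard Cayley (Crank-Nicolson) form. The sketch below is entirely mine, not anything from Nagel or Cooper; the same $\hbar = m = 1$, hard-wall assumptions apply, and the function names are made up for illustration.

```python
import numpy as np

# Crank-Nicolson for the TDSE, with both parts at the same integer instants:
#     (1 + i*dt/2 * H) psi^{n+1} = (1 - i*dt/2 * H) psi^n
# This is implicit (one matrix solve per step), but unconditionally stable,
# and it conserves the norm of psi exactly (the update matrix is unitary).

def cn_matrices(V, dx, dt):
    """Dense LHS/RHS matrices for one CN step (hbar = m = 1, hard walls)."""
    N = len(V)
    D2 = (np.diag(-2.0 * np.ones(N)) +
          np.diag(np.ones(N - 1), 1) +
          np.diag(np.ones(N - 1), -1)) / dx**2
    H = -0.5 * D2 + np.diag(V)
    A = np.eye(N) + 0.5j * dt * H     # implicit (future) side
    B = np.eye(N) - 0.5j * dt * H     # explicit (present) side
    return A, B

def cn_step(psi, A, B):
    return np.linalg.solve(A, B @ psi)
```

Since $A$ and $B$ are complex conjugates of each other (for a real, symmetric $H$), the amplification factor has unit modulus for every mode; that is where the unconditional stability and the exact norm conservation come from.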

PBCs and ABCs:

Even as I was (and am) still grappling with the above-mentioned issue, I also wanted to make some immediate progress on the front of not having to terminate the simulation (once the pulse reached one of the ends of the domain).

So, instead of working right from the beginning with a (literally) complex Schrodinger equation, I decided to first model the simple (real-valued) diffusion equation, and to implement the PBCs (periodic boundary conditions) for it. I did.

My code seems to work, because the integral of the dependent variable (i.e., the total amount of the diffusing quantity present in the entire domain, one with the topology of a ring) does seem to stay constant, as is promised by the Crank-Nicolson scheme. The integral stays “numerically the same” (within a small tolerance) even though, obviously, there now are fluxes at both ends. (An initial condition of a symmetrical saw-tooth profile defined between $y = 0.0$ and $y = 1.0$ does come to asymptotically approach the horizontal straight line at $y = 0.5$. That is what happens at run-time, so, obviously, the scheme seems to handle the fluxes right.)
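A condensed sketch of what this check amounts to (my own minimal version, not the actual program; the function name and the particular grid/step sizes are illustrative):

```python
import numpy as np

# Crank-Nicolson for u_t = D * u_xx on a ring of N nodes.  The periodic
# second-difference matrix gets wrap-around corner entries; since each of
# its columns sums to zero, CN redistributes, but never creates or destroys,
# the diffusing quantity -- hence the constant integral.

def cn_periodic(u0, D, dx, dt, n_steps):
    N = len(u0)
    L = np.zeros((N, N))
    for i in range(N):
        L[i, i] = -2.0
        L[i, (i - 1) % N] = 1.0    # wrap-around couples node 0 to node N-1
        L[i, (i + 1) % N] = 1.0
    L *= D / dx**2
    A = np.eye(N) - 0.5 * dt * L   # implicit side
    B = np.eye(N) + 0.5 * dt * L   # explicit side
    u = u0.copy()
    for _ in range(n_steps):
        u = np.linalg.solve(A, B @ u)
    return u

# Symmetric saw-tooth between 0 and 1; the total stays put, and the profile
# flattens towards the horizontal line y = 0.5.
N = 100
u0 = np.concatenate([np.linspace(0.0, 1.0, N // 2), np.linspace(1.0, 0.0, N // 2)])
u = cn_periodic(u0, D=1.0, dx=0.1, dt=0.01, n_steps=2000)
print(abs(u.sum() - u0.sum()))     # stays near zero (roundoff level)
```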

Anyway, I don’t always write everything from scratch; I am a great believer in lifting codes already written by others (with attribution, of course :)). Thus, while searching on the ’net for some already existing resources on numerically modeling the Schrodinger equation (preferably with code!), I also ran into some papers on the simulation of the SE using ABCs (i.e., absorbing boundary conditions). I was not sure, however, if I should implement the ABCs immediately…

As of today, I think that I am going to try and graduate from the transient diffusion equation (with the CN scheme and PBCs) to a trial of the implicit TDSE without leap-frogging, as outlined above. The only question is whether I should throw in the PBCs to go with that, or the ABCs. Or, maybe, neither, and just keep pinning the $\Psi$ values for the end- and ghost-nodes down to $0$, thereby effectively putting the entire simulation inside an infinite box?

At this point of time, I am tempted to try out the last. Thus, I think that I would rather first explore the staggering vs. non-staggering issue for a pulse in an infinite box, and understand it better, before proceeding to implement either the PBCs or the ABCs. Of course, I still have to think more about it… But hey, as I said, I am now in a mood of implementing, not of contemplating.

Why not upload the programs right away?

BTW, all these programs (TISE with matrix method, TDSE on the lines of Nagel/Cooper’s codes, transient DE with PBCs, etc.) are still in a fluid state, and so, I am not going to post them immediately here (though over a period of time, I sure would).

The reason for not posting the code runs something like this: Sometimes, I use the Python range objects for indexing. (I saw this goodie in Nagel’s code.) At other times, I don’t. But even when I don’t use the range objects, I am anyway tempted to revise the code so as to have them (for a better run-time efficiency).

Similarly, for the CN method, when it comes to solving the matrix equation at each time-step, I am still not using the TDMA (the Thomas algorithm) or even just sparse matrices. Instead, right now, I am allocating the entire $N \times N$ sized matrices, and am directly using NumPy’s LinAlg solve() function on these biggies. No, the computational load doesn’t show up; after all, I anyway have to use a 0.1 second pause in between the rendering passes, and the biggest matrices I tried were only $1001 \times 1001$ in size. (Remember, this is just a $1D$ simulation.) Even then, I am a bit tempted to improve the efficiency. For these and similar reasons, some tweaking or the other is still going on in all the programs. That’s why I won’t be uploading them right away.
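For my own later reference, here is the TDMA (Thomas algorithm) that I keep postponing. This is the standard textbook version, not anything from my current code; for the tridiagonal systems that CN produces, it replaces the dense $O(N^3)$ solve with an $O(N)$ sweep.

```python
import numpy as np

# Thomas algorithm for a tridiagonal system:
#   a = sub-diagonal (length n-1), b = diagonal (n), c = super-diagonal (n-1),
#   d = right-hand side (n).  One forward elimination, one back-substitution.

def thomas(a, b, c, d):
    n = len(b)
    cp = np.empty(n - 1)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against the dense solver on a diagonally dominant system:
rng = np.random.default_rng(0)
n = 1001
a, c = rng.random(n - 1), rng.random(n - 1)
b = 4.0 + rng.random(n)
d = rng.random(n)
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))   # True
```

SciPy’s `scipy.linalg.solve_banded` (or its sparse machinery) does the same job without hand-rolling the sweeps.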

Anything else about my new approach, like delivering a seminar or so? Any news from the Indian physicists?

I had already contacted a couple of physics professors from India, both from Pune: one, about 1.5 years ago, and another, within the last 6 months. Both times, I offered to become a co-guide for some computational physics projects to be done by their PG/UG students or so. Both times (what else?) there was absolutely no reply to my emails. … Had they responded, we could have progressed further, together, on simulating my approach. … I have always been “open” about it.

The above-mentioned experience is precisely similar to how there have been no replies when I wrote to some other professors of physics, i.e., when I offered to conduct a seminar (covering my new approach) in their departments. In particular, from the young IISER Pune professor whom I had written to. … Oh yes, BTW, there has been one more physicist whom I contacted recently for a seminar (within the last month). Once again, there has been no reply. (This professor is known to enjoy hospitality abroad as an Indian, and also to use the taxpayers’ money for research while in India.)

No, the issue is not whether the emails I write using my Yahoo! account go into their spam folder, or something like that. That would be too innocuous a cause, and too easy to deal with; everyone has a mobile phone these days. But I know these (Indian) physicists. Their behaviour remains exactly the same even if I write my emails using a respectable academic email ID (my employer’s, complete with a .edu domain). This was my experience in 2016, and it repeated again in 2017.

The bottom-line is this: If you are an engineer and you write to these Indian physicists, there is almost a guarantee that your emails will go into a black hole. They will not reply to you even if you yourself have a PhD, and are a Full Professor of engineering (even if only on an ad-hoc basis), and have studied and worked abroad, and even if your blog is followed internationally. So long as you are an engineer, and mention QM, the Indian physicists simply shut themselves off.

However, there is a trick to get them to reply to you. Their behavior does temporarily change when you put some impressive guy in your cc-field (e.g., some professor friend of yours from some IIT). In this case, they sometimes do reply to your first email. However, soon after that initial shaking of hands, they somehow go back to their true core; they shut themselves off.

And this is what invariably happens with all of them—no matter what other Indian bloggers might have led you to believe.

There must be some systemic reasons for such behavior, you say? Here, I will offer a couple of relevant observations.

Systemically speaking, Indian physicists, taken as a group (and leaving any possible rarest-of-the-rare exceptions aside), all fall into one band: (i) The first commonality is that they all are government employees. (ii) The second commonality is that they all tend to be leftists (or heavily leftist). (iii) The third commonality they (by and large) share is that they had lower (or far lower) competitive scores in the entrance examinations at the gateway points like XII, GATE/JAM, etc.

The first factor typically means that they know that no one is going to ask them why they didn’t reply (even to people with my background). The second factor typically means that they don’t want to give you any mileage, not even just plain academic respect, if you are not already one of “them”. The third factor typically means that they simply don’t have the very intellectual means to understand or judge anything you say if it is original, i.e., if it is not based on some work of someone from abroad. In plain words: they are incompetent. (That, in part, is the reason why, whenever I run into a competent Indian physicist, it is both a surprise and a pleasure. To drop a couple of names: Prof. Kanhere (now retired) from UoP (now SPPU), and Prof. Waghmare of JNCASR. … But leaving aside this minuscule minority, and coming to the rest of the herd: the less said, the better.)

In short, Indian physicists all fall into a band. And they all are very classical—no tunneling is possible. Not with these Indian physicists. (The trends, I guess, are similar all over the world. Yet, I definitely can say that Indians are worse, far worse, than people from the advanced, Western, countries.)

Anyway, as far as the path through the simulations goes, since no help is going to come from these government servants (regarded as physicists by foreigners), I have now realized that I have to get going about it (simulations for my new approach) entirely on my own. If necessary, from the very basics of the basics. … And that’s how I got going with these programs.

Are these programs going to provide a peek into my new approach?

No, none of these programs I talked about in this post is going to be very directly helpful for simulations related to my new approach. The programs I wrote thus far are all very, very standard (simplest UG text-book level) stuff. If resolving QM riddles were that easy, any number of people would have done it already.

… So, the programs I wrote over the last couple of weeks are nothing but just a beginning. I have to cover a lot of distance. It may take months, perhaps even a year or so. But I intend to keep working at it. At least in an off and on manner. I have no choice.

And, at least currently, I am going about it at a fairly good speed.

For the same reason, expect no further blogging for another 2–3 weeks or so.

But one thing is for certain. As soon as my paper on my new approach (to be written after running the simulations) gets written, I am going to quit QM. The field does not hold any further interest for me.

Coming to you: If you still wish to know more about my new approach before the paper gets written, then you convince these Indian professors of physics to arrange for my seminar. Or, else…

… What else? Simple! You. Just. Wait.

[Or see me in person if you would be visiting India. As I said, I have always been “open” from my side, and I continue to remain so.]

A song I like:
(Hindi) “bheegee bheegee fizaa…”
Music: Hemant Kumar
Singer: Asha Bhosale
Lyrics: Kaifi Aazmi

History:
Originally published: 2018.11.26 18:12 IST
Extension and revision: 2018.11.27 19:29 IST