# Python scripts for simulating QM, part 0: A general update

My proposed paper on my new approach to QM was not accepted at the international conference where I had sent my abstract. (For context, see the post before the last, here [^].)

“Thank God,” that’s what I actually felt when I received this piece of news, “I can now immediately proceed to procrastinate on writing the full-length paper, and also, simultaneously, un-procrastinate on writing some programs in Python.”

So far, I have written several small and simple code-snippets. All of these were for the usual (text-book) cases; all in only $1D$. Here in this post, I will mention specifically which ones…

Time-independent Schrodinger equation (TISE):

Here, I’ve implemented a couple of scripts, one for finding the eigen-vectors and -values for a particle in a box (with both zero and arbitrarily specified potentials) and another one for the quantum simple harmonic oscillator.

These were written not with the shooting method (which is the method used in the article by Rhett Allain for the Wired magazine [^]) but with the matrix method. … Yes, I have gone past the stage of writing all the numerical analysis algorithms from scratch, all by myself. These days, I directly use Python libraries wherever available, e.g., NumPy’s LinAlg methods. That’s why I preferred the matrix method. … My code was not written from scratch; it was based on Cooper’s code “qp_se_matrix” (here [PDF ^]).
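To make the matrix method concrete, here is a minimal sketch of the general idea (with my own grid parameters, in natural units $\hbar = m = 1$; this is not Cooper’s actual code): discretize the Hamiltonian with the usual 3-point Laplacian, and hand the resulting symmetric matrix to NumPy’s eigensolver.

```python
import numpy as np

# Matrix method for the 1D TISE (hbar = m = 1); illustrative parameters only.
N = 500                                 # number of interior grid points
L = 1.0                                 # box length
x = np.linspace(0.0, L, N + 2)[1:-1]    # interior nodes; psi = 0 at the walls
dx = x[1] - x[0]

V = np.zeros(N)                         # zero potential => particle in a box

# Hamiltonian: -(1/2) d^2/dx^2 + V, using the standard 3-point Laplacian
main = 1.0 / dx**2 + V                  # diagonal entries
off = -0.5 / dx**2 * np.ones(N - 1)     # off-diagonal entries
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)              # eigenvalues ascending; eigenvectors in columns

# For the zero-potential box, compare against the exact E_n = n^2 pi^2 / (2 L^2)
exact = np.array([n**2 * np.pi**2 / (2 * L**2) for n in (1, 2, 3)])
print(E[:3], exact)
```

An arbitrarily specified potential changes only the diagonal of $H$; the rest of the procedure stays the same.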

Time-dependent Schrodinger equation (TDSE):

Here, I tried out a couple of scripts.

The first one was more or less a straightforward porting of Ian Cooper’s program “se_fdtd” [PDF ^] from the original MATLAB to Python. The second one was James Nagel’s Python program (written in 2007 (!) and hosted as a SciPy CookBook tutorial, here [^]). Both follow essentially the same scheme.

Initially, I found this scheme to be a bit odd to follow. Here is what it does.

It starts out by replacing the complex-valued Schrodinger equation with a pair of real-valued (time-dependent) equations. That was perfectly OK by me. It was their discretization which I found to be a bit peculiar. The discretization scheme here is second-order in both space and time, and yet it involves explicit time-stepping. That’s peculiar, so let me write a detailed note below (in part, for my own reference later on).

Also note: Though both Cooper and Nagel implement essentially the same method, Nagel’s program is written in Python, and so, it is easier to discuss (because the array-indexing is 0-based). For this reason, I might make a direct reference only to Nagel’s program even though it is to be understood that the same scheme is found implemented also by Cooper.

A note on the method implemented by Nagel (and also by Cooper):

What happens here is that like the usual Crank-Nicolson (CN) algorithm for the diffusion equation, this scheme too puts the half-integer time-steps to use (so as to have a second-order approximation for the first-order derivative, that of time). However, in the present scheme, the half-integer time-steps turn out to be not entirely fictitious (the way they are, in the usual CN method for the single real-valued diffusion equation). Instead, all of the half-integer instants are fully real here in the sense that they do enter the final discretized equations for the time-stepping.

The way that comes to happen is this: There are two real-valued equations to solve here, coupled to each other—one each for the real and imaginary parts. Since both the equations have to be solved at each time-step, what this method does is to take advantage of that already existing splitting of the time-step, and implements a scheme that is staggered in time. (Note, this scheme is not staggered in space, as in the usual CFD codes; it is staggered only in time.) Thus, since it is staggered and explicit, even the finite-difference quantities that are defined only at the half-integer time-steps, also get directly involved in the calculations. How precisely does that happen?

The scheme defines, allocates memory storage for, and computationally evaluates the equation for the real part, but this computation occurs only at the full-integer instants ($n = 0, 1, 2, \dots$). Similarly, this scheme also defines, allocates memory for, and computationally evaluates the equation for the imaginary part; however, this computation occurs only at the half-integer instants ($n = 1/2, 1+1/2, 2+1/2, \dots$). The particulars are as follows:

The initial condition (IC) being specified is, in general, complex-valued. The real part of this IC is set into a space-wide array defined for the instant $n$; here, $n = 0$. Then, the imaginary part of the same IC is set into a separate array which is defined nominally for a different instant: $n+1/2$. Thus, even if both parts of the IC are specified at $t = 0$, the numerical procedure treats the imaginary part as if it was set into the system only at the instant $n = 1/2$.

Given this initial set-up, the actual time-evolution proceeds as follows:

• The real-part already available at $n$ is used in evaluating the “future” imaginary part—the one at $n+1/2$
• The imaginary part thus found at $n+1/2$ is used, in turn, for evaluating the “future” real part—the one at $n+1$.

At this point, you are allowed to say: lather, wash, repeat… Figure out exactly how. In particular, notice how the simulation must proceed in an integer number of pairs of computational steps; how the imaginary part is only nominally (i.e., only computationally) distant in time from its corresponding real part.
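The staggered update just described can be sketched in code as follows (a minimal illustration with my own variable names and parameters, not Nagel’s actual program; natural units, $\hbar = m = 1$). Writing $\Psi = R + iI$, the Schrodinger equation splits into $\partial R/\partial t = +H[I]$ and $\partial I/\partial t = -H[R]$:

```python
import numpy as np

# Staggered, explicit time-stepping for the 1D TDSE (hbar = m = 1).
N, dx, dt = 400, 0.1, 0.004        # dt must respect the explicit stability limit
x = dx * np.arange(N)
V = np.zeros(N)

def H(u):
    # Discrete Hamiltonian: 3-point Laplacian plus potential; psi = 0 at the ends.
    Hu = np.zeros_like(u)
    Hu[1:-1] = -0.5 * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return Hu + V * u

# Gaussian-envelope pulse; R lives at n = 0, I (nominally) at n = 1/2
x0, sigma, k0 = x[N // 2], 2.0, 3.0
env = np.exp(-((x - x0) / sigma) ** 2 / 2)
R, I = env * np.cos(k0 * (x - x0)), env * np.sin(k0 * (x - x0))
norm0 = np.sum(R**2 + I**2) * dx

for _ in range(500):
    R = R + dt * H(I)              # n     -> n+1,   using I at n+1/2
    I = I - dt * H(R)              # n+1/2 -> n+3/2, using the freshly updated R

norm = np.sum(R**2 + I**2) * dx    # stays (approximately) conserved
```

Note how each pass through the loop advances the solution by one full pair of half-steps, and how no matrix equation is ever solved anywhere.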

Thus, overall, the discretization of the space part is pretty straight-forward here: the second-order derivative (the Laplacian) is replaced by the usual second-order finite difference approximation. However, for the time-part, what this scheme does is both similar to, and different from, the usual Crank-Nicolson scheme.

Like the CN scheme, the present scheme also uses the half-integer time-levels, and thus manages to become a second-order scheme for the time-axis too (not just space), even if the actual time interval for each time-step remains, exactly as in the CN, only $\Delta t$, not $2\Delta t$.

However, unlike the CN scheme, this scheme still remains explicit. That’s right. No matrix equation is being solved at any time-step. You just zip through the update equations.

Naturally, the zipping through comes with a “cost”: the scheme comes equipped with a stability criterion; it is not unconditionally stable (the way CN is). In fact, the stability criterion now refers to half of the time-interval, not the full one, and thus it is even a bit more restrictive as to how big the time-step ($\Delta t$) can be, given a certain granularity of the space-discretization ($\Delta x$). … I don’t know, but I guess that this is how they handle the first-order time derivatives in the FDTD method (finite difference time domain). Maybe the physics of their problems itself is such that they can get away with coarser grids without becoming physically too inaccurate, who knows…

Other aspects of the codes by Nagel and Cooper:

For the initial condition, both Cooper and Nagel begin with a “pulse”: a cosine function modulated to have a Gaussian envelope. In both their codes, the pulse is placed in the middle of the domain, and they both terminate the simulation when it reaches one of the ends of the finite domain. I didn’t like this aspect of an arbitrary termination of the simulation.

However, I am still learning the ropes for numerically handling the complex-valued Schrodinger equation. In any case, I am not sure if I’ve got good enough a handle on the FDTD-like aspects of it. In particular, as of now, I am left wondering:

What if I have a second-order scheme for the first-order derivative of time, but if it comes with only fictitious half-integer time-steps (the way it does, in the usual Crank-Nicolson method for the real-valued diffusion equation)? In other words: What if I continue to have a second-order scheme for time, and yet, my scheme does not use leap-frogging? In still other words: What if I define both the real and imaginary parts at the same integer time-steps $n = 0, 1, 2, 3, \dots$ so that, in both cases, their values at the instant $n$ are directly fed into both their values at $n+1$?

In a way, this scheme seems simpler, in that no leap-frogging is involved. However, notice that it would also be an implicit scheme. I would have to solve two matrix-equations at each time-step. But then, I could perhaps get away with a larger time-step than what Nagel or Cooper use. What do you think? Is checker-board patterning (the main reason why we at all use staggered grids in CFD) an issue here—in time evolution? But isn’t the unconditional stability too good to leave aside without even trying? And isn’t the time-axis just one-way (unlike the space-axis that has BCs at both ends)? … I don’t know…
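For what it is worth, here is a thinking-aloud sketch (not a settled design; all names and parameters are illustrative) of what such an implicit, non-staggered scheme looks like if one takes the standard Crank-Nicolson route for the complex-valued TDSE directly. Working with one complex system, $(I + i\,\Delta t/2\, H)\,\Psi^{n+1} = (I - i\,\Delta t/2\, H)\,\Psi^{n}$, is algebraically equivalent to the pair of coupled real systems mentioned above:

```python
import numpy as np

# Crank-Nicolson for the complex TDSE (hbar = m = 1): both the real and the
# imaginary parts live at the same integer time-levels n = 0, 1, 2, ...
N, dx, dt = 200, 0.1, 0.05          # dt well above the explicit stability limit
x = dx * np.arange(N)
V = np.zeros(N)

# Dense tridiagonal Hamiltonian (psi = 0 at the walls); fine for a small 1D grid
H = np.zeros((N, N))
i = np.arange(N)
H[i, i] = 1.0 / dx**2 + V
H[i[:-1], i[:-1] + 1] = -0.5 / dx**2
H[i[1:], i[1:] - 1] = -0.5 / dx**2

A = np.eye(N) + 0.5j * dt * H       # left-hand (implicit) operator
B = np.eye(N) - 0.5j * dt * H       # right-hand (explicit) operator

x0, sigma, k0 = x[N // 2], 1.5, 2.0
psi = np.exp(-((x - x0) / sigma) ** 2 / 2) * np.exp(1j * k0 * x)
norm0 = np.sum(np.abs(psi) ** 2) * dx

for _ in range(100):
    psi = np.linalg.solve(A, B @ psi)   # one matrix solve per time-step

norm = np.sum(np.abs(psi) ** 2) * dx    # CN is unitary: conserved to roundoff
```

The time-step here is several times larger than what the explicit scheme would tolerate, and the norm still stays put; that is the unconditional stability (and exact unitarity) of CN.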

PBCs and ABCs:

Even as I was (and am) still grappling with the above-mentioned issue, I also wanted to make some immediate progress on the front of not having to terminate the simulation (once the pulse reached one of the ends of the domain).

So, instead of working right from the beginning with a (literally) complex Schrodinger equation, I decided to first model the simple (real-valued) diffusion equation, and to implement the PBCs (periodic boundary conditions) for it. I did.

My code seems to work, because the integral of the dependent variable (i.e., the total amount of the diffusing quantity present in the entire domain—one with the topology of a ring) does seem to stay constant—as is promised by the Crank-Nicolson scheme. The integral stays “numerically the same” (within a small tolerance) even if obviously, there are now fluxes at both the ends. (An initial condition of a symmetrical saw-tooth profile defined between $y = 0.0$ and $y = 1.0$ does come to asymptotically approach the horizontal straight-line at $y = 0.5$. That is what happens at run-time, so obviously, the scheme seems to handle the fluxes right.)
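As a sketch of that set-up (illustrative sizes and a simplified saw-tooth IC; not my exact script), the PBCs enter simply as the wrap-around corner entries of the second-difference matrix:

```python
import numpy as np

# Crank-Nicolson for u_t = D u_xx on a ring (periodic BCs).
N, dx, dt, D = 100, 0.01, 0.005, 0.01
r = D * dt / (2.0 * dx**2)

# Periodic second-difference matrix: the (mod N) indexing closes the ring
Lap = np.zeros((N, N))
idx = np.arange(N)
Lap[idx, idx] = -2.0
Lap[idx, (idx + 1) % N] = 1.0       # wraps N-1 -> 0
Lap[idx, (idx - 1) % N] = 1.0       # wraps 0 -> N-1

A = np.eye(N) - r * Lap             # (I - r L) u^{n+1} = (I + r L) u^n
B = np.eye(N) + r * Lap

# Symmetric saw-tooth between y = 0.0 and y = 1.0
u = np.concatenate([np.linspace(0.0, 1.0, N // 2, endpoint=False),
                    np.linspace(1.0, 0.0, N // 2, endpoint=False)])
total0 = u.sum() * dx

for _ in range(1000):
    u = np.linalg.solve(A, B @ u)

total = u.sum() * dx                # conserved: nothing leaves the ring
print(total0, total)                # u also flattens toward y = 0.5
```

Because every row (and column) of the periodic Laplacian sums to zero, the CN update conserves $\sum_j u_j$ exactly, which is precisely the check mentioned above.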

Anyway, I don’t always write everything from scratch; I am a great believer in lifting codes already written by others (with attribution, of course :)). Thus, while searching on the ‘net for some already existing resources on numerically modeling the Schrodinger equation (preferably with code!), I also ran into some papers on the simulation of the SE using ABCs (i.e., absorbing boundary conditions). I was not sure, however, if I should implement the ABCs immediately…

As of today, I think that I am going to try and graduate from the transient diffusion equation (with the CN scheme and PBCs) to a trial of the implicit TDSE without leap-frogging, as outlined above. The only question is whether I should throw in the PBCs to go with that, or the ABCs. Or, maybe neither, and just keep pinning the $\Psi$ values for the end- and ghost-nodes down to $0$, thereby effectively putting the entire simulation inside an infinite box?

At this point of time, I am tempted to try out the last option. Thus, I think that I would rather first explore the staggering vs. non-staggering issue for a pulse in an infinite box, and understand it better, before proceeding to implement either the PBCs or the ABCs. Of course, I still have to think more about it… But hey, as I said, I am now in a mood of implementing, not of contemplating.

Why not upload the programs right away?

BTW, all these programs (TISE with matrix method, TDSE on the lines of Nagel/Cooper’s codes, transient DE with PBCs, etc.) are still in a fluid state, and so, I am not going to post them immediately here (though over a period of time, I sure would).

The reason for not posting the code runs something like this: Sometimes, I use the Python range objects for indexing. (I saw this goodie in Nagel’s code.) At other times, I don’t. But even when I don’t use the range objects, I anyway am tempted to revise the code so as to have them (for a better run-time efficiency).

Similarly, for the CN method, when it comes to solving the matrix equation at each time-step, I am still not using the TDMA (the Thomas algorithm) or even just sparse matrices. Instead, right now, I am allocating the entire $N \times N$ sized matrices, and am directly using NumPy’s LinAlg’s solve() function on these biggies. No, the computational load doesn’t show up; after all, I anyway have to use a 0.1 second pause in between the rendering passes, and the biggest matrices I tried were only $1001 \times 1001$ in size. (Remember, this is just a $1D$ simulation.) Even then, I am tempted a bit to improve the efficiency. For these and similar reasons, some or the other tweaking is still going on in all the programs. That’s why, I won’t be uploading them right away.
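For reference, the TDMA that I am putting off is a short routine anyway. Here is a sketch (my own implementation of the standard algorithm, checked against the dense solver; not code from any of the programs mentioned above):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main-, c = super-diagonal,
    d = right-hand side. a[0] and c[-1] are unused. Returns the solution x."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a random, diagonally dominant system
n = 50
rng = np.random.default_rng(0)
a, c = rng.random(n), rng.random(n)
b = 4.0 + rng.random(n)                    # diagonal dominance
a[0] = 0.0
c[-1] = 0.0                                # unused corner entries
d = rng.random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
x = thomas(a, b, c, d)
print(np.max(np.abs(A @ x - d)))           # residual near machine epsilon
```

Each CN time-step then costs $O(N)$ instead of the $O(N^3)$ of a dense solve() call, which starts to matter only for much bigger grids than $1001$ nodes.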

Anything else about my new approach, like delivering a seminar or so? Any news from the Indian physicists?

I had already contacted a couple of physics professors from India, both from Pune: one, about 1.5 years ago, and another, within the last 6 months. Both these times, I offered to become a co-guide for some computational physics projects to be done by their PG/UG students or so. Both times (what else?) there was absolutely no reply to my emails. … If they were to respond, we could have together progressed further on simulating my approach. … I have always been “open” about it.

The above-mentioned experience is precisely similar to how there have been no replies when I wrote to some other professors of physics, i.e., when I offered to conduct a seminar (covering my new approach) in their departments. Particularly, from the young IISER Pune professor to whom I had written. … Oh yes, BTW, there has been one more physicist whom I contacted recently for a seminar (within the last month). Once again, there has been no reply. (This professor is known to enjoy hospitality abroad as an Indian, and also to use my taxpayer’s money for research while in India.)

No, the issue is not whether the emails I write using my Yahoo! account go into their spam folder—or something like that. That would be too innocuous a cause, and too easy to deal with—everyone has a mobile-phone these days. But I know these (Indian) physicists. Their behaviour remains exactly the same even if I write my emails using a respectable academic email ID (my employer’s, complete with a .edu domain). This was my experience in 2016, and it repeated again in 2017.

The bottom-line is this: If you are an engineer and you write to these Indian physicists, there is almost a guarantee that your emails will go into a black-hole. They will not reply to you even if you yourself have a PhD, and are a Full Professor of engineering (even if only on an ad-hoc basis), and have studied and worked abroad, and even if your blog is followed internationally. So long as you are an engineer, and mention QM, the Indian physicists simply shut themselves off.

However, there is a trick to get them to reply to you. Their behavior does temporarily change when you put some impressive guy in your cc-field (e.g., some professor friend of yours from some IIT). In this case, they sometimes do reply to your first email. However, soon after that initial shaking of hands, they somehow go back to their true core; they shut themselves off.

And this is what invariably happens with all of them—no matter what other Indian bloggers might have led you to believe.

There must be some systemic reasons for such behavior, you say? Here, I will offer a couple of relevant observations.

Systemically speaking, Indian physicists, taken as a group (and leaving any possible rarest of the rare exceptions aside), all fall into one band: (i) The first commonality is that they all are government employees. (ii) The second commonality is that they all tend to be leftists (or, heavily leftists). (iii) The third commonality they (by and large) share is that they had lower (or far lower) competitive scores in the entrance examinations at the gateway points like XII, GATE/JAM, etc.

The first factor typically means that they know that no one is going to ask them why they didn’t reply (even to people with my background). The second factor typically means that they don’t want to give you any mileage, not even just plain academic respect, if you are not already one of “them”. The third factor typically means that they simply don’t have the intellectual means to understand or judge anything you say if it is original—i.e., if it is not based on some work of someone from abroad. In plain words: they are incompetent. (That, in part, is the reason why, whenever I run into a competent Indian physicist, it is both a surprise and a pleasure. To drop a couple of names: Prof. Kanhere (now retired) from UoP (now SPPU), and Prof. Waghmare of JNCASR. … But leaving aside this minuscule minority, and coming to the rest of the herd: the less said, the better.)

In short, Indian physicists all fall into a band. And they all are very classical—no tunneling is possible. Not with these Indian physicists. (The trends, I guess, are similar all over the world. Yet, I definitely can say that Indians are worse, far worse, than people from the advanced, Western, countries.)

Anyway, as far as the path through the simulations goes, since no help is going to come from these government servants (regarded as physicists by foreigners), I realized that I have to get going at it—simulations for my new approach—entirely on my own. If necessary, from the most basic of the basics. … And that’s how I got going with these programs.

Are these programs going to provide a peek into my new approach?

No, none of these programs I talked about in this post is going to be very directly helpful for simulations related to my new approach. The programs I wrote thus far are all very, very standard (simplest UG text-book level) stuff. If resolving QM riddles were that easy, any number of people would have done it already.

… So, the programs I wrote over the last couple of weeks are nothing but just a beginning. I have to cover a lot of distance. It may take months, perhaps even a year or so. But I intend to keep working at it. At least in an off and on manner. I have no choice.

And, at least currently, I am going about it at a fairly good speed.

For the same reason, expect no further blogging for another 2–3 weeks or so.

But one thing is for certain. As soon as my paper on my new approach (to be written after running the simulations) gets written, I am going to quit QM. The field does not hold any further interest for me.

Coming to you: If you still wish to know more about my new approach before the paper gets written, then you convince these Indian professors of physics to arrange for my seminar. Or, else…

… What else? Simple! You. Just. Wait.

[Or see me in person if you would be visiting India. As I said, I have always been “open” from my side, and I continue to remain so.]

A song I like:
(Hindi) “bheegee bheegee fizaa…”
Music: Hemant Kumar
Singer: Asha Bhosale
Lyrics: Kaifi Aazmi

History:
Originally published: 2018.11.26 18:12 IST
Extension and revision: 2018.11.27 19.29 IST

# A list of books for understanding the non-relativistic QM

TL;DR: NFY (Not for you).

In this post, I will list those books which have been actually helpful to me during my self-studies of QM.

But before coming to the list, let me first note down a few points which would be important for engineers who wish to study QM on their own. After all, my blog is regularly visited by engineers too. That’s what the data about the visit patterns to various posts says.

Others (e.g. physicists) may perhaps skip over the note in the next section, and instead jump directly over to the list itself. However, even if the note for engineers is too long, perhaps, physicists should go through it too. If they did, they sure would come to know a bit more about the kind of background from which the engineers come.

# I. A note for engineers who wish to study QM on their own:

The point is this: QM is vast, even if its postulates are just a few. So, it takes a prolonged, sustained effort to learn it.

For the same reason (of vastness), learning QM also involves your having to side-by-side learn an entirely new approach to learning itself. (If you have been a good student of engineering, chances are pretty good that you already have some first-hand idea about this meta-learning thing. But the point is, if you wish to understand QM, you have to put it to use once again afresh!)

In terms of vastness, QM is, in some sense, comparable to this cluster of subjects spanning engineering and physics: engineering thermodynamics, statistical mechanics, kinetics, fluid mechanics, and heat- and mass-transfer.

I.1 Thermodynamics as a science that is hard to get right:

The four laws of thermodynamics (including the zeroth and the third) are easy enough to grasp—I mean, in the simpler settings. But when it comes to this subject (as also for the Newtonian mechanics, i.e., from the particle to the continuum mechanics), God lies not in the postulates but in their applications.

The statement of the first law of thermodynamics remains the same simple one. But complexity begins to creep in as soon as you begin to dig just a little bit deeper with it. Entire categories of new considerations enter the picture, and the meaning of the same postulates gets both enriched and deepened with them. For instance, consider the distinction of the open vs. the closed vs. the isolated systems, and the corresponding changes that have to be made even to the mathematical statements of the law. That’s just for starters. The complexity keeps increasing: studies of different processes like adiabatic vs. isochoric vs. polytropic vs. isentropic etc., and understanding the nature of these idealizations and their relevance in diverse practical applications such as: steam power (important even today, specifically, in the nuclear power plants), IC engines, jet turbines, refrigeration and air-conditioning, furnaces, boilers, process equipment, etc.; phase transitions, material properties and their variations; empirical charts….
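To make the first of these considerations concrete, recall the standard textbook statements (notation varies from book to book). For a closed system, the first law reads

$$dU = \delta Q - \delta W,$$

whereas for an open system (a steady-flow control volume), the very same law acquires flow terms, because mass now carries enthalpy and mechanical energy across the boundary:

$$\dot{Q} - \dot{W}_s = \sum_{\text{out}} \dot{m}\left(h + \frac{V^2}{2} + gz\right) - \sum_{\text{in}} \dot{m}\left(h + \frac{V^2}{2} + gz\right).$$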

Then there is another point. To really understand thermodynamics well, you have to learn a lot of other subjects too. You have to go further and study some different but complementary sciences like heat and mass transfer, to begin with. And to do that well, you need to study fluid dynamics first. Kinetics is practically important too; think of process engineering and cost of energy. Ideas from statistical mechanics are important from the viewpoint of developing a fundamental understanding. And then, you have to augment all this study with all the empirical studies of the irreversible processes (think: the boiling heat transfer process). It’s only when you study such an entire gamut of topics and subjects that you can truly come to say that you now have some realistic understanding of the subject matter that is thermodynamics.

Developing understanding of the aforementioned vast cluster of subjects (of thermal sciences) is difficult; it requires a sustained effort spanning over years. Mistakes are not only very easily possible; in engineering schools, they are routine. Let me illustrate this point with just one example from thermodynamics.

Consider some point that is somewhat nutty to get right. For instance, consider the fact that no work is done during the free expansion of a gas. If you are such a genius that you could correctly get this point on your very first reading, then hats off to you. Personally, I could not. Neither do I know of even a single engineer who could. We all stumbled on some fine points like this.

You see, what happens here is that thermodynamics and statistical mechanics involve entirely different ways of thinking, but they both are introduced almost at the same time during your UG studies. Therefore, it is easy enough to mix up some disparate metaphors coming from these two entirely different paradigms.

Coming to the specific example of the free expansion, initially, it is easy enough for you to think that since momentum is being carried by all those gas molecules escaping the chamber during the free expansion process, there must be a leakage of work associated with it. Further, since the molecules were already moving in a random manner, there must be an accompanying leakage of the heat too. Both turn out to be wrong ways of thinking about the process! Intuitions about thermodynamics develop only slowly. You think that you understood what the basic idea of a system and an environment is like, but the example of the free expansion serves to expose the holes in your understanding. And then, it’s not just thermo and stat mech. You have to learn how to separate both from kinetics (and they all, from the two other, closely related, thermal sciences: fluid mechanics, and heat and mass transfer).
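In first-law terms, the resolution of the free-expansion puzzle is a one-liner (stated here for the textbook case of an ideal gas expanding into a vacuum inside a rigid, insulated vessel): the system boundary does not move, so $W = 0$ (the gas pushes against nothing); the vessel is insulated, so $Q = 0$; hence

$$\Delta U = Q - W = 0,$$

and, since $U = U(T)$ for an ideal gas, the temperature too stays unchanged. The molecular “leakage” intuitions simply have no place to enter this accounting.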

But before you can learn to separate out the unique perspectives of these subject matters, you first have to learn their contents! But the way the university education happens, you also get exposed to them more or less simultaneously! (4 years is as nothing in a career that might span over 30 to 40 years.)

Since you are learning a lot many different paradigms at the same time, it is easy enough to naively transfer your fledgling understanding of one aspect of one paradigm (say, that of the particle or statistical mechanics) and naively insert it, in an invalid manner, into another paradigm which you are still just learning to use at roughly the same time (thermodynamics). This is what happens in the case of the free expansion of gases. Or, of throttling. Or, of the difference between the two… It is a rare student who can correctly answer all the questions on this topic, during his oral examination.

Now, here is the ultimate point: Postulates-wise, thermodynamics is independent of the rest of the subjects from the aforementioned cluster of subjects. So, in theory, you should be able to “get” thermodynamics—its postulates, in all their generality—even without ever having learnt these other subjects.

Yet, paradoxically enough, we find that complicated concepts and processes also become easier to understand when they are approached using many different conceptual pathways. A good example here would be the concept of entropy.

When you are a XII standard student (or even during your first couple of years in engineering), you are, more or less, just getting your feet wet with the idea of the differentials. As it so happens, before you run into the concept of entropy, virtually every physics concept was such that it was a ratio of two differentials. For instance, the instantaneous velocity is the ratio of d(displacement) over d(time). But the definition of entropy involves a more creative way of using the calculus: it has a differential (and that too an inexact differential), but only in the numerator. The denominator is a “plain-vanilla” variable. You have already learnt the maths used in dealing with the rates of changes—i.e. the calculus. But that doesn’t mean that you have an already learnt physical imagination with you which would let you handle this kind of a definition—one that involves a ratio of a differential quantity to an ordinary variable. … “Why should only one thing change even as the other thing remains steadfastly constant?” you may wonder. “And if it is anyway going to stay constant, then is it even significant? (Isn’t the derivative of a constant the zero?) So, why not just throw the constant variable out of the consideration?” You see, one major reason you can’t deal with the definition of entropy is simply because you can’t deal with the way its maths comes arranged. Understanding entropy in a purely thermodynamic—i.e. continuum—context can get confusing, to say the least. But then, just throw in a simple insight from Boltzmann’s theory, and suddenly, the bulb gets lit up!
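For reference, the definition in question, and the statistical counterpart that lights the bulb, are

$$dS = \frac{\delta Q_{\text{rev}}}{T}, \qquad S = k_B \ln \Omega,$$

where $\delta Q_{\text{rev}}$ is the inexact differential of the heat exchanged reversibly, $T$ is the “plain-vanilla” absolute temperature sitting alone in the denominator, and $\Omega$ is the number of microstates.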

So, paradoxically enough, even if multiple paradigms mean more work and even more possibilities of confusion, in some ways, having multiple approaches also does help.

When a subject is vast, and therefore involves multiple paradigms, people regularly fail to get certain complex ideas right. That happens even to very smart people. For instance, consider Maxwell’s demon. Not many people could figure out how to deal with it correctly, for such a long time.

…All in all, it is only some time later, when you have already studied all these topics—thermodynamics, kinetics, statistical mechanics, fluid mechanics, heat and mass transfer—that finally things begin to fall in place (if they at all do, at any point of time!). But getting there involves hard effort that goes on for years: it involves learning all these topics individually, and then, also integrating them all together.

In other words, there is no short-cut to understanding thermodynamics. It seems easy enough to think that you’ve understood the 4 laws the first time you ran into them. But the huge gaps in your understanding begin to become apparent only when it comes to applying them to a wide variety of situations.

I.2 QM is vast, and requires multiple passes of studies:

Something similar happens also with QM. It too has relatively few postulates (3 to 6 in number, depending on which author you consult) but a vast scope of applicability. It is easy enough to develop a feeling that you have understood the postulates right. But, exactly as in the case of thermodynamics (or Newtonian mechanics), once again, God lies not in the postulates but rather in their applications. And in the case of QM, you have to hasten to add: God also lies in the very meaning of these postulates—not just their applications. QM carries a one-two punch.

Similar to the case of thermodynamics and the related cluster of subjects, it is not possible to “get” QM in the first go. If you think you did, chances are that you have a superhuman intelligence. Or, far, far more likely, the plain fact of the matter is that you simply didn’t get the subject matter right—not in its full generality. (Which is what typically happens to the CS guys who think that they have mastered QM, even if the only “QM” they ever learnt was that of two-state systems in a finite-dimensional Hilbert space, and without ever acquiring even an inkling of ideas like radiation-matter interactions, transition rates, or the average decoherence times.)

The only way out, the only way that works in properly studying QM, is this: Begin studying QM at a simpler level, finish developing as much understanding about its entire scope as possible (as happens in the typical Modern Physics courses), and then come back to studying the same set of topics once again in a next iteration, but now to a greater depth. And, you have to keep repeating this process some 4–5 times. Oftentimes, you have to come back from iteration $n+2$ to $n$.

As someone remarked at some forum (at Physics StackExchange or Quora or so), to learn QM, you have to give it “multiple passes.” Only then can you succeed in understanding it. The idea of multiple passes has several implications. Let me mention only two of them. Both are specific to QM (and not to thermodynamics).

First, you have to develop the art of being able to hold some not-fully-satisfactory islands of understanding, with all the accompanying ambiguities, for extended periods of time (which usually runs into years!). You have to learn how to give a second or a third pass even when some of the things right from the first pass are still nowhere near getting clarified. You have to learn a lot of maths on the fly too. However, if you ask me, that’s a relatively easier task. The really difficult part is that you have to know (or learn!) how to keep forging ahead, even if at the same time, you carry a big set of nagging doubts that no one seems to know (or even care) about. (To make the matters worse, professional physicists, mathematicians and philosophers proudly keep telling you that these doubts will remain just as they are for the rest of your life.) You have to learn how to shove these ambiguous and un-clarified matters to some place near the back of your mind, you have to learn how to ignore them for a while, and still find the mental energy to once again begin right from the beginning, for your next pass: Planck and his cavity radiation, Einstein, blah blah blah blah blah!

Second, for the same reason (i.e. the necessity of multiple passes and the nature of QM), you also have to learn how to unlearn certain half-baked ideas and replace them later on with better ones. For a good example, go through Dan Styer’s paper on misconceptions about QM (listed near the end of this post).

Thus, two seemingly contradictory skills come into play: You have to learn how to hold ambiguities without letting them affect your studies. At the same time, you also have to learn how not to hold on to them forever, or how to unlearn them, when the time to do so becomes ripe.

Thus, learning QM does not involve just learning new content. You also have to learn the art of building a sufficiently “temporary” but very complex conceptual structure in your mind—a structure that, despite all its complexity, still is resilient. You have to learn the art of holding such a framework together over a period of years, even as some parts of it are still getting replaced in your subsequent passes.

And, you have to compensate for all the failings of your teachers too (who themselves were told, effectively, to “shut up and calculate!”). Properly learning QM is a demanding enterprise.

# II. The list:

Now, with that long a preface, let me come to listing all the main books that I found especially helpful during my various passes. Please remember, I am still learning QM. I still don’t understand the second half of most any UG book on QM. This is a factual statement. I am not ashamed of it. It’s just that the first half itself managed to keep me so busy for so long that I could not come to studying the second half in an in-depth manner. (By the second half, I mean things like: the QM of molecules and binding, of their spectra, the QM of solids, the QM of complicated light-matter interactions, computational techniques like DFT, etc.) … OK. So, without any further ado, let me jot down the actual list. I will subdivide it into several sub-sections.

II.0. Junior-college (American high-school) level:

Obvious:

• Resnick and Halliday.
• Thomas and Finney. Also, Allan Jeffrey.

II.1. Initial, college physics level:

• “Modern physics” by Beiser, or equivalent
• Optional but truly helpful: “Physical chemistry” by Atkins, or equivalent, i.e., only the parts relevant to QM. (I know engineers often tend to ignore the chemistry books, but they should not. In my experience, often times, chemistry books do a superior job of explaining physics. Physics, to paraphrase a witticism, is far too important to be left to the physicists!)

II.2. Preparatory material for some select topics:

• “Physics of waves” by Howard Georgi. Excellence written all over, but precisely for the same reason, take care to avoid the temptation to get stuck in it!
• Maths: No particular book, but a representative one would be Kreyszig, i.e., with Thomas and Finney or Allan Jeffrey still within easy reach.
• There are a few things you have to relearn, if necessary. These include: the idea of the limits of sequences and series. (Yes, go through this simple a topic too, once again. I mean it!). Then, the limits of functions.
Also try to relearn curve-tracing.
• Unlearn (or throw away) all the accounts of complex numbers which remain stuck at the level of how $\sqrt{-1}$ was stupefying, and how, when you have complex numbers, any arbitrary equation magically comes to have roots, etc. Unlearn all that talk. Instead, focus on the similarities of complex numbers to both the real numbers and vectors, and also on their differences from each. Unlike what mathematicians love to tell you, complex numbers are not just another kind of numbers. They don’t represent just the next step in the logic of how the idea of numbers gets generalized as you go from integers to real numbers. The reason is this: Unlike the integers, rationals, irrationals and reals, complex numbers take birth as composite numbers (as a pair of numbers that is ordered too), and they remain that way until the end of their life. Get that part right, and ignore all the mathematicians’ loose talk about it.
Study complex numbers in a way that, eventually, you should find yourself being comfortable with the two equivalent ways of modeling physical phenomena: as a set of two coupled real-valued differential equations, and as a single but complex-valued differential equation.
• Also try to become proficient with the two main expansions: the Taylor, and the Fourier.
• Also develop a habit of quickly substituting truncated expansions (i.e., either a polynomial, or a sum of complex exponentials having just a few initial harmonics, not an entire infinity of them) into any “arbitrary” function as an ansatz, and seeing how the proposed theory pans out with these. The goal is to become comfortable, at the same time, with the habit of tracing conceptual pathways to the meaning of the maths as well as with the computational techniques of FDM, FEM, and FFT.
• The finite differences approximation: Also, learn the art of quickly substituting the finite differences ($\Delta$‘s) in place of the differential quantities ($d$ or $\partial$) in a differential equation, and seeing how it pans out. The idea here is not just the computational modeling. The point is: Every differential equation has been derived in reference to an elemental volume which was then taken to a vanishingly small size. The variation of the quantities of interest across such an (infinitesimally small) volume is always represented using the Taylor series expansion.
(That’s correct! It is true that the derivations using the variational approach don’t refer to the Taylor expansion. But they also don’t use infinitesimal volumes; they refer to finite or infinite domains. It is the variation in functions which is taken to the vanishingly small limit in their case. In any case, if your derivation has an infinitesimally small element, bingo, you are going to use the Taylor series.)
Now, coming back to why you must develop the habit of putting a finite differences approximation in place of a differential equation. The thing is this: By doing so, you are unpacking the derivation; you are traversing the analysis in the reverse direction, and you are, by the logic of the procedure, forced to look for the physical (or at least lower-level, less abstract) referents of a mathematical relation/idea/concept.
While thus going back and forth between the finite differences and the differentials, also learn the art of tracing how the limiting process proceeds in each such case. This part is not at all as obvious as you might think. It took me years and years to figure out that there can be infinitesimals within infinitesimals. (In fact, I blogged about it several years ago here. More recently, I wrote a PDF document about how many numbers there are in the real number system, which discusses the same idea from a different angle. In any case, if you were not shocked by the fact that there can be an infinity of infinitesimals within any infinitesimal, either think sufficiently long about it—or quit studying the foundations of QM.)
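To make these points concrete, here is a toy sketch of my own (purely illustrative, and not taken from any of the books listed here; forward-Euler time-stepping is used only for its simplicity, not because it is a good scheme for production work). It checks that the two equivalent modeling routes mentioned above, viz., a single complex-valued ODE and a pair of coupled real-valued ODEs, march forward in time to the same answer:

```python
import numpy as np

# Toy model: d(psi)/dt = i*omega*psi, with the exact solution psi(t) = exp(i*omega*t).
omega = 2.0
dt = 1e-4
n_steps = 10000  # integrate up to t = 1.0

# (a) A single complex-valued ODE, marched with forward-Euler finite differences.
psi = 1.0 + 0.0j
for _ in range(n_steps):
    psi = psi + dt * (1j * omega * psi)

# (b) The same equation recast as two coupled real-valued ODEs:
#     x' = -omega*y,  y' = omega*x,   where psi = x + i*y.
x, y = 1.0, 0.0
for _ in range(n_steps):
    x, y = x + dt * (-omega * y), y + dt * (omega * x)

# Compare both marches against the exact solution at t = 1.
exact = np.exp(1j * omega * n_steps * dt)
print(abs(psi - (x + 1j * y)), abs(psi - exact))
```

The two marches agree with each other essentially to machine precision (they perform the same arithmetic, just organized differently), and both track the exact solution to within the accuracy of the Euler scheme.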
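And here is a similarly toy-level check, again of my own making, of what the truncated Taylor expansion predicts: substituting $f(x \pm h) = f(x) \pm h f'(x) + \frac{h^2}{2} f''(x) \pm \dots$ into the central difference quotient says that its error should shrink as $O(h^2)$, i.e., roughly a hundredfold for every tenfold reduction in $h$:

```python
import numpy as np

# Verify numerically that the central difference (f(x+h) - f(x-h)) / (2h)
# approximates f'(x) with an O(h^2) error, exactly as the truncated Taylor
# expansion predicts (the odd-order terms cancel; the h^2 term survives).
f, fprime = np.sin, np.cos
x = 0.7
hs = [1e-1, 1e-2, 1e-3]
errors = []
for h in hs:
    fd = (f(x + h) - f(x - h)) / (2 * h)
    errors.append(abs(fd - fprime(x)))

# Each tenfold reduction in h should cut the error about a hundredfold.
print(errors)
```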

II.3. Quantum chemistry level (mostly concerned with only the TISE, not TDSE):

• Optional: “QM: a conceptual approach” by Hameka. A fairly well-written book. You can pick it up for some serious reading, but also try to finish it as fast as you can, because you are going to relearn the same stuff once again through the next book in the sequence. But yes, you can pick it up; it’s only about 200 pages.
• “Quantum chemistry” by McQuarrie. Never commit the sin of bypassing this excellent book.
Summarily ignore your friend (who might have advised you Feynman vol. 3 or Susskind’s theoretical minimum or something similar). Instead, follow my advice!
A suggestion: Once you finish reading through this particular book, take a small (40 page) notebook, and write down (in the long hand) just the titles of the sections of each chapter of this book, followed by a listing of the important concepts / equations / proofs introduced in it. … You see, the section titles of this book themselves are complete sentences that encapsulate very neat nuggets. Here are a couple of examples: “5.6: The harmonic oscillator accounts for the infrared spectrum of a diatomic molecule.” Yes, that’s a section title! Here is another: “6.2: If a Hamiltonian is separable, then its eigenfunctions are products of simpler eigenfunctions.” See why I recommend this book? And this (40 page notebook) way of studying it?
• “Quantum physics of atoms, molecules, solids, nuclei, and particles” (yes, that’s the title of this single volume!) by Eisberg and Resnick. This Resnick is the same one as that of Resnick and Halliday. Going through the same topics via yet another thick book (almost 850 pages) can get exasperating, at least at times. But I guess if you show some patience here, it should simplify things later on. … Confession: I was too busy with teaching and learning engineering topics like FEM, CFD, and also with many other things in between. So, I could not find the time to read this book the way I would have liked to. But from whatever I did read (and I did go over a fairly good portion of it), I can tell you that not finishing this book was a mistake on my part. Don’t repeat my mistake. Further, I do keep going back to it, and maybe, as a result, I will one day have finished it! One more point. This book is more than quantum chemistry; it does discuss the time-dependent parts too. The only reason I include it in this sub-section (chemistry) rather than the next (physics) is that the emphasis here is much more on the TISE than the TDSE.

II.4. Quantum physics level (includes TDSE):

• “Quantum physics” by Alastair I. M. Rae. Hands down, the best book in its class. To my mind, it easily beats all of the following: Griffiths, Gasiorowicz, Feynman, Susskind, … .
Oh, BTW, this is the only book I have ever come across which does not put scare-quotes around the word “derivation,” while describing the original development of the Schrodinger equation. In fact, this text goes one step ahead and explicitly notes the right idea, viz., that Schrodinger’s development is a derivation, but it is an inductive derivation, not deductive. (… Oh God, these modern American professors of physics!)
But even leaving this one (arguably “small”) detail aside, the book has excellence written all over it. Far better than the competition.
Another attraction: The author touches upon all the standard topics within just about 225 pages. (He also has three further chapters, one each on relativity and QM, quantum information, and conceptual problems with QM. However, I have mostly ignored these.) When a book is of a manageable size, it by itself is an overload reducer. (This post is not a portion from a text-book!)
The only “drawback” of this book is that, like many British authors, Rae has a tendency to seamlessly bunch together a lot of different points into a single, bigger, paragraph. He does not isolate the points sufficiently well. So, you have to write a lot of margin notes identifying those distinct, sub-paragraph level, points. (But one advantage here is that this procedure is very effective in keeping you glued to the book!)
• “Quantum physics” by Griffiths. Oh yes, Griffiths is on my list too. It’s just that I find it far better to go through Rae first, and only then come to going through Griffiths.
• … Also, avoid the temptation to read both these books side-by-side. You will soon find that you can’t do that. And so, driven by what other people say, you will soon end up ditching Rae—which would be a grave mistake. Since you can keep going through only one of them, you have to jettison the other. Here, I would advise you to first complete Rae. It’s indispensable. Griffiths is good too. But it is not indispensable. And as always, if you find the time and the inclination, you can always come back to Griffiths.

Starting sometime after finishing the initial UG quantum chemistry level books, but preferably after the quantum physics books, use the following two:

• “Foundations of quantum mechanics” by Travis Norsen. Very, very good. See my “review” here [^].
• “Foundations of quantum mechanics: from photons to quantum computers” by Reinhold Blumel.
Just because people don’t rave a lot about this book doesn’t mean that it is average. This book is peculiar. It does look very average if you flip through all its pages within, say, 2–3 minutes. But it turns out to be an extraordinarily well written book once you begin to actually read through its contents. The coverage here is concise, accurate, fairly comprehensive, and, as a distinctive feature, it also is fairly up-to-date.
Unlike the other text-books, Blumel gives you a good background in the specifics of the modern topics as well. So, once you complete this book, you should find it easy (to very easy) to understand today’s pop-sci articles, say those on quantum computers. To my knowledge, this is the only text-book which does this job (of introducing you to the topics that are relevant to today’s research), and it does this job exceedingly well.
• Use Blumel to understand the specifics, and use Norsen to understand their conceptual and the philosophical underpinnings.

II.Appendix: Miscellaneous—no levels specified; figure out as you go along:

• “Schrodinger’s cat” by John Gribbin. Unquestionably, the best pop-sci book on QM. Lights your fire.
• “Quantum” by Manjit Kumar. Helps keep the fire going.
• Kreyszig or equivalent. You need to master the basic ideas of the Fourier theory, and of solutions of PDEs via the separation ansatz.
• However, for many other topics like spherical harmonics or the calculus of variations, you have to go hunting for explanations in some additional books. I “learnt” the spherical harmonics mostly through some online notes (esp. those by Michael Fowler of Univ. of Virginia) and QM textbooks, but I guess that a neat exposition of the topic, couched in contexts other than QM, would have been helpful. Maybe there is some ancient acoustics book that is really helpful. Anyway, I didn’t pursue this topic to any great depth (in fact I more or less skipped over it) because, as it so happens, analytical methods fall short for anything more complex than the hydrogenic atoms.
• As to the variational calculus, avoid all the physics and maths books like the plague! Instead, learn the topic through the FEM books. Introductory FEM books have become vastly (i.e. categorically) better over the course of my generation. Today’s FEM text-books do provide clear evidence that the authors themselves know what they are talking about! Among these books, just for learning the variational calculus aspects, I would advise going through Seshu or Fish and Belytschko first, and then through the relevant chapter from Reddy‘s book on FEM. In any case, avoid Bathe, Zienkiewicz, etc.; they are too heavily engineering-oriented, and often, in general, un-necessarily heavy-duty (though not as heavy-duty as Lanczos). Not very suitable for learning the basics of CoV as is required in the UG QM. A good supplementary book covering CoV is noted next.
• “From calculus to chaos: an introduction to dynamics” by David Acheson. A gem of a book. Small (just about 260 pages, including program listings—and just about 190 pages if you ignore them.) Excellent, even if, somehow, it does not appear on people’s lists. But if you ask me, this book is a must read for any one who has anything to do with physics or engineering. Useful chapters exist also on variational calculus and chaos. Comes with easy to understand QBasic programs (and their updated versions, ready to run on today’s computers, are available via the author’s Web site). Wish it also had chapters, say one each, on the mechanics of materials, and on fracture mechanics.
• Linear algebra. Here, keep your focus on understanding just the two concepts: (i) vector spaces, and (ii) eigen-vectors and -values. Don’t worry about other topics (like LU decomposition or the power method). If you understand these two topics right, the rest will follow “automatically,” more or less. To learn these two topics, however, don’t refer to text-books (not even those by Gilbert Strang or so). Instead, google on the online tutorials on computer games programming. This way, you will come to develop a far better (even robust) understanding of these concepts. … Yes, that’s right. One or two games programmers, I very definitely remember, actually did a much superior job of explaining these ideas (with all their complexity) than what any textbook by any university professor does. (iii) Oh yes, BTW, there is yet another concept which you should learn: “tensor product”. For this topic, I recommend Prof. Zhigang Suo‘s notes on linear algebra, available off iMechanica. These notes are a work in progress, but they are already excellent even in their present form.
• Probability. Contrary to a wide-spread impression (and to what one group of QM interpreters says), you actually don’t need much statistics or probability in order to get the essence of QM right. Whatever you need has already been taught to you in your UG engineering/physics courses. Personally, though I haven’t yet gone through them, the two books on my radar (more from the data science angle) are: “Elementary probability” by Stirzaker, and “All of statistics” by Wasserman. But, frankly speaking, as far as QM itself is concerned, your intuitive understanding of probability as developed through your routine UG courses should be enough, IMHO.
• As to AJP type of articles, go through Dan Styer‘s paper on the nine formulations (doi:10.1119/1.1445404). But treat his paper on the common misconceptions (10.1119/1.18288) with a bit of caution; some of the ideas he lists as “misconceptions” are not necessarily so.
• arXiv tutorials/articles: Sometime after finishing quantum chemistry and before beginning quantum physics, go through the tutorial on QM by Bram Gaasbeek [^]. Neat, small, and really helpful for self-studies of QM. (It was written when the author was still a student himself.) Also, see the article on the postulates by Dorabantu [^]. Definitely helpful. Finally, let me pick up just one more arXiv article: “Entanglement isn’t just for spin” by Dan Schroeder [^]. Comes with neat visualizations, and helps demystify entanglement.
• Computational physics: Several good resources are available. One easy to recommend text-book is the one by Landau, Perez and Bordeianu. Among the online resources, the best collection I found was the one by Ian Cooper (of Univ. of Sydney) [^]. He has only MatLab scripts, not Python, but they all are very well documented (in an exemplary manner) via accompanying PDF files. It should be easy to port these programs to the Python eco-system.
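As a concrete tie-up between the linear algebra bullet above and the computational physics resources, here is a minimal sketch (mine, and only a sketch; Cooper’s well-documented scripts do this properly) of the matrix method for the 1D TISE: discretize the second derivative as a tridiagonal matrix, and hand the resulting Hamiltonian to NumPy’s symmetric eigensolver.

```python
import numpy as np

# Matrix method for the 1D TISE, -(1/2) psi'' + V psi = E psi, in natural
# units (hbar = m = 1), on a box of length L with psi = 0 at both walls.
L = 1.0
n = 500                      # number of interior grid points
dx = L / (n + 1)
V = np.zeros(n)              # zero potential inside the box; edit for other cases

# Central-difference Laplacian as a tridiagonal matrix (Dirichlet BCs).
main = np.full(n, -2.0)
off = np.full(n - 1, 1.0)
laplacian = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2

H = -0.5 * laplacian + np.diag(V)     # the Hamiltonian matrix
energies, states = np.linalg.eigh(H)  # eigenvalues ascending; columns = eigenvectors

# Exact levels for the infinite well: E_k = (k*pi/L)^2 / 2, k = 1, 2, ...
exact = 0.5 * (np.arange(1, 4) * np.pi / L) ** 2
print(energies[:3], exact)
```

The lowest few computed energies reproduce the analytical particle-in-a-box levels to within the $O(\Delta x^2)$ discretization error, and an arbitrary potential goes in by simply changing the `V` array; that’s the whole appeal of the matrix method over shooting.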

Yes, we (finally) are near the end of this post, so let me add the mandatory catch-all clauses: This list is by no means comprehensive! This list supersedes any other list I may have put out in the past. This list may undergo changes in future.

Done.

OK. A couple of last-minute addenda: For contrast, see the article “What is the best textbook for self-studying quantum mechanics?” which has appeared, of all places, on Forbes! [^]. (Looks like the QC-related hype has found its way into the business circles as well!) Also see the list at BookScrolling.com: “The best books to learn about quantum physics” [^].

OK. Now, I am really done.

A song I like:
(Marathi) “kiteedaa navyaane tulaa aaThavaave”
Music: Mandar Apte
Singer: Mandar Apte. Also, a separate female version by Arya Ambekar
Lyrics: Devayani Karve-Kothari

[Arya Ambekar’s version is great too, but somehow, I like Mandar Apte’s version better. Of course, I do often listen to both the versions. Excellent.]

[More than 5,500 words! Give me a longer break for this time around, a much longer one, in fact… In the meanwhile, take care and bye until then…]

# Would it happen to me, too? …Also, other interesting stories / links

1. Would it happen to me, too?

“My Grandfather Thought He Solved a Cosmic Mystery,”

reports Veronique Greenwood for The Atlantic [^] [h/t the CalTech physicist Sean Carroll’s twitter feed]. The story has the subtitle:

“His career as an eminent physicist was derailed by an obsession. Was he a genius or a crackpot?”

If you visit the URL for this story, the actual HTML page which loads into your browser has another title, similar to the one above:

“Science Is Full of Mavericks Like My Grandfather. But Was His Physics Theory Right?”

Hmmm…. I immediately got interested. After all, I do work also on the foundations of quantum mechanics. … “Will it happen to me, too?” I thought.

At this point, you should really go through Greenwood’s article, and continue reading here only after you have finished reading it.

Any one who has worked on any conceptually new approach would find something in Greenwood’s article that resonates with him.

As to me, well, right at the time that attempts were being made to find examiners for my PhD, my guide (and even I) had heard a lot of people say very similar things as Greenwood now reports: “I don’t understand what you are saying, so please excuse me.” This, when I thought that my argument should be accessible even to an undergraduate in engineering!

And now that I continue working on the foundations of QM, having developed a further, completely new (and more comprehensive) approach, naturally, Greenwood’s article got me thinking: “Would it happen to me, too? Once again? What if it does?”

…Naah, it wouldn’t happen to me—that was my conclusion. Not even if I continue talking about, you know, QM!

But why wouldn’t something similar happen to me? Especially given the fact that a good part of it has already happened to me in the past?

The reason, in essence, is simple.

I am not just a physicist—not primarily, anyway. I am primarily an engineer, a computational modeller. That’s why, things are going to work out in a different way for me.

As to my past experience: Well, I still earned my PhD degree. And with it, the most critical part of the battle is already behind me. There is a lot of resistance to your acceptance before you have a PhD. Things do become a lot easier once you have gone successfully past it. That’s another reason why things are going to work out in a different way now. … Let me explain in detail.

I mean to say, suppose that I have a brand-new approach for resolving all the essential quantum mechanical riddles. [I think I actually do!]

Suppose that I try to arrange for a seminar to be delivered by me to a few physics professors and students, say at an IIT, IISER, or so. [I actually did!]

Suppose that they don’t respond very favorably or very enthusiastically. Suppose they are outright skeptical when I say that in principle, it is possible to think of a classical mechanically functioning analog simulator which essentially exhibits all the essential quantum mechanical features. Suppose that they get stuck right at that point—may be because they honestly and sincerely believe that no classical system can ever simulate the very quantum-ness of QM. And so, short of calling me a crack-pot or so, they just directly (almost sternly) issue the warning that there are a lot of arguments against a classical system reproducing the quantum features. [That’s what has actually happened; that’s what one of the physics professors I contacted wrote back to me.]

Suppose, then, that I send an abstract to an international conference or so. [This, too, has actually happened recently.]

Suppose that, in the near future, the conference organizers too decline my submission. [In actual reality, I still don’t know anything about the status of my submission. It was in my routine searches that I came across this conference, and I noticed that I had about 4–5 hours left to meet the abstracts submission deadline. I managed to submit my abstract in time. But since then, the conference Web site has not been updated. There is no indication from the organizers as to when the acceptance or rejection of the submitted abstracts will be communicated to the authors. An enquiry email I wrote to the organizers has gone unanswered for more than a week now. Thus, the matter is still open. But, just for the sake of the argument, suppose that they end up rejecting my abstract. Suppose that’s what actually happens.]

So what?

Since I am not a physicist “proper”, it wouldn’t affect me the way it might have, if I were to be one.

… And, that way, I could even say that I am far too smart to let something like that (I mean some deep disappointment or something like that) happen to me! … No, seriously! Let me show you how.

Suppose that the abstract I sent to an upcoming conference was written in theoretical/conceptual terms. [In actual reality, it was.]

Suppose now that it therefore gets rejected.

So what?

I would simply build a computational model based on my ideas. … Here, remember, I have already begun “talking things” about it [^]. No one has come up with a strong objection so far. (Maybe because they know the sort of guy I am.)

So, if my proposed abstract gets rejected, what I would do is to simply go ahead and perform a computer simulation of a classical system of this sort (one which, in turn, simulates the QM phenomena). I might even publish a paper or two about it—putting the whole thing in purely classical terms, so that I manage to get it published. (Before doing that, I might even discuss the technical issues involved on blogs, possibly even at iMechanica!)

After such a paper (ostensibly only on the classical mechanics) gets accepted and published, I will simply write a blog post, either here or at iMechanica, noting how that system actually simulates the so-and-so quantum mechanical feature. … Then, I would perform another simulation—say using DFT. (And it is mainly for DFT that I would need help from iMechanicians or so.) After it too gets accepted and published, I will write yet another blog post, explaining how it does show some quantum mechanical-ness. … Who knows such a sequence could continue…

But such a series (of the simulations) wouldn’t be very long, either! The thing is this.

If your idea does indeed simplify certain matters, then you don’t have to argue a lot about it—people can see its truth real fast. Especially if it has to do with “hard” sciences like engineering—even physics!

If your basic idea itself isn’t so good, then, putting it in the engineering terms makes it more likely that even if you fail to get the weakness of your theory, someone else would. All in all, well and good for you.

As to the other possibility, namely, if your idea is good, but, despite putting it in the simpler terms (say in engineering or simulation terms), people still fail to see it, then, well, so long as your job (or money-making potential) itself is not endangered, I think that it is a good policy to leave mankind to its own follies. It is not your job to save the world, said Ayn Rand. Here, I believe her. (In fact, I believed in this insight even before I had ever run into Ayn Rand.)

As to the philosophic issues such as those involved in the foundations of QM—well, these are best tackled philosophically, not physics-wise. I wouldn’t use a physics-based argument to take a philosophic argument forward. Neither would I use a philosophical argument to take a physics-argument forward. The concerns and the methods of each are distinctly different, I have come to learn over a period of years.

Yes, you can use a physics situation as being illustrative of a philosophic point. But an illustration is not an argument; it is merely a device to make understanding easier. Similarly, you could try to invoke a philosophic point (say an epistemological point) to win a physics-based argument. But your effort would be futile. Philosophic ideas are so abstract that they can often be made to fit several different, competing, physics-related arguments. I would try to avoid both these errors.

But yes, as a matter of fact, certain issues that can only be described as philosophic ones, do happen to get involved when it comes to the area of the foundations of QM.

Now, here, given the nature of philosophy, and of its typical practitioners today (including those physicists who do dabble in philosophy), even if I become satisfied that I have resolved all the essential QM riddles, I still wouldn’t expect these philosophers to accept my ideas—not immediately, anyway. In fact, as I anticipate things, philosophers, taken as a group, would never come to accept my position, I think. Such a happenstance is not necessarily to be ascribed to the personal failings of the individual philosophers (even if a lot of them actually do happen to be world-class stupid). That’s just what philosophy (as a discipline of study) itself is like. A philosophy is a comprehensive view of existence—whether realistic or otherwise. That’s why it’s futile to expect that all of the philosophers would come to agree with you!

But yes, I would expect them to get the essence of my argument. And, many of them would, actually, get my argument, its logic—this part, I am quite sure of. But just the fact that they do understand my argument would not necessarily lead them to accept my positions, especially the idea that all the QM riddles are thereby resolved. That’s what I think.

Similarly, there also are a lot of mathematicians who dabble in the area of foundations of QM. What I said for philosophers also applies more or less equally well to them. They too would get my ideas immediately. But they too wouldn’t, therefore, come to accept my positions. Not immediately anyway. And in all probability, never ever in my lifetime or theirs.

So, there. Since I don’t expect an overwhelming acceptance of my ideas in the first place, there isn’t going to be any great disappointment either. The very expectations do differ.

Further, I must say this: I would never ever be able to rely on a purely abstract argument. That would feel like too dicey or flimsy to me. I would have to offer my arguments in terms of physically existing things, even if of a brand new kind. And, machines built out of them. At least, some working simulations. I would have to have these. I would not be able to rest on an abstract argument alone. To be satisfactory to me, I would have to actually build a machine—a soft machine—that works. And, doing just this part itself is going to be far more than enough to keep me happy. They don’t have to accept the conceptual arguments or the theory that goes with the design of such (soft) machines. It is enough that I play with my toys. And that’s another reason why I am not likely to derive a very deep sense of disenchantment or disappointment.

But if you ask me, the way I really, really like to think about it is this:

If they decline my submission to the conference, I will write a paper about it, and send it, maybe, to Sean Carroll or Sabine Hossenfelder or so. … The way I imagine things, he is then going to immediately translate my paper into German, add his own name to ensure its timely publication, and … . OK, you get the idea.

[In the interests of making this post completely idiot-proof, let me add: Here, in this sub-section, I was just kidding.]

2. The problem with the Many Worlds:

“Why the Many Worlds interpretation has many problems.”

Philip Ball argues in an article for the Quanta Mag [^] to the effect that many worlds means no world at all.

No, this is not exactly what he says. But what he says is clear enough that it is this conclusion which becomes inescapable.

As to what he actually says: Well, here is a passage, for instance:

“My own view is that the problems with the MWI are overwhelming—not because they show it must be wrong, but because they render it incoherent. It simply cannot be articulated meaningfully.”

In other words, Ball’s actual position is on the epistemic side, not on the ontic. However, his arguments are clear enough (and they often enough touch on issues that are fundamental enough) that the ontological implications of what he actually says, also become inescapable. OK, sometimes, the article unnecessarily takes detours into non-essentials, even into something like polemics. Still, overall, the write up is very good. Recommended very strongly.

Homework for you: If the Many Worlds idea is that bad, then explain why it might be that many otherwise reasonable people (for instance, Sean Carroll) do find the Many Worlds approach attractive. [No cheating. Think on your own and write. But if cheating is what you must do, then check out my past comment at some blog—I no longer remember where I wrote it, but probably it was on Roger Schlafly’s blog. My comment had tackled precisely this latter issue, in essential terms. Hints for your search: My comment had spoken about data structures like call-stacks and trees, and their unfolding.]

3. QM as an embarrassment to science:

“Why quantum mechanics is an “embarrassment” to science”

Brad Plumer in his brief note at the Washington Post [^] provides a link to a video by Sean Carroll.

Carroll is an effective communicator.

[Yes, he is the same one who I imagine is going to translate my article into German and… [Once again, to make this post idiot-proof: I was just kidding.]]

4. Growing younger…

I happened to take up a re-reading of David Ruelle’s book: “Chance and Chaos”. The last time I read it was in the early 1990s.

I felt younger! … Maybe if something strikes me while I am going through it after a gap of decades, I will come back and note it here.

5. Good introductory resources on nonlinear dynamics, catastrophe theory, and chaos theory:

If you are interested in the area of nonlinear dynamics, catastrophe theory and chaos theory, here are a few great resources:

• For a long time, the best introduction to the topic was a brief write-up by Prof. Harrison of UToronto; it still remains one of the best [^].
• Prof. Zeeman’s 1976 article for SciAm on catastrophe theory is a classic. Prof. Zhigang Suo (of Harvard) has written a blog post titled “Recipe for catastrophe” at iMechanica [^], in which he helpfully provides a copy of Zeeman’s article. I have strongly recommended Zeeman’s write-up before, and I strongly recommend it once again. Go through it even if only to learn how to write for the layman and still not lose precision or quality.
• As to more recent introductory expositions, do see Prof. Geoff Boeing’s blog post: “Chaos theory and the logistic map” [^]. Boeing is a professor of urban planning, not of engineering, physics, CS, or maths. But it is he who gives the clearest idea I have ever run into about the distinction between randomness and chaos. (However, I only later gathered that he does have a UG degree in CS, and a PG degree in Information Management.) Easy to understand. Well ordered. Overall, very highly recommended.
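The randomness-vs-chaos distinction that Boeing explains is easy to see in a few lines of code. Here is a minimal sketch of the logistic map (my own illustration, not code from Boeing’s post):

```python
import numpy as np

def logistic_orbit(r, x0, n):
    """Iterate the logistic map x -> r*x*(1 - x); a fully deterministic rule."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

# Chaos is not randomness: the rule above has no random element at all,
# yet in the chaotic regime (r = 3.9) two orbits that start a mere 1e-9
# apart stay close for a while and then diverge completely -- the
# "sensitive dependence on initial conditions" of chaos theory.
a = logistic_orbit(3.9, 0.2, 60)
b = logistic_orbit(3.9, 0.2 + 1e-9, 60)
print(abs(a[10] - b[10]))              # still tiny after 10 steps
print(np.max(np.abs(a[45:] - b[45:]))) # macroscopic by step ~45
```

A random sequence has no such underlying rule; a chaotic one does, and is perfectly reproducible if you know the initial condition exactly—which, in practice, you never do.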

Apart from it all:

Happy Diwali!

A song I like:

(Hindi) “tere humsafar geet hai tere…”
Music: R. D. Burman
Singers: Kishore Kumar, Mukesh, Asha Bhosale
Lyrics: Majrooh Sultanpuri

[Has this song been lifted from some Western song? At least inspired from one?

Here are the reasons for this suspicion: (i) It has a Western-sounding tune. It doesn’t sound Indian. There is no obvious basis either in the “raag-daari” or in Indian folk music. (ii) There are (beautiful) changes of chords here. But there is no concept of chords in traditional Indian music—basically, there is no concept of harmony in it, only of melody. (iii) Presence of “yodelling” (if that’s the right word for it). That too, by a female singer. That too, in the early 1970s! Despite all the “taan”s and “firat”s and all that, this sort of a thing (let’s call it yodelling) has never been a part of traditional Indian music.

Chances are good that some of the notes were (perhaps very subconsciously) inspired by a Western tune. For instance, I can faintly hear “jingle bells” in the refrain. … But the question is: is there a more direct correspondence to a Western tune, or not?

And, if it was not lifted from or inspired by a Western song, then it’s nothing but the work of an absolute genius. RD anyway was one—whether this particular song was inspired by some other song, or not.

But yes, I liked this song a great deal as a school-boy. It happened to strike me once again only recently (within the last couple of weeks or so). I found that I still love it just as much, if not more.]

[As usual, may be I will come back tomorrow or so, and edit/streamline this post a bit. One update done on 2018.11.04 08:26 IST. A second update done on 2018.11.04 21:01 IST. I will now leave this post in whatever shape it is in. Got to move on to trying out a few things in Python and all. Will keep you informed, probably after Diwali. In the meanwhile, take care and bye for now…]

# The bouncing droplets imply having to drop the Bohmian approach?

If you are interested in the area of QM foundations, then maybe you should drop everything at once and go check out the latest pop-sci news report: “Famous experiment dooms alternative to quantum weirdness” by Natalie Wolchover in the Quanta Magazine [^].

Remember the bouncing droplets experiments performed by Yves Couder and pals? In 2006, they had reported that they could get the famous interference pattern even if the bouncing droplets passed through the double slit arrangement only one at a time. … As the Quanta article now reports, it turns out that when other groups in the USA and France tried to reproduce this result (the single-particle double-slit interference), they could not.

“Repeat runs of the experiment, called the “double-slit experiment,” have contradicted Couder’s initial results and revealed the double-slit experiment to be the breaking point of both the bouncing-droplet analogy and de Broglie’s pilot-wave vision of quantum mechanics.”

Well, just an experimental failure or two in reproducing the interference wouldn’t, by itself, make for a “breaking point”—that is, if the basic idea itself were sound. So the question now becomes whether the basic idea itself is sound enough or not.

Turns out that a new argument has been put forth, in the form of a thought experiment, which reportedly shows why and how the very basic idea itself must be regarded as faulty. This thought experiment has been proposed by a Danish professor of fluid dynamics, Prof. Tomas Bohr. (Yes, there is a relation: Prof. Tomas Bohr is a son of the Nobel laureate Aage Bohr, i.e., a grandson of the Nobel laureate Niels Bohr [^].)

Though related to QM foundations, this thought experiment is not very “philosophical” in nature; on the contrary, it is very, very “physics-like.” And the idea behind it also is “simple.” … It’s one of those ideas which make you exclaim “why didn’t I think of it before?”—at least the first time you run into it. Here is an excerpt (which actually is the caption for an immediately understandable diagram):

“Tomas Bohr’s variation on the famous double-slit experiment considers what would happen if a particle must go to one side or the other of a central dividing wall before passing through one of the slits. Quantum mechanics predicts that the wall will have no effect on the resulting double-slit interference pattern. Pilot-wave theory, however, predicts that the wall will prevent interference from happening.”

… Ummm… Not quite.

From whatever little I know about the pilot-wave theory, I think that the wall wouldn’t prevent the interference from occurring, even if you use this theory. … It all seems to depend on how you interpret (and/or extend) the pilot-wave theory. But if applied right (which means: in its own spirit), then I guess that the theory is just going to reproduce whatever it is that the mainstream QM predicts. Given this conclusion I have drawn about this approach, I did think that the above-quoted portion was a bit misleading.

The main text of the article then proceeds to more accurately point out the actual problem (i.e., the way Prof. Tomas Bohr apparently sees it):

“… the dividing-wall thought experiment highlights, in starkly simple form, the inherent problem with de Broglie’s idea. In a quantum reality driven by local interactions between a particle and a pilot wave, you lose the necessary symmetry to produce double-slit interference and other nonlocal quantum phenomena. An ethereal, nonlocal wave function is needed that can travel unimpeded on both sides of any wall. [snip] But with pilot waves, “since one of these sides in the experiment carries a particle and one doesn’t, you’ll never get that right. You’re breaking this very important symmetry in quantum mechanics.””

But isn’t the pilot wave precisely ethereal and nonlocal in nature, undergoing instantaneous changes to itself at all points of space? Doesn’t the pilot theory posit that this wave doesn’t consist of anything material that does the waving but is just a wave, all by itself?

…So, if you think it through, people seem to be mixing up two separate issues here:

1. One issue is whether it will at all be possible for any real physical experiment done with the bouncing droplets to reproduce the predictions of QM or not.
2. An entirely different issue is whether, in Bohr’s dividing-wall thought-experiment, the de Broglie-Bohm approach actually predicts something that is at a variance from what QM predicts or not.

These two indeed are separate issues, and I think that the critics are right on the first count, but not necessarily on the second.

Just to clarify: The interference pattern as predicted by the mainstream QM itself would undergo a change, a minor but a very definite change, once you introduce the middle dividing wall; it would be different from the pattern obtained for the “plain-vanilla” version of the interference chamber. And if what I understand about the Bohmian mechanics is correct, then it too would produce exactly the same patterns in both these cases.
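For reference, the “plain-vanilla” pattern itself is just the textbook far-field two-slit intensity. A minimal numerical sketch of it (my own illustration with made-up parameter values; this is nothing from Wolchover’s report or from Bohr’s thought experiment):

```python
import numpy as np

# Far-field (Fraunhofer) two-slit intensity: cos^2 fringes modulated by a
# single-slit sinc^2 envelope. All parameter values here are arbitrary,
# chosen only to make the fringes visible.
wavelength = 1.0   # units are arbitrary
d = 10.0           # slit separation
a = 2.0            # slit width

theta = np.linspace(-0.3, 0.3, 2001)            # observation angles (rad)
beta = np.pi * a * np.sin(theta) / wavelength   # single-slit phase term
delta = np.pi * d * np.sin(theta) / wavelength  # two-slit phase term
# np.sinc(x) computes sin(pi x)/(pi x), hence the division by pi below.
intensity = np.sinc(beta / np.pi) ** 2 * np.cos(delta) ** 2

# Bright central fringe, near-zero minima in between: interference.
print(intensity.max())   # 1.0, at theta = 0
print(intensity.min())   # ~0, at the dark fringes
```

Any modification of the chamber geometry (such as the dividing wall) would show up as a change in this pattern; the question in dispute is only whether the pilot-wave theory predicts the same changed pattern as the mainstream QM does.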

With that said, I would still like to remind you that my own understanding of the pilot-wave theory is only minimal, mostly at the level of browsing the Wiki and a few home pages, and going through a few pop-sci level explanations by a few Bohmians. I have never actually sat down to go through even one paper on it fully (let alone systematically study an entire book or a whole series of articles on this topic).

For this reason, I would rather leave it to the “real” Bohmians to respond to this fresh argument by Prof. Tomas Bohr.

But yes, a new argument—or at least, an old argument in a remarkably new setting—it sure seems to be.

How would the Bohmians respond?

If you ask me, from whatever I have gathered about the Bohmians and their approach, I think that they are simply going to be nonchalant about this new objection, too. I don’t think that you could possibly hope to pin them down with this argument either. They are simply going to bounce back, just like those drops. And the reason for that, in turn, is what I mentioned already here in this post: their pilot-wave is both ethereal and nonlocal in the first place.

So, yes, even if Wolchover’s report does seem a bit misguided, I still liked it, mainly because it was informative on both sides: the experimental as well as the theoretical (viz., as related to the new thought-experiment).

In conclusion, even if the famous experiment does not doom this (Bohmian) alternative to the quantum weirdness, the basic reason for its unsinkability is this:

The Bohmian mechanics is just as weird as the mainstream QM is—even if the Bohmians habitually and routinely tell you otherwise.

When a Bohmian tells you that his theory is “sensible”/“realistic”/etc., what he is talking about is the nature of his original ambition—but not the actual nature of his actual theory.

To write anything further about QM is to begin dropping hints to my new approach. So let me stop right here.

[But yes, I am fully ready and willing, from my side, to disclose all details about it at any time to a suitable audience. … Let physics professors in India respond to my requests to let me conduct an informal (but officially acknowledged) seminar on my new approach, and see if I get ready to deliver it right within a week’s time, or not!

[Keep waiting!]]

Regarding other things, as you know, the machine I am using right now is (very) slow. Even then, I have managed to run a couple of 10-line Python scripts, using VSCode.

I have immediately taken to liking this IDE “code-editor.” (Never had tried it before.) I like it a lot. … Just how much?

I think I can safely say that VSCode is the best thing to have happened to the programming world since VC++ 6 about two decades ago.

Yes, I have already stopped using PyCharm (which, IMHO, is now the second-best alternative, not the best).

No songs section this time, because I have already run a neat and beautiful song just yesterday. (Check out my previous post.) … OK, if some song strikes me in a day or two, I will return here to add it. Else, wait until the next time around. … Until then, take care and bye for now…

[Originally published on 16 October 2018 22:09 hrs IST. Minor editing (including to the title line) done by 17 October 2018 08:09 hrs IST.]


# $1 billion in new allocations, for serendipity

POTUSes and other residents of the Imperial City do not visit this blog on a very regular basis. I was, therefore, very much surprised to find a rare bird from that place dropping by at this blog on “Wed Aug 29, 2018,” at “03:21:26” hrs IST, i.e., early in the morning of 29th August. In any case, “the early to rise…” couldn’t possibly be a motivation here; Washington, D.C., and India are separated by some nine and a half hours of time difference [^]. … Ditto, perhaps, as far as that thing about the early bird catching the worm goes. So, what possibly could be the reason? I don’t know. Any guesses?

I gathered from Roger Schlafly’s blog [^], who in turn gathered it from Prof. Peter Woit’s blog [^], that moving through the US Congress is a National Quantum Initiative Act, which would provide over a billion dollars in funding for things related to quantum computation.

Alright. At this point, I strongly recommend that you go back to Schlafly’s post, finish reading it, and only then continue with the rest of this post.

Personally, I am not at all against the proposed act, i.e., the Act. It all is American money, first thing. And, the rest of the world sure knows for a fact that America has huge amounts of money. They could easily fund even just serendipity [^]! So, from this point of view and motivation, $1 billion actually looks like a paltry amount. I mean, given the fact that it all is American money anyway.

Anyway, coming back to my concerns (and those of this post), I then pursued the links to the QC skeptics that Schlafly helpfully provides in his post.

The Quanta Magazine article [^] covering Prof. Kalai’s work was something I had already browsed some time ago, when it had first come. This time around though, I showed the good sense to actually pursue the links given in it, especially the links given in this passage:

“… a loose group of mathematicians, physicists and computer scientists [have been] arguing that quantum computing, for all its theoretical promise, is something of a mirage.”

Hmmm… Mathematicians, physicists, and computer scientists….

I am not sure if the “physicist” (Wolfram) is indeed arguing against QCs in the linked passage [^].

The “computer scientist” (Prof. Oded Goldreich) has some remarkable insights into the nature of the very theory of QM itself [^]. However, his note is very brief. It also seems to be a bit dated. Looks like it is an informally written and early thought on this matter.

But it was the “mathematician” (Prof. Leonid Levin) who, I found, was truly distinctive [^]—despite his being a “mathematician.”

OK, the way it happened was this. It was only when I landed at Levin’s page that I came to know that this write-up was coming from him. Now, his name did ring a vague bell—a vague feel that this guy was, maybe, a neat / original guy or something like that. But I couldn’t place him immediately. So I did the right thing. I just ignored who he was, and focused on what he had to say regarding the QC being a “mirage.”

There was little trouble getting hooked on to Levin’s write-up. The writing, I realized, was very tight and wonderful. Just how wonderful? I would consider it a great achievement if I ever manage to write something that is written even one-tenth as well as how Levin writes here. … Want to see a sample? I (anyway) can’t resist the temptation to copy-paste this particular passage:

2.2. Quantum Computers

QC has $n$ interacting elements, called q-bits. A pure state of each is a unit vector on the complex plane $C^2$. Its two components are quantum amplitudes of its two Boolean values. A state of the entire machine is a vector in the tensor product of $n$ planes. Its $2^n$ coordinate vectors are tensor-products of q-bit basis states, one for each n-bit combination. The machine is cooled, isolated from the environment nearly perfectly, and initialized in one of its basis states representing the input and empty memory bits. The computation is arranged as a sequence of perfectly reversible interactions of the q-bits, putting their combination in the superposition of a rapidly increasing number of basis states, each having an exponentially small amplitude. The environment may intervene with errors; the computation is done in an error-correcting way, immune to such errors as long as they are few and of special restricted forms. Otherwise, the equations of Quantum Mechanics are obeyed with unlimited precision. This is crucial since the amplitudes are exponentially small and deviations in remote (hundredth or even much further) decimal places would overwhelm the content completely. Peter Shor shows such computers capable of factoring in polynomial time. The exponentially many coordinates of their states can, using a rough analogy, explore one potential solution each and concentrate the amplitudes in the one that works.

… Damn it! Not a single word out of place, and, not a single relevant consideration missed!

… If this piece is typical of Levin’s writing, then I must say that the rest of us (outside of his specialty) have been missing something remarkable.
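Levin’s description of the state space is, by the way, easy to make concrete: an $n$-q-bit pure state is a vector with $2^n$ complex coordinates, and a uniform superposition gives each basis state an exponentially small amplitude. A toy sketch in NumPy (my own illustration, not Levin’s code):

```python
import numpy as np

zero = np.array([1.0 + 0j, 0.0])   # q-bit basis state |0>
one = np.array([0.0 + 0j, 1.0])    # q-bit basis state |1>

def kron_all(states):
    """Tensor product of single-q-bit states: the state of the whole machine."""
    out = np.array([1.0 + 0j])
    for s in states:
        out = np.kron(out, s)
    return out

n = 10
psi = kron_all([zero] * n)    # machine initialized in one basis state
print(psi.shape)              # (1024,): 2**n coordinates, just as Levin says

# Put every q-bit in (|0> + |1>)/sqrt(2): the total amplitude spreads over
# all 2**n basis states, each coordinate exponentially small in n.
plus = (zero + one) / np.sqrt(2.0)
phi = kron_all([plus] * n)
print(abs(phi[0]) ** 2)       # 2**-n = 1/1024 probability per basis state
```

Already at $n = 10$ each amplitude carries a probability of less than $10^{-3}$; at the hundreds of q-bits needed for factoring, the amplitudes sit exactly in those “remote decimal places” whose precision Levin worries about.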

Go, continue with the fun. Even the title of Levin’s next sub-section—“Small Difficulties”—is delightful.

Oh, BTW, Leonid Levin is one of the two people who independently discovered the existence of the P-vs-NP issue [^]. …

It all still does not mean that I am against the act i.e. the Act.

I do in all sincerity believe that they are far more likely to fail at achieving even “just” the quantum supremacy than to succeed at it. But realize that quantum supremacy is just the potatoes here. The meat is: breaking the RSA codes. No one is talking about that part. (Meaning, I feel sure that the meat is even far less likely to be achieved, ever.)

At the same time, I also equally sincerely believe that, all things considered, the amount ($1 billion) is not at all something over which Americans would (or even should!) get worked up a lot.

OK. Some may experience dismay over the fact that more Democrats than Republicans are going to get employed as a result of that funding. However, none could challenge the fact that most of the people who stand to derive benefits here would be your typical scientists and engineers: middle-class, law-abiding, hard-working citizens who place a high value on education and culture, why, sometimes even on reason! Even the worst critics of the bill would agree that these people wouldn’t make for all that bad a company at a dinner. As guests, they may not make for the most interesting lot, but they also wouldn’t spoil the mood of your party with some off-color remarks. Also, it is a fact that while in the office or the lab, they would work sincerely on their goal—even if the goal is that of building a QC that works! So, given all that, the money they are asking for isn’t a complete waste—you couldn’t call it “pork,” so to speak. And then, you can never tell what accidental and unintended discovery might come out of it all. [^]

Yes, American science is weird—not to mention the American post-graduate (called “graduate”) education, and also the American engineering (especially the kind that is practiced on either coast, but more especially so in California). Trends in American education, science and engineering are all dominated by all kinds of fads, entrenched viewpoints, prejudices and whatnot—except for reason. (I should know!) The QC is, from this viewpoint, just another fad. Yet, it also remains true that sometimes, after thoroughly checking the equipment, removing the pigeon-nests from it, and even cleaning out the accumulated droppings, the signal still remains there—it refuses to go away [^].

Yes, 55 years is a long time to have passed since then, but still, somehow, it does seem to me, speaking in overall terms, that $1 billion in new allocations would not necessarily be a bad thing—you couldn’t possibly call it “pork,” so to say.

Another thing. The way I really see it is this way: The more they try to build a really powerful QC and fail—as they are bound to—the better my chances become of collecting a Nobel or two. Whaddaya think?

A Song I Like:
(Hindi) “hawaa ke jhonke aaj…” (“sawaar loon”)
Lyrics: Amitabh Bhattacharya
Music: Amit Trivedi
Singer: Monali Thakur


# Absolutely Random Notings on QM—Part 2: LOL!

I intend to aperiodically update this post whenever I run into the more “interesting” write-ups about QM and/or quantum physicists. Accordingly, I will mention the dates on which I update this post.

I will return to Heisenberg and Schrodinger in the next part of this series. But in the meanwhile, enjoy the “inaugural” link below.

1. Post first published on 08 July 2018, 13:28 hrs IST with the following “interesting” write-up:

Wiki on “Fundamental Fysiks [sic] Group”: [^]

A Song I Like:

(English, “Western”): “old turkey buzzard…” from the movie “MacKenna’s Gold”
[I here mostly copy-paste, dear gentlemen, for, while I had enjoyed the song especially during the usual turbulent teens, I have not had the pleasure to locate the source of the same–back then, or ever. Hence relying on the ‘net.[Oh, BTW, it requires another post on the movie itself, though! [Just remind me, that’s all!]]]
Music: Quincy Jones
Lyrics: Freddy Douglass
Singer: Jose Feliciano


# Absolutely Random Notings on QM—Part 1: Bohr. And, a bad philosophy making its way into physics with his work, and his academic influence

TL;DR: Go—and keep—away.

I am still firming up my opinions. However, there is never a harm in launching yet another series of posts on a personal blog, is there? So here we go…

Quantum Mechanics began with Planck. But there was no theory of quanta in what Planck had offered.

What Planck had done was to postulate only the existence of the quanta of energy in the cavity radiation.

Einstein used this idea to predict the heat capacities of solids—a remarkable work, one that remains underappreciated in both text-books as well as popular science books on QM.

The first pretense at a quantum theory proper came from Bohr.

Bohr was thinking not about the cavity radiations, but about the spectra of the radiations emitted or absorbed by gases.

Matter, esp. gases, following Dalton, …, Einstein, and Perrin, was made of distinct atoms. The properties of gases—especially the reason why they emitted or absorbed radiation only at certain distinct frequencies, but not at any other frequencies (including those continuous patches of frequencies in between the experimentally evident sharp peaks)—had to be explained in reference to what the atoms themselves were like. There was no other way out—not yet, not given the sound epistemology in physics of those days.

Thinking up a new universe still was not allowed back then in science, let alone in physics. One still had to think clearly about explaining what was given in observations, what was in evidence. Effects still had to be related back to causes; outward actions still had to be related back to the character/nature of the entities that thus acted.

The actor, unquestionably by now, was the atom. The effects were the discrete spectra. Not much else was known.

Those were the days when the best hotels and restaurants in Berlin, London, and New York would have horse-drawn buggies ushering in the socially important guests. Buggies were still the latest technology back then. Not many people thus ushered in are remembered today. But Bohr is.

If the atom was the actor, and the effects under study were the discrete spectra, then what was needed to be said, in theory, was something regarding the structure of the atom.

If an imagined entity sheer by its material/chemical type doesn’t do it, then it’s the structure—its shape and size—which must do it.

Back then, this still was regarded as one of the cardinal principles of science, unlike the mindless opposition to the science of Homeopathy today, esp. in the UK. But back then, it was known that one important reason Calvin gets harassed by the school bully is not just the sheer amount of the latter’s matter but also the structure with which that matter comes. In other words: If you consumed alcohol, you simply didn’t take in so many atoms of carbon in proportion to so many atoms of hydrogen, etc. You took in a structure, a configuration with which these atoms came.

However, the trouble back then was, no one had the means to see the atoms.

If by structure you mean the geometrical shape and size, or some patterns of density, then clearly, there were no experimental observations pertaining to the same. The only relevant observation available to people back then was what had already been encapsulated in Rutherford’s model, viz., the incontestable idea that the atomic nucleus had to be massive and dense, occupying a very small space as compared to an atom taken as a whole; the electrons had to carry very little mass in comparison. (Rutherford’s model of c. 1911 stood in contrast to Thomson’s earlier plum-pudding model.)

Bohr would, therefore, have to start with Rutherford’s model of atoms, and invent some new ideas concerning it, and see if his model was consistent with the known results given by spectroscopic observations.

What Bohr offered was a model for the electrons contained in a nuclear atom.

However, even while building on Rutherford’s model, Bohr’s model emphatically lacked a theory of the nature of the electrons themselves. This part has been kept underappreciated by the textbook authors and science teachers.

In particular, Bohr’s theory had absolutely no clue as to the process according to which the electrons could, and must, jump in between their stable orbits.
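What the model did deliver, and deliver well, was the set of discrete energy levels and the frequencies of the jumps between them. A quick sketch using the standard textbook formulas (my own snippet, with the usual rounded constants):

```python
# Bohr's hydrogen energy levels, E_n = -13.6 eV / n^2, and the resulting
# discrete spectral lines. Standard textbook formulas; constants rounded.
RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.841984   # h*c in eV*nm, to convert photon energy to wavelength

def energy(n):
    """Bohr energy of level n, in eV (negative: bound state)."""
    return -RYDBERG_EV / n ** 2

def line_nm(n_hi, n_lo):
    """Wavelength (nm) of the photon emitted in the jump n_hi -> n_lo."""
    return HC_EV_NM / (energy(n_hi) - energy(n_lo))

# Balmer series (jumps down to n = 2): the visible hydrogen lines.
# The computed values land within a fraction of a nm of the measured
# 656.3, 486.1, and 434.0 nm (the small gap is the reduced-mass correction).
for n in (3, 4, 5):
    print(n, "-> 2:", round(line_nm(n, 2), 1), "nm")
```

It is precisely the jump itself—the process connecting `n_hi` to `n_lo` in the snippet above—for which the theory supplied nothing at all.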

The meat of the matter was worse, far worse: Bohr had explicitly prohibited pursuing any mechanism or explanation concerning the quantum jumps—an idea which he was the first to propose. [I don’t know of anyone else originally but independently proposing the same idea.]

Bohr achieved this objective not through any deployment of the best possible levels of scientific reason but out of his philosophic convictions—the convictions of the more irrational kind. The quantum jumps were obviously not observable, according to him, only their effects were. So, strictly speaking, the quantum jumps couldn’t possibly be a part of his theory—plain and simple!

But then, Bohr in his philosophic enthusiasm didn’t stop just there. He went even further—much further. He fully deployed the powers of his explicit reasoning as well as the weight of his seniority in prohibiting the young physicists from even thinking of—let alone ideating or offering—any mechanism for such quantum jumps.

In other words, Bohr took special efforts to keep the young quantum enthusiasts absolutely and in principle clueless, as far as his quantum jumps were concerned.

Bohr’s theory, in a sense, was in line with the strictest demands of the philosophy of empiricism. Here is how Bohr’s application of this philosophy went:

1. This electron—it can be measured!—at this energy level, now!
2. [May be] The same electron, but this energy level, now!
3. This energy difference, this frequency. Measured! [Thank you experimental spectroscopists; hats off to you, for, you leave Bohr alone!!]
4. OK. Now, put the above three into a cohesive “theory.” And, BTW, don’t you ever even try to think about anything else!!

Continuing just a bit on the same lines, Bohr sure would have said (quoting Peikoff’s explanation of the philosophy of empiricism):

1. [Looking at a tomato] We can only say this much in theory: “This, now, tomato!”
2. Making a leeway for the most ambitious ones of the ilk: “This *red* tomato!!”

Going by his explicit philosophic convictions, it must have been the height of “speculation” for Bohr to mumble something—anything—about a thing like an “orbit.” After all, even by just mentioning a word like “orbit,” Bohr was being absolutely philosophically inconsistent here. Dear reader, observe that the orbit itself never was an observable at all!

Bohr’s conscience must have convulsed at this fact; his own philosophy couldn’t possibly have, strictly speaking, permitted him to accommodate into his theory a non-measurable feature of a non-measurable entity—such as his orbits of his electrons. Only the allure of outwardly producing predictions that matched the experiments might have quietened his conscience—and that too, only temporarily. At least until he got a new stone building housing an Institute for himself and/or a Physics Nobel, that is.

Possible. With Herr Herr Herr Doktor Doktor Doktor Professor Professors, anything is possible.

It is often remarked that the one curious feature of the Bohr theory was the fact that the stability of the electronic orbits was postulated in it, not explained.

That is, not explained in reference to any known physical principle. The analogy to the solar system indeed was just that: an analogy. It was not a reference to an established physical principle.

However, the basically marvelous feature of the Bohr theory was not that the orbits were stable (in violation of the known laws of electrodynamics). It was that there were any orbits in it at all, even though no experiment had ever given any evidence for the subsequent positions—continuous or discontinuous—of electrons within an atom, or of their motions.

So much for the originator of the cult of sticking only to the “observables.”

What Sommerfeld did was to add footnotes to Bohr’s work.

Sommerfeld did this work admirably well.

However, what this instance in the history of physics clearly demonstrates is yet another principle from the epistemology of physics: how a man of otherwise enormous mathematical abilities and training (and an academically influential position, I might add), but having evidently no remarkable capacity for a very novel, breakthrough kind of conceptual thinking, just cannot but fall short of making any lasting contributions to physics.

“Math” by itself simply isn’t enough for physics.

What came to be known as the old quantum theory, thus, faced an impasse.

Under Bohr’s (and philosophers’) loving tutorship, the situation continued for a long time—for more than a decade!

A Song I Like:

(Marathi) “sakhi ga murali mohan mohi manaa…”
Music: Hridaynath Mangeshkar
Singer: Asha Bhosale
Lyrics: P. Savalaram

PS: Only typos and animals of the similar ilk remain to be corrected.


# Is something like a re-discovery of the same thing by the same person possible?

Yes, we continue to remain very busy.

However, in spite of all that busy-ness, in whatever spare time I have [in the evenings, sometimes at nights, why, even on early mornings [which is quite unlike me, come to think of it!]], I cannot help but “think” in a bit “relaxed” [actually, abstract] manner [and by “thinking,” I mean: musing, surmising, etc.] about… about what else but: QM!

So, I’ve been doing that. Sort of like, relaxed distant wonderings about QM…

Idle musings like that are very helpful. But they also carry a certain danger: it is easy to begin to believe your own story, even if the story itself is not borne out by well-established equations (i.e., by physic-al evidence).

But keeping that part aside, and thus coming to the title question: Is it possible that the same person makes the same discovery twice?

It may be difficult to believe, but I… I seem to have managed to pull precisely such a trick.

Of course, the “discovery” in question is, relatively speaking, only a part of the whole story, and not the whole story itself. Still, I do think that I had discovered a certain important part of a conclusion about QM a while ago, had then completely forgotten about it, and now, in a slow, patient process, seem to have worked inch-by-inch to reach precisely the same old conclusion.

In short, I have re-discovered my own (unpublished) conclusion. The original discovery was made, maybe, in the first half of this calendar year. (I might even have made a hand-written note about it; I need to look up my hand-written notes.)

Now, about the conclusion itself. … I don’t know how to put it best, but I seem to have reached the conclusion that the postulates of quantum mechanics [^], say as stated by Dirac and von Neumann [^], have been conceptualized inconsistently.

Please note the issue and the statement I am making, carefully. As you know, more than 9 interpretations of QM [^][^][^] have been acknowledged right in the mainstream studies of QM [read: University courses] themselves. Yet, none of these interpretations, as far as I know, goes on to actually challenge the quantum mechanical formalism itself. They all do accept the postulates just as presented (say by Dirac and von Neumann, the two “mathematicians” among the physicists).

Coming to my own position: I, too, used to say exactly the same thing. I used to say that I agree with the quantum postulates themselves. My position was that the conceptual aspects of the theory—at least all of them—are missing, and so these need to be supplied and, if need be, also expanded.

But, as far as the postulates themselves go, mine used to be the same position as that in the mainstream.

Until this morning.

Then, this morning, I came to realize that I have “re-discovered” (i.e. independently discovered for the second time) that I actually should not be buying into the quantum postulates just as stated; that I should be saying that there are theoretical/conceptual errors/misconceptions/misrepresentations woven right into the very process of formalization which produced these postulates.

Since I think that I should be saying so, consider that, with this blog post, I have said so.

Just one more thing: the above doesn’t mean that I don’t accept Schrodinger’s equation. I do. In fact, I now seem to embrace Schrodinger’s equation with even more enthusiasm than I have ever done before. I think it’s a very ingenious and a very beautiful equation.

A Song I Like:

(Hindi) “tum jo hue mere humsafar”
Music: O. P. Nayyar
Singers: Geeta Dutt and Mohammad Rafi
Lyrics: Majrooh Sultanpuri

Update on 2017.10.14 23:57 IST: Streamlined a bit, as usual.

/

# “Measure for Measure”—a pop-sci video on QM

This post is about a video on QM for the layman. The title of the video is: “Measure for Measure: Quantum Physics and Reality” [^]. It is also available on YouTube, here [^].

I don’t recall precisely where on the ‘net I saw the video being mentioned. Anyway, even though its running time is 01:38:43 (i.e. 1 hour, 38 minutes, making it something like a full-length feature film), I still went ahead, downloaded it and watched it in full. (Yes, I am that interested in QM!)

The video was shot live at an event called “World Science Festival.” I didn’t know about it beforehand, but here is the Wiki on the festival [^], and here is the organizer’s site [^].

The event in the video is something like a panel discussion done on stage, in front of a live audience, by four professors of physics/philosophy. … Actually five, including the moderator.

Brian Greene of Columbia [^] is the moderator. (Apparently, he co-founded the World Science Festival.) The discussion panel itself consists of: (i) David Albert of Columbia [^]. He speaks like a philosopher but seems inclined towards a specific speculative theory of QM, viz. the GRW theory. (He has that peculiar, nasal, New York accent… Reminds you of Dr. Harry Binswanger—I mean, by the accent.) (ii) Sheldon Goldstein of Rutgers [^]. He is a Bohmian, out and out. (iii) Sean Carroll of CalTech [^]. At least in the branch of the infinity of the universes in which this video unfolds, he acts 100% deterministically as an Everettian. (iv) Ruediger Schack of Royal Holloway (the spelling is correct) [^]. I perceive him as a QBist; guess you would, too.

Though the video is something like a panel discussion, it does not begin right away with dudes sitting on chairs and talking to each other. Even before the panel itself assembles on the stage, there is a racy introduction to the quantum riddles, mainly on the wave-particle duality, presented by the moderator himself. (Prof. Greene would easily make for a competent TV evangelist.) This part runs for some 20 minutes or so. Then, even once the panel discussion is in progress, it is sometimes interwoven with a few short visualizations/animations that try to convey the essential ideas of each of the above viewpoints.

I of course don’t agree with any one of these approaches—but then, that is an entirely different story.

Coming back to the video, yes, I do want to recommend it to you. The individual presentations as well as the panel discussions (and comments) are done pretty well, in an engaging and informal way. I did enjoy watching it.

The parts which I perhaps appreciated the most were (i) the comment (near the end) by David Albert, between 01:24:19–01:28:02, esp. near 1:27:20 (“small potatoes”) and, (ii) soon later, another question by Brian Greene and another answer by David Albert, between 01:33:26–01:34:30.

In this second comment, David Albert notes that “the serious discussions of [the foundational issues of QM] … only got started 20 years ago,” even though the questions themselves do go back to about 100 years ago.

That is so true.

The video was recorded recently. About 20 years ago means: from about mid-1990s onwards. Thus, it is only from mid-1990s, Albert observes, that the research atmosphere concerning the foundational issues of QM has changed—he means for the better. I think that is true. Very true.

For instance, when I was in UAB (1990–93), the resistance to attempting even just a small variation to the entrenched mainstream view (which means the Copenhagen interpretation (CI for short)) was so enormous and all-pervading, I mean even in the US/Europe, that I was dead sure that a graduate student like me would never be able to get his nascent ideas on QM published, ever. It therefore came as a big (and a very joyous) surprise to me when my papers on QM actually got accepted (in 2005). … Yes, the attitudes of physicists have changed. Anyway, my point here is that the mainstream view used to be so entrenched back then—just about 20 years ago. The Copenhagen interpretation still was the ruling dogma in those days. Therefore, that remark by Prof. Albert does carry some definite truth.

Prof. Albert’s observation also prompts me to pose a question to you.

What could be the broad social, cultural, technological, economic, or philosophic reasons behind the fact that people (researchers, graduate students) these days don’t feel the same kind of pressure in pursuing new ideas in the field of Foundations of QM? Is the relatively greater ease of publishing papers in foundations of QM, in your opinion, an indication of some negative trends in the culture? Does it show a lowering of the editorial standards? Or is there something positive about this change? Why has it become easier to discuss foundations of QM? What do you think?

I do have my own guess about it, and I would sure like to share it with you. But before I do that, I would very much like to hear from you.

Any guesses? What could be the reason(s) why the serious discussions on foundations of QM might have begun to occur much more freely only after mid-1990s—even though the questions had been raised as early as in 1920s (or earlier)?

Over to you.

Greetings in advance for the Republic Day. I [^] am still jobless.

[E&OE]

# The Infosys Prizes, 2015

I realized that it was the end of November the other day, and it somehow struck me that I should check out if there has been any news on the Infosys prizes for this year. I vaguely recalled that they make the yearly announcements sometime in the last quarter of a year.

Turns out that, although academic bloggers whose blogs I usually check out had not highlighted this news, the prizes had already been announced right in mid-November [^].

It also turns out that, yes, I “know”—i.e., have in-person chatted (exactly once) with—one of the recipients. I mean Professor Dr. Umesh Waghmare, who received this year’s award for Engineering Sciences [^]. I had run into him at an informal conference once, and have written about it in a recent post, here [^].

Dr. Waghmare is a very good choice, if you ask me. His work is very neat—I mean both the ideas which he picks out to work on, and the execution on them.

I still remember his presentation at that informal conference (where I chatted with him). He had talked about a (seemingly) very simple idea, related to graphene [^]—its buckling.

Here is my highly dumbed down version of that work by Waghmare and co-authors. (It’s dumbed down a lot—Waghmare et al’s work was on buckling, not bending. But it’s OK; this is just a blog, and guess I have a pretty general sort of a “general readership” here.)

Bending, in general, sets up a combination of tensile and compressive stresses, which results in the setting up of a bending moment within a beam or a plate. All engineers (except possibly for the “soft” branches like CS and IT) study bending quite early in their undergraduate program, typically in the second year. So, I need not explain its analysis in detail. In fact, in this post, I will write only a common-sense level description of the issue. For technical details, look up the Wiki articles on bending [^] and buckling [^] or Prof. Bower’s book [^].

Assuming you are not an engineer, you can always take a longish rubber eraser, hold it so that its longest edge is horizontal, and then bend it with a twist of your fingers. If the bent shape is like an inverted ‘U’, then the inner (bottom) surface has got compressed, and the outer (top) surface has got stretched. Since compression and tension are opposite in nature, and since the eraser is a continuous body of finite height, it is easy to see that there has to be a continuous surface within the volume of the eraser, somewhere halfway through its height, where there can be no stresses. That’s because the stresses change sign in going from the compressive stress at the bottom surface to the tensile stresses at the top surface. For simplicity of the mathematics, this problem is modeled as a 1D (line) element, and therefore, in elasticity theory, this actual 2D surface is referred to as the neutral axis (i.e. a line).

The deformation of the eraser is elastic, which means that it remains in the bent state only so long as you are applying a bending “force” to it (actually, it’s a moment of a force).

The classical theory of bending allows you to relate the curvature of the beam to the bending moment applied to it. Thus, knowing the bending moment (or the applied forces), you can tell how much the eraser should bend. Or, knowing how much the eraser has curved, you can tell how big a pair of forces would have to be applied at its ends. The theory works pretty well; it forms the basis of how most buildings are designed, anyway.
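Since this series is, after all, about Python scripts, here is a minimal sketch of that moment–curvature relation in its textbook Euler–Bernoulli form, M = E·I·κ. All the numbers below are made-up, illustrative values for an eraser-like rubber beam; they are my assumptions, not data from the talk.

```python
# A hedged sketch of the classical Euler-Bernoulli moment-curvature relation,
# M = E * I * kappa. Illustrative values only; not measured data.

def bending_moment(E, I, kappa):
    """Moment (N*m) needed to hold a beam at curvature kappa (1/m)."""
    return E * I * kappa

def curvature(E, I, M):
    """Curvature (1/m) produced by an applied moment M (N*m)."""
    return M / (E * I)

# Assumed, eraser-like values:
E = 1.0e7          # Young's modulus, Pa (a soft rubber)
b, h = 0.02, 0.01  # cross-section width and height, m
I = b * h**3 / 12  # second moment of area of a rectangular section, m^4

M = bending_moment(E, I, kappa=10.0)  # bend to a 0.1 m radius of curvature
print(M)                              # required moment, N*m
print(curvature(E, I, M))             # recovers kappa = 10.0 (up to round-off)
```

Knowing either side of the relation gives you the other, which is exactly the point made above: moment from curvature, or curvature from moment.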

So far, so good. What happens if you bend, not an eraser, but a graphene sheet?

The peculiarity of graphene is that it is a single-atom-thick sheet of carbon atoms. Your usual eraser contains billions and billions of layers of atoms through its thickness. In contrast, the thickness of a graphene sheet is entirely accounted for by the finite size of the single layer of atoms. And, it is found that unlike thin paper, the graphene sheet, even though it is the most extreme case of a thin sheet, actually does offer a good resistance to bending. How do you explain that?

The naive expectation is that something related to the interatomic bonding within this single layer must, somehow, produce both the compressive and tensile stresses—and the systematic variation from the locally tensile to the locally compressive state as we go through this thickness.

Now, at the scale of single atoms, quantum mechanical effects obviously are dominant. Thus, you have to consider those electronic orbitals setting up the bond. A shift in the density of the single layer of orbitals should correspond to the stresses and strains in the classical mechanics of beams and plates.

What Waghmare related at that conference was a very interesting bit.

He calculated the stresses as predicted by (in my words) the changed local density of the orbitals, and found that the forces predicted this way are way smaller than the experimentally reported values for graphene sheets. In other words, the actual graphene is much stiffer than what the naive quantum mechanics-based model shows—even if the model considers those electronic orbitals. What is the source of this additional stiffness?

He then showed a more detailed calculation (i.e. a simulation), and found that the additional stiffness comes from a quantum-mechanical interaction between the portions of the atomic orbitals that go off transverse to the plane of the graphene sheet.

Thus, suppose a graphene sheet is initially held horizontally, and then bent to form an inverted-U-like curvature. According to Waghmare and co-authors, you now have to consider not just the orbital cloud between the atoms (i.e. the cloud lying in the same plane as the graphene sheet) but also the orbital “petals” that shoot vertically off the plane of the graphene. Such petals are attached to the nucleus of each C atom; they are a part of the electronic (or orbital) structure of the carbon atoms in the graphene sheet.

In other words, the simplest engineering sketch for the graphene sheet, as drawn in the front view, wouldn’t look like a thin horizontal line; it would also have these small vertical “pins” at the site of each carbon atom, overall giving it an appearance rather like a fish-bone.

What happens when you bend the graphene sheet is that on the compression side, the orbital clouds for these vertical petals run into each other. Now, you know that an orbital cloud can be loosely taken as the electronic charge density, and that the like charges (e.g. the negatively charged electrons) repel each other. This inter-electronic repulsive force tends to oppose the bending action. Thus, it is the petals’ contribution which accounts for the additional stiffness of the graphene sheet.

I don’t know whether this result was already known to the scientific community back then in 2010 or not, but in any case, it was a very early analysis of bending of graphene. Further, as far as I could tell, the quality of Waghmare’s calculations and simulations was very definitely superlative. … You work in a field (say computational modeling) for some time, and you just develop a “nose” of sorts, that allows you to “smell” a superlative calculation from an average one. Particularly so, if your own skills on the calculations side are rather on the average, as happens to be the case with me. (My strengths are in conceptual and computational sides, but not on the mathematical side.) …

So, all in all, it’s a very well deserved prize. Congratulations, Dr. Waghmare!

A Song I Like:

(The so-called “fusion” music) “Jaisalmer”
Artists: Rahul Sharma (Santoor) and Richard Clayderman (Piano)
Album: Confluence

[As usual, may be one more editing pass…]

[E&OE]

/