# Work Is Punishment

Work is not worship—they said.

It’s a punishment, full stop!—they said.

One that is to be smilingly borne.

And so lose everything else too. …

Hmmm… I said. … I was confused.

Work is enjoyment, actually. … I then discovered.

I told them.

They didn’t believe.

Not when I said it.

Not because they ceased believing in me.

It’s just that. They. Simply. Didn’t. Believe. In. It.

And they professed to believe in

a lot of things that never did make

any sense to themselves.

They said so.

And it was so.

Many long years have passed by since then.

Now, whether they believe in it or not,

I have come to believe in this gem:

Work is punishment—full stop.

That’s the principle on the basis of which I am henceforth going to operate.

And yes! This still is a poem, alright?

[What do you think most poems written these days are like?]

It remains a poem.

And I am going to make money. A handsome amount of money.

For once in my life-time.

After all, one can make money and still also write poems.

That’s what they say.

Or do science. Real science. Physics. Even physics for that matter.

Or, work. Real work, too.

It’s better than having no money and…


# Python scripts for simulating QM, part 0: A general update

My proposed paper on my new approach to QM was not accepted at the international conference where I had sent my abstract. (For context, see the post before the last, here [^].)

“Thank God,” that’s what I actually felt when I received this piece of news, “I can now immediately proceed to procrastinate on writing the full-length paper, and also, simultaneously, un-procrastinate on writing some programs in Python.”

So far, I have written several small and simple code-snippets. All of these were for the usual (text-book) cases; all in only $1D$. Here in this post, I will mention specifically which ones…

Time-independent Schrodinger equation (TISE):

Here, I’ve implemented a couple of scripts, one for finding the eigen-vectors and -values for a particle in a box (with both zero and arbitrarily specified potentials) and another one for the quantum simple harmonic oscillator.

These were written not with the shooting method (which is the method used in the article by Rhett Allain for the Wired magazine [^]) but with the matrix method. … Yes, I have gone past the stage of writing all the numerical-analysis algorithms on my own, from scratch. These days, I directly use Python libraries wherever available, e.g., NumPy’s LinAlg methods. That’s why I preferred the matrix method. … My code was not written from scratch; it was based on Cooper’s code “qp_se_matrix”, here [PDF ^].
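To indicate what the matrix method looks like, here is a minimal sketch in the same spirit (my own illustrative re-creation, not Cooper’s actual code; I take $\hbar = m = 1$, and the grid size is an arbitrary choice):

```python
import numpy as np

# TISE by the matrix method: discretize H = -(1/2) d^2/dx^2 + V(x)
# with central differences, then hand the matrix to NumPy's LinAlg.
# Units: hbar = m = 1. Box of width L = 1, psi = 0 at both walls.
N = 500                          # number of interior grid points
L = 1.0
dx = L / (N + 1)
x = np.linspace(dx, L - dx, N)

V = np.zeros(N)                  # zero potential; put any V(x) here

main = 1.0 / dx**2 + V           # from -(1/2)*(-2/dx^2) + V
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)       # eigenvalues come out sorted ascending
```

For the zero-potential box, the computed `E[0]` lands within a fraction of a percent of the exact $\pi^2/2$; feeding in a nonzero `V` array gives the arbitrary-potential case with no other change.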

Time-dependent Schrodinger equation (TDSE):

Here, I tried out a couple of scripts.

The first one was more or less a straightforward porting of Ian Cooper’s program “se_fdtd” [PDF ^] from the original MATLAB to Python. The second one was James Nagel’s Python program (written in 2007 (!) and hosted as a SciPy CookBook tutorial, here [^]). Both follow essentially the same scheme.

Initially, I found this scheme to be a bit odd to follow. Here is what it does.

It starts out by replacing the complex-valued Schrodinger equation with a pair of real-valued (time-dependent) equations. That was perfectly OK by me. It was their discretization which I found to be a bit peculiar. The discretization scheme here is second-order in both space and time, and yet it involves explicit time-stepping. That’s peculiar, so let me write a detailed note below (in part, for my own reference later on).

Also note: Though both Cooper and Nagel implement essentially the same method, Nagel’s program is written in Python, and so, it is easier to discuss (because the array-indexing is 0-based). For this reason, I might make a direct reference only to Nagel’s program even though it is to be understood that the same scheme is found implemented also by Cooper.

A note on the method implemented by Nagel (and also by Cooper):

What happens here is that like the usual Crank-Nicolson (CN) algorithm for the diffusion equation, this scheme too puts the half-integer time-steps to use (so as to have a second-order approximation for the first-order derivative, that of time). However, in the present scheme, the half-integer time-steps turn out to be not entirely fictitious (the way they are, in the usual CN method for the single real-valued diffusion equation). Instead, all of the half-integer instants are fully real here in the sense that they do enter the final discretized equations for the time-stepping.

The way that comes to happen is this: There are two real-valued equations to solve here, coupled to each other—one each for the real and imaginary parts. Since both the equations have to be solved at each time-step, what this method does is to take advantage of that already existing splitting of the time-step, and implements a scheme that is staggered in time. (Note, this scheme is not staggered in space, as in the usual CFD codes; it is staggered only in time.) Thus, since it is staggered and explicit, even the finite-difference quantities that are defined only at the half-integer time-steps, also get directly involved in the calculations. How precisely does that happen?

The scheme defines, allocates memory storage for, and computationally evaluates the equation for the real part, but this computation occurs only at the full-integer instants ($n = 0, 1, 2, \dots$). Similarly, this scheme also defines, allocates memory for, and computationally evaluates the equation for the imaginary part; however, this computation occurs only at the half-integer instants ($n = 1/2, 1+1/2, 2+1/2, \dots$). The particulars are as follows:

The initial condition (IC) being specified is, in general, complex-valued. The real part of this IC is set into a space-wide array defined for the instant $n$; here, $n = 0$. Then, the imaginary part of the same IC is set into a separate array which is defined nominally for a different instant: $n+1/2$. Thus, even if both parts of the IC are specified at $t = 0$, the numerical procedure treats the imaginary part as if it was set into the system only at the instant $n = 1/2$.

Given this initial set-up, the actual time-evolution proceeds as follows:

• The real part already available at $n$ is used in evaluating the “future” imaginary part—the one at $n+1/2$.
• The imaginary part thus found at $n+1/2$ is used, in turn, for evaluating the “future” real part—the one at $n+1$.
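The staggered update can be sketched as follows (my own minimal re-creation of the scheme, not Nagel’s or Cooper’s actual code; $\hbar = m = 1$, and all grid parameters are illustrative choices):

```python
import numpy as np

# The staggered, explicit scheme for the 1D TDSE (hbar = m = 1):
#   dR/dt = -(1/2) d2I/dx2 + V*I   ... R lives at integer steps n
#   dI/dt = +(1/2) d2R/dx2 - V*R   ... I lives at half-integer steps
N, dx = 400, 0.1
x = dx * np.arange(N)
V = np.zeros(N)
dt = 0.1 * dx**2                 # comfortably inside the stability limit

# Cosine pulse with a Gaussian envelope, placed mid-domain
x0, sigma, k0 = x[N // 2], 2.0, 2.0
env = np.exp(-((x - x0) / sigma)**2)
R = env * np.cos(k0 * (x - x0))  # real part, nominally at n = 0
I = env * np.sin(k0 * (x - x0))  # imaginary part, nominally at n = 1/2

def lap(f):
    """Central-difference Laplacian; end nodes stay pinned at zero."""
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return out

norm0 = np.sum(R**2 + I**2) * dx
for step in range(1000):
    I += dt * (0.5 * lap(R) - V * R)   # imaginary part leaps to n + 1/2
    R += dt * (-0.5 * lap(I) + V * I)  # real part leaps to n + 1

drift = abs(np.sum(R**2 + I**2) * dx / norm0 - 1.0)
```

Note how the two update lines per pass correspond exactly to the two bullets above: the freshly computed imaginary part feeds straight into the update of the real part.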

At this point, you are allowed to say: lather, wash, repeat… Figure out exactly how. In particular, notice how the simulation must proceed in an integer number of pairs of computational steps; how the imaginary part is only nominally (i.e. only computationally) distant in time from its corresponding real part.

Thus, overall, the discretization of the space part is pretty straightforward here: the second-order derivative (the Laplacian) is replaced by the usual second-order finite difference approximation. However, for the time part, what this scheme does is both similar to, and different from, the usual Crank-Nicolson scheme.

Like the CN scheme, the present scheme also uses the half-integer time-levels, and thus manages to become a second-order scheme for the time-axis too (not just space), even if the actual time interval for each time-step remains, exactly as in the CN, only $\Delta t$, not $2\Delta t$.

However, unlike the CN scheme, this scheme still remains explicit. That’s right. No matrix equation is being solved at any time-step. You just zip through the update equations.

Naturally, the zipping through comes with a “cost”: The very scheme itself comes equipped with a stability criterion; it is not unconditionally stable (the way CN is). In fact, the stability criterion now refers to half of the time-interval, not the full one, and thus it is even a bit more restrictive as to how big the time-step ($\Delta t$) can be, given a certain granularity of the space-discretization ($\Delta x$). … I don’t know, but I guess that this is how they handle the first-order time derivatives in the FDTD method (finite difference time domain). Maybe the physics of their problems itself is such that they can get away with coarser grids without being physically too inaccurate, who knows…

Other aspects of the codes by Nagel and Cooper:

For the initial condition, both Cooper and Nagel begin with a “pulse” of a cosine function that is modulated to have the envelope of a Gaussian. In both their codes, the pulse is placed in the middle, and they both terminate the simulation when the pulse reaches an end of the finite domain. I didn’t like this aspect of an arbitrary termination of the simulation.

However, I am still learning the ropes for numerically handling the complex-valued Schrodinger equation. In any case, I am not sure if I’ve got a good enough handle on the FDTD-like aspects of it. In particular, as of now, I am left wondering:

What if I have a second-order scheme for the first-order derivative of time, but if it comes with only fictitious half-integer time-steps (the way it does, in the usual Crank-Nicolson method for the real-valued diffusion equation)? In other words: What if I continue to have a second-order scheme for time, and yet, my scheme does not use leap-frogging? In still other words: What if I define both the real and imaginary parts at the same integer time-steps $n = 0, 1, 2, 3, \dots$ so that, in both cases, their values at the instant $n$ are directly fed into both their values at $n+1$?

In a way, this scheme seems simpler, in that no leap-frogging is involved. However, notice that it would also be an implicit scheme. I would have to solve two matrix-equations at each time-step. But then, I could perhaps get away with a larger time-step than what Nagel or Cooper use. What do you think? Is checker-board patterning (the main reason why we at all use staggered grids in CFD) an issue here—in time evolution? But isn’t the unconditional stability too good to leave aside without even trying? And isn’t the time-axis just one-way (unlike the space-axis that has BCs at both ends)? … I don’t know…
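Just to make the idea concrete, here is my own sketch of the non-staggered, implicit route, written directly in the complex form (solving the single complex system is equivalent to solving the two coupled real-valued matrix equations together at each step); all parameters are illustrative:

```python
import numpy as np

# Crank-Nicolson for the complex-valued 1D TDSE (hbar = m = 1), with
# the real and imaginary parts both kept at the same integer levels
# (no leap-frogging):
#   (I + i*(dt/2)*H) psi_new = (I - i*(dt/2)*H) psi_old
# This Cayley form is exactly unitary, hence unconditionally stable;
# the price is one (complex) matrix solve per time-step.
N, dx = 200, 0.1
x = dx * np.arange(N)
V = np.zeros(N)
dt = 0.05                        # far beyond any explicit limit

main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

A = np.eye(N) + 0.5j * dt * H
B = np.eye(N) - 0.5j * dt * H

x0, sigma, k0 = x[N // 2], 2.0, 2.0
psi = np.exp(-((x - x0) / sigma)**2) * np.exp(1j * k0 * (x - x0))

norm0 = np.sum(np.abs(psi)**2) * dx
for step in range(200):
    psi = np.linalg.solve(A, B @ psi)

drift = abs(np.sum(np.abs(psi)**2) * dx / norm0 - 1.0)
```

With $\Delta x = 0.1$ and $\hbar = m = 1$, the explicit staggered scheme would demand roughly $\Delta t \lesssim \Delta x^2 = 0.01$; here $\Delta t = 0.05$ sails through with the norm intact.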

PBCs and ABCs:

Even as I was (and am) still grappling with the above-mentioned issue, I also wanted to make some immediate progress on the front of not having to terminate the simulation (once the pulse reached one of the ends of the domain).

So, instead of working right from the beginning with a (literally) complex Schrodinger equation, I decided to first model the simple (real-valued) diffusion equation, and to implement the PBCs (periodic boundary conditions) for it. I did.

My code seems to work, because the integral of the dependent variable (i.e., the total amount of the diffusing substance present in the entire domain—one with the topology of a ring) does seem to stay constant—as is promised by the Crank-Nicolson scheme. The integral stays “numerically the same” (within a small tolerance) even though, obviously, there now are fluxes at both the ends. (An initial condition of a symmetrical saw-tooth profile defined between $y = 0.0$ and $y = 1.0$ does come to asymptotically approach the horizontal straight line at $y = 0.5$. That is what happens at run-time, so obviously, the scheme seems to handle the fluxes right.)
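Here is a bare-bones re-creation of that check (illustrative parameters, not my actual script; the wrap-around entries in the Laplacian matrix are what make the domain a ring):

```python
import numpy as np

# Crank-Nicolson for 1D transient diffusion with periodic BCs.
# On a ring, the periodic Laplacian matrix has zero column-sums,
# so the total diffusing quantity is conserved to round-off.
N, dx, dt, D = 100, 0.01, 0.001, 0.1
r = D * dt / dx**2

Lp = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
Lp[0, -1] = Lp[-1, 0] = 1.0      # the wrap-around (ring) entries

A = np.eye(N) - 0.5 * r * Lp
B = np.eye(N) + 0.5 * r * Lp

# Symmetric saw-tooth between y = 0.0 and y = 1.0
u = np.concatenate([np.linspace(0.0, 1.0, N // 2, endpoint=False),
                    np.linspace(1.0, 0.0, N // 2, endpoint=False)])

total0 = u.sum() * dx
for step in range(2000):
    u = np.linalg.solve(A, B @ u)

total_drift = abs(u.sum() * dx - total0)
# By now, u should hug the horizontal line y = 0.5
```

The conservation here is exact up to round-off, because the column sums of the periodic Laplacian vanish, so pre-multiplying the CN update by a row of ones leaves the total unchanged.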

Anyway, I don’t always write everything from scratch; I am a great believer in lifting codes already written by others (with attribution, of course :)). Thus, while searching on the ’net for some already existing resources on numerically modeling the Schrodinger equation (preferably with code!), I also ran into some papers on the simulation of the SE using ABCs (i.e., absorbing boundary conditions). I was not sure, however, if I should implement the ABCs immediately…

As of today, I think that I am going to try and graduate from the transient diffusion equation (with the CN scheme and PBCs) to a trial of the implicit TDSE without leap-frogging, as outlined above. The only question is whether I should throw in the PBCs to go with that, or the ABCs. Or, maybe, neither, and just keep pinning the $\Psi$ values for the end- and ghost-nodes down to $0$, thereby effectively putting the entire simulation inside an infinite box?

At this point of time, I am tempted to try out the last. Thus, I think that I would rather first explore the staggering vs. non-staggering issue for a pulse in an infinite box, and understand it better, before proceeding to implement either the PBCs or the ABCs. Of course, I still have to think more about it… But hey, as I said, I am now in a mood of implementing, not of contemplating.

Why not upload the programs right away?

BTW, all these programs (TISE with matrix method, TDSE on the lines of Nagel/Cooper’s codes, transient DE with PBCs, etc.) are still in a fluid state, and so, I am not going to post them immediately here (though over a period of time, I sure would).

The reason for not posting the code runs something like this: Sometimes, I use the Python range objects for indexing. (I saw this goodie in Nagel’s code.) At other times, I don’t. But even when I don’t use the range objects, I anyway am tempted to revise the code so as to have them (for a better run-time efficiency).

Similarly, for the CN method, when it comes to solving the matrix equation at each time-step, I am still not using the TDMA (the Thomas algorithm) or even just sparse matrices. Instead, right now, I am allocating the entire $N \times N$ sized matrices, and am directly using NumPy’s LinAlg’s solve() function on these biggies. No, the computational load doesn’t show up; after all, I anyway have to use a 0.1 second pause in between the rendering passes, and the biggest matrices I tried were only $1001 \times 1001$ in size. (Remember, this is just a $1D$ simulation.) Even then, I am a bit tempted to improve the efficiency. For these and similar reasons, some tweaking or the other is still going on in all the programs. That’s why I won’t be uploading them right away.
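In case it helps anyone reading along: SciPy already wraps the LAPACK banded solver, so getting Thomas-algorithm-like $O(N)$ behavior needs no hand-rolled code. A sketch, with illustrative matrix entries (not my actual CN coefficients):

```python
import numpy as np
from scipy.linalg import solve_banded

# The CN matrix is tridiagonal, so the dense N x N solve can be
# swapped for LAPACK's banded solver (Thomas-algorithm-like, O(N)).
N = 1001
r = 0.5                              # an illustrative CN mesh ratio
main = (1.0 + r) * np.ones(N)
off = (-0.5 * r) * np.ones(N - 1)

rng = np.random.default_rng(0)
b = rng.random(N)                    # an arbitrary right-hand side

# Dense route, as in the current scripts
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
x_dense = np.linalg.solve(A, b)

# Banded route: three rows hold the super-, main- and sub-diagonals
ab = np.zeros((3, N))
ab[0, 1:] = off                      # superdiagonal
ab[1, :] = main                      # main diagonal
ab[2, :-1] = off                     # subdiagonal
x_banded = solve_banded((1, 1), ab, b)
```

The two answers agree to machine precision; `scipy.sparse` with `spsolve` would be the other easy route.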

Anything else about my new approach, like delivering a seminar or so? Any news from the Indian physicists?

I had already contacted a couple of physics professors from India, both from Pune: one, about 1.5 years ago, and another, within the last 6 months. Both these times, I offered to become a co-guide for some computational physics projects to be done by their PG/UG students or so. Both times (what else?) there was absolutely no reply to my emails. … If they were to respond, we could have together progressed further on simulating my approach. … I have always been “open” about it.

The above-mentioned experience is exactly similar to how there have been no replies when I wrote to some other professors of physics, i.e., when I offered to conduct a seminar (covering my new approach) in their departments. Particularly, from the young IISER Pune professor to whom I had written. … Oh yes, BTW, there has been one more physicist whom I contacted recently for a seminar (within the last month). Once again, there has been no reply. (This professor is known to enjoy hospitality abroad as an Indian, and also to use my taxpayer’s money for research while in India.)

No, the issue is not whether the emails I write using my Yahoo! account go into their spam folder—or something like that. That would be too innocuous a cause, and too easy to deal with—everyone has a mobile phone these days. But I know these (Indian) physicists. Their behaviour remains exactly the same even if I write my emails using a respectable academic email ID (my employers’, complete with a .edu domain). This was my experience in 2016, and it repeated again in 2017.

The bottom-line is this: If you are an engineer and if you write to these Indian physicists, there is almost a guarantee that your emails will go into a black-hole. They will not reply to you even if you yourself have a PhD, and are a Full Professor of engineering (even if only on an ad-hoc basis), and have studied and worked abroad, and even if your blog is followed internationally. So long as you are an engineer and mention QM, the Indian physicists simply shut themselves off.

However, there is a trick to get them to reply to you. Their behavior does temporarily change when you put some impressive guy in your cc-field (e.g., some professor friend of yours from some IIT). In this case, they sometimes do reply to your first email. However, soon after that initial shaking of hands, they somehow go back to their true core; they shut themselves off.

And this is what invariably happens with all of them—no matter what other Indian bloggers might have led you to believe.

There must be some systemic reasons for such behavior, you say? Here, I will offer a couple of relevant observations.

Systemically speaking, Indian physicists, taken as a group (and leaving any possible rarest-of-the-rare exceptions aside), all fall into one band: (i) The first commonality is that they all are government employees. (ii) The second commonality is that they all tend to be leftists (or heavily leftist). (iii) The third commonality they all (by and large) share is that they had lower (or far lower) competitive scores in the entrance examinations at the gateway points like XII, GATE/JAM, etc.

The first factor typically means that they know that no one is going to ask them why they didn’t reply (even to people with my background). The second factor typically means that they don’t want to give you any mileage, not even just plain academic respect, if you are not already one of “them”. The third factor typically means that they simply don’t have the very intellectual means to understand or judge anything you say if it is original—i.e., if it is not based on some work of someone from abroad. In plain words: they are incompetent. (That, in part, is the reason why, whenever I run into a competent Indian physicist, it is both a surprise and a pleasure. To drop a couple of names: Prof. Kanhere (now retired) from UoP (now SPPU), and Prof. Waghmare of JNCASR. … But leaving aside this minuscule minority, and coming to the rest of the herd: the less said, the better.)

In short, Indian physicists all fall into a band. And they all are very classical—no tunneling is possible. Not with these Indian physicists. (The trends, I guess, are similar all over the world. Yet, I definitely can say that Indians are worse, far worse, than people from the advanced, Western, countries.)

Anyway, as far as the path through the simulations goes, since no help is going to come from these government servants (regarded as physicists by foreigners), I realized that I have to get going about it—simulations for my new approach—entirely on my own. If necessary, from the most basic of the basics. … And that’s how I got going with these programs.

Are these programs going to provide a peek into my new approach?

No, none of these programs I talked about in this post is going to be very directly helpful for simulations related to my new approach. The programs I wrote thus far are all very, very standard (simplest UG text-book level) stuff. If resolving QM riddles were that easy, any number of people would have done it already.

… So, the programs I wrote over the last couple of weeks are nothing but just a beginning. I have to cover a lot of distance. It may take months, perhaps even a year or so. But I intend to keep working at it. At least in an off and on manner. I have no choice.

And, at least currently, I am going about it at a fairly good speed.

For the same reason, expect no further blogging for another 2–3 weeks or so.

But one thing is for certain. As soon as my paper on my new approach (to be written after running the simulations) gets written, I am going to quit QM. The field does not hold any further interest for me.

Coming to you: If you still wish to know more about my new approach before the paper gets written, then you convince these Indian professors of physics to arrange for my seminar. Or, else…

… What else? Simple! You. Just. Wait.

[Or see me in person if you would be visiting India. As I said, I have always been “open” from my side, and I continue to remain so.]

A song I like:
(Hindi) “bheegee bheegee fizaa…”
Music: Hemant Kumar
Singer: Asha Bhosale
Lyrics: Kaifi Aazmi

History:
Originally published: 2018.11.26 18:12 IST
Extension and revision: 2018.11.27 19:29 IST

# A list of books for understanding the non-relativistic QM

TL;DR: NFY (Not for you).

In this post, I will list those books which have been actually helpful to me during my self-studies of QM.

But before coming to the list, let me first note down a few points which would be important for engineers who wish to study QM on their own. After all, my blog is regularly visited by engineers too. That’s what the data about the visit patterns to various posts says.

Others (e.g. physicists) may perhaps skip over the note in the next section, and instead jump directly over to the list itself. However, even if the note for engineers is too long, perhaps, physicists should go through it too. If they did, they sure would come to know a bit more about the kind of background from which the engineers come.

# I. A note for engineers who wish to study QM on their own:

The point is this: QM is vast, even if its postulates are just a few. So, it takes a prolonged, sustained effort to learn it.

For the same reason (of vastness), learning QM also involves your having to side-by-side learn an entirely new approach to learning itself. (If you have been a good student of engineering, chances are pretty good that you already have some first-hand idea about this meta-learning thing. But the point is, if you wish to understand QM, you have to put it to use once again afresh!)

In terms of vastness, QM is, in some sense, comparable to this cluster of subjects spanning engineering and physics: engineering thermodynamics, statistical mechanics, kinetics, fluid mechanics, and heat- and mass-transfer.

I.1 Thermodynamics as a science that is hard to get right:

The four laws of thermodynamics (including the zeroth and the third) are easy enough to grasp—I mean, in the simpler settings. But when it comes to this subject (as also for the Newtonian mechanics, i.e., from the particle to the continuum mechanics), God lies not in the postulates but in their applications.

The statement of the first law of thermodynamics remains the same simple one. But complexity begins to creep in as soon as you begin to dig just a little bit deeper with it. Entire categories of new considerations enter the picture, and the meaning of the same postulates gets both enriched and deepened with them. For instance, consider the distinction of the open vs. the closed vs. the isolated systems, and the corresponding changes that have to be made even to the mathematical statements of the law. That’s just for the starters. The complexity keeps increasing: studies of different processes like adiabatic vs. isochoric vs. polytropic vs. isentropic etc., and understanding the nature of these idealizations and their relevance in diverse practical applications such as: steam power (important even today, specifically, in the nuclear power plants), IC engines, jet turbines, refrigeration and air-conditioning, furnaces, boilers, process equipment, etc.; phase transitions, material properties and their variations; empirical charts….

Then there is another point. To really understand thermodynamics well, you have to learn a lot of other subjects too. You have to go further and study some different but complementary sciences like heat and mass transfer, to begin with. And to do that well, you need to study fluid dynamics first. Kinetics is practically important too; think of process engineering and cost of energy. Ideas from statistical mechanics are important from the viewpoint of developing a fundamental understanding. And then, you have to augment all this study with all the empirical studies of the irreversible processes (think: the boiling heat transfer process). It’s only when you study such an entire gamut of topics and subjects that you can truly come to say that you now have some realistic understanding of the subject matter that is thermodynamics.

Developing understanding of the aforementioned vast cluster of subjects (of thermal sciences) is difficult; it requires a sustained effort spanning over years. Mistakes are not only very easily possible; in engineering schools, they are routine. Let me illustrate this point with just one example from thermodynamics.

Consider some point that is somewhat nutty to get right. For instance, consider the fact that no work is done during the free expansion of a gas. If you are such a genius that you could get this point right on your very first reading, then hats off to you. Personally, I could not. Neither do I know of even a single engineer who could. We all stumbled on fine points like this.
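For the record, the resolution is one line long once you write the work integral with the external pressure (rigid, insulated vessel; gas expanding into a vacuum):

```latex
W = \int p_{\text{ext}} \, dV = 0
\quad (\because\ p_{\text{ext}} = 0),
\qquad
Q = 0,
\qquad
\therefore\ \Delta U = Q - W = 0 .
```

The temptation to resist is putting the gas pressure, rather than the (zero) external pressure, into the integral.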

You see, what happens here is that thermodynamics and statistical mechanics involve entirely different ways of thinking, but they both are being introduced almost at the same time during your UG studies. Therefore, it is easy enough to mix up some of the disparate metaphors coming from these two entirely different paradigms.

Coming to the specific example of the free expansion: initially, it is easy enough for you to think that since momentum is being carried by all those gas molecules escaping the chamber during the free expansion process, there must be a leakage of work associated with it. Further, since the molecules were already moving in a random manner, there must be an accompanying leakage of heat too. Both turn out to be wrong ways of thinking about the process! Intuitions about thermodynamics develop only slowly. You think that you have understood what the basic idea of a system and an environment is like, but the example of the free expansion serves to expose the holes in your understanding. And then, it’s not just thermo and stat mech. You have to learn how to separate both from kinetics (and all of them from the two other, closely related, thermal sciences: fluid mechanics, and heat and mass transfer).

But before you can learn to separate out the unique perspectives of these subject matters, you first have to learn their contents! But the way the university education happens, you also get exposed to them more or less simultaneously! (4 years is as nothing in a career that might span over 30 to 40 years.)

Since you are learning a lot many different paradigms at the same time, it is easy enough to naively transfer your fledgling understanding of one aspect of one paradigm (say, that of the particle or statistical mechanics) and naively insert it, in an invalid manner, into another paradigm which you are still just learning to use at roughly the same time (thermodynamics). This is what happens in the case of the free expansion of gases. Or, of throttling. Or, of the difference between the two… It is a rare student who can correctly answer all the questions on this topic, during his oral examination.

Now, here is the ultimate point: Postulates-wise, thermodynamics is independent of the rest of the subjects from the aforementioned cluster of subjects. So, in theory, you should be able to “get” thermodynamics—its postulates, in all their generality—even without ever having learnt these other subjects.

Yet, paradoxically enough, we find that complicated concepts and processes also become easier to understand when they are approached using many different conceptual pathways. A good example here would be the concept of entropy.

When you are a XII standard student (or even during your first couple of years in engineering), you are, more or less, just getting your feet wet with the idea of the differentials. As it so happens, before you run into the concept of entropy, virtually every physics concept you meet is a ratio of two differentials. For instance, the instantaneous velocity is the ratio of d(displacement) over d(time).

But the definition of entropy involves a more creative way of using the calculus: it has a differential (and that too an inexact differential), but only in the numerator. The denominator is a “plain-vanilla” variable. You have already learnt the maths used in dealing with the rates of changes—i.e. the calculus. But that doesn’t mean that you have a ready-made physical imagination which would let you handle this kind of a definition—one that involves a ratio of a differential quantity to an ordinary variable. … “Why should only one thing change even as the other thing remains steadfastly constant?” you may wonder. “And if it is anyway going to stay constant, then is it even significant? (Isn’t the derivative of a constant zero?) So, why not just throw the constant variable out of the consideration?”

You see, one major reason you can’t deal with the definition of entropy is simply that you can’t deal with the way its maths comes arranged. Understanding entropy in a purely thermodynamic—i.e. continuum—context can get confusing, to say the least. But then, just throw in a simple insight from Boltzmann’s theory, and suddenly, the bulb gets lit up!
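The contrast can be put in one line: the thermodynamic definition has the inexact differential $\delta Q$ sitting alone in the numerator over a plain variable $T$, whereas Boltzmann’s statistical definition involves no differentials at all:

```latex
dS = \frac{\delta Q_{\text{rev}}}{T}
\qquad \text{vs.} \qquad
S = k_B \ln \Omega
```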

So, paradoxically enough, even if multiple paradigms mean more work and even more possibilities of confusion, in some ways, having multiple approaches also does help.

When a subject is vast, and therefore involves multiple paradigms, people regularly fail to get certain complex ideas right. That happens even to very smart people. For instance, consider Maxwell’s demon. For such a long time, not many people could figure out how to deal with it correctly.

…All in all, it is only some time later, when you have already studied all these topics—thermodynamics, kinetics, statistical mechanics, fluid mechanics, heat and mass transfer—that finally things begin to fall in place (if they at all do, at any point of time!). But getting there involves hard effort that goes on for years: it involves learning all these topics individually, and then, also integrating them all together.

In other words, there is no short-cut to understanding thermodynamics. It seems easy enough to think that you’ve understood the 4 laws the first time you ran into them. But the huge gaps in your understanding begin to become apparent only when it comes to applying them to a wide variety of situations.

I.2 QM is vast, and requires multiple passes of studies:

Something similar happens also with QM. It too has relatively few postulates (3 to 6 in number, depending on which author you consult) but a vast scope of applicability. It is easy enough to develop a feeling that you have understood the postulates right. But, exactly as in the case of thermodynamics (or Newtonian mechanics), once again, God lies not in the postulates but rather in their applications. And in the case of QM, you have to hasten to add: God also lies in the very meaning of these postulates—not just their applications. QM carries a one-two punch.

Similar to the case of thermodynamics and the related cluster of subjects, it is not possible to “get” QM in the first go. If you think you did, chances are that you have a superhuman intelligence. Or, far, far more likely, the plain fact of the matter is that you simply didn’t get the subject matter right—not in its full generality. (Which is what typically happens to the CS guys who think that they have mastered QM, even if the only “QM” they ever learnt was that of two-state systems in a finite-dimensional Hilbert space, and without ever acquiring even an inkling of ideas like radiation-matter interactions, transition rates, or the average decoherence times.)

The only way out, the only way that works in properly studying QM, is this: Begin studying QM at a simpler level, finish developing as much understanding of its entire scope as possible (as happens in the typical Modern Physics courses), and then come back to studying the same set of topics once again in a next iteration, but now to a greater depth. And you have to keep repeating this process some 4–5 times. Oftentimes, you have to come back from iteration n+2 to n.

As someone remarked at some forum (at Physics StackExchange or Quora or so), to learn QM, you have to give it “multiple passes.” Only then can you succeed in understanding it. The idea of multiple passes has several implications. Let me mention only two of them. Both are specific to QM (and not to thermodynamics).

First, you have to develop the art of being able to hold some not-fully-satisfactory islands of understanding, with all the accompanying ambiguities, for extended periods of time (periods which usually run into years!). You have to learn how to give a second or a third pass even when some of the things right from the first pass are still nowhere near getting clarified. You have to learn a lot of maths on the fly too. However, if you ask me, that’s a relatively easy task. The really difficult part is that you have to know (or learn!) how to keep forging ahead even while carrying a big set of nagging doubts that no one seems to know (or even care) about. (To make matters worse, professional physicists, mathematicians and philosophers proudly keep telling you that these doubts will remain just as they are for the rest of your life.) You have to learn how to shove these ambiguous and un-clarified matters to some place near the back of your mind, you have to learn how to ignore them for a while, and still find the mental energy to once again begin right from the beginning, for your next pass: Planck and his cavity radiation, Einstein, blah blah blah blah blah!

Second, for the same reason (i.e. the necessity of multiple passes and the nature of QM), you also have to learn how to unlearn certain half-baked ideas and replace them later on with better ones. For a good example, go through Dan Styer’s paper on misconceptions about QM (listed near the end of this post).

Thus, two seemingly contradictory skills come into play: You have to learn how to hold ambiguities without letting them affect your studies. At the same time, you also have to learn how not to hold on to them forever, i.e., how to unlearn them, when the time to do so becomes ripe.

Thus, learning QM does not involve just learning new content. You also have to learn the art of building a sufficiently “temporary” but very complex conceptual structure in your mind—a structure that, despite all its complexity, still is resilient. You have to learn the art of holding such a framework together over a period of years, even as some parts of it are still getting replaced in your subsequent passes.

And, you have to compensate for all the failings of your teachers too (who themselves were told, effectively, to “shut up and calculate!”) Properly learning QM is a demanding enterprise.

# II. The list:

Now, with that long a preface, let me come to listing all the main books that I found especially helpful during my various passes. Please remember, I am still learning QM. I still don’t understand the second half of most any UG book on QM. This is a factual statement. I am not ashamed of it. It’s just that the first half itself managed to keep me so busy for so long that I could not come to studying, in an in-depth manner, the second half. (By the second half, I mean things like: the QM of molecules and binding, of their spectra, the QM of solids, the QM of complicated light-matter interactions, computational techniques like DFT, etc.) … OK. So, without any further ado, let me jot down the actual list. I will subdivide it into several sub-sections.

II.0. Junior-college (American high-school) level:

Obvious:

• Resnick and Halliday.
• Thomas and Finney. Also, Allan Jeffrey.

II.1. Initial, college physics level:

• “Modern physics” by Beiser, or equivalent
• Optional but truly helpful: “Physical chemistry” by Atkins, or equivalent, i.e., only the parts relevant to QM. (I know engineers often tend to ignore the chemistry books, but they should not. In my experience, often times, chemistry books do a superior job of explaining physics. Physics, to paraphrase a witticism, is far too important to be left to the physicists!)

II.2. Preparatory material for some select topics:

• “Physics of waves” by Howard Georgi. Excellence written all over, but precisely for the same reason, take care to avoid the temptation to get stuck in it!
• Maths: No particular book, but a representative one would be Kreyszig, i.e., with Thomas and Finney or Allan Jeffrey still within easy reach.
• There are a few things you have to relearn, if necessary. These include: the idea of the limits of sequences and series. (Yes, go through this simple a topic too, once again. I mean it!). Then, the limits of functions.
Also try to relearn curve-tracing.
• Unlearn (or throw away) all the accounts of complex numbers which remain stuck at the level of how $\sqrt{-1}$ was stupefying, and how, when you have complex numbers, any arbitrary equation magically comes to have roots, etc. Unlearn all that talk. Instead, focus on the similarities of complex numbers to both the real numbers and vectors, and also on their differences from each. Unlike what mathematicians love to tell you, complex numbers are not just another kind of numbers. They don’t represent just the next step in the logic of how the idea of numbers gets generalized as you go from integers to real numbers. The reason is this: Unlike the integers, rationals, irrationals and reals, complex numbers take birth as composite numbers (as an ordered pair of numbers), and they remain that way until the end of their life. Get that part right, and ignore all the mathematicians’ loose talk about it.
Study complex numbers in a way that, eventually, you should find yourself being comfortable with the two equivalent ways of modeling physical phenomena: as a set of two coupled real-valued differential equations, and as a single but complex-valued differential equation.
• Also try to become proficient with the two main expansions: the Taylor, and the Fourier.
• Also develop a habit of quickly substituting truncated expansions (i.e., either a polynomial, or a sum of complex exponentials having just a few initial harmonics, not an entire infinity of them) into any “arbitrary” function as an ansatz, and seeing how the proposed theory pans out with these. The goal is to become comfortable, at the same time, with the habit of tracing conceptual pathways to the meaning of the maths as well as with the computational techniques of FDM, FEM, and FFT.
• The finite differences approximation: Also, learn the art of quickly substituting the finite differences ($\Delta$‘s) in place of the differential quantities ($d$ or $\partial$) in a differential equation, and seeing how it pans out. The idea here is not just computational modeling. The point is: Every differential equation has been derived in reference to an elemental volume which was then taken to a vanishingly small size. The variation of the quantities of interest across such an (infinitesimally small) volume is always represented using the Taylor series expansion.
(That’s correct! It is true that derivations using the variational approach don’t refer to the Taylor expansion. But they also don’t use infinitesimal volumes; they refer to finite or infinite domains. It is the variation in functions which is taken to the vanishingly small limit in their case. In any case, if your derivation has an infinitesimally small element, bingo, you are going to use the Taylor series.)
Now, coming back to why you must develop the habit of substituting a finite differences approximation in place of a differential equation. The thing is this: By doing so, you are unpacking the derivation; you are traversing the analysis in the reverse direction; you are, by the logic of the procedure, forced to look for the physical (or at least lower-level, less abstract) referents of a mathematical relation/idea/concept.
While thus going back and forth between the finite differences and the differentials, also learn the art of tracing how the limiting process proceeds in each such case. This part is not at all as obvious as you might think. It took me years and years to figure out that there can be infinitesimals within infinitesimals. (In fact, I blogged about it several years ago, here. More recently, I wrote a PDF document about how many numbers there are in the real number system, which discusses the same idea from a different angle. In any case, if you were not shocked by the fact that there can be an infinity of infinitesimals within any infinitesimal, either think sufficiently long about it—or quit studying the foundations of QM.)
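As a concrete instance of this finite-differences substitution, here is a minimal sketch (a toy setup of my own, assuming $\hbar = m = 1$, a unit-length box, and zero potential): replacing $d^2\psi/dx^2$ with $(\psi_{i+1} - 2\psi_i + \psi_{i-1})/\Delta x^2$ turns the TISE into an ordinary matrix eigenvalue problem.

```python
import numpy as np

# Substituting finite differences for the differentials turns the TISE
#   -(1/2) d^2 psi/dx^2 = E psi    (hbar = m = 1, unit-length box)
# into a dense matrix eigenvalue problem (zero potential assumed).
N = 500                                # number of interior grid points
dx = 1.0 / (N + 1)
main = np.full(N, 1.0 / dx**2)         # from -(1/2)*(-2*psi[i])/dx^2
off = np.full(N - 1, -0.5 / dx**2)     # from -(1/2)*psi[i+1]/dx^2 terms
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)              # eigenvalues, in ascending order
E_exact = (np.pi**2) / 2.0             # analytical ground state: n^2 pi^2 / 2, n = 1
print(E[0], E_exact)                   # close agreement already at N = 500
```

As the grid is refined, the computed ground-state energy approaches the analytical $n^2\pi^2/2$ value, which is exactly the limiting process referred to above, run in reverse.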

II.3. Quantum chemistry level (mostly concerned with only the TISE, not TDSE):

• Optional: “QM: a conceptual approach” by Hameka. A fairly well-written book. You can pick it up for some serious reading, but also try to finish it as fast as you can, because you are going to relearn the same stuff once again through the next book in the sequence. But yes, you can pick it up; it’s only about 200 pages.
• “Quantum chemistry” by McQuarrie. Never commit the sin of bypassing this excellent book.
A suggestion: Once you finish reading through this particular book, take a small (40 page) notebook, and write down (in the long hand) just the titles of the sections of each chapter of this book, followed by a listing of the important concepts / equations / proofs introduced in it. … You see, the section titles of this book themselves are complete sentences that encapsulate very neat nuggets. Here are a couple of examples: “5.6: The harmonic oscillator accounts for the infrared spectrum of a diatomic molecule.” Yes, that’s a section title! Here is another: “6.2: If a Hamiltonian is separable, then its eigenfunctions are products of simpler eigenfunctions.” See why I recommend this book? And this (40 page notebook) way of studying it?
• “Quantum physics of atoms, molecules, solids, nuclei, and particles” (yes, that’s the title of this single volume!) by Eisberg and Resnick. This Resnick is the same one as that of Resnick and Halliday. Going through the same topics via yet another thick book (almost 850 pages) can get exasperating, at least at times. But I guess that if you show some patience here, it should simplify things later. … Confession: I was too busy with teaching and learning engineering topics like FEM, CFD, and also with many other things in between. So, I could not find the time to read this book the way I would have liked to. But from whatever I did read (and I did go over a fairly good portion of it), I can tell you that not finishing this book was a mistake on my part. Don’t repeat my mistake. Further, I do keep going back to it, and maybe, as a result, I will one day have finished it! One more point. This book is more than quantum chemistry; it does discuss the time-dependent parts too. The only reason I include it in this sub-section (chemistry) rather than the next (physics) is because the emphasis here is much more on the TISE than the TDSE.

II.4. Quantum physics level (includes TDSE):

• “Quantum physics” by Alastair I. M. Rae. Hands down, the best book in its class. To my mind, it easily beats all of the following: Griffiths, Gasiorowicz, Feynman, Susskind, … .
Oh, BTW, this is the only book I have ever come across which does not put scare-quotes around the word “derivation,” while describing the original development of the Schrodinger equation. In fact, this text goes one step ahead and explicitly notes the right idea, viz., that Schrodinger’s development is a derivation, but it is an inductive derivation, not deductive. (… Oh God, these modern American professors of physics!)
But even leaving this one (arguably “small”) detail aside, the book has excellence written all over it. Far better than the competition.
Another attraction: The author touches upon all the standard topics within just about 225 pages. (He also has further 3 chapters, one each on relativity and QM, quantum information, and conceptual problems with QM. However, I have mostly ignored these.) When a book is of manageable size, it by itself is an overload reducer. (This post is not a portion from a text-book!)
The only “drawback” of this book is that, like many British authors, Rae has a tendency to seamlessly bunch together a lot of different points into a single, bigger, paragraph. He does not isolate the points sufficiently well. So, you have to write a lot of margin notes identifying those distinct, sub-paragraph level, points. (But one advantage here is that this procedure is very effective in keeping you glued to the book!)
• “Quantum physics” by Griffiths. Oh yes, Griffiths is on my list too. It’s just that I find it far better to go through Rae first, and only then come to going through Griffiths.
• … Also, avoid the temptation to read both these books side-by-side. You will soon find that you can’t do that. And so, driven by what other people say, you will soon end up ditching Rae—which would be a grave mistake. Since you can keep going through only one of them, you have to jettison the other. Here, I would advise you to first complete Rae. It’s indispensable. Griffiths is good too. But it is not indispensable. And as always, if you find the time and the inclination, you can always come back to Griffiths.

Starting sometime after finishing the initial UG quantum chemistry level books, but preferably after the quantum physics books, use the following two:

• “Foundations of quantum mechanics” by Travis Norsen. Very, very good. See my “review” here [^].
• “Foundations of quantum mechanics: from photons to quantum computers” by Reinhold Blumel.
Just because people don’t rave a lot about this book doesn’t mean that it is average. This book is peculiar. It does look very average if you flip through all its pages within, say, 2–3 minutes. But it turns out to be an extraordinarily well written book once you begin to actually read through its contents. The coverage here is concise, accurate, fairly comprehensive, and, as a distinctive feature, it also is fairly up-to-date.
Unlike the other text-books, Blumel gives you a good background in the specifics of the modern topics as well. So, once you complete this book, you should find it easy (to very easy) to understand today’s pop-sci articles, say those on quantum computers. To my knowledge, this is the only text-book which does this job (of introducing you to the topics that are relevant to today’s research), and it does this job exceedingly well.
• Use Blumel to understand the specifics, and use Norsen to understand their conceptual and the philosophical underpinnings.

II.Appendix: Miscellaneous—no levels specified; figure out as you go along:

• “Schrodinger’s cat” by John Gribbin. Unquestionably, the best pop-sci book on QM. Lights your fire.
• “Quantum” by Manjit Kumar. Helps keep the fire going.
• Kreyszig or equivalent. You need to master the basic ideas of the Fourier theory, and of solutions of PDEs via the separation ansatz.
• However, for many other topics like spherical harmonics or the calculus of variations, you have to go hunting for explanations in some additional books. I “learnt” the spherical harmonics mostly through some online notes (esp. those by Michael Fowler of Univ. of Virginia) and QM textbooks, but I guess that a neat exposition of the topic, couched in contexts other than QM, would have been helpful. Maybe there is some ancient acoustics book that is really helpful. Anyway, I didn’t pursue this topic to any great depth (in fact I more or less skipped over it) because, as it so happens, analytical methods fall short for anything more complex than the hydrogenic atoms.
• As to the variational calculus, avoid all the physics and maths books like the plague! Instead, learn the topic through the FEM books. Introductory FEM books have become vastly (i.e. categorically) better over the course of my generation. Today’s FEM text-books do provide clear evidence that the authors themselves know what they are talking about! Among these books, just for learning the variational calculus aspects, I would advise going through Seshu or Fish and Belytschko first, and then through the relevant chapter from Reddy‘s book on FEM. In any case, avoid Bathe, Zienkiewicz, etc.; they are too heavily engineering-oriented, and often, in general, un-necessarily heavy-duty (though not as heavy-duty as Lanczos). Not very suitable for learning the basics of CoV as is required in UG QM. A good supplementary book covering CoV is noted next.
• “From calculus to chaos: an introduction to dynamics” by David Acheson. A gem of a book. Small (just about 260 pages, including program listings—and just about 190 pages if you ignore them.) Excellent, even if, somehow, it does not appear on people’s lists. But if you ask me, this book is a must read for any one who has anything to do with physics or engineering. Useful chapters exist also on variational calculus and chaos. Comes with easy to understand QBasic programs (and their updated versions, ready to run on today’s computers, are available via the author’s Web site). Wish it also had chapters, say one each, on the mechanics of materials, and on fracture mechanics.
• Linear algebra. Here, keep your focus on understanding just the two concepts: (i) vector spaces, and (ii) eigen-vectors and -values. Don’t worry about other topics (like LU decomposition or the power method). If you understand these two topics right, the rest will follow “automatically,” more or less. To learn these two topics, however, don’t refer to text-books (not even those by Gilbert Strang or so). Instead, google on the online tutorials on computer games programming. This way, you will come to develop a far better (even robust) understanding of these concepts. … Yes, that’s right. One or two games programmers, I very definitely remember, actually did a much superior job of explaining these ideas (with all their complexity) than what any textbook by any university professor does. (iii) Oh yes, BTW, there is yet another concept which you should learn: “tensor product”. For this topic, I recommend Prof. Zhigang Suo‘s notes on linear algebra, available off iMechanica. These notes are a work in progress, but they are already excellent even in their present form.
• Probability. Contrary to a wide-spread impression (and to what one group of QM interpreters says), you actually don’t need much statistics or probability in order to get the essence of QM right. Whatever you need has already been taught to you in your UG engineering/physics courses. Personally, though I haven’t yet gone through them, the two books on my radar (more from the data science angle) are: “Elementary probability” by Stirzaker, and “All of statistics” by Wasserman. But, frankly speaking, as far as QM itself is concerned, your intuitive understanding of probability as developed through your routine UG courses should be enough, IMHO.
• As to AJP type of articles, go through Dan Styer‘s paper on the nine formulations (doi:10.1119/1.1445404). But treat his paper on the common misconceptions (10.1119/1.18288) with a bit of caution; some of the ideas he lists as “misconceptions” are not necessarily so.
• arXiv tutorials/articles: Sometime after finishing quantum chemistry and before beginning quantum physics, go through the tutorial on QM by Bram Gaasbeek [^]. Neat, small, and really helpful for self-studies of QM. (It was written when the author was still a student himself.) Also, see the article on the postulates by Dorabantu [^]. Definitely helpful. Finally, let me pick up just one more arXiv article: “Entanglement isn’t just for spin” by Dan Schroeder [^]. Comes with neat visualizations, and helps demystify entanglement.
• Computational physics: Several good resources are available. One easy-to-recommend text-book is the one by Landau, Perez and Bordeianu. Among the online resources, the best collection I found was the one by Ian Cooper (of Univ. of Sydney) [^]. He has only MATLAB scripts, not Python, but they are all very well documented (in an exemplary manner) via accompanying PDF files. It should be easy to port these programs to the Python eco-system.
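The two linear-algebra concepts singled out in the list above (eigen-pairs, plus the tensor product) take only a few lines to exercise in NumPy. The matrices below are toy choices of mine, purely for illustration:

```python
import numpy as np

# Eigen-pairs: a symmetric 2x2 toy matrix, so its eigenvalues are real.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eigh(A)         # ascending eigenvalues, orthonormal vectors
# Check the defining relation A v = lambda v for each eigen-pair:
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
print(vals)                            # eigenvalues 1 and 3

# Tensor (Kronecker) product: every eigenvalue of A (x) B is a product
# of an eigenvalue of A with an eigenvalue of B.
B = np.diag([1.0, 4.0])
AB = np.kron(A, B)
print(np.sort(np.linalg.eigvalsh(AB))) # 1, 3, 4, 12 = {1,3} x {1,4}
```

The second half is precisely the structure that shows up when two quantum systems are combined into one composite Hilbert space.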

Yes, we (finally) are near the end of this post, so let me add the mandatory catch-all clauses: This list is by no means comprehensive! This list supersedes any other list I may have put out in the past. This list may undergo changes in future.

Done.

OK. A couple of last-minute addenda: For contrast, see the article “What is the best textbook for self-studying quantum mechanics?” which has appeared, of all places, on Forbes! [^] (Looks like the QC-related hype has found its way into the business circles as well!) Also see the list at BookScrolling.com: “The best books to learn about quantum physics” [^].

OK. Now, I am really done.

A song I like:
(Marathi) “kiteedaa navyaane tulaa aaThavaave”
Music: Mandar Apte
Singer: Mandar Apte. Also, a separate female version by Arya Ambekar
Lyrics: Devayani Karve-Kothari

[Arya Ambekar’s version is great too, but somehow, I like Mandar Apte’s version better. Of course, I do often listen to both the versions. Excellent.]

[More than 5,500 words! Give me a longer break for this time around, a much longer one, in fact… In the meanwhile, take care and bye until then…]

# How time flies…

I plan to conduct a smallish FDP (Faculty Development Program), for junior faculty, covering the basics of CFD sometime soon (may be starting in the second-half of February or early March or so).

During my course, I plan to give out some simple, pedagogical code that even non-programmers could easily run, and hopefully find easy to comprehend.

Don’t raise difficult questions right away!

Don’t ask me why I am doing it at all—especially given the fact that I myself never learnt my CFD in a class-room/university course settings. And especially given the fact that excellent course materials and codes already exist on the ‘net (e.g. Prof. Lorena Barba’s course, Prof. Atul Sharma’s book and Web site, to pick up just two of the so many resources already available).

But, yes, come to think of it, your question, by itself, is quite valid. It’s just that I am not going to entertain it.

Instead, I am going to ask you to recall that I am both a programmer and a professor.

As a programmer, you write code. You want to write code, and you do it. Whether better code already exists or not is not a consideration. You just write code.

As a professor, you teach. You want to teach, and you just do it. Whether better teachers or course-ware already exist or not is not a consideration. You just teach.

Admittedly, however, teaching is more difficult than coding. The difference here is that coding requires only a computer (plus software-writing software, of course!). But teaching requires other people! People who are willing to sit in front of you, at least faking a rapt sort of attention.

But just the way as a programmer you don’t worry whether you know the algorithm or not when you fire your favorite IDE, similarly, as a professor you don’t worry whether you will get students or not.

And then, one big advantage of being a senior professor is that you can always “c” your more junior colleagues, where “c” stands for {convince, confuse, cajole, coax, compel, …} to attend. That’s why, I am not worried—not at least for the time being—about whether I will get students for my course or not. Students will come, if you just begin teaching. That’s my working mantra for now…

But of course, right now, we are busy with our accreditation-related work. However, by February/March, I will become free—or at least free enough—to be able to begin conducting this FDP.

As my material for the course progressively gets ready, I will post some parts of it here. Eventually, by the time the FDP gets over, I will have uploaded all the material together at some place or the other. (Maybe I will create another blog just for that course material.)

This blog post was meant to note something on the coding side. But then, as usual, I ended up having this huge preface at the beginning.

When I was doing my PhD in the mid-noughties, I wanted a good public-domain (preferably open-source) mesh generator. There were several of them, but mostly on the Unix/Linux platform.

I had nothing basically against Unix/Linux as such. My problem was that I found it tough to remember the line commands. My working memory is relatively poor, very poor. And that’s a fact; I don’t say it out of any (false or true) modesty. So, I found it difficult to remember all those shell and system commands and their options. Especially painful for me was to climb up and down a directory hierarchy, just to locate a damn file and open it already! Given my poor working memory, I had to have the entire structure laid out in front of me, instead of remembering commands or file names from memory. Only then could I work fast enough to be effective enough a programmer. And so, I found it difficult to use Unix/Linux. Ergo, it had to be Windows.

But most of this Computational Science/Engineering code was not available (or even compilable) on Windows, back then. Often, these codes were buggy, too. In the end, I ended up using Bjorn Niceno’s code, simply because it was in C (which I converted into C++), and because it was compilable on Windows.

Then, a few years later, when I was doing my industrial job in an FEM-software company, once again there was this requirement of an integrable mesh generator. It had to be: on Windows; open source; small enough, with not too many external dependencies (such as the Boost library or others); compilable using “the not really real” C++ compiler (viz. VC++ 6); one that was not very buggy or still was under active maintenance; and one more important point: the choice had to be respectable enough to be acceptable to the team and the management. I ended up using Jonathan Shewchuk’s Triangle.

Of course, all this along, I already knew about Gmsh, CGAL, and others (purely through my ‘net searches; none told me about any of them). But for some or the other reason, they were not “usable” by me.

Then, during the mid-teens (2010s), I went into teaching, and software development naturally took a back-seat.

A lot of things changed in the meanwhile. We all moved to 64-bit. I moved to Ubuntu for several years, and as the Idea NetSetter stopped working on the latest Ubuntu, I had no choice but to migrate back to Windows.

I then found that a lot of the platform wars had already disappeared. Windows (and Microsoft in general) had become not only better but also more accommodating of the open-source movement; the Linux movement had become mature enough to not look down upon GUI users as mere script-kiddies; etc. In general, inter-operability had improved by leaps and bounds. Open-source projects were being not only released but also now developed on Windows, not just on Unix/Linux. One possible reason why both the camps suddenly began showing so much love to each other was that the mobile platform had come to replace the PC platform as the avant-garde choice for software development. I don’t know, because I was away from the s/w world, but I am simply guessing that that could also be an important reason. In any case, code could now easily flow back and forth between the two platforms.

Another thing to happen during my absence was the wonderful development of the Python eco-system. It was always available on Ubuntu, and had made my life easier over there. After all, Python had a less whimsical syntax than many other alternatives (esp. the shell scripts); it carried all the marks of a real language. There were areas of discomfort. The one thing about Python which I found whimsical (and still do) is the lack of braces for defining scopes. But such areas were relatively easy to overlook.

At least in the area of Computational Science and Engineering, Python had made it enormously easier to write ambitious codes. Just check out a C++ code using MPI for cluster computing, vs. the same code written in Python. Or, think of not having to write ridiculously fast vector classes (or having to compile disparate C++ libraries using their own make systems and compiler options, and then making them all work together). Or, think of using libraries like LAPACK. No more clumsy wrappers; no more endlessly repeated scope-resolution operators and namespaces bundled into ridiculously complex template classes. Just import NumPy or SciPy, and proceed to your work.
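To make the contrast concrete, here is roughly all it takes to run a LAPACK-backed dense solve from Python (a toy system of my own choosing; NumPy dispatches to LAPACK underneath):

```python
import numpy as np

# A dense linear solve, LAPACK under the hood, no wrappers to write:
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)      # calls into LAPACK behind the scenes
print(np.allclose(A @ x, b))   # True
```

Two lines of actual work, versus pages of build-system and wrapper code in the C++ world of that era.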

So, yes, I had come to register in my mind the great success story being forged by Python, in the meanwhile. (BTW, in case you don’t know, the name of the language comes from a British comedy TV serial, not from the whole-animal swallowing creep.) But as I said, I was now into academia, into core engineering, and there simply wasn’t much occasion to use any language, C++, Python or any other.

One more hindrance went away when I “discovered” that the PyCharm IDE existed! It not only was free, but also had VC++ key-bindings already bundled in. W o n d e r f u l ! (I would have no working memory to relearn yet another set of key-bindings, you see!)

In the meanwhile, VC++ had anyway become very big, very slow and lethargic, taking forever for the intelli-sense to ever produce something, anything. The older, lightweight, lightning-fast, and overall so charming IDE, i.e. VC++ 6, had given way, because of the .NET platform, to this new IDE which behaved as if it was designed to kill the C++ language. My forays into using Eclipse CDT (with VC++ key-bindings) were only partially successful. Eclipse was no longer buggy; it had begun working really well. The major trouble here was that there was no integrated help at the press of the “F1” key. Remember my poor working memory? I had to have that F1 key opening up the .chm help file at just the right place. But that was not happening. And, debug-stepping through the code still was not as seamless as I had gotten used to in VC++ 6.

But with PyCharm + Visual Studio key-bindings, most of my concerns evaporated. Being an interpreted language, Python always would have an advantage as far as debug-stepping through the code is concerned. That’s the straight-forward part. But the real game-changer for me was the maturation of the entire Python eco-system.

Every library you could possibly wish for was there, already available, like Aladdin’s genie standing with folded hands.

OK. Let me give you an example. You think of doing some good visualization. You have MatPlotLib. And a very helpful help file, complete with neat examples. No, you want more impressive graphics, like, say, volume rendering (voxel visualization). You have the entire VTK wrapped in; what more could you possibly want? (Windows vs. Linux didn’t matter.) But you instead want to write some custom code, say for animation? You have not just one, not just two, but literally tens of libraries covering everything: from OpenGL, to scene-graphs, to computational geometry, to physics engines, to animation, to games-writing, and what not. Windowing? You had the MFC-style WxWidgets, already put into a Python avatar as WxPython. (OK, OpenGL still gives trouble with WxPython for anything ambitious. But such things are rather isolated instances when it comes to the overall Python eco-system.)

And, closer to my immediate concerns, I was delighted to find that, by now, both OpenFOAM and Gmsh had become neatly available on Windows. That is, not just “available,” i.e., not just as sources that can be read, but also working as if the libraries were some shrink-wrapped software!

Availability on Windows was important to me, because, at least in India, it’s the only platform of familiarity (and hence of choice) for almost all of the faculty members from any of the e-school departments other than CS/IT.

Hints: For OpenFOAM, check out blueCFD instead of running it through Docker. It’s clean, and indeed works as advertised. As to Gmsh, ditto. And, it also comes with Python wrappers.

While the availability of OpenFOAM on Windows was only too welcome, the fact is, its code is guaranteed to be completely inaccessible to a typical junior faculty member from, say, a mechanical, civil, or chemical engineering department. First, OpenFOAM is written in real (“templated”) C++. Second, it is very bulky (millions of lines of code, maybe?), clearly beyond the comprehension of someone who has never seen more than 50 lines of C code at a time in his life before. Third, it requires the GNU compiler, a special make environment, and a host of dependencies. You simply cannot open up OpenFOAM and show how those FVM algorithms from Patankar’s or Versteeg & Malalasekera’s book do the work under its hood. Neither can you ask your students to change a line here or there, maybe add a line to produce an additional file output, just to bring out the actual working of an FVM algorithm.

In short, OpenFOAM is out.

So, I have decided to use OpenFOAM only as a “backup.” My primary teaching material will only be Python snippets. The students will also get to learn how to install OpenFOAM and run the simplest tutorials. But the actual illustrations of the CFD ideas will be done using Python. I plan to cover only FVM and only simpler aspects of that. For instance, I plan to use only structured rectangular grids, not non-orthogonal ones.

I will write code that (i) generates mesh, (ii) reads mesh generated by the blockMesh of OpenFOAM, (iii) implements one or two simple BCs, (iv) implements the SIMPLE algorithm, and (v) uses MatPlotLib or ParaView to visualize the output (including any intermediate outputs of the algorithms).
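As a minimal sketch of step (i), here is what generating a uniform structured rectangular grid might look like in NumPy. The function name and the cell-centre layout are my own illustration (a toy stand-in for what blockMesh produces), not the course code itself:

```python
import numpy as np

def make_structured_mesh(nx, ny, lx=1.0, ly=1.0):
    """Cell-centre coordinates and cell sizes for a uniform structured
    rectangular FVM grid over a domain of size lx-by-ly."""
    dx, dy = lx / nx, ly / ny
    xc = (np.arange(nx) + 0.5) * dx   # cell-centre x-coordinates
    yc = (np.arange(ny) + 0.5) * dy   # cell-centre y-coordinates
    X, Y = np.meshgrid(xc, yc, indexing="ij")
    return X, Y, dx, dy

# A 4 x 3 grid on the unit square: X[i, j] is the centre of cell (i, j).
X, Y, dx, dy = make_structured_mesh(4, 3)
```

Storing cell centres (rather than vertices) is the natural choice for FVM, since the discretized unknowns live at the centres and the face fluxes follow from the uniform `dx`, `dy`.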

I may then compare the outputs of these Python snippets with a similar output produced by OpenFOAM, for one or two of the simplest cases, like simple laminar flow over a step. (I don’t think I will be covering VOF or any other multi-phase technique. My course is meant to cover only the basics.)

But not having checked Gmsh recently, and thus still carrying my old impressions, I was almost sure I would have to write something quick in Python to convert BMP files (showing the geometry) into mesh files (with each pixel turning into a finite-volume cell). The trouble with this approach was that the ability to impose boundary conditions would be seriously limited. So, I was a bit worried about it.

But then, last week, I just happened to check Gmsh, just to be sure, you know! And, WOW! I now “discovered” that Gmsh is already all Python-ed in. Great! I just tried it, and found that it works, as bundled. Even on Windows. (Yes, even on Win7 (64-bit), SP1.)

I was delighted, excited, even thrilled.

And then, I began “reflecting.” (Remember I am a professor?)

I remembered the times when I used to sit in a cyber-cafe, painfully downloading source code libraries over a single 64 kbps connection that was shared across 6–8 PCs in that cyber-cafe, without any UPS or backup in case the power went out. I would download the sources that way at the cyber-cafe, take them home to a Pentium machine running Win2K, try to open and read the source, only to find that I had forgotten to do the CRLF conversion first! And then, the sources wouldn’t compile because the make environment wasn’t available on Windows. Or something or the other of that sort. But still, I fought on. I remember having downloaded not only the OpenFOAM sources (with the hope of finding some way to compile them on Windows), but also MPICH2, PetSc 2.x, CGAL (some early version), and what not. Ultimately, after my valiant tries at the machine for a week or two, “nothing is going to work here,” I would eventually admit to myself.

And here is the contrast. I now have a 4G connection, so I can comfortably sit at home and use pip (or PyCharm’s Project Interpreter) to download or automatically update all the required libraries, even heavy-weights like those bundled inside SciPy and NumPy, or VTK. I no longer have to manually sort out version or platform incompatibilities. I know I could develop on Ubuntu if I wanted to, and the students would be able to run the same thing on Windows.

Gone are those days. And how swiftly, it seems now.

How time flies…

I will be able to come back only next month, because our accreditation-related documentation work has now gone into its final, culminating phase, which occupies the rest of this month. So, excuse me until sometime in February, say until the 11th or so. I will surely try to post a snippet or two on using Gmsh in the meanwhile, but it doesn’t really look at all feasible. So, there.

Bye for now, and take care…

A Song I Like:

[Tomorrow is (Sanskrit, Marathi) “Ganesh Jayanti,” the birth-day of Lord Ganesha, which also happens to be the auspicious (Sanskrit, Marathi) “tithee” (i.e. lunar day) on which my mother passed away, five years ago. In her fond remembrance, I here run one of those songs which both of us liked. … Music is strange. I mean, a song as mature as this one, and yet, I remember, I had come to like it even as a school-boy. Maybe it was her absent-minded humming of this song that had helped? … Maybe. … Anyway, here’s the song.]

(Hindi) “chhup gayaa koi re, door se pukaarake”
Singer: Lata Mangeshkar
Music: Hemant Kumar
Lyrics: Rajinder Kishan

# Blog-Filling—Part 3

Note: A long Update was added on 23 November 2017, at the end of the post.

Today I got just a little bit of respite from what has been a very tight schedule, which has been running into my weekends, too.

But at least for today, I do have a bit of a respite. So, I could at least think of posting something.

But for precisely the same reason, I don’t have any blogging material ready in the mind. So, I will just note something interesting that passed by me recently:

1. Catastrophe Theory: Check out Prof. Zhigang Suo’s recent blog post at iMechanica on catastrophe theory, here [^]; it’s marked by Suo’s trademark simplicity. He also helpfully provides a copy of Zeeman’s 1976 SciAm article. Regular readers of this blog will know that I am a big fan of catastrophe theory; see, for instance, my last post mentioning the topic, here [^].
2. Computational Science and Engineering, and Python: If you are into computational science and engineering (which is The Proper And The Only Proper long-form of “CSE”), and wish to have fun with Python, then check out Prof. Hans Petter Langtangen’s excellent books, all under Open Source. Especially recommended is his “Finite Difference Computing with PDEs—A Modern Software Approach” [^]. What impressed me immediately was the way the author begins this book with the wave equation, and not with the diffusion or potential equation as is the routine practice in FDM (or CSE) books. He also provides the detailed mathematical reasons for his unusual ordering of the material, but apart from his reason(s), let me add a comment here: wave $\Rightarrow$ diffusion $\Rightarrow$ potential (Poisson-Laplace) precisely was the historical order in which the maths of PDEs (by which I mean both the formulations of the equations and the techniques for their solutions) got developed—even though the modern trend is to reverse this order in the name of “simplicity.” The book comes with Python scripts; you don’t have to copy-paste code from the PDF (and then keep correcting character or indentation errors). And, the book covers nonlinearity too.
3. Good Notes/Teachings/Explanations of UG Quantum Physics: I ran across Dan Schroeder’s “Entanglement isn’t just for spin.” Very true. And it needed to be said [^]. BTW, if you want a gentler introduction to UG-level QM than is presented in Allan Adams (et al.)’s MIT OCW 8.04–8.06 [^], then make sure to check out Schroeder’s course at Weber [^] too. … Personally, though, I keep fantasizing about going through all the videos of Adams’ course, taking down notes, and posting them at my Web site. [… sigh]
4. The Supposed Spirituality of the “Quantum Information” Stored in the “Protein-Based Micro-Tubules”: OTOH, if you are more into philosophy of quantum mechanics, then do check out Roger Schlafly’s latest post, not to mention my comment on it, here [^].
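The wave-equation-first ordering praised in point 2 above is easy to appreciate once you see how short the explicit scheme really is. Here is a minimal sketch of my own (a toy, not code from Langtangen’s book): the standard three-level central-difference update for $u_{tt} = c^2 u_{xx}$ on $[0, L]$ with fixed ends and zero initial velocity:

```python
import numpy as np

def solve_wave_1d(c=1.0, L=1.0, nx=100, T=1.0, C=0.9):
    """Explicit FDM for u_tt = c^2 u_xx, u(0)=u(L)=0, zero initial velocity.
    C is the Courant number c*dt/dx; the scheme is stable for C <= 1."""
    dx = L / nx
    dt = C * dx / c
    nt = int(round(T / dt))
    x = np.linspace(0.0, L, nx + 1)
    u_prev = np.sin(np.pi * x)  # initial displacement: one half-wave
    u = u_prev.copy()
    # Special first step: uses the zero-initial-velocity condition.
    u[1:-1] = u_prev[1:-1] + 0.5 * C**2 * (
        u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2])
    # Main three-level leap-frog update in time.
    for _ in range(nt - 1):
        u_next = np.empty_like(u)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + C**2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_next[0] = u_next[-1] = 0.0  # fixed ends
        u_prev, u = u, u_next
    return x, u

x, u = solve_wave_1d()
```

Notice that no linear solve is needed at all; each time level is an explicit vectorized stencil, which is exactly why the wave equation makes such a clean first example.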

The point no. 4 above was added in lieu of the usual “A Song I Like” section. The reason is, though I could squeeze in the time to write this post, I still remain far too rushed to think of a song, and to think/check whether I have already run it here or not. But I will try to add one later on, either to this post or, if there is a big delay, as the next “blog filler” post, the next time round.

[Update on 23 Nov. 2017 09:25 AM IST: Added the Song I Like section; see below]

OK, that’s it! … Will catch you at some indefinite time in future here, bye for now and take care…

A Song I Like:

(Western, Instrumental) “Theme from ‘Come September’”
Credits: Bobby Darin (?) [+ Billy Vaughn (?)]

[I grew up in what were absolutely rural areas in Maharashtra, India. All my initial years, till my 9th standard, were limited, at the upper end of the continuum of urbanity, to Shirpur, which still is only a taluka place. And, back then, it was decidedly far more of a backward + adivasi region. The population of the main town itself hadn’t reached more than 15,000 or so by the time I left it in my X standard; the town didn’t have a single traffic light; most of the houses (including the one we lived in) were load-bearing structures, not RCC; all the roads in the town were single-lane; etc.

Even that being the case, I happened to listen to this song—a Western song—right when I was in Shirpur, in my 2nd/3rd standard. I first heard the song at my Mama’s place (an engineer, he was back then posted in the “big city” of nearby Jalgaon, a district place).

As to this song, as soon as I listened to it, I was “into it.” I remained so for all the days of that vacation at Mama’s place. Yes, it was a 45 RPM record, and the permission to put the record on the player, and even to play it entirely on my own, was hard-won after a determined and tedious effort to show all the elders that I was able to put the pin onto the record very carefully. And everyone in the house was an elder to me: my siblings, cousins, uncle, his wife, not to mention my parents (who were the last ones to be satisfied). But once the recognition arrived, I used it to the hilt; I must have ended up playing this record at least 5 times every remaining day of the vacation back then.

As far as I am concerned, I am entirely positive that appreciation for a certain style or kind of music isn’t determined by your environment or the specific culture in which you grow up.

As far as songs like these are concerned, today I am able to discern that what I had immediately though indirectly grasped, even as a 6–7 year old child, was what I today would describe as a certain kind of “epistemological cleanliness.” There was a clear adherence to a certain definitive, delimited kind of specifics, whether in terms of tones or rhythm. Now, it sure did help that this tune was happy. But frankly, I am certain, I would’ve liked a “clean” song like this one—one with very definite “separations”/“delineations” in its phrases, in its parts—even if the song itself weren’t so directly evocative of such a frankly happy mood. Indian music, in contrast, tends to keep “continuity” for its own sake, even when it’s not called for, and the downside of that style is that it leads to a badly mixed-up “curry” of indefinitely stretched-out wailings, even noise, very proudly passing as “music.” (In evidence: pick up any traditional “royal palace”/“kothaa” music.) … Yes, of course, there is a symmetrical downside to the specific “separated” style carried by Western music too; the specific style of noise it can easily slip into is a disjointed kind of noise. (In evidence, I offer 90% of Western classical music, and 99.99% of Western popular “music.” As to which 90%, well, we have to meet in person, and listen to select pieces of music on the fly.)

Anyway, coming back to the present song, today I searched for the original soundtrack of “Come September”, and got, say, this one [^]. However, I am not too sure that the version I heard back then was this one. Chances are much brighter that the version I first listened to was Billy Vaughn’s, as in here [^].

… A wonderful tune, and, as an added bonus, it never does fail to take me back to my “salad days.” …

… Oh yes, as another fond memory: that vacation also was the very first time that I came to wear a T-shirt; my Mama had gifted it to me in that vacation. The actual choice to buy a T-shirt rather than a shirt (+shorts, of course) was that of my cousin sister (who unfortunately is no more). But I distinctly remember her being surprised to learn that I was in no mood to have a T-shirt when I didn’t know what the word meant… I also distinctly remember her assuring me, in sweet tones, that a T-shirt would look good on me! … You see, in rural India, at least back then, T-shirts weren’t heard of; for years later on, maybe until I went to Nasik in my 10th standard, it would remain the only T-shirt I had ever worn. … But, anyway, as far as T-shirts go… well, as you know, I was into software engineering, and so….

Bye [really] for now and take care…]