Do you really need a QC in order to have a really unpredictable stream of bits?

0. Preliminaries:

This post refers to Roger Schlafly’s recent post [^], in which he comments on Prof. Scott Aaronson’s post touching on the issue of the randomness generated by a QC vis-a-vis that obtained using the usual classical hardware [^], and in particular on Aaronson’s remark:

“the whole point of my scheme is to prove to a faraway skeptic—one who doesn’t trust your hardware—that the bits you generated are really random.”

I do think (based on my new approach to QM [(PDF) ^]) that building a scalable QC is an impossible task.

I wonder if they (the QC enthusiasts) haven’t already begun realizing the hopelessness of their endeavours, and thus haven’t slowly begun preparing for a graceful exit, say via the QC-as-an-RNG route.

While Aaronson’s remarks also saliently involve the element of the “faraway” skeptic, I will mostly ignore that consideration here in this post. I mean to say, initially, I will ignore the scenario in which you have to transmit random bits over a network, and still have to assure the skeptic that what he was getting at the receiving end was something coming “straight from the oven”—something which was not tampered with, in any way, during the transit. The skeptic would have to be specially assured in this scenario, because a network is inherently susceptible to a third-party attack wherein the attacker seeks to exploit the infrastructure of the random keys distribution to his advantage, via injection of systematic bits (i.e. bits of his choice) that only appear random to the intended receiver. A system that quantum-mechanically entangles the two devices at the two ends of the distribution channel, does logically seem to have a very definite advantage over a combination of ordinary RNGs and classical hardware for the network. However, I will not address this part here—not for the most part, and not initially, anyway.

Instead, for most of this post, I will focus on just one basic question:

Can anyone be justified in thinking that an RNG that operates at the QM-level might have even the slightest possible advantage, at least logically speaking, over another RNG that operates at the CM-level? Note, the QM-level RNG need not always be a general-purpose and scalable QC; it can be any simple or special-purpose device that exploits, and at its core operates at, the specifically QM level.

Even though I am a 100% skeptic of the scalable QC, I also think that the answer on this latter count is: yes, perhaps you could argue that way. But then, I think, your argument would still be pointless.

Let me explain, following my approach, why I say so.


2. RNGs as based on nonlinearities. Nonlinearities in QM vs. those in CM:

2.1. Context: QM involves IAD:

QM does involve either IAD (instantaneous action at a distance), or very, very large (decidedly super-relativistic) speeds for propagation of local changes over all distant regions of space.

From the experimental evidence we have, it seems that there have to be very, very high speeds of propagation, for even the smallest changes that can take place in the \Psi and V fields. The Schrodinger equation assumes infinitely large speeds for them. Such obviously cannot be the case—it is best to take the infinite speeds as just an abstraction (as a mathematical approximation) to the reality of very, very high actual speeds. However, the experimental evidence also indicates that even if there has to be some or the other upper bound to the speeds v, with v \gg c, the speeds still have to be so high as to seemingly approach infinity, if the Schrodinger formalism is to be employed. And, of course, as you know, Schrodinger’s formalism is pretty well understood, validated, and appreciated [^]. (For more on the speed limits and IAD in general, see the addendum at the end of this post.)
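As a side remark on the “infinitely large speeds” point: the non-relativistic Schrodinger equation,

    i \hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m} \nabla^2 \Psi + V \Psi,

is first-order in time and second-order in space, i.e., mathematically it is a parabolic equation, of the same family as the heat/diffusion equation. For such equations, a localized change made to the data at one instant formally affects the solution everywhere at the very next instant, which is the precise sense in which the formalism builds in an infinite propagation speed for disturbances in \Psi.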

I don’t know the relativity theory or the relativistic QM. But I guess that since the electric fields of massive QM particles are non-uniform (they are in fact singular), their interactions with \Psi must be such that the system has to suddenly snap out of some one configuration and, in the same process, snap into one of the many alternative possible configurations. Since there is a huge (astronomically large) number of particles in the universe, the number of alternative configurations would be {astronomically large}^{very large}—after all, the particles’ positions and motions are continuous. Thus, we couldn’t hope to calculate the propagation speeds for the changes in the local features of a configuration in terms of all those irreversible snap-out and snap-in events taken individually. We must take them in an ensemble sense. Further, the electric charges are massive, identical, and produce singular and continuous fields. Overall, it is the ensemble-level effects of these individual quantum mechanical snap-out and snap-in events whose end-result would be: the speed-of-light limitation of the special relativity (SR). After all, SR holds on the gross scale; it is a theory from classical electrodynamics. The electric and magnetic fields of classical EM can be seen as being produced by the quantum \Psi field (including the spinor function) of large ensembles of particles in the limit that the number of their configurations approaches infinity, and the classical EM waves, i.e. light, are nothing but second-order effects in the classical EM fields.

I don’t know. I was just loud-thinking. But it’s certainly possible to have IAD for the changes in \Psi and V, and thus to have instantaneous energy transfers via photons across two distant atoms in a QM-level description, and still end up with a finite limit for the speed of light (c) for large collections of atoms.

OK. Enough of setting up the context.

2.2: The domain of dependence for the nonlinearity in QM vs. that in CM:

If QM is not linear, i.e., if there is a nonlinearity in the \Psi field (as I have proposed), then to evaluate the merits of the QM-level and CM-level RNGs, we have to compare the two nonlinearities: those in the QM vs. those in the CM.

The classical RNGs are always based on the nonlinearities in CM. For example:

  • the nonlinearities in the atmospheric electricity (the “static”) [^], or
  • the fluid-dynamical nonlinearities (as shown in the lottery-draw machines [^], or the lava lamps [^]), or
  • some or the other nonlinear electronic circuit (available for less than $10 in hardware stores),
  • etc.

All of them are based on two factors: (i) a large number of components (in the core system generating the random signal, not necessarily in the part that probes its state), and (ii) nonlinear interactions among all such components.
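Just to make these two factors concrete, here is a deliberately crude sketch (the logistic map, the tiny noise term standing in for the many interacting components, and all the constants are choices made purely for illustration; they are not taken from any actual RNG product): a nonlinear map is iterated, jittered a little, thresholded into raw bits, and the raw stream is then run through the classic von Neumann debiasing step.

    import random

    def raw_bits(n, x=0.123456789, r=3.99):
        """Harvest raw bits from a noisy logistic map x -> r*x*(1 - x).

        The map supplies the nonlinearity; the tiny random kick stands in
        for the myriad interacting components of a real physical device.
        """
        bits = []
        for _ in range(n):
            x = r * x * (1.0 - x)
            x = min(max(x + random.uniform(-1e-9, 1e-9), 1e-12), 1.0 - 1e-12)
            bits.append(1 if x > 0.5 else 0)
        return bits

    def von_neumann_debias(bits):
        """Map bit-pairs 01 -> 0 and 10 -> 1; drop 00 and 11 (removes simple bias)."""
        return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

    stream = von_neumann_debias(raw_bits(100000))
    print(len(stream), sum(stream) / len(stream))  # stream length, fraction of 1s

Of course, such a toy would not pass a serious statistical test suite; the point is only that the unpredictability is being mined from the nonlinearity plus the many small, uncontrolled influences, and there is nothing quantum about any of it.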

The number of variables in the QM description is anyway always larger: a single classical atom is seen as composed from tens, even hundreds of quantum mechanical charges. Further, due to the IAD present in the QM theory, the domain of dependence (DoD) [^] in QM remains, at all times, literally the entire universe—all charges are included in it, and the entire \Psi field too.

On the other hand, the DoD in the CM description remains limited to only that finite region which is contained in the relevant past light-cone. Even when a classical system is nonlinear, and thus gets crazy very rapidly with even small increases in the number of degrees of freedom (DOFs), its DoD still remains finite and rather very small at all times. In contrast, the DoD of QM is the whole universe—all physical objects in it.

2.3 Implication for the RNGs:

Based on the above-mentioned argument, which in my limited reading and knowledge Aaronson has never presented (and neither has anyone else, basically because they all continue to believe in von Neumann’s characterization of QM as a linear theory), an RNG operating at the QM level does seem to have, “logically” speaking, an upper hand over an RNG operating at the CM level.

Then why do I still say that arguing for the superiority of a QM-level RNG is still pointless?


3. The MVLSN principle, and its epistemological basis:

If you apply a proper epistemology (and I have in my mind here the one by Ayn Rand), then the supposed “logical” difference between the two descriptions becomes completely superfluous. That’s because the quantities whose differences are being examined, themselves begin to lose any epistemological standing.

The reason for that, in turn, is what I call the MVLSN principle: the law of the Meaninglessness of the Very Large or very Small Numbers (or scales).

What the MVLSN principle says is that if your argument crucially depends on the use of very large (or very small) quantities and relationships between them, i.e., if the fulcrum of your argument rests on some great extrapolations alone, then it begins to lose all cognitive merit. “Very large” and “very small” are contextual terms here, to be used judiciously.

Roughly speaking, if this principle is applied to our current situation, what it says is that when in your thought you cross a certain limit of DOFs and hence a certain limit of complexity (which anyway is sufficiently large as to be much, much beyond the limit of any and every available and even conceivable means of predictability), then any differences in the relative complexities (here, of the QM-level RNGs vs. the CM-level RNGs) ought to be regarded as having no bearing at all on knowledge, and therefore, as having no relevance in any practical issue.

Both QM-level and CM-level RNGs would be far too complex for you to devise any algorithm or a machine that might be able to predict the sequence of the bits coming out of either. Really. The complexity levels already grow so huge, even with just the classical systems, that it’s pointless trying to predict the bits, or trying to compare the complexity of the classical RNGs with that of the quantum RNGs.

A clarification: I am not saying that there won’t be any systematic errors or patterns in the otherwise random bits that a CM-based RNG produces. Sure enough, due statistical testing and filtering is absolutely necessary. For instance, what the radio-stations or cell-phone towers transmit are, from the viewpoint of an RNG based on radio noise, systematic disturbances that do affect its randomness. See random.org [^] for further details. I am certainly not denying this part.

All that I am saying is that the sheer number of DOFs involved is itself so huge that the very randomness of the bits produced even by a classical RNG is beyond every reasonable doubt.

BTW, in this context, do see my previous couple of posts dealing with probability, indeterminism, randomness, and the all-important system vs. the law distinction here [^], and here [^].


4. To conclude my main argument here…:

In short, even “purely” classical RNGs can be way, way too complex for any one to be concerned in any way about their predictability. They are unpredictable. You don’t have to go chase the QM level just in order to ensure unpredictability.

Just take one of those WinTV lottery draw machines [^], start the air flow, get your prediction algorithm running on your computer (whether classical or quantum), and try to predict the next ball that would come out once the switch is pressed. Let me be generous. Assume that the switch gets pressed at exactly predictable intervals.

Go ahead, try it.


5. The Height of the Tallest Possible Man (HTPM):

If you still insist on the supposedly “logical” superiority of the QM-level RNGs, make sure to understand the MVLSN principle well.

The issue here is somewhat like asking this question:

What could possibly be the upper limit to the height of man, taken as a species? Not any other species (like the legendary “yeti”), but human beings, specifically. How tall can any man at all get? Where do you draw the line?

People could perhaps go on arguing, with at least some fig-leaf of epistemological legitimacy, over numbers like 12 feet vs. 14 feet as the true limit. (The world record mentioned in the Guinness Book is slightly under 9 feet [^]. The ceiling in a typical room is about 10 feet high.) Why, they could even perhaps go like: “Ummmm… may be 12 feet is more likely a limit than 24 feet? whaddaya say?”

Being very generous of spirit, I might still describe this as a borderline case of madness. The reason is, in the act of undertaking even just a probabilistic comparison like that, the speaker has already agreed to assign non-zero probabilities to all the numbers belonging to that range. Realize, no one would invoke the ideas of likelihood or probability theory if he thought that the probability for an event, however calculated, was always going to be zero. He would exclude certain kinds of ranges from his analysis to begin with—even for a stochastic analysis. … So, madness it is, even if, in my most generous mood, I might regard it as a borderline madness.

But if you assume that a living being has all the other characteristics of only a human being (including being naturally born to human parents), and if you still say that in between the two statements: (A) a man could perhaps grow to be 100 feet tall, and (B) a man could perhaps grow to be 200 feet tall, it is the statement (A) which is relatively and logically more reasonable, then what the principle (MVLSN) says is this: “you basically have lost all your epistemological bearing.”

That’s nothing but a complex (actually, philosophic) way of saying that you have gone mad, full-stop.

The law of the meaninglessness of the very large or very small numbers does have a certain basis in epistemology. It goes something like this:

Abstractions are abstractions from the actually perceived concretes. Hence, even while making just conceptual projections, the range over which a given abstraction (or concept) can remain relevant is determined by the actual ranges in the direct experience from which they were derived (and the nature, scope and purpose of that particular abstraction, the method of reaching it, and its use in applications including projections). Abstractions cannot be used in disregard of the ranges of the measurements over which they were formed.

I think that after having seen the sort of crazy things that even the simplest nonlinear systems with the fewest variables and parameters can do (for instance, which weather agency in the world can make predictions, to the accuracy demanded by newspapers, beyond 5 days? who can predict which way the first vortex is going to be shed even in a single-cylinder experiment?), it’s very easy to conclude that the CM-level vs. QM-level RNG distinction is comparable to the argument about the greater reasonableness of a 100 feet tall man vs. that of a 200 feet tall man. It’s meaningless. And, madness.
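If you want to see this for yourself, here is a minimal numerical experiment (the Lorenz system with the usual textbook parameters and a crude Euler integrator; purely an illustration of sensitive dependence, not a model of any weather agency’s actual code): start two copies a billionth apart and watch them part ways.

    def lorenz_step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz system (crude, but enough here)."""
        x, y, z = s
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0 + 1e-9, 1.0, 1.0)  # initial states differing in x by one part in a billion
    for i in range(1, 40001):
        a, b = lorenz_step(a), lorenz_step(b)
        if i % 5000 == 0:
            gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(f"t = {i * 0.001:5.1f}   separation = {gap:.3e}")

The separation grows roughly exponentially until it saturates at the size of the attractor itself, after which the two “forecasts” have nothing to do with each other.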


6. Aaronson’s further points:

To be fair, much of the above write-up was not meant for Aaronson; he does readily grant the CM-level RNGs validity. What he says, immediately after the quote mentioned at the beginning of this post, is that if you don’t have the requirement of distributing bits over a network,

…then generating random bits is obviously trivial with existing technology.

However, since Aaronson believes that QM is a linear theory, he does not even consider making a comparison of the nonlinearities involved in QM and CM.

I thought that it was important to point out that even the standard (i.e., Schrodinger’s equation-based) QM is nonlinear, and further, that even if this fact leads to some glaring differences between the two technologies (based on the IAD considerations), such differences still do not lead to any advantages whatsoever for the QM-level RNG, as far as the task of generating random bits is concerned.

As to the task of transmitting them over a network, Aaronson then notes:

If you do have the requirement, on the other hand, then you’ll have to do something interesting—and as far as I know, as long as it’s rooted in physics, it will either involve Bell inequality violation or quantum computation.

Sure, it will have to involve QM. But then, why does it have to be only a QC? Why not have just special-purpose devices that are quantum mechanically entangled over wires / EM-waves?

And finally, let me come to yet another issue: why would you have to have that requirement at all, viz., of having to transmit the keys over a network rather than using any other means?

Why does something as messy as a network have to get involved for a task that is as critical and delicate as distribution of some super-specially important keys? If 99.9999% of your keys-distribution requirements can be met using “trivial” (read: classical) technologies, and if you can also generate random keys using equipment that costs less than $100 at most, then why do you have to spend billions of dollars in just distributing them to distant locations of your own offices / installations—especially if the need for changing the keys is going to be only on an infrequent basis? … And if bribing or murdering a guy who physically carries a sealed box containing a thumb-drive having secret keys is possible, then what makes the guys manning the entangled stations suddenly go all morally upright and also immortal?

From what I have read, Aaronson does consider such questions even if he seems to do so rather infrequently. The QC enthusiasts, OTOH, never do.

As I said, this QC-as-an-RNG thing does show some marks of trying to figure out a respectable exit out of the scalable QC euphoria—now that they have already managed to wrest millions and billions in research funding.

My two cents.


Addendum on speed limits and IAD:

Speed limits are needed out of the principle that infinity is a mathematical concept and cannot metaphysically exist. However, the nature of the ontology involved in QM compels us to rethink many issues right from the beginning. In particular, we need to carefully distinguish between all the following situations:

  1. The transportation of a massive classical object (a distinguishable, i.e. finite-sized, bounded piece of physical matter) from one place to another, in literally no time.
  2. The transmission of the momentum or changes in it (like forces or changes in them) being carried by one object, to a distant object not in direct physical contact, in literally no time.
  3. Two mutually compensating changes in the local values of some physical property (like momentum or energy) suffered at two distant points by the same object, a circumstance which may be viewed from some higher-level or abstract perspective as a transmission of the property in question over space but in no time. In reality, it’s just one process of change affecting only one object, but it occurs in a special way: in a mutually compensating manner at two different places at the same time.

Only the first really qualifies to be called spooky. The second is curious but not necessarily spooky—not if you begin to regard two planets as just two regions of the same background object, or alternatively, as two clearly different objects which are being pulled in various ways at the same time and in mutually compensating ways via some invisible strings or fields that shorten or extend appropriately. The third one is not spooky at all—the object that effects the necessary compensations is not even a third object (like a field). Both the interacting “objects” and the “intervening medium” are nothing but different parts of one and the same object.

What happens in QM is the third possibility. I have been describing such changes as occurring with an IAD (instantaneous action at a distance), but now I am not too sure if such a usage is really correct or not. I now think that it is not. The term IAD should be reserved only for the second category—it’s an action that gets transported there. As to the first category, a new term should be coined: ITD (instantaneous transportation to distance). As to the third category, the new term could be IMCAD (instantaneous and mutually compensating actions at a distance). However, this all is an afterthought. So, in this post, I only have ended up using the term IAD even for the third category.

Some day I will think more deeply about it and straighten out the terminology, maybe invent some new terms to describe all the three situations with adequate directness, and then choose the best… Until then, please excuse me and interpret what I am saying in reference to context. Also, feel free to suggest good alternative terms. Also, let me know if there are any further distinctions to be made, i.e., if the above classification into three categories is not adequate or refined enough. Thanks in advance.


A song I like:

[A wonderful “koLi-geet,” i.e., a fisherman’s song. Written by a poet who hailed not from the coastal “konkaN” region but from the interior “desh.” But it sounds so authentically coastal… Listening to it today instantly transported me back to my high-school days.]

(Marathi) “suTalaa vaadaLi vaaraa…”
Singing, Music and Lyrics: Shaahir Amar Sheikh

 


History: Originally published on 2019.07.04 22:53 IST. Extended and streamlined considerably on 2019.07.05 11:04 IST. The songs section added: 2019.07.05 17:13 IST. Further streamlined, and also further added a new section (no. 6.) on 2019.07.05 22:37 IST. … Am giving up on this post now. It grew from about 650 words (in a draft for a comment at Schlafly’s blog) to 3080 words as of now. Time to move on.

Still made further additions and streamlining for a total of ~3500 words, on 2019.07.06 16:24 IST.


Determinism, Indeterminism, Probability, and the nature of the laws of physics—a second take…

After I wrote the last post [^], several points struck me. Some of the points that were mostly implicit needed to be addressed systematically. So, I began writing a small document containing these after-thoughts, focusing more on the structural side of the argument.

However, I don’t find time to convert these points + statements into a proper write-up. At the same time, I want to get done with this topic, at least for now, so that I can better focus on some other tasks related to data science. So, let me share the write-up in whatever form it is in, currently. Sorry for its uneven tone and all (compared to even my other writing, that is!)


Causality as a concept is very poorly understood by present-day physicists. They typically understand only one sense of the term: evolution in time. But causality is a far broader concept. Here I agree with Ayn Rand / Leonard Peikoff (OPAR). See the Ayn Rand Lexicon entry, here [^]. (However, I wrote the points below without re-reading it, and instead, relying on whatever understanding I have already come to develop starting from my studies of the same material.)

Physical universe consists of objects. Objects have identity. Identity is the sum total of all characteristics, attributes, properties, etc., of an object. Objects act in accordance with their identity; they cannot act otherwise. Interactions are not primary; they do not come into being without there being objects that undergo the interactions. Objects do not change their respective identities when they take actions—not even during interactions with other objects. The law of causality is a higher-level view taken of this fact.

In the cause-effect relationship, the cause refers to the nature (identity) of an object, and the effect refers to an action that the object takes (or undergoes). Both refer to one and the same object. TBD: Trace the example of one moving billiard ball undergoing a perfectly elastic collision with another billiard ball. Bring out how the interaction—here, the pair of the contact forces—is a name for each ball undergoing an action in accordance with its nature. An interaction is a pair of actions.
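To partially fill in that TBD right here, take the simplest case: one dimension, equal masses m, incoming velocities u_1 and u_2, outgoing velocities v_1 and v_2. Each ball acts according to its own identity (its mass and its state of motion), and the conservation of momentum and of kinetic energy then pin down the pair of actions:

    m u_1 + m u_2 = m v_1 + m v_2, \qquad \frac{1}{2} m u_1^2 + \frac{1}{2} m u_2^2 = \frac{1}{2} m v_1^2 + \frac{1}{2} m v_2^2,

whose non-trivial solution is v_1 = u_2 and v_2 = u_1: the two balls simply exchange their velocities. The “interaction,” i.e. the pair of contact forces, names nothing over and above this pair of mutually constrained actions.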


A physical law as a mapping (e.g., a function, or even a functional) from inputs to outputs.

The quantitative laws of physics often use the real number system, i.e., quantification with infinite precision. An infinite precision is a mathematical concept, not physical. (Expect physicists to eternally keep on confusing between the two kinds of concepts.)

Application of a physical law traces the same conceptual linkages as are involved in the formulation of law, but in the reverse direction.

In both formulation of a physical law and in its application, there is always some regime of applicability which is at least implicitly understood for both inputs and outputs. A pertinent idea here is: range of variations. A further idea is the response of the output to small variations in the input.


Example: Prediction by software whether a cricket ball would have hit the stumps or not, in an LBW situation.

The input position being used by the software in a certain LBW decision could be off from reality by millimeters, or at least, by a fraction of a millimeter. Still, the law (the mapping) is such that it produces predictions that are within small limits, so that it can be relied on.

Two input values, each theoretically infinitely precise, but differing by a small magnitude from each other, may be taken to define an interval or zone of input variations. As to the zone of the corresponding output, it may be thought of as an oval produced in the plane of the stumps, using the deterministic method used in making predictions.

The nature of the law governing the motion of the ball (even after factoring in aspects like effects of interaction with air and turbulence, etc.) itself is such that the size of the O/P zone remains small enough. (It does not grow exponentially.) Hence, we can use the software confidently.

That is to say, the software can be confidently used for predicting—i.e., determining—the zone of possible landing of the ball in the plane of the stumps.
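A toy version of this zone-of-variations idea (mine, and emphatically not the actual ball-tracking software: drag-free projectile motion, made-up numbers, and a roughly millimetre-scale input zone): push each sampled input through the same deterministic mapping and measure the size of the predicted zone in the plane of the stumps.

    import random

    def height_at_stumps(y0, vy, vx, distance=4.0, g=9.81):
        """The deterministic mapping: release height y0 (m), vertical speed vy and
        horizontal speed vx (m/s) -> predicted height (m) at the plane of the stumps."""
        t = distance / vx
        return y0 + vy * t - 0.5 * g * t * t

    nominal = height_at_stumps(0.5, 1.0, 30.0)
    samples = [height_at_stumps(0.5 + random.uniform(-1e-3, 1e-3),   # ~1 mm zone in height
                                1.0 + random.uniform(-1e-3, 1e-3),   # small zone in vy
                                30.0 + random.uniform(-1e-3, 1e-3))  # small zone in vx
               for _ in range(10000)]
    print(f"nominal: {nominal:.4f} m,  output zone: {max(samples) - min(samples):.2e} m")

The output zone comes out at a couple of millimetres, i.e., of the same order as the input zone, which is exactly the non-exponential behaviour that makes such predictions usable.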


Overall, here are three elements that must be noted: (i) Each of the input positions lying at the extreme ends of the input zone of variations itself does have an infinite precision. (ii) Further, the mapping (the law) has theoretically infinite precision. (iii) Each of the outputs lying at extreme ends of the output zone also itself has theoretically infinite precision.

Existence of such infinite precision is a given. But it is not at all the relevant issue.

What matters in applications is something more than these three. It is the fact that applications always involve zones of variations in the inputs and outputs.

Such zones are then used in error estimates. (Also for engineering control purposes, say as in automation or robotic applications.) But the fact that quantities being fed to the program as inputs themselves may be in error is not the crux of the issue. If you focus too much on errors, you will simply get into an infinite regress of error bounds for error bounds for error bounds…

Focus, instead, on the infinite precision of the three kinds mentioned above, and focus on the fact that, in addition to those infinitely precise quantities, the application procedure does involve having zones of possible variations in the input, and it also involves the problem of estimating how large the corresponding zone of variations in the output is—whether it is sufficiently small for the law and a particular application procedure or situation.

In physics, such details of application procedures are kept merely understood. They are hardly, if ever, mentioned and discussed explicitly. Physicists again show their poor epistemology. They discuss such things in terms not of the zones but of “error” bounds. This already inserts the wedge of dichotomy: infinitely precise laws vs. errors in applications. This dichotomy is entirely uncalled for. But, physicists simply aren’t that smart, that’s all.


“Indeterministic mapping,” for the above example (LBW decisions), would be the one in which the ball can be mapped as going anywhere over, and perhaps even beyond, the stadium.

Such a law and the application method (including the software) would be useless as an aid in the LBW decisions.

However, phenomenologically, the very dynamics of the cricket ball’s motion itself is simple enough that it leads to a causal law whose nature is such that for a small variation in the input conditions (a small input variations zone), the predicted zone of the O/P also is small enough. It is for this reason that we say that predictions are possible in this situation. That is to say, this is not an indeterministic situation or law.


Not all physical situations are exactly like the example of predicting the motion of the cricket ball. There are physical situations which show a certain common—and confusing—characteristic.

They involve interactions that are deterministic when occurring between two (or few) bodies. Thus, the laws governing a simple interaction between one or two bodies are deterministic—in the above sense of the term (i.e., in terms of infinite precision for mapping, and an existence of the zones of variations in the inputs and outputs).

But these physical situations also involve: (i) a nonlinear mapping, (ii) a sufficiently large number of interacting bodies, and further, (iii) coupling of all the interactions.

It is these physical situations which produce an overall system behaviour that can show an exponentially diverging output zone even for a small zone of input variations.

So, a small change in I/P is sufficient to produce a huge change in O/P.

However, note the confusing part. Even if the system behaviour for a large number of bodies does show an exponential increase in the output zone, the mapping itself is such that when it is applied to only one pair of bodies in isolation from all the others, the output zone does remain non-exponential.

It is this characteristic which tricks people into forming two camps that go on arguing eternally. One side says that it is deterministic (making reference to a single-pair interaction), the other side says it is indeterministic (making reference to a large number of interactions, based on the same law).

The fallacy arises out of confusing a characteristic of the application method or model (variations in input and output zones) with the precision of the law or the mapping.


Example: N-body problem.

Example: NS equations as capturing a continuum description (a nonlinear one) of a very large number of bodies.

Example: Several other physical laws entering the coupled description, apart from the NS equations, in the bubbles collapse problem.

Example: Quantum mechanics


The Law vs. the System distinction: What is indeterministic is not a law governing a simple interaction taken abstractly (in which context the law was formed), but the behaviour of the system. A law (a governing equation) can be deterministic, but still, the system behavior can become indeterministic.
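By way of one self-contained illustration of this distinction (a toy sketch; the double pendulum has only two coupled bodies rather than a large ensemble, but it is about the smallest mechanical system that has both the nonlinearity and the coupling): the governing law, Newtonian mechanics, is perfectly deterministic, and yet the coupled system blows up an input zone of one part in 10^8, while a single pendulum obeying the very same kind of law keeps the corresponding output zone tiny.

    from math import sin, cos, sqrt

    g, m, l = 9.81, 1.0, 1.0   # equal masses and rod lengths (illustrative values)

    def accel_double(th1, th2, w1, w2):
        """Angular accelerations of the double pendulum, obtained by writing its two
        Euler-Lagrange equations as a 2x2 linear system and solving by Cramer's rule."""
        d = th1 - th2
        a11, a12 = 2.0 * m * l, m * l * cos(d)
        a21, a22 = m * l * cos(d), m * l
        b1 = -m * l * w2 * w2 * sin(d) - 2.0 * m * g * sin(th1)
        b2 = m * l * w1 * w1 * sin(d) - m * g * sin(th2)
        det = a11 * a22 - a12 * a21
        return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

    def d_double(s):
        th1, th2, w1, w2 = s
        a1, a2 = accel_double(th1, th2, w1, w2)
        return [w1, w2, a1, a2]

    def d_single(s):
        th, w = s
        return [w, -(g / l) * sin(th)]   # the simple pendulum: same kind of law, one body

    def rk4(state, dt, deriv):
        """One classical Runge-Kutta step."""
        k1 = deriv(state)
        k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
        k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
        k4 = deriv([s + dt * k for s, k in zip(state, k3)])
        return [s + dt * (p + 2 * q + 2 * r + u) / 6.0
                for s, p, q, r, u in zip(state, k1, k2, k3, k4)]

    dt, eps = 0.001, 1e-8
    dbl_a, dbl_b = [2.6, 2.6, 0.0, 0.0], [2.6 + eps, 2.6, 0.0, 0.0]
    sgl_a, sgl_b = [2.6, 0.0], [2.6 + eps, 0.0]
    for i in range(1, 20001):
        dbl_a, dbl_b = rk4(dbl_a, dt, d_double), rk4(dbl_b, dt, d_double)
        sgl_a, sgl_b = rk4(sgl_a, dt, d_single), rk4(sgl_b, dt, d_single)
        if i % 4000 == 0:
            gd = sqrt(sum((p - q) ** 2 for p, q in zip(dbl_a, dbl_b)))
            gs = sqrt(sum((p - q) ** 2 for p, q in zip(sgl_a, sgl_b)))
            print(f"t = {i * dt:4.1f} s   double gap: {gd:.2e}   single gap: {gs:.2e}")

The law did not change between the two runs; only the system did.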


Even indeterministic models or system designs, when they are described using a different kind of maths (one which is formulated at a higher level of abstraction, and which relies on the limiting values of relative frequencies, i.e., probabilities), still do show causality.

Yes, probability is a notion which itself is based on causality—after all, it uses limiting values for the relative frequencies. The ability to use the limiting processes squarely rests on there being some definite features which, by being definite, do help reveal the existence of the identity. If such features (enduring, causal) were not to be part of the identity of the objects that are abstractly seen to act probabilistically, then no application of a limiting process would be possible, and so not even a definition of probability or randomness would be possible.
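The “limiting values of relative frequencies” part can be made concrete with a throwaway experiment (the bias of 0.3 is an arbitrary choice made only for the demo): the individual outcomes are unpredictable, but the relative frequency settles down to a definite value precisely because the coin has a definite, enduring identity.

    import random

    p_heads = 0.3  # the definite (causal) identity of this particular coin
    heads = 0
    for n in range(1, 1_000_001):
        heads += random.random() < p_heads
        if n in (10, 100, 1000, 10_000, 100_000, 1_000_000):
            print(f"n = {n:>9,d}   relative frequency = {heads / n:.4f}")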

The notion of probability is more fundamental than that of randomness. Randomness is an abstract notion that idealizes the notion of absence of every form of order. … You can use the axioms of probability even when sequences are known to be not random, can’t you? Also, hierarchically, order comes before randomness. Randomness is defined as the absence of (all applicable forms of) orderliness; orderliness is not defined as the absence of randomness—it is defined via the “some but any” principle, in reference to various more concrete instances that show some or the other definable form of order.

But expect not just physicists but also mathematicians, computer scientists, and philosophers, to eternally keep on confusing the issues involved here, too. They all are dumb.


Summary:

Let me now mention a few important take-aways (though some new points not discussed above also crept in, sorry!):

  • Physical laws are always causal.
  • Physical laws often use the infinite precision of the real number system, and hence, they do show the mathematical character of infinite precision.
  • The solution paradigm used in physics requires specifying some input numbers and calculating the corresponding output numbers. If the physical law is based on the real number system, then all the numbers used, too, are supposed to have infinite precision.
  • Applications always involve a consideration of the zone of variations in the input conditions and the corresponding zone of variations in the output predictions. The relation between the sizes of the two zones is determined by the nature of the physical law itself. If for a small variation in the input zone the law predicts a sufficiently small output zone, people call the law itself deterministic.
  • Complex systems are not always composed from parts that are in themselves complex. Complex systems can be built by arranging essentially very simpler parts that are put together in complex configurations.
  • Each of the simpler parts may be governed by a deterministic law. However, when the input-output zones are considered for the complex system taken as a whole, the system behaviour may show an exponential increase in the size of the output zone. In such a case, the system must be described as indeterministic.
  • Indeterministic systems still are based on causal laws. Hence, with appropriate methods and abstractions (including mathematical ones), they can be made to reveal the underlying causality. One useful theory is that of probability. The theory turns the supposed disadvantage (a large number of interacting bodies) on its head, and uses limiting values of relative frequencies, i.e., probability. The probability theory itself is based on causality, and so are indeterministic systems.
  • Systems may be deterministic or indeterministic, and in the latter case, they may be described using the maths of probability theory. Physical laws are always causal. However, if they have to be described using the terms of determinism or indeterminism, then we will have to say that they are always deterministic. After all, if the physical laws showed exponentially large output zone even when simpler systems were considered, they could not be formulated or regarded as laws.

In conclusion: Physical laws are always causal. They may also always be regarded as being deterministic. However, if systems are complex, then even if the laws governing their simpler parts were all deterministic, the system behavior itself may turn out to be indeterministic. Some indeterministic systems can be well described using the theory of probability. The theory of probability itself is based on the idea of causality, albeit with measures defined over a large number of instances, thereby exploiting the fact that there are far too many objects interacting in a complex manner.


A song I like:

(Hindi) “ho re ghungaroo kaa bole…”
Singer: Lata Mangeshkar
Music: R. D. Burman
Lyrics: Anand Bakshi

 

 

A list of books for understanding the non-relativistic QM

TL;DR: NFY (Not for you).


In this post, I will list those books which have been actually helpful to me during my self-studies of QM.

But before coming to the list, let me first note down a few points which would be important for engineers who wish to study QM on their own. After all, my blog is regularly visited by engineers too. That’s what the data about the visit patterns to various posts says.

Others (e.g. physicists) may perhaps skip over the note in the next section, and instead jump directly to the list itself. However, even though the note for engineers is long, physicists should perhaps go through it too. If they did, they sure would come to know a bit more about the kind of background from which the engineers come.


I. A note for engineers who wish to study QM on their own:

The point is this: QM is vast, even if its postulates are just a few. So, it takes a prolonged, sustained effort to learn it.

For the same reason (of vastness), learning QM also involves your having to side-by-side learn an entirely new approach to learning itself. (If you have been a good student of engineering, chances are pretty good that you already have some first-hand idea about this meta-learning thing. But the point is, if you wish to understand QM, you have to put it to use once again afresh!)

In terms of vastness, QM is, in some sense, comparable to this cluster of subjects spanning engineering and physics: engineering thermodynamics, statistical mechanics, kinetics, fluid mechanics, and heat- and mass-transfer.

I.1 Thermodynamics as a science that is hard to get right:

The four laws of thermodynamics (including the zeroth and the third) are easy enough to grasp—I mean, in the simpler settings. But when it comes to this subject (as also for the Newtonian mechanics, i.e., from the particle to the continuum mechanics), God lies not in the postulates but in their applications.

The statement of the first law of thermodynamics remains the same simple one. But complexity begins to creep in as soon as you begin to dig just a little bit deeper with it. Entire categories of new considerations enter the picture, and the meaning of the same postulates gets both enriched and deepened with them. For instance, consider the distinction of the open vs. the closed vs. the isolated systems, and the corresponding changes that have to be made even to the mathematical statements of the law. That’s just for the starters. The complexity keeps increasing: studies of different processes like adiabatic vs. isochoric vs. polytropic vs. isentropic etc., and understanding the nature of these idealizations and their relevance in diverse practical applications such as: steam power (important even today, specifically, in the nuclear power plants), IC engines, jet turbines, refrigeration and air-conditioning, furnaces, boilers, process equipment, etc.; phase transitions, material properties and their variations; empirical charts….

Then there is another point. To really understand thermodynamics well, you have to learn a lot of other subjects too. You have to go further and study some different but complementary sciences like heat and mass transfer, to begin with. And to do that well, you need to study fluid dynamics first. Kinetics is practically important too; think of process engineering and cost of energy. Ideas from statistical mechanics are important from the viewpoint of developing a fundamental understanding. And then, you have to augment all this study with all the empirical studies of the irreversible processes (think: the boiling heat transfer process). It’s only when you study such an entire gamut of topics and subjects that you can truly come to say that you now have some realistic understanding of the subject matter that is thermodynamics.

Developing understanding of the aforementioned vast cluster of subjects (of thermal sciences) is difficult; it requires a sustained effort spanning over years. Mistakes are not only very easily possible; in engineering schools, they are routine. Let me illustrate this point with just one example from thermodynamics.

Consider some point that is somewhat nutty to get right. For instance, consider the fact that no work is done during the free expansion of a gas. If you are such a genius that you could correctly get this point right on your very first reading, then hats off to you. Personally, I could not. Neither do I know of even a single engineer who could. We all had stumbled on some fine points like this.

You see, what happens here is that thermodynamics and statistical mechanics involve entirely different ways of thinking, but they both are being introduced almost at the same time during your UG studies. Therefore, it is easy enough to mix up some disparate metaphors coming from these two entirely different paradigms.

Coming to the specific example of the free expansion, initially, it is easy enough for you to think that since momentum is being carried by all those gas molecules escaping the chamber during the free expansion process, there must be a leakage of work associated with it. Further, since the molecules were already moving in a random manner, there must be an accompanying leakage of the heat too. Both turn out to be wrong ways of thinking about the process! Intuitions about thermodynamics develop only slowly. You think that you understood what the basic idea of a system and an environment is like, but the example of the free expansion serves to expose the holes in your understanding. And then, it’s not just thermo and stat mech. You have to learn how to separate both from kinetics (and they all, from the two other, closely related, thermal sciences: fluid mechanics, and heat and mass transfer).
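Just to spell out the bookkeeping that eventually settles the free-expansion puzzle (the standard first-law accounting, for an ideal gas expanding into an evacuated, rigid, insulated chamber):

    W = \int p_{ext} \, dV = 0 (the gas pushes against a vacuum, so p_{ext} = 0),
    Q = 0 (the walls are insulated),
    \Delta U = Q - W = 0, and since U = U(T) for an ideal gas, the temperature does not change.

No work and no heat, even though the molecules are very much moving and carrying momentum. It is the system-vs-surroundings bookkeeping, not the molecular picture, that decides the answer.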

But before you can learn to separate out the unique perspectives of these subject matters, you first have to learn their contents! But the way the university education happens, you also get exposed to them more or less simultaneously! (4 years is as nothing in a career that might span over 30 to 40 years.)

Since you are learning a lot many different paradigms at the same time, it is easy enough to naively transfer your fledgling understanding of one aspect of one paradigm (say, that of the particle or statistical mechanics) and naively insert it, in an invalid manner, into another paradigm which you are still just learning to use at roughly the same time (thermodynamics). This is what happens in the case of the free expansion of gases. Or, of throttling. Or, of the difference between the two… It is a rare student who can correctly answer all the questions on this topic, during his oral examination.

Now, here is the ultimate point: Postulates-wise, thermodynamics is independent of the rest of the subjects from the aforementioned cluster of subjects. So, in theory, you should be able to “get” thermodynamics—its postulates, in all their generality—even without ever having learnt these other subjects.

Yet, paradoxically enough, we find that complicated concepts and processes also become easier to understand when they are approached using many different conceptual pathways. A good example here would be the concept of entropy.

When you are a XII standard student (or even during your first couple of years in engineering), you are, more or less, just getting your feet wet with the idea of the differentials. As it so happens, before you run into the concept of entropy, virtually every physics concept you meet is a ratio of two differentials. For instance, the instantaneous velocity is the ratio of d(displacement) over d(time). But the definition of entropy involves a more creative way of using the calculus: it has a differential (and that too an inexact differential), but only in the numerator. The denominator is a “plain-vanilla” variable. You have already learnt the maths used in dealing with rates of change—i.e. the calculus. But that doesn’t mean that you have already developed the physical imagination which would let you handle this kind of a definition—one that involves a ratio of a differential quantity to an ordinary variable. … “Why should only one thing change even as the other thing remains steadfastly constant?” you may wonder. “And if it is anyway going to stay constant, then is it even significant? (Isn’t the derivative of a constant zero?) So, why not just throw the constant variable out of consideration?” You see, one major reason you can’t deal with the definition of entropy is simply because you can’t deal with the way its maths comes arranged. Understanding entropy in a purely thermodynamic—i.e. continuum—context can get confusing, to say the least. But then, just throw in a simple insight from Boltzmann’s theory, and suddenly, the bulb gets lit up!
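(For the record, the two statements being contrasted in the above paragraph are the thermodynamic definition

    dS = \frac{\delta Q_{rev}}{T},

i.e., an inexact differential in the numerator sitting over a plain-vanilla variable in the denominator, and Boltzmann’s statistical statement

    S = k_B \ln W,

where W counts the microstates compatible with the given macrostate. It is the second statement that lights up the bulb for the first.)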

So, paradoxically enough, even if multiple paradigms mean more work and even more possibilities of confusion, in some ways, having multiple approaches also does help.

When a subject is vast, and therefore involves multiple paradigms, people regularly fail to get certain complex ideas right. That happens even to very smart people. For instance, consider Maxwell’s demon. For such a long time, not many people could figure out how to deal with it correctly.

…All in all, it is only some time later, when you have already studied all these topics—thermodynamics, kinetics, statistical mechanics, fluid mechanics, heat and mass transfer—that finally things begin to fall in place (if they at all do, at any point of time!). But getting there involves hard effort that goes on for years: it involves learning all these topics individually, and then, also integrating them all together.

In other words, there is no short-cut to understanding thermodynamics. It seems easy enough to think that you’ve understood the 4 laws the first time you ran into them. But the huge gaps in your understanding begin to become apparent only when it comes to applying them to a wide variety of situations.

I.2 QM is vast, and requires multiple passes of studies:

Something similar happens also with QM. It too has relatively few postulates (3 to 6 in number, depending on which author you consult) but a vast scope of applicability. It is easy enough to develop a feeling that you have understood the postulates right. But, exactly as in the case of thermodynamics (or Newtonian mechanics), once again, God lies not in the postulates but rather in their applications. And in the case of QM, you have to hasten to add: God also lies in the very meaning of these postulates—not just their applications. QM carries a one-two punch.

Similar to the case of thermodynamics and the related cluster of subjects, it is not possible to “get” QM in the first go. If you think you did, chances are that you have a superhuman intelligence. Or, far, far more likely, the plain fact of the matter is that you simply didn’t get the subject matter right—not in its full generality. (Which is what typically happens to the CS guys who think that they have mastered QM, even if the only “QM” they ever learnt was that of two-state systems in a finite-dimensional Hilbert space, and without ever acquiring even an inkling of ideas like radiation-matter interactions, transition rates, or the average decoherence times.)

The only way out, the only way that works in properly studying QM is this: Begin studying QM at a simpler level, finish developing as much understanding about its entire scope as possible (as happens in the typical Modern Physics courses), and then come to studying the same set of topics once again in a next iteration, but now to a greater depth. And, you have to keep repeating this process some 4–5 times. Often times, you have to come back from iteration n+2 to n.

As someone remarked at some forum (at Physics StackExchange or Quora or so), to learn QM, you have to give it “multiple passes.” Only then can you succeed understanding it. The idea of multiple passes has several implications. Let me mention only two of them. Both are specific to QM (and not to thermodynamics).

First, you have to develop the art of being able to hold some not-fully-satisfactory islands of understanding, with all the accompanying ambiguities, for extended periods of time (which usually runs into years!). You have to learn how to give a second or a third pass even when some of the things right from the first pass are still nowhere near getting clarified. You have to learn a lot of maths on the fly too. However, if you ask me, that’s a relatively easier task. The really difficult part is that you have to know (or learn!) how to keep forging ahead, even if at the same time, you carry a big set of nagging doubts that no one seems to know (or even care) about. (To make the matters worse, professional physicists, mathematicians and philosophers proudly keep telling you that these doubts will remain just as they are for the rest of your life.) You have to learn how to shove these ambiguous and un-clarified matters to some place near the back of your mind, you have to learn how to ignore them for a while, and still find the mental energy to once again begin right from the beginning, for your next pass: Planck and his cavity radiation, Einstein, blah blah blah blah blah!

Second, for the same reason (i.e. the necessity of multiple passes and the nature of QM), you also have to learn how to unlearn certain half-baked ideas and replace them later on with better ones. For a good example, go through Dan Styer’s paper on misconceptions about QM (listed near the end of this post).

Thus, two seemingly contradictory skills come into play: You have to learn how to hold ambiguities without letting them affect your studies. At the same time, you also have to learn how not to hold on to them forever, or how to unlearn them, when the time to do so becomes ripe.

Thus, learning QM does not involve just learning of new contents. You also have to learn the art of building a sufficiently “temporary” but very complex conceptual structure in your mind—a structure that, despite all its complexity, still is resilient. You have to learn the art of holding such a framework together over a period of years, even as some parts of it are still getting replaced in your subsequent passes.

And, you have to compensate for all the failings of your teachers too (who themselves were told, effectively, to “shut up and calculate!”) Properly learning QM is a demanding enterprise.


II. The list:

Now, with that long a preface, let me come to listing all the main books that I found especially helpful during my various passes. Please remember, I am still learning QM. I still don’t understand the second half of most any UG book on QM. This is a factual statement. I am not ashamed of it. It’s just that the first half itself managed to keep me so busy for so long that I could not come to studying, in an in-depth manner, the second half. (By the second half, I mean things like: the QM of molecules and binding, of their spectra, QM of solids, QM of complicated light-matter interactions, computational techniques like DFT, etc.) … OK. So, without any further ado, let me jot down the actual list. I will subdivide it into several sub-sections.


II.0. Junior-college (American high-school) level:

Obvious:

  • Resnick and Halliday.
  • Thomas and Finney. Also, Allan Jeffrey

II.1. Initial, college physics level:

  • “Modern physics” by Beiser, or equivalent
  • Optional but truly helpful: “Physical chemistry” by Atkins, or equivalent, i.e., only the parts relevant to QM. (I know engineers often tend to ignore the chemistry books, but they should not. In my experience, often times, chemistry books do a superior job of explaining physics. Physics, to paraphrase a witticism, is far too important to be left to the physicists!)

II.2. Preparatory material for some select topics:

  • “Physics of waves” by Howard Georgi. Excellence written all over, but precisely for the same reason, take care to avoid the temptation to get stuck in it!
  • Maths: No particular book, but a representative one would be Kreyszig, i.e., with Thomas and Finney or Allan Jeffrey still within easy reach.
    • There are a few things you have to relearn, if necessary. These include: the idea of the limits of sequences and series. (Yes, go through this simple a topic too, once again. I mean it!). Then, the limits of functions.
      Also try to relearn curve-tracing.
    • Unlearn (or throw away) all the accounts of complex numbers which remain stuck at the level of how \sqrt{-1} was stupefying, and how, when you have complex numbers, any arbitrary equation magically comes to have roots, etc. Unlearn all that talk. Instead, focus on the similarities of complex numbers to both the real numbers and vectors, and also their differences from each. Unlike what mathematicians love to tell you, complex numbers are not just another kind of numbers. They don’t represent just the next step in the logic of how the idea of numbers gets generalized as you go from integers to real numbers. The reason is this: Unlike the integers, rationals, irrationals and reals, complex numbers take birth as composite numbers (as an ordered pair of numbers), and they remain that way until the end of their life. Get that part right, and ignore all the mathematicians’ loose talk about it.
      Study complex numbers in a way that, eventually, you should find yourself being comfortable with the two equivalent ways of modeling physical phenomena: as a set of two coupled real-valued differential equations, and as a single but complex-valued differential equation.
    • Also try to become proficient with the two main expansions: the Taylor, and the Fourier.
    • Also develop a habit of quickly substituting truncated expansions (i.e., either a polynomial, or a sum of complex exponentials having just a few initial harmonics, not an entire infinity of them) into any “arbitrary” function as an ansatz, and seeing how the proposed theory pans out with these. The goal is to become comfortable, at the same time, with a habit of tracing conceptual pathways to the meaning of maths as well as with the computational techniques of FDM, FEM, and FFT.
    • The finite differences approximation: Also, learn the art of quickly substituting finite differences (\Delta's) in place of the differential quantities (d or \partial) in a differential equation, and seeing how it pans out. The idea here is not just computational modeling. The point is: Every differential equation has been derived in reference to an elemental volume which was then taken to a vanishingly small size. The variation of the quantities of interest across such an (infinitesimally small) volume is always represented using the Taylor series expansion.
      (That’s correct! It is true that the derivations using the variational approach don’t refer to the Taylor expansion. But they also don’t use infinitesimal volumes; they refer to finite or infinite domains. It is the variation in functions which is taken to the vanishingly small limit in their case. In any case, if your derivation has an infinitesimally small element, bingo, you are going to use the Taylor series.)
      Now, coming back to why you must develop the habit of putting a finite differences approximation in place of a differential equation. The thing is this: By doing so, you are unpacking the derivation; you are traversing the analysis in the reverse direction; you are, by the logic of the procedure, forced to look for the physical (or at least lower-level, less abstract) referents of a mathematical relation/idea/concept. (A bare-bones example of this habit appears right after this list.)
      While thus going back and forth between the finite differences and the differentials, also learn the art of tracing how the limiting process proceeds in each such case. This part is not at all as obvious as you might think. It took me years and years to figure out that there can be infinitesimals within infinitesimals. (In fact, I have blogged about it several years ago here. More recently, I wrote a PDF document about how many numbers there are in the real number system, which discusses the same idea, from a different angle. In any case, if you were not shocked by the fact that there can be an infinity of infinitesimals within any infinitesimal, either think sufficiently long about it—or quit studying foundations of QM.)
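Here is the sort of bare-bones exercise I have in mind (a purely illustrative sketch: the 1D particle-in-a-box, with units chosen so that \hbar^2/2m = 1, which makes the exact eigenvalues (n \pi / L)^2 with L = 1): replace the second derivative in the TISE by its central finite-difference formula, and the differential equation collapses into an ordinary matrix eigenvalue problem.

    import numpy as np

    # TISE for a particle in a box of length L = 1 (units with hbar^2 / 2m = 1):
    #   -psi''(x) = E psi(x),  psi(0) = psi(1) = 0.
    # Central differences: psi''(x_i) ~ (psi_{i-1} - 2*psi_i + psi_{i+1}) / dx^2.
    N = 500                       # number of interior grid points
    dx = 1.0 / (N + 1)
    main = np.full(N, 2.0) / dx**2
    off = np.full(N - 1, -1.0) / dx**2
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)  # the discretized Hamiltonian
    E = np.linalg.eigvalsh(H)     # eigenvalues of the symmetric tridiagonal matrix

    for n in range(1, 4):
        print(f"n = {n}:  FDM E = {E[n - 1]:10.4f}   exact (n*pi)^2 = {(n * np.pi) ** 2:10.4f}")

Going back and forth between H as a differential operator and H as this banded matrix is exactly the kind of unpacking that the above point is about.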

II.3. Quantum chemistry level (mostly concerned with only the TISE, not TDSE):

  • Optional: “QM: a conceptual approach” by Hameka. A fairly well-written book. You can pick it up for some serious reading, but also try to finish it as fast as you can, because you are going to relearn the same stuff once again through the next book in the sequence. But yes, you can pick it up; it’s only about 200 pages.
  • “Quantum chemistry” by McQuarrie. Never commit the sin of bypassing this excellent book.
    Summarily ignore your friend (who might have advised you Feynman vol. 3 or Susskind’s theoretical minimum or something similar). Instead, follow my advice!
    A suggestion: Once you finish reading through this particular book, take a small (40 page) notebook, and write down (in the long hand) just the titles of the sections of each chapter of this book, followed by a listing of the important concepts / equations / proofs introduced in it. … You see, the section titles of this book themselves are complete sentences that encapsulate very neat nuggets. Here are a couple of examples: “5.6: The harmonic oscillator accounts for the infrared spectrum of a diatomic molecule.” Yes, that’s a section title! Here is another: “6.2: If a Hamiltonian is separable, then its eigenfunctions are products of simpler eigenfunctions.” See why I recommend this book? And this (40 page notebook) way of studying it?
  • “Quantum physics of atoms, molecules, solids, nuclei, and particles” (yes, that’s the title of this single volume!) by Eisberg and Resnick. This Resnick is the same one as that of Resnick and Halliday. Going through the same topics via yet another thick book (almost 850 pages) can get exasperating, at least at times. But I guess that if you show some patience here, it should simplify things later. … Confession: I was too busy with teaching and learning engineering topics like FEM, CFD, and also with many other things in between. So, I could not find the time to read this book the way I would have liked to. But from whatever I did read (and I did go over a fairly good portion of it), I can tell you that not finishing this book was a mistake on my part. Don’t repeat my mistake. Further, I do keep going back to it, and maybe, as a result, I will one day have finished it! One more point. This book is more than quantum chemistry; it does discuss the time-dependent parts too. The only reason I include it in this sub-section (chemistry) rather than the next (physics) is that the emphasis here is much more on the TISE than on the TDSE.

II.4. Quantum physics level (includes TDSE):

  • “Quantum physics” by Alastair I. M. Rae. Hands down, the best book in its class. To my mind, it easily beats all of the following: Griffiths, Gasiorowicz, Feynman, Susskind, … .
    Oh, BTW, this is the only book I have ever come across which does not put scare-quotes around the word “derivation,” while describing the original development of the Schrodinger equation. In fact, this text goes one step ahead and explicitly notes the right idea, viz., that Schrodinger’s development is a derivation, but it is an inductive derivation, not deductive. (… Oh God, these modern American professors of physics!)
    But even leaving this one (arguably “small”) detail aside, the book has excellence written all over it. Far better than the competition.
    Another attraction: The author touches upon all the standard topics within just about 225 pages. (He also has three further chapters, one each on relativity and QM, quantum information, and conceptual problems with QM. However, I have mostly ignored these.) When a book is of manageable size, that by itself is an overload reducer. (This post is not a portion from a text-book!)
    The only “drawback” of this book is that, like many British authors, Rae has a tendency to seamlessly bunch together a lot of different points into a single, bigger, paragraph. He does not isolate the points sufficiently well. So, you have to write a lot of margin notes identifying those distinct, sub-paragraph level, points. (But one advantage here is that this procedure is very effective in keeping you glued to the book!)
  • “Quantum physics” by Griffiths. Oh yes, Griffiths is on my list too. It’s just that I find it far better to go through Rae first, and only then to go through Griffiths.
  • … Also, avoid the temptation to read both these books side-by-side. You will soon find that you can’t do that: since you can keep going through only one of them, you have to jettison the other. And, driven by what other people say, you will soon end up ditching Rae—which would be a grave mistake. Here, I would advise you to complete Rae first. It’s indispensable. Griffiths is good too, but it is not indispensable. And as always, if you find the time and the inclination, you can always come back to Griffiths.

II.5. Side reading:

Starting sometime after finishing the initial UG quantum chemistry level books, but preferably after the quantum physics books, use the following two:

  • “Foundations of quantum mechanics” by Travis Norsen. Very, very good. See my “review” here [^].
  • “Foundations of quantum mechanics: from photons to quantum computers” by Reinhold Blumel.
    Just because people don’t rave a lot about this book doesn’t mean that it is average. This book is peculiar. It does look very average if you flip through all its pages within, say, 2–3 minutes. But it turns out to be an extraordinarily well written book once you begin to actually read through its contents. The coverage here is concise, accurate, fairly comprehensive, and, as a distinctive feature, it also is fairly up-to-date.
    Unlike the other text-books, Blumel gives you a good background in the specifics of the modern topics as well. So, once you complete this book, you should find it easy (to very easy) to understand today’s pop-sci articles, say those on quantum computers. To my knowledge, this is the only text-book which does this job (of introducing you to the topics that are relevant to today’s research), and it does this job exceedingly well.
  • Use Blumel to understand the specifics, and use Norsen to understand their conceptual and the philosophical underpinnings.

II.Appendix: Miscellaneous—no levels specified; figure out as you go along:

  • “Schrodinger’s cat” by John Gribbin. Unquestionably, the best pop-sci book on QM. Lights your fire.
  • “Quantum” by Manjit Kumar. Helps keep the fire going.
  • Kreyszig or equivalent. You need to master the basic ideas of the Fourier theory, and of solutions of PDEs via the separation ansatz.
  • However, for many other topics like spherical harmonics or calculus of variations, you have to go hunting for explanations in some additional books. I “learnt” the spherical harmonics mostly through some online notes (esp. those by Michael Fowler of Univ. of Virginia) and QM textbooks, but I guess that a neat exposition of the topic, couched in contexts other than QM, would have been helpful. Maybe there is some ancient acoustics book that is really helpful. Anyway, I didn’t pursue this topic to any great depth (in fact I more or less skipped over it) because, as it so happens, analytical methods fall short for anything more complex than the hydrogenic atoms.
  • As to the variational calculus, avoid all the physics and maths books like the plague! Instead, learn the topic through the FEM books. Introductory FEM books have become vastly (i.e. categorically) better over the course of my generation. Today’s FEM text-books do provide clear evidence that the authors themselves know what they are talking about! Among these books, just for learning the variational calculus aspects, I would advise going through Seshu or Fish and Belytschko first, and then through the relevant chapter from Reddy‘s book on FEM. In any case, avoid Bathe, Zienkiewicz, etc.; they are too heavily engineering-oriented, and often, in general, un-necessarily heavy-duty (though not as heavy-duty as Lanczos). Not very suitable for learning the basics of CoV as is required in UG QM. A good supplementary book covering CoV is noted next.
  • “From calculus to chaos: an introduction to dynamics” by David Acheson. A gem of a book. Small (just about 260 pages including program listings, and just about 190 pages if you ignore them). Excellent, even if, somehow, it does not appear on people’s lists. But if you ask me, this book is a must-read for any one who has anything to do with physics or engineering. Useful chapters exist also on variational calculus and chaos. Comes with easy-to-understand QBasic programs (and their updated versions, ready to run on today’s computers, are available via the author’s Web site). Wish it also had chapters, say one each, on the mechanics of materials, and on fracture mechanics.
  • Linear algebra. Here, keep your focus on understanding just two concepts: (i) vector spaces, and (ii) eigen-vectors and -values. Don’t worry about other topics (like LU decomposition or the power method). If you understand these two topics right, the rest will follow “automatically,” more or less. To learn these two topics, however, don’t refer to text-books (not even those by Gilbert Strang or the like). Instead, google for online tutorials on computer-games programming. This way, you will come to develop a far better (even more robust) understanding of these concepts. … Yes, that’s right. One or two games programmers, I very definitely remember, actually did a much better job of explaining these ideas (with all their complexity) than any textbook by any university professor does. (iii) Oh yes, BTW, there is yet another concept which you should learn: the “tensor product”. For this topic, I recommend Prof. Zhigang Suo‘s notes on linear algebra, available off iMechanica. These notes are a work in progress, but they are already excellent even in their present form. (A tiny numerical sketch of the eigen-pair and tensor-product ideas appears right after this list.)
  • Probability. Contrary to a widespread impression (and to what one group of QM interpreters says), you actually don’t need much statistics or probability in order to get the essence of QM right. Whatever you need has already been taught to you in your UG engineering/physics courses. Personally, though I haven’t yet gone through them, the two books on my radar (more from the data science angle) are: “Elementary probability” by Stirzaker, and “All of statistics” by Wasserman. But, frankly speaking, as far as QM itself is concerned, your intuitive understanding of probability as developed through your routine UG courses should be enough, IMHO.
  • As to AJP type of articles, go through Dan Styer‘s paper on the nine formulations (doi:10.1119/1.1445404). But treat his paper on the common misconceptions (10.1119/1.18288) with a bit of caution; some of the ideas he lists as “misconceptions” are not necessarily so.
  • arXiv tutorials/articles: Sometime after finishing quantum chemistry and before beginning quantum physics, go through the tutorial on QM by Bram Gaasbeek [^]. Neat, small, and really helpful for self-studies of QM. (It was written when the author was still a student himself.) Also, see the article on the postulates by Dorabantu [^]. Definitely helpful. Finally, let me pick up just one more arXiv article: “Entanglement isn’t just for spin” by Dan Schroeder [^]. Comes with neat visualizations, and helps demystify entanglement.
  • Computational physics: Several good resources are available. One easy-to-recommend text-book is the one by Landau, Perez and Bordeianu. Among the online resources, the best collection I found was the one by Ian Cooper (of Univ. of Sydney) [^]. He has only MatLab scripts, not Python, but they are all very well documented (in an exemplary manner) via accompanying PDF files. It should be easy to port these programs to the Python ecosystem.
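Coming back to the linear algebra item above: here is a tiny numpy sketch (mine, not taken from any of the books or notes mentioned) of the one fact worth internalizing first, viz., that applying a matrix to one of its eigenvectors does nothing except stretch it by the corresponding eigenvalue. The tensor product gets a line too, via np.kron.

# A tiny numpy sketch: eigen-pairs of a symmetric operator, plus the tensor product.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # a symmetric operator on R^2

eigvals, eigvecs = np.linalg.eigh(A)  # eigh is meant for symmetric/Hermitian matrices

for lam, v in zip(eigvals, eigvecs.T):
    # A v equals lambda v, verified numerically
    print(f"lambda = {lam:+.3f}   A v == lambda v ? {np.allclose(A @ v, lam * v)}")

# The tensor product of two "state vectors" (here, plain 2-component arrays):
u = np.array([1.0, 0.0])
w = np.array([0.6, 0.8])
print(np.kron(u, w))                  # a 4-component vector in the product space

If the two print-outs make complete sense to you, you already have most of what the QM texts will ask of your linear algebra.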

Yes, we (finally) are near the end of this post, so let me add the mandatory catch-all clauses: This list is by no means comprehensive! This list supersedes any other list I may have put out in the past. This list may undergo changes in future.

Done.

OK. A couple of last-minute addenda: For contrast, see the article “What is the best textbook for self-studying quantum mechanics?” which has appeared, of all places, in Forbes! [^]. (Looks like the QC-related hype has found its way into the business circles as well!) Also see the list at BookScrolling.com: “The best books to learn about quantum physics” [^].

OK. Now, I am really done.


A song I like:
(Marathi) “kiteedaa navyaane tulaa aaThavaave”
Music: Mandar Apte
Singer: Mandar Apte. Also, a separate female version by Arya Ambekar
Lyrics: Devayani Karve-Kothari

[Arya Ambekar’s version is great too, but somehow, I like Mandar Apte’s version better. Of course, I do often listen to both the versions. Excellent.]


[More than 5,500 words! Give me a longer break for this time around, a much longer one, in fact… In the meanwhile, take care and bye until then…]

Blog-Filling—Part 3

Note: A long Update was added on 23 November 2017, at the end of the post.


Today I got just a little bit of respite from what has been a very tight schedule, which has been running into my weekends, too.

But at least for today, I do have a bit of a respite. So, I could at least think of posting something.

But for precisely the same reason, I don’t have any blogging material ready in the mind. So, I will just note something interesting that passed by me recently:

  1. Catastrophe Theory: Check out Prof. Zhigang Suo’s recent blog post at iMechanica on catastrophe theory, here [^]; it’s marked by Suo’s trademark simplicity. He also helpfully provides a copy of Zeeman’s 1976 SciAm article. Regular readers of this blog will know that I am a big fan of the catastrophe theory; see, for instance, my last post mentioning the topic, here [^].
  2. Computational Science and Engineering, and Python: If you are into computational science and engineering (which is The Proper And The Only Proper long-form of “CSE”), and wish to have fun with Python, then check out Prof. Hans Petter Langtangen’s excellent books, all under Open Source. Especially recommended is his “Finite Difference Computing with PDEs—A Modern Software Approach” [^]. What impressed me immediately was the way the author begins this book with the wave equation, and not with the diffusion or potential equation as is the routine practice in the FDM (or CSE) books. He also provides the detailed mathematical reason for his unusual choice of ordering the material, but apart from his reason(s), let me add a comment here: wave \Rightarrow diffusion \Rightarrow potential (Poisson-Laplace) precisely was the historical order in which the maths of PDEs (by which I mean both the formulations of the equations and the techniques for their solutions) got developed—even though the modern trend is to reverse this order in the name of “simplicity.” The book comes with Python scripts; you don’t have to copy-paste code from the PDF (and then keep correcting the errors in characters or indentation). And, the book covers nonlinearity too. (A bare-bones sketch of the wave-equation scheme the book opens with appears right after this list.)
  3. Good Notes/Teachings/Explanations of UG Quantum Physics: I ran across Dan Schroeder’s “Entanglement isn’t just for spin.” Very true. And it needed to be said [^]. BTW, if you want a more gentle introduction to the UG-level QM than is presented in Allan Adam (et al)’s MIT OCW 8.04–8.06 [^], then make sure to check out Schroeder’s course at Weber [^] too. … Personally, though, I keep on fantasizing about going through all the videos of Adam’s course, taking notes, and posting them at my Web site. [… sigh]
  4. The Supposed Spirituality of the “Quantum Information” Stored in the “Protein-Based Micro-Tubules”: OTOH, if you are more into philosophy of quantum mechanics, then do check out Roger Schlafly’s latest post, not to mention my comment on it, here [^].
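As promised in point no. 2 above, here is a bare-bones Python sketch of the kind of scheme Langtangen opens with, for the 1D wave equation u_tt = c^2 u_xx with fixed ends. To be clear, the script below is my own quick-and-dirty version, not code taken from the book; it only shows the standard central-difference update u_i^{n+1} = 2 u_i^n - u_i^{n-1} + C^2 (u_{i+1}^n - 2 u_i^n + u_{i-1}^n), with the Courant number C = c \Delta t / \Delta x kept below 1.

# A quick-and-dirty 1D wave equation marcher: central differences in x and t.
import numpy as np

nx, L, c, T = 101, 1.0, 1.0, 2.0
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.9 * dx / c                      # keep the Courant number below 1 (stability)
C2 = (c * dt / dx) ** 2

u_prev = np.sin(np.pi * x)             # initial displacement
u = u_prev.copy()                      # special first step (zero initial velocity)
u[1:-1] = u_prev[1:-1] + 0.5 * C2 * (u_prev[2:] - 2.0 * u_prev[1:-1] + u_prev[:-2])

t = dt
while t < T:
    u_next = np.zeros_like(u)          # the ends stay pinned at zero (Dirichlet)
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + C2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
    t += dt

print(u.min(), u.max())                # the standing wave has not decayed

Notice that the whole marching procedure is nothing but the finite-differences habit applied twice over, once in x and once in t.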

The point no. 4. above was added in lieu of the usual “A Song I Like” section. The reason is, though I could squeeze in the time to write this post, I still remain far too rushed to think of a song—and to think/check whether I have already run it here or not. But I will try to add one later on, either to this post, or, if there is a big delay, then as the next “blog filler” post, the next time round.

[Update on 23 Nov. 2017 09:25 AM IST: Added the Song I Like section; see below]

OK, that’s it! … Will catch you at some indefinite time in future here, bye for now and take care…


A Song I Like:

(Western, Instrumental) “Theme from ‘Come September'”
Credits: Bobby Darin (?) [+ Billy Vaughn (?)]

[I grew up in what were absolutely rural areas in Maharashtra, India. All my initial years till my 9th standard were limited, at the upper end of the continuum of urbanity, to Shirpur, which still is only a taluka place. And, back then, it was decidedly far more of a backward + adivasi region. The population of the main town itself hadn’t reached more than 15,000 or so by the time I left it in my X standard; the town didn’t have a single traffic light; most of the houses (including the one we lived in) were load-bearing structures, not RCC; all the roads in the town were single-lane; etc.

Even that being the case, I happened to listen to this song—a Western song—right when I was in Shirpur, in my 2nd/3rd standard. I first heard the song at my Mama’s place (an engineer, he was back then posted in the “big city” of the nearby Jalgaon, a district place).

As to this song, as soon as I listened to it, I was “into it.” I remained so for all the days of that vacation at Mama’s place. Yes, it was a 45 RPM record, and the permission to put the record on the player and even to play it, entirely on my own, was hard won after a determined and tedious effort to show all the elders that I was able to put the pin onto the record very carefully. And, every one in the house was an elder to me: my siblings, cousins, uncle, his wife, not to mention my parents (who were the last ones to be satisfied). But once the recognition arrived, I used it to the hilt; I must have ended up playing this record at least 5 times on every remaining day of the vacation back then.

As far as I am concerned, I am entirely positive that appreciation for a certain style or kind of music isn’t determined by your environment or the specific culture in which you grow up.

As far as songs like these are concerned, today I am able to discern that what I had immediately though indirectly grasped, even as a 6–7 year old child, was what I today would describe as a certain kind of an “epistemological cleanliness.” There was a clear adherence to a certain definite, delimited kind of specifics, whether in terms of tones or rhythm. Now, it sure did help that this tune was happy. But frankly, I am certain, I would’ve liked a “clean” song like this one—one with very definite “separations”/”delineations” in its phrases, in its parts—even if the song itself weren’t so directly evocative of so frankly happy a mood. Indian music, in contrast, tends to keep “continuity” for its own sake, even when it’s not called for, and the downside of that style is that it leads to a badly mixed-up “curry” of indefinitely stretched-out wailings, even noise, very proudly passing as “music”. (In evidence: pick up any traditional “royal palace”/”kothaa” music.) … Yes, of course, there is a symmetrical downside to the specific “separated” style carried by Western music too; the specific style of noise it can easily slip into is a disjointed kind of noise. (In evidence, I offer 90% of Western classical music, and 99.99% of Western popular “music”. As to which 90%, well, we have to meet in person, and listen to select pieces of music on the fly.)

Anyway, coming back to the present song, today I searched for the original soundtrack of “Come September”, and got, say, this one [^]. However, I am not too sure that the version I heard back then was this one. Chances are much brighter that the version I first listened to was Billy Vaughn’s, as in here [^].

… A wonderful tune, and, as an added bonus, it never does fail to take me back to my “salad days.” …

… Oh yes, as another fond memory: that vacation also was the very first time that I came to wear a T-shirt; my Mama had gifted it to me in that vacation. The actual choice to buy a T-shirt rather than a shirt (+ shorts, of course) was that of my cousin sister (who unfortunately is no more). But I distinctly remember her being surprised to learn that I was in no mood to have a T-shirt when I didn’t know what the word meant… I also distinctly remember her assuring me in sweet tones that a T-shirt would look good on me! … You see, in rural India, at least back then, T-shirts weren’t heard of; for years later on, maybe until I went to Nasik in my 10th standard, it would be the only T-shirt I had ever worn. … But, anyway, as far as T-shirts go… well, as you know, I was into software engineering, and so….

Bye [really] for now and take care…]

 

A prediction. Also, a couple of wishes…

The Prediction:

While the week of the Nobel prizes always has a way to generate a sense of suspense, of excitement, and even of wonderment, as far as I am concerned, the one prize that does that in the real sense to me is, of course, the Physics Nobel. … Nothing compares to it. Chemistry can come close, but not always. [And, Mr. Nobel was a good guy; he instituted no prize for maths! [LOL!]]. …

The Physics Nobel is the King of all awards in all fields, as far as I am concerned.

That’s why, this year, I have this feeling of missing something. … The reason is, this year’s Physics Nobel is already “known”; it will go to Kip Thorne and pals.

[I will not eat crow even if they don’t get it. [… Unless, of course, you know a delicious recipe or two for the same, and also demonstrate it to me, complete with you sampling it first.]]

But yes, Kip Thorne richly deserves it, and he will get it. That’s the prediction. I wanted to slip it in even if only few hours before the announcement arrives.

I will update this post later right today/tonight, after the Physics Nobel is actually announced.


Now let me come to the couple of wishes, as mentioned in the title. I will try to be brief. [Have been too busy these days… OK. Will let you know. We are going in for accreditation, and so, it’s been all heavy documentation-related work for the past few months. Despite all that hard work, we still have managed to slip a bit on the progress, and so, currently, we are working on all week-ends and on most public holidays, too. [Yes, we came to work yesterday.] So, it’s only somehow that I manage to find some time to slip in this post—which is written absolutely on the fly, with no second thoughts or re-reading before posting. … So excuse me if there is a bit of a lack of balance in the presentation, and of course, typos etc.]


Wish # 1:

The first wish is that a Physics Nobel should go, in a combined way, to what actually are two separate but very intimately related advances, and two of the most significant ones in the physical understanding of man: (i) chaos theory (including fractals) and (ii) catastrophe theory.

If you don’t like the idea of two ideas being given a single Nobel, then, well, let me put it this way: the Nobel should be given for the most significant advancements in the field of differential nonlinearities, i.e., for very substantial progress in the physical understanding of the behaviour of nonlinear physical systems, and for forging pathways toward a predictive capacity for such systems.

Let me emphasize, this has been one of the most significant advances in physics in the last century. No, saying so is emphatically not a hyperbole.

And, yes, it’s an advance in physics, primarily, and then, also in maths—but only secondarily.

… It’s unfortunate that an advancement which has been this remarkable never did register as such with most of the S&T “manpower”, esp. engineers and practical designers. It’s also unfortunate that this twin advancement arrived on the scene at a time of bad cultural (even epistemological) trends, and so, the advancements got embedded in a fabric of hyperbole, even nonsense.

But regardless of the cultural tones in which the popular presentations of these advancements (esp. of the chaos theory) got couched, taken as a science, the study of nonlinearity in physical systems has been a very, very original, and a very, very creative, advancement. It needs to be recognized as such.

That way, I don’t much care for what it helped produce on the maths side of it. But yes, even a not-so-extraordinarily talented undergraduate in CS (one with a special interest in deterministic methods in cryptography) would be able to tell you how much light got shone on their discipline because of the catastrophe and chaos theories.

The catastrophe theory has been simply marvellous in one crucial aspect: it actually pushed the boundaries of what is understood by the term: mathematics. The theory has been daring enough to propose, literally for the first time in the entire history of mankind, a well-refined qualitative approach to an infinity of quantitative processes taken as a group.

The distinction between the qualitative and the quantitative had kept philosophers (and laymen) preoccupied for millennia. But the nonlinear theory has been the first theoretical approach that tells you how to spot and isolate the objective bases for distinguishing what we consider as qualitative changes.

Remove the understanding given by the nonlinear theory—by the catastrophe-theoretical approach—and, once in the domain of the linear theory, the differences in kind immediately begin to appear as more or less completely arbitrary. There is no place in the theory for them—the qualitative distinctions are external to it, because a linear system behaves in exactly the same qualitative way no matter what quantitative changes are made, at whatever scale, to any of the controlling parameters. Since in the linear theory the qualitative changes are not produced from within the theory itself, such distinctions must be imported into it from considerations that are, in principle, external to the theory.

People often confuse such imports with “applications.” No, when it comes to the linear theory, it’s not the considerations of applications which can be said to be driving any divisions into qualitative changes. The qualitative distinctions are basically arbitrary in a linear theory. It is important to realize that the usual question, “Now where do we draw the line?”, is simply superfluous once you are within the domain of linear systems. There are no objective grounds on the basis of which such distinctions can be made.
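To put the last two paragraphs in a numerical nutshell (the tiny script below is my own illustration, nobody's official example): for the linear equation a x + b = 0 the number of equilibria stays at exactly one no matter how you sweep the parameters, whereas for the cusp-type nonlinear equation x^3 + a x + b = 0 the count jumps from one to three as a crosses the fold set 4 a^3 + 27 b^2 = 0. The qualitative change is generated from within the (nonlinear) theory itself.

# Counting the equilibria of x^3 + a*x + b = 0 as the control parameter 'a' is swept.
# A qualitative jump (1 root -> 3 roots) shows up as 4a^3 + 27b^2 changes sign;
# the linear equation a*x + b = 0 can never do this.
import numpy as np

b = 0.1
for a in (1.0, 0.0, -1.0, -2.0):
    roots = np.roots([1.0, 0.0, a, b])                 # coefficients of x^3 + a x + b
    n_real = int(np.sum(np.abs(roots.imag) < 1e-9))
    print(f"a = {a:+.1f}   equilibria: {n_real}   (4a^3 + 27b^2 = {4*a**3 + 27*b**2:+.3f})")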

Studies of nonlinear phenomena sure do precede the catastrophe and the chaos theories. Even in the times before these two theories came on the scene, applied physicists would think of certain ideas such as differences of regimes, esp. in areas like fluid dynamics.

But to understand the illuminating power of the nonlinear theory, just catch hold of an industrial CFD guy (or a good professor of fluid dynamics from a good university [not, you know, from SPPU or similar universities]), and ask him whether there can be any deeper theoretical significance to the procedure of the Buckingham Pi Theorem, i.e., to the necessity, in his art (or science), of having to use so many dimensionless numbers. (Every mechanical/allied engineering undergraduate has at least once in life cursed the sheer number of them.) The competent CFD guy (or the good professor) would easily be at a loss. Then, toss a good book on the Catastrophe Theory to him, leave him alone for a couple of weeks or maybe a month, return, and raise the same question again. He now may or may not have a very good, “flowy” sort of a verbal answer ready for you. But one look at his face would tell you that it has now begun to reflect a qualitatively different depth of physical understanding even as he tries to tackle that question in his own way. That difference arises only because of the Catastrophe Theory.

As to the Chaos Theory (and I club the fractal theory right in with it), more people are likely to know about it, and so, I don’t have to wax a lot (whether eloquently or incompetently). But let me tell you one thing.

Feigenbaum’s discovery of the universal constant remains, to my mind, one of the most ingenious advancements in the entire history of physics, even of science. Especially given the experimental equipment with which he made that discovery—a handheld HP programmable calculator (not a computer), in the mid-seventies! … And yes, getting to that universal constant was, if you ask me, an act of discovery, and not of invention. (Invention was very intimately involved in the process; but the overall act and the end-product was one of discovery.)
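Just to give a flavour of the kind of computation involved, here is my own rough Python sketch of the idea (in spirit only; it is not Feigenbaum’s actual procedure): find the “superstable” parameters R_n of the logistic map x \mapsto r x (1 - x), i.e. the values of r at which the point x = 1/2 is periodic with period 2^n, and then watch the ratios of successive gaps settle toward the universal \delta \approx 4.6692.

# A rough sketch: estimate Feigenbaum's delta from the "superstable" parameters
# R_n of the logistic map, i.e. the r for which x = 1/2 is periodic with period 2^n.
# Newton's method in r; the derivative d/dr f_r^N(1/2) is built up by the chain
# rule alongside the orbit itself.

def orbit_and_derivative(r, n):
    x, dx = 0.5, 0.0
    for _ in range(n):
        x, dx = r * x * (1.0 - x), x * (1.0 - x) + r * (1.0 - 2.0 * x) * dx
    return x, dx

def superstable_r(n, r_guess):
    r = r_guess
    for _ in range(50):                        # Newton iterations on f_r^(2^n)(1/2) = 1/2
        x, dx = orbit_and_derivative(r, 2 ** n)
        r -= (x - 0.5) / dx
    return r

R = [2.0, 1.0 + 5 ** 0.5]                      # periods 1 and 2, known in closed form
delta = 4.5                                    # any rough starting guess will do
for n in range(2, 9):
    guess = R[-1] + (R[-1] - R[-2]) / delta    # geometric extrapolation for the seed
    R.append(superstable_r(n, guess))
    delta = (R[-2] - R[-3]) / (R[-1] - R[-2])
    print(f"period 2^{n}:  R_n = {R[-1]:.9f}   delta estimate = {delta:.6f}")

The printed estimates should creep from about 4.7 toward 4.669; push n much further and double precision starts to run out of room in the ever-shrinking gaps.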

So, here is a wish that these fundamental studies of the nonlinear systems get their due—the recognition they so well deserve—in the form of a Physics Nobel.

…And, as always, the sooner the better!


Wish # 2:

The second wish I want to put up here is this: I wish there were some commercial/applied artist, well conversant with the “art” of supplying illustrations for a physics book, who would also be available for a long-term project I have in mind.

To share a bit: Years ago (actually, almost two decades ago, in 1998 to be precise), I had made a suggestion that novels by Ayn Rand be put in the form of comics. As far as I was concerned, the idea was novel (i.e. new). I didn’t know at that time that a comics-book version of The Fountainhead had already been conceived of by none other than Ayn Rand herself, and it, in fact, had also been executed. In short, there was a comics-book version of The Fountainhead. … These days, I gather, they are doing something similar for Atlas Shrugged.

If you think about it, my idea was not at all a leap of imagination. Newspapers (even those in India) have been carrying comic strips for decades (right since before my own childhood), and Amar Chitrakatha was coming of age just when I was. (It was founded in 1967 by Mr. Pai.)

Similarly, conceiving of a comics-like book for physics is not at all a very creative act of imagination. In fact, it is not even original. Everyone knows those books by that Japanese linguistics group, the books on topics like the Fourier theory.

So, no claim of originality here.

It’s just that for my new theory of QM, I find that the format of a comics-book would be most suitable. (And what the hell if physicists don’t take me seriously because I put it in this form first. Who cares what they think anyway!)

Indeed, I would even like to write/produce some comics books on maths topics, too. Topics like grads, divs, curls, tensors, etc., eventually. … Guess I will save that part for keeping me preoccupied during my retirement. BTW, my retirement is not all that far away; it’s going to be here pretty soon, right within just five years from now. (Do one thing: Check out what I was writing, say in 2012 on this blog.)

But the one thing I would like to write/produce in the more immediate future is: the comics book on QM, putting forth my new approach.

So, in the closing, here is a request. If you know some artist (or an engineer/physicist with fairly good sketching/computer-drawing skills) who has time at hand, has the capacity to stay put in a sizeable project, and won’t ask for money for it (a fair share in the royalty is a given—provided we manage to find a publisher first, that is), then please do bring this post to his notice.

 


A Song I Like:

And, finally, here is the Marathi song I had promised you the last time round. It’s a fusion of what to my mind is one of the best tunes Shrinivas Khale ever produced, and of the best justice done to the words and the tune by the singer. Imagine any one else in her place, and you will immediately come to know what I mean. … Pushpa Pagdhare easily takes this song to the levels of the very best by the best, including Lata Mangeshkar. [Oh yes, BTW, congrats are due to the selection committee of this year’s Lata Mangeshkar award, for selecting Pushpa Pagdhare.]

(Marathi) “yeuni swapnaat maajhyaa…”
Singer: Pushpa Pagdhare
Music: Shrinivas Khale
Lyrics: Devakinandan Saraswat

[PS: Note: I am going to come back and add an update once this year’s Physics Nobel is announced. At that time (or tonight) I will also try to streamline this post.

Then, I will be gone off the blogging for yet another couple of weeks or so—unless it’s a small little “kutty” post of the “Blog-Filler” kind or two.]