Why is the physical space 3-dimensional?

Why am I writing on this topic?

Well, it so happened that recently (about a month ago) I realized that I didn’t quite understand matrices. I mean, at least not as well as I should. … I was getting interested in Data Science, browsing through a few books and Web sites on the topic, and soon enough realized that before going further, it would be better if I first systematically wrote down a short summary of the relevant mathematics, starting with the topic of matrices (and probability theory and regression analysis and the lot).

So, immediately, I fired up TeXMaker, and started writing an “article” on matrices. But as is my habit, once I began actually typing, slowly, I also began to go meandering—pursuing just this one aside, and then just that one aside, and then just this one footnote, and then just that one end-note… The end product quickly became… unusable. Which means, it was useless. To anyone. Including me.

So, after typing in a goodly amount, maybe some 4–5 pages, I deleted that document, and began afresh.

This time round, I wrote only the abstract for a “future” document, and that too only in a point-by-point manner—you know, the way they specify those course syllabi? This strategy did help. In doing that, I realized that I still had quite a few issues to get straightened out well. For instance, the concept of the dual space [^][^].

After pursuing this activity very enthusiastically for something like a couple of days or so, my attention, naturally, once again got diverted to something else. And then, to something else. And then, to something else again… And soon enough, I came to even completely forget the original topic—I mean matrices. … Until in my random walk, I hit it once again, which was this week.

Once the orientation of my inspiration thus got once again aligned to “matrices” last week (I came back via eigen-values of differential operators), I now decided to first check out Prof. Zhigang Suo’s notes on Linear Algebra [^].

Yes! Zhigang’s notes are excellent! Very highly recommended! I like the way he builds topics: very carefully, and yet, very informally, with tons of common-sense examples to illustrate conceptual points. And in a very neat order. A lot of the initial stuff is accessible even to high-school students.

Now, what I wanted here was a single and concise document. So, I decided to take notes from his notes, and thereby make a shorter document that emphasized my own personal needs. Immediately thereafter, I found myself engaged in that activity. I have already finished the first two chapters of his notes.

Then, the inevitable happened. Yes, you guessed it right: my attention (once again) got diverted.


What happened was that I ran into Prof. Scott Aaronson’s latest blog post [^], which is actually a transcript of an informal talk he gave recently. The topic of this post doesn’t really interest me, but there is an offhand (in fact a parenthetical) remark Scott makes which caught my eye and got me thinking. Let me quote here the culprit passage:

“The more general idea that I was groping toward or reinventing here is called a hidden-variable theory, of which the most famous example is Bohmian mechanics. Again, though, Bohmian mechanics has the defect that it’s only formulated for some exotic state space that the physicists care about for some reason—a space involving pointlike objects called “particles” that move around in 3 Euclidean dimensions (why 3? why not 17?).”

Hmmm, indeed… Why 3? Why not 17?

Knowing Scott, it was clear (to me) that he meant this remark not quite in the sense of a simple and straight-forward question (to be taken up for answering in detail), but more or less fully in the sense of challenging the common-sense assumption that the physical space is 3-dimensional.

One major reason why modern physicists don’t like Bohm’s theory is precisely because its physics occurs in the common-sense 3 dimensions—even though, I think, they don’t quite realize that this, too, is part of why they hate him. (See my 2013 post here [^].)

But should you challenge an assumption just for the sake of challenging one? …

It’s true that modern physicists routinely do that—challenging assumptions just for the sake of challenging them.

Well, that way, this attitude is not bad by itself; it can potentially open doorways to new modes of thinking, even discoveries. But where they—the physicists and mathematicians—go wrong is: in not understanding the nature of their own challenges well enough. In other words, questioning is good, but modern physicists fail to get what the question itself is, or even means (even if they themselves have posed the question out of a desire to challenge anything and everything). And yet—even if they don’t get even their own questions right—they do begin to blabber, all the same. Not just on arXiv but also in journal papers. The result is the epistemological jungle that is in plain sight. The layman gets (or more accurately, is deliberately kept) confused.

Last year, I had written a post about what physicists mean by “higher-dimensional reality.” In fact, in 2013, I had also written a series of posts on the topic of space—which was more from a philosophical view, but unfortunately not yet completed. Check out my writings on space by hitting the tag “space” on my blog [^].

My last year’s post on the multi-dimensional reality [^] did address the issue of the n > 3 dimensions, but the writing in a way was geared more towards understanding what the term “dimension” itself means (to physicists).

In contrast, the aspect which now caught my attention was slightly different; it was this question:

Just how would you know whether the physical space that you see around you indeed is 3-, 4-, or 17-dimensional? What method would you use to positively assert the exact dimensionality of space? Using what kind of an experiment? (Here, the experiment is to be taken in the sense of a thought experiment.)

I found an answer to this question, too. Let me give you here some indication of it.


First, why, in our day-to-day life (and in most of engineering), do we take the physical space to be 3-dimensional?

The question is understood better if it is put more accurately:

What precisely do we mean when we say that the physical space is 3-dimensional? How do we validate that statement?

The answer is “simple” enough.

Mark a fixed point on the ground. Then, starting from that fixed point, walk down some distance x in the East direction, then move some distance y in the North direction, and then climb some distance z vertically straight up. Now, from that point, travel further by respectively the same distances along the three axes, but in the exactly opposite directions. (You can change the order in which you travel along the three axes, but the distance along a given axis for both the to- and the fro-travels must remain the same—it’s just that the directions have to be opposite.)

What happens if you actually do something like this in the physical reality?

You don’t have to leave your favorite arm-chair; just trace your finger along the edges of your laptop—making sure that the laptop’s screen remains at exactly 90 degrees to the plane of the keyboard.

If you actually undertake this strenuous an activity, you will find that, in the physical reality, a “magic” happens: You come back exactly to the same point from where you had begun your journey.

That’s an important point. A very obvious point, but also, in a way, an important one. There are other crucially important points too. For instance, this observation. (Note, it is a physical observation, and not an arbitrary mathematical assumption):

No matter where you stop during the process of going in, say the East direction, you will find that you have not traveled even an inch in the North direction. Ditto, for the vertical axis. (It is to ensure this part that we keep the laptop screen at exactly 90 degrees to the keyboard.)

Thus, your x, y and z readings are completely independent of each other. No matter how hard you slog along, say the x-direction, it yields no fruit at all along the y– or z– directions.
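In fact, you can put this whole procedure in the terms of the simplest possible vector arithmetic. Here is a tiny Python sketch (my own, purely illustrative; the distances are arbitrary) that verifies both the closure of the to-and-fro journey and the mutual independence of the three directions:

# A toy model of the walking procedure (illustrative only): the three
# directions are mutually independent unit vectors, and the journey is
# a sum of displacement vectors along them.
import numpy as np

e_x, e_y, e_z = np.eye(3)          # East, North, and vertically up
x, y, z = 3.0, 4.0, 5.0            # arbitrary distances traveled

start = np.zeros(3)
out  = start + x*e_x + y*e_y + z*e_z   # the onward journey
back = out - x*e_x - y*e_y - z*e_z     # the return journey

print(np.allclose(back, start))    # True: the "magic" closure
# Independence: all the slogging along x yields nothing along y or z.
print(e_x @ e_y, e_x @ e_z, e_y @ e_z)    # 0.0 0.0 0.0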

It’s something like this: Suppose there is a girl that you really, really like. After a lot of hard-work, suppose you somehow manage to impress her. But then, at the end of it, you come to realize that all that hard work has done you no good as far as impressing her father is concerned. And then, even if you somehow manage to win her father on your side, there still remains her mother!

To say that the physical space is 3-dimensional is a positive statement, a statement of an experimentally measured fact (and not an arbitrary “geometrical” assertion which you accept only because Euclid said so). It consists of two parts:

The first part is this:

Using the travels along only 3 mutually independent directions (the position and the orientation of the coordinate frame being arbitrary), you can in principle reach any other point in the space.

If some region of space were to remain unreachable this way, if there were to be any “gaps” left in the space which you could not reach using this procedure, then it would imply either (i) that the procedure itself isn’t appropriate to establish the dimensionality of the space, or (ii) that it is, but the space itself may have more than 3 dimensions.

Assuming that the procedure itself is good enough, for a space to have more than 3 dimensions, the “unreachable region” doesn’t have to be a volume. The “gaps” in question may be limited to just isolated points here and there. In fact, logically speaking, there needs to be just one single (isolated) point which remains in principle unreachable by the procedure. Find just one such point—and the dimensionality of the space would come into question. (Think: The Aunt! (The assumption here is that aunts aren’t gentlemen [^].))

Now what we do find in practice is that any point in the actual physical space indeed is in principle reachable via the above-mentioned procedure (of altering x, y and z values). It is in part for this reason that we say that the actual physical space is 3-D.

The second part is this:

We have to also prove, via observations, that fewer than 3 dimensions do fall short. (I told you: there was the mother!) Staircases and lifts (Americans call them elevators) are necessary in real life.

Putting it all together:

If n = 3 does cover all the points in space, and if n > 3 isn’t necessary to reach every point in space, and if n < 3 falls short, then the inevitable conclusion is: n = 3 indeed is the exact dimensionality of the physical space.

QED?

Well, both yes and no.

Yes, because that’s what we have always observed.

No, because all physics knowledge has a certain definite scope and a definite context—it is “bounded” by the inductive context of the physical observations.

For fundamental physics theories, we often don’t exactly know the bounds. That’s OK. The most typical way in which the bounds get discovered is by “lying” to ourselves that no such bounds exist, and then experimentally discovering a new phenomenon or a new range in which the current theory fails, whereupon a new theory—one which merely extends and subsumes the current theory—gets validated.

Applied to our current problem, we can say that we know that the physical space is exactly three-dimensional—within the context of our present knowledge. However, it also is true that we don’t know what exactly the conceptual or “logical” boundaries of this physical conclusion are. One way to find them is to lie to ourselves that there are no such bounds, and continue investigating nature, and hope to find a phenomenon or something that helps find these bounds.

If tomorrow we discover a principle which implies that a certain region of space (or even just one single isolated point in it) remains in principle unreachable using just three dimensions, then we would have to abandon the idea that n = 3, that the physical space is 3-dimensional.

Thus far, not a single soul has been able to do that—Einstein, Minkowski or Poincare included.

No one has spelt out a single physically established principle using which a spatial gap (a region unreachable by the linear combination procedure) may become possible, even if only in principle.

So, it is 3, not 17.

QED.


All the same, it is not ridiculous to think whether there can be 4 or more dimensions—I mean for the physical space alone, not counting time. I could explain how. However, I have got too tired typing this post, and so, I am going to just jot down some indicative essentials.

Essentially, the argument rests on the idea that a physical “travel” (rigorously: a physical displacement of a physical object) isn’t the only physical process that may be used in establishing the dimensionality of the physical space.

Any other physical process, if it is sufficiently fundamental and sufficiently “capable,” could in principle be used. The requirements, I think, would be: (i) that the process must be able to generate certain physical effects which involve some changes in their spatial measurements, (ii) that it must be capable of producing any amount of a spatial change, and (iii) that it must allow fixing of an origin.

There would be the other usual requirements such as reproducibility etc., though the homogeneity wouldn’t be a requirement. Also observe Ayn Rand’s “some-but-any” principle [^] at work here.

So long as such requirements are met (I thought of it on the fly, but I think I got it fairly well), the physically occurring process (and not some mathematically dreamt up procedure) is a valid candidate to establish the physically existing dimensionality of the space “out there.”

Here is a hypothetical example.

Suppose that there are three knobs, each with a pointer and a scale. Keeping the three knobs at three positions results in a certain point (and only that point) getting mysteriously lit up. Changing the knob positions then means changing which exact point is lit-up—this one or that one. In a way, it means: “moving” the lit-up point from here to there. Then, if to each point in space there exists a unique “permutation” of the three knob readings (and here, by “permutation,” we mean that the order of the readings at the three knobs is important), then the process of turning the knobs qualifies for establishing the dimensionality of the space.

Notice, this hypothetical process does produce a physical effect that involves changes in the spatial measurements, but it does not involve a physical displacement of a physical object. (It’s something like sending two laser beams in the night sky, and being able to focus the point of intersection of the two “rays” at any point in the physical space.)
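To make the knob experiment a bit more concrete, here is a hypothetical sketch (the specific lighting-up rule is entirely my own invention, made up just for illustration). The only essential requirement is that the rule taking the ordered triple of knob readings to the lit-up point be one-to-one:

# A made-up, invertible rule taking knob readings to lit-up points.
# Any fixed one-to-one rule would do; linearity is not at all required.
def lit_point(k1, k2, k3):
    return (2.0 * k1, 2.0 * k2, 2.0 * k3)

readings = [(0, 0, 0), (1, 2, 3), (3, 2, 1)]   # the order matters
points = [lit_point(*r) for r in readings]

# Distinct triples of readings must light up distinct points:
assert len(set(points)) == len(points)
print(points)    # [(0.0, 0.0, 0.0), (2.0, 4.0, 6.0), (6.0, 4.0, 2.0)]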

No one has been able to find any such process which even if only in principle (or in just thought experiments) could go towards establishing a 4-, 2-, or any other number for the dimensionality of the physical space.


I don’t know if my above answer was already known to physicists or not. I think the situation is going to be like this:

If I say that this answer is new, then I am sure that at some “opportune” moment in future, some American is simply going to pop up from nowhere at a forum or so, and write something which implies (or more likely, merely hints) that “everybody knew” it.

But if I say that the answer is old and well-known, and then if some layman comes to me and asks me how come the physicists keep talking as if it can’t be proved whether the space we inhabit is 3-dimensional or not, I would be at a loss to explain it to him—I don’t know a good explanation or a reference that spells out the “well known” solution that “everybody knew already.”

In my (very) limited reading, I haven’t found the point made above; so it could be a new insight. Assuming it is new, what could be the reason that despite its simplicity, physicists didn’t get it so far?

The answer to that question, in essential terms (I’ve really got too tired today), is this:

They define the very idea of space itself via spanning; they don’t first define the concept of space independently of any operation such as spanning, and only then see whether the space is closed under a given spanning operation or not.

In other words, effectively, what they do is to assign the concept of dimensionality to the spanning operation, and not to the space itself.

It is for this reason that discussions on the dimensionality of space remain confused and confusing.


Food for thought:

What does a 2.5-dimensional space mean? Hint: Look up any book on fractals. (For a head start, see the small computation after these questions.)

Why didn’t we consider such a procedure here? (We in fact don’t admit it as a proper procedure.) Hint: We required that it must be possible to conduct the process in the physical reality—which means: the process must come to a completion—which means: it can’t be an infinite (indefinitely long or interminable) process—which means, it can’t be merely mathematical.
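As to that head start: for exactly self-similar sets, the similarity dimension is D = \log N / \log s, where the set splits into N copies, each scaled down by the factor s. A quick computation (the examples are the standard textbook ones):

# Similarity dimension D = log(N)/log(s): N self-similar copies,
# each scaled down by a factor of s.
from math import log

print(log(4) / log(3))   # Koch curve: ~1.2619 (between 1-D and 2-D)
print(log(3) / log(2))   # Sierpinski triangle: ~1.5850
print(log(8) / log(2))   # a solid cube: 8 half-scale copies -> 3.0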

[Now you know why I hate mathematicians. They are the “gap” in our ability to convince someone else. You can convince laymen, engineers and programmers. (You can even convince the girl, the father and the mother.) But mathematicians? Oh God!…]


A Song I Like:

(English) “When she was just seventeen, you know what I mean…”
Band: Beatles


 

[May be an editing pass tomorrow? Too tired today.]

[E&OE]

QM: The physical view it takes—1

So, what exactly is quantum physics like? What is the QM theory all about?

You can approach this question at many levels and from many angles. However, if an engineer were to ask me this question (i.e., an engineer with sufficiently good grasp of mathematics such as differential equations and linear algebra), today, I would answer it in the following way. (I mean only the non-relativistic QM here; relativistic QM is totally beyond me, at least as of today):

Each physics theory takes a certain physical view of the universe, and unless that view can be spelt out in a brief and illuminating manner, anything else that you say about it (e.g. the maths of the theory) tends to become free-floating, even meaningless.

So, when we speak of QM, we have to look for a physical view that is at once both sufficiently accurate and highly meaningful intuitively.

But what do I mean by a physical view? Let me spell it out first in the context of classical mechanics so that you get a sense of that term.

Personally, I like to think of separate stages even within classical mechanics.

Consider first the Newtonian mechanics. We can say that the Newtonian mechanics is all about matter and motion. (Maxwell it was, I think, who characterized it in this beautifully illuminating a way.) Newton’s original mechanics was all about the classical bodies. These were primarily discrete—not quite point particles, but finite ones, with each body confined to a finite and isolated region of space. They had no electrical attributes or features (such as charge, current, or magnetic field strength). But they did possess certain dynamical properties, e.g., location, size, density, mass, speed, and most importantly, momentum—which was, using modern terminology, a vector quantity. The continuum (e.g. a fluid) was seen as an extension of the idea of the discrete bodies, and could be studied by regarding an infinitesimal part of the continuum as if it were a discrete body. The freshly invented tools of calculus allowed Newton to make the transition from the discrete bodies (billiard balls) both to the point-particles (via the shells argument) and to the continuum (e.g. the drag force on a submerged body).

The next stage was the Euler-Lagrange mechanics. This stage represents no new physics—only a new physical view. The E-L mechanics essentially was about the same kind of physical bodies, but now with a number (often somewhat wrongly called a scalar) called energy taken as the truly fundamental dynamical attribute. The maths involved the so-called variations in a global integral expression involving an energy-function (or other expressions similar to energy), but the crucial dynamic variable in the end would be a mere number; the number would be the outcome of evaluating a definite integral. (Historically, the formalism was developed and applied decades before the term energy could be rigorously isolated, and so, the original writings don’t use the expression “energy-function.” In fact, even today, the general practice is to put the theory using only the mathematical and abstract terms of the “Lagrangian” or the “Hamiltonian.”) While Newton’s own mechanics was necessarily about two (or more) discrete bodies locally interacting with each other (think collisions, friction), the Euler-Lagrange mechanics now was about one discrete body interacting with a global field. This global field could be taken to be mass-less. The idea of a global something (it only later on came to be called a field) was already a sharp departure from the original Newtonian mechanics. The motion of the massive body could be predicted using this kind of a formalism—a formalism that probed certain hypothetical variations in the global field (or, more accurately, in the interactions that the global field had with the given body). The body itself was, however, exactly as in the original Newtonian mechanics: discrete (i.e., spread over a definite and delimited region of space), massive, and without any electrical attributes or features.

The next stage, that of the classical electrodynamics, was about the Newtonian massive bodies, but now these were also seen as endowed with the electrical attributes, in addition to the older dynamical attributes of momentum or energy. The global field now became more complicated than the older gravitational field. The magnetic features, initially regarded as attributes primarily different from the electrical ones, later on came to be understood as a mere consequence of the electrical ones. The field concept was now firmly entrenched in physics, even though not always very well understood for what it actually was: a mathematical abstraction. Hence the proliferation in the number of physical aethers. People rightly sought the physical referents for the mathematical abstraction of the field, but they wrongly made hasty concretizations, and that’s how there came to be a number of aethers: an aether of light, an aether of heat, an aether of EM, and so on. Eventually, when the contradictions inherent in the hasty concretizations became apparent, people threw the baby out with the bathwater, and it was not long before Einstein (and perhaps Poincare before him) would wrongly declare the universe to be devoid of any form of aether.

I need to check the original writings by Newton, but from whatever I gather (or compile, perhaps erroneously), I think that Newton had no idea of the field. He did originate the idea of the universal gravitation, but not that of the field of gravity. I think he would have always taken gravity to be a force that was directly operating between two discrete massive bodies, in isolation from anything else—i.e., without anything intervening between them (including any kind of a field). Gravity, a force (instantaneously) operating at a distance, would be regarded as a mere extension of the idea of the force by direct physical contact. To Newton, gravity thus would be the effect of some sort of a stretched spring—a linear element that existed and operated between only the two bodies at its two ends. (The idea of a linear element would become explicit in the lines of force in Faraday’s theorization.) It was just that with gravity, the line-like spring was to be taken as invisible. I don’t know, but that seems like a reasonable implicit view that Newton must have adopted. Thus, the idea of the field, even in its most rudimentary form, probably began only with the advent of the Euler-Lagrange mechanics. It anyway reached its full development in Maxwell’s synthesis of electricity and magnetism into electromagnetism. Remove the notion of the field from Maxwell’s theory, and it is impossible for the theory to even get going. Maxwellian EM cannot at all operate without having a field as an intermediate agency transmitting forces between the interacting massive bodies. On the other hand, Newtonian gravity (at least in its original form and at least for simpler problems) can. In Maxwellian EM, if two bodies suddenly change their relative positions, the rest of the universe comes to feel the change because the field which connects them all has changed. In Newtonian gravity, if two bodies suddenly change their relative positions, each of the other bodies in the universe comes to feel it only because its distances from the two bodies have changed—not because there is a field to mediate that change. Thus, there occurs a very definite change in the underlying physical view in this progression from Newton’s mechanics to Euler-Lagrange-Hamilton’s to Maxwell’s.

So, that’s what I mean by the term: a physical view. It is a view of what kind of objects and interactions are first assumed to exist in the universe, before a physics theory can even begin to describe them—i.e., before any postulates can even begin to be formulated. Let me hasten to add that it is a physical view, and not a philosophical view, even though physicists, and worse, mathematicians, often do confuse the issue and call it a (mere) philosophical discussion (if not a digression). (What better can you expect from mathematicians anyway? Or even from physicists?)

Now, what about quantum mechanics? What kind of objects does it deal with, and what kind of a physical view is required in order to appreciate the theory best?

What kind of objects does QM deal with?

QM once again deals with bodies that do have electromagnetic attributes or features—not just the dynamical ones. However, it now seeks to understand and explain how these features come to operate so that certain experimentally observed phenomena such as the cavity radiation and the gas spectra (i.e., the atomic absorption- and emission-spectra) can be predicted with a quantitative accuracy. In the process, QM keeps the idea of the field more or less intact. (No, strictly speaking it doesn’t, but that’s what physicists think anyway.) However, the development of the theory was such that it had to bring the idea of the spatially delimited massive body, occupying a definite place and traveling via definite paths, into question. (In fact, quantum physicists went overboard and threw it out quite gleefully, without a thought.) So, that is the kind of “objects” it must assume before its theorization can at all begin. Physicists didn’t exactly understand what they were dealing with, and that’s how all its mysteries arose.

Now, how about its physical view?

In my (by now revised) opinion, quantum mechanics basically is all about the electronic orbitals and their evolutions (i.e., changes in the orbitals, with time).

(I am deliberately using the term “electronic” orbital, and not “atomic” orbital. When you say “atom,” you must mean something that is localized—else, you couldn’t possibly distinguish this object from that at the gross scale. But not so when it is the electronic orbitals. The atomic nucleus, at least in the non-relativistic QM, can be taken to be a localized and discrete “particle,” but the orbitals cannot be. Since the orbitals are necessarily global, since they are necessarily spread everywhere, there is no point in associating something local with them, something like the atom. Hence the usage: electronic orbitals, not atomic orbitals.)

The electronic orbital is a field whose governing equation is the second-order linear PDE that is Schrodinger’s equation, and the problems in the theory involve the usual kind of initial-value-boundary-value (IVBV) problems. But a further complexity arises in QM, because the real-valued orbital density isn’t the primary unknown in Schrodinger’s equation; the primary unknown is the complex-valued wavefunction.

The Schrodinger equation itself is basically like the diffusion equation, but since the primary unknown is complex-valued, it ends up showing some of the features of the wave equation. (That’s one reason. The other reason is, the presence of the potential term. But then, the potential here is the electric potential, and so, once again, indirectly, it has got to do with the complex nature of the wavefunction.) Hence the name “wave equation,” and the term “wavefunction.” (The “wavefunction” could very well have been called the “diffusionfunction,” but Schrodinger chose to call it the wavefunction, anyway.) Check it out:

Here is the diffusion equation:

\dfrac{\partial}{\partial t} \phi = D \nabla^2 \phi
Here is the Schrodinger equation:
\dfrac{\partial}{\partial t} \Psi = \dfrac{i\hbar}{2\mu} \nabla^2 \Psi - \dfrac{i}{\hbar} V \Psi
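If you want to see this diffusion-like-yet-wave-like character numerically, here is a minimal free-space sketch of my own (V = 0, with units chosen so that \hbar = \mu = 1 and D = 1). It evolves the same Gaussian initial profile under both equations, using their exact Fourier-space propagators:

# Evolve one and the same initial profile under (i) the diffusion
# equation and (ii) the free-particle Schrodinger equation, via their
# exact Fourier-space propagators. The diffusion profile only decays;
# the complex-valued Psi spreads while its phase oscillates.
import numpy as np

N, L = 1024, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)
f0 = np.exp(-x**2)                     # the common initial profile

t = 1.0
phi = np.fft.ifft(np.fft.fft(f0) * np.exp(-k**2 * t)).real
psi = np.fft.ifft(np.fft.fft(f0) * np.exp(-0.5j * k**2 * t))

print(phi.max())            # a damped, still-Gaussian peak: pure decay
print(np.abs(psi).max())    # |Psi| spreads; np.angle(psi) oscillates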

You can always work with two coupled real-valued equations instead of the single complex-valued Schrodinger equation, but it is mathematically more convenient to deal with it in the complex-valued form. If you were instead to work with the two coupled real-valued equations, they would still end up giving you exactly the same results as the Schrodinger equation. You will still get the Maxwellian EM after conducting suitable grossing out processes. Yes, Schrodinger’s equation must give rise to Maxwell’s equations. The two coupled real-valued equations would give you that (and also everything else that the complex-valued Schrodinger’s equation does). Now, Maxwell’s equations do have an inherent coupling between the electric and magnetic fields. This, incidentally, is the simplest way to understand why the wavefunction must be complex-valued. [From now on, don’t entertain descriptions like: “Why do the amplitudes have to be complex? I don’t know. No one knows. No one can know.” etc.]
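In case you want to see that two-equations form explicitly: write \Psi = u + i\,v (with u, v, and V all real-valued); separating the real and the imaginary parts of the above equation then gives the two coupled real-valued equations:

\dfrac{\partial u}{\partial t} = -\dfrac{\hbar}{2\mu} \nabla^2 v + \dfrac{V}{\hbar}\, v  \\  \dfrac{\partial v}{\partial t} = \dfrac{\hbar}{2\mu} \nabla^2 u - \dfrac{V}{\hbar}\, u

Notice how u and v drive each other’s time-evolution—an inherent coupling of precisely the kind that the electric and magnetic fields show in Maxwell’s equations.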

But yes, speaking in overall terms, QM is, basically, all about the electronic orbitals and the changes in them. That is the physical view QM takes.

Hold that line in your mind any time you hit QM, and it will save you a lot of trouble.

When it comes to the basics or the core (or the “heart”) of QM, physicists will never give you the above answer. They will give you a lot many other answers, but never this one. For instance, Richard Feynman thought that the wave-particle duality (as illustrated by the single-particle double-slit interference arrangement) was the real key to understanding the QM theory. Bohr and Heisenberg instead believed that the primacy of the observables and the uncertainty principle formed the necessary key. Einstein believed that entanglement was the key—and therefore spent his time using this feature of QM to deny completeness to the QM theory. (He was right; QM is not complete. He was not on target, however; entanglement is merely an outcome, not a primary feature, of the QM theory.)

They were all (at least partly) correct, but none of their approaches is truly illuminating—not to an engineer anyway.

They were correct in the sense that these indeed are valid features of QM—and they do form some of the most mystifying aspects of the theory. But they are mystifying only to an intuition that is developed in the classical mechanical mould. In any case, don’t mistake these mystifying features for the basic nature of the core of the theory. Discussions couched in terms of the more mysterious-appearing features have in fact come to complicate the quantum story unnecessarily, not helped simplify it. The actual nature of the theory is much simpler than what physicists have told you.

Just the way the field in the EM theory is not exactly the same kind of a continuum as in the original Newtonian mechanics (e.g., in EM it is mass-less, unlike water), similarly, neither the field nor the massive object of the QM is exactly as in their classical EM descriptions. It can’t be expected to be.

QM is about some new kinds of the ultimate theoretical objects (or building blocks) that especially (but not exclusively) make their peculiarities felt at the microscopic (or atomic) scale. These theoretical objects carry certain properties such that the theoretical objects go on to constitute the observed classical bodies, and their interactions go on to produce the observed classical EM phenomena. However, the new theoretical objects are such that they themselves do not (and cannot be expected to) possess all the features of the classical objects. These new theoretical objects are to be taken as more fundamental than the objects theorized in the classical mechanics. (The physical entities in the classical mechanics are: the classical massive objects and the classical EM field).

Now, this description is quite a handful; it’s not easy to keep in mind. One needs a simpler view so that it can be held and recalled easily. And that simpler view is what I’ve told you already:

To repeat: QM is all about the electronic orbital and the changes it undergoes over time.

Today, most any physics professor would find this view objectionable. He would feel that it is not even a physics-based view, it is a chemistry-based one, even if the unsteady or the transient aspect is present in the formulation. He would feel that the unsteady aspect in the formulation is artificial; it is more or less slapped externally on to the picture of the steady-state orbitals given in the chemistry textbooks, almost as an afterthought of sorts. In any case, it is not physics—that’s what he would be sure of. By that, he would also be sure to mean that this view is not sufficiently mathematical. He might even find it amusing that a physical view of QM can be this intuitively understandable. And then, if you ask him for a sufficiently physics-like view of QM, he would tell you that a certain set of postulates is what constitutes the real core of the QM theory.

Well, the QM postulates indeed are the starting points of QM theory. But they are too abstract to give you an overall feel for what the theory is about. I assert that keeping the orbitals always at the back of your mind helps give you that necessary physical feel.

OK, so, keeping orbitals at the back of the mind, how do we now explain the wave-particle duality in the single-photon double-slit interference experiment?

Let me stop here for this post; I will open my next post on this topic precisely with that question.


A Song I Like:

(Hindi) “ik ajeeb udaasi hai, meraa man_ banawaasi hai…”
Music: Salil Chowdhury
Singer: Sayontoni Mazumdar
Lyrics: (??)

[No, you (very probably) never heard this song before. It comes not from a regular film, but supposedly from a tele-film that goes by the name “Vijaya,” which was produced/directed by one Krishna Raaghav. (I haven’t seen it, but gather that it was based on a novel of the same name by Sharat Chandra Chattopadhyaya. (Bongs, I think, over-estimate this novelist. His other novel is Devadaas. Yes, Devadaas. … Now you know. About the Chattopadhyaya.)) Anyway, as to this song itself, well, Salil-daa’s stamp is absolutely unmistakable. (If the Marathi listener feels that the flute piece appearing at the very beginning somehow sounds familiar, and then recalls the flute in Hridayanath Mangeshkar’s “mogaraa phulalaa,” then I want to point out that it was Hridayanath who once assisted Salil-daa, not the other way around.) IMO, this song is just great. The tune may perhaps sound like the usual ghazal-like tune, but the orchestration—it’s just extraordinary, sensitive, and overall, absolutely superb. This song in fact is one of Salil-daa’s all-time bests, IMO. … I don’t know who penned the lyrics, but they too are great. … Hint: Listen to this song on high-quality head-phones, not on the loud-speakers, and only when you are all alone, all by yourself—and especially as you are nursing your favorite Sundowner—and especially during the times when you are going jobless. … Try it, some such a time…. Take care, and bye for now]

[E&OE]

“Math rules”?

My work (and working) specialization today is computational science and engineering. I have taught FEM, and am currently teaching both FEM and CFD.

However, as it so happens, all my learning of FEM and CFD has been through self-studies. I have never sat in a class-room and learnt these topics that way. Naturally, there were, and are, gaps in my knowledge.


The most efficient way of learning any subject matter is through traditional formal learning—I mean to say, our usual university system. The reason is not just that a teacher is available to teach you the material; even books can do that (and oftentimes, books are actually better than teachers). The real advantage of the usual university education is the existence of those class-mates of yours.

Your class-mates indirectly help you in many ways.

Come the week of those in-semester unit tests, at least in the hostels of Indian engineering schools, everyone suddenly goes into studies mode. In the hostel hallways, you casually pass someone by, and he puts a simple question to you. It is, perhaps, his genuine difficulty. You try to explain it to him, and find that there are some gaps in your own knowledge, too. After a bit of a discussion, someone else joins the discussion, and then you all have to sheepishly go back to the notes or books, or solve a problem together. It helps all of you.

Sometimes, the friend could be even just showing off to you—he wouldn’t ask you a question if he knew you could answer it. You begin answering in your usual magnificently nonchalant manner, and soon reach the end of your wits. (An XI-standard example: If the gravitational potential inside the earth is constant, how come a ball dropped in a well falls down? [That is your friend’s question, just to tempt you in the wrong direction all the way through.]… And what would happen if there is a bore all through the earth’s volume, assuming that the earth’s core is solid all the way through?) His showing off helps you.

No, not everyone works in this friendly a mode. But enough of them do, so that one gets [too] used to this way of studying/learning.

And, it is this way of studying which is absent not only in the learning by pure self-studies alone, but also in those online/MOOC courses. That is the reason why NPTEL videos, even if downloaded and available on the local college LAN, never get referred to by individual students working in complete isolation. Students more or less always browse them in groups even if sitting on different terminals (and they watch those videos only during the examination weeks!)

Personally, I had got [perhaps excessively] used to this mode of learning. [Since my Objectivist learning has begun interfering here, let me spell the matter out completely: It’s a mix of two modes: your own studies done in isolation, and also, as an essential second ingredient, your interaction with your class-mates (which, once again, does require the exercise of your individual mind, sure, but the point is: there are others, and the interaction is exposing the holes in your individual understanding).]

It is this mix that I have got too used to. That’s why I have acutely felt the absence of the second ingredient during my studies of FEM and CFD. Of course, blogging fora like iMechanica did help me a lot when it came to FEM, but for CFD, I was more or less purely on my own.

That’s the reason why, even if I am a professor today and am teaching CFD not just to UG but also to PG students, I still don’t expect my knowledge to be as seamlessly integrated as for the other things that I know.

In particular, one such gap came to light recently, and I am going to share my fall—and rise—with you. In all its gloriously stupid details. (Feel absolutely free to leave reading this post—and indeed this blog—any time.)


In CFD, the way I happened to learn it, I first went through the initial parts (the derivations part) in John Anderson, Jr.’s text. Then, skipping the application of FDM in Anderson’s text more or less in its entirety, I went straight to Versteeg and Malalasekera. Also Jayathi Murthy’s notes at Purdue. As is my habit, I was also flipping through Ferziger+Peric, and also some other books in the process, but to a minor extent. The reason for skipping the rest of the portion in Anderson was, I had gathered that FVM is the in-thing these days. OpenFOAM was already available, and its literature was all couched in terms of FVM, and so it was important to know FVM. Further, I could also see the limitations of FDM (like the requirement of a structured Cartesian mesh, or special mesh mappings, etc.)

Overall, then, I had never read through the FDM modeling of Navier-Stokes until very recent times. The Pune University syllabus still requires you to do FDM, and I thus began reading through the FDM parts of Anderson’s text only a couple of months ago.

It was when I had to run the FDM Python code for the convection-diffusion equation that a certain lacuna in my understanding became clear to me.


Consider the convection-diffusion equation, as given in Prof. Lorena Barba’s Step No. 8, here [^]:

\dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + v \dfrac{\partial u}{\partial y} = \nu \; \left(\dfrac{\partial ^2 u}{\partial x^2} + \dfrac{\partial ^2 u}{\partial y^2}\right)  \\  \dfrac{\partial v}{\partial t} + u \dfrac{\partial v}{\partial x} + v \dfrac{\partial v}{\partial y} = \nu \; \left(\dfrac{\partial ^2 v}{\partial x^2} + \dfrac{\partial ^2 v}{\partial y^2}\right)

I had never before actually gone through these equations until last week. Really.

That’s because I had always approached the convection-diffusion system via FVM, where the equation would be put using the Eulerian frame, and it therefore would read something like the following (using the compact vector/tensor notation):

\dfrac{\partial}{\partial t}(\rho \phi) +  \nabla \cdot (\rho \vec{u} \phi)  = \nabla \cdot (\Gamma \nabla \phi) + S
for the generic quantity \phi.

For the momentum equations, we substitute \vec{u} in place of \phi, \mu in place of \Gamma, and S_u - \nabla P in place of S, and the equation begins to read:
\dfrac{\partial}{\partial t}(\rho \vec{u}) +  \nabla \cdot (\rho \vec{u} \otimes \vec{u})  = \nabla \cdot (\mu \nabla \vec{u}) - \nabla P + S_u

For an incompressible flow of a Newtonian fluid, the equation reduces to:

\dfrac{\partial}{\partial t}(\vec{u}) +  \nabla \cdot (\vec{u} \otimes \vec{u})  = \nu \nabla^2 \vec{u} - \dfrac{1}{\rho} \nabla P + \dfrac{1}{\rho} S_u

This was the framework—the Eulerian framework—which I had worked with.

Whenever I went through the literature mentioning FDM for the NS equations (e.g. the computer graphics papers on fluids), I more or less used to skip looking at the maths sections, simply because there is such a variety of ways of reporting the NS equations, and those initial sections of the papers all cover the same background material. (Ferziger and Peric, I recall off-hand, mention some 72 ways of writing down the NS equations.) The meat of the paper comes only later.


The trouble occurred when, last week, I began really reading through (as in contrast to rapidly glancing over) Barba’s Step No. 8 as mentioned above. Let me copy-paste the convection-diffusion equations once again here, for ease of reference.

\dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + v \dfrac{\partial u}{\partial y} = \nu \; \left(\dfrac{\partial ^2 u}{\partial x^2} + \dfrac{\partial ^2 u}{\partial y^2}\right)  \\  \dfrac{\partial v}{\partial t} + u \dfrac{\partial v}{\partial x} + v \dfrac{\partial v}{\partial y} = \nu \; \left(\dfrac{\partial ^2 v}{\partial x^2} + \dfrac{\partial ^2 v}{\partial y^2}\right)

Look at the left hand-side (LHS for short). What do you see?

What I saw was an application of the following operator—an operator that appears only in the Lagrangian framework:

\dfrac{\partial}{\partial t} + (\vec{u} \cdot \nabla)

Clearly, according to what I saw, the left hand-side of the convection-diffusion equation, as written above, is nothing but this operator, as applied to \vec{u}.

And with that “vision,” began my fall.

“How can she use the Lagrangian expression if she is going to use a fixed Cartesian—i.e. Eulerian—grid? After all, she is doing FDM here, isn’t she?” I wondered.

If it were to be a computer graphics paper using FDM, I would have skipped over it, presuming that they would surely transform this equation to the Eulerian form some time later on. But, here, I was dealing with a resource for the core engineering branches (like mech/aero/met./chem./etc.), and I also had a lab right this week to cover this topic. I couldn’t skip over it; I had to read it in detail. I knew that Prof. Barba couldn’t possibly make a mistake like that. But, in this lesson, even right up to the Python code (which I read for the first time only last week), there wasn’t even a hint of a transformation to the Eulerian frame. (Initially, I even did a search on the string “Euler” on that page; no beans.)

There must be some reason to it, I thought. Still stuck with reading a Lagrangian frame for the equation, I then tried to imagine a reasonable interpretation:

Suppose there is one material particle at each of the FDM grid nodes? What would happen with time? Simplify the problem all the way down. Suppose the velocity field is already specified at each node as the initial condition, and we are concerned only with its time-evolution. What would happen with time? The particles would leave their initial nodal positions, and get advected/diffused away. In a single time-step, they would reach their new spatial positions. If the problem data are arbitrary, their positions at the end of the first time-step wouldn’t necessarily coincide with grid points. If so, how can she begin her next time iteration starting from the same grid points?

I had got stuck.

I thought through it twice, but with the same result. I searched through her other steps. (Idly browsing, I even looked up her CV: PhD from CalTech. “No, she couldn’t possibly be skipping over the transformation,” I distinctly remember telling myself for the nth time.)

Faced with a seemingly unyielding obstacle, I had to fall back on to my default mode of learning—viz., the “mix.” In other words, I had to talk about it with someone—anyone—anyone who would have enough context. But no one was available. The past couple of days being holidays at our college, I was at home, and thus couldn’t even catch hold of my poor UG students.

But talking, I had to do. Finally, I decided to ask someone about it by email, and so, looked up the email ID of a CFD expert, and asked him if he could help me with something that is [and I quote] “seemingly very, very simple (conceptual) matter” which “stumps me. It is concerned with the application of Lagrangian vs. Eulerian frameworks. It seems that the answer must be very simple, but somehow the issue is not clicking-in together or falling together in place in the right way, for me.” That was yesterday morning.

It being a week-end, his reply came fairly rapidly, by yesterday afternoon (I re-checked emails at around 1:30 PM); he had graciously agreed to help me. And so, I rapidly wrote up a LaTeX document (for equations) and sent it to him as soon as I could. That was yesterday, around 3:00 PM. Satisfied that finally I was talking to someone, I had a late lunch, and then crashed for a nice siesta. … Holidays are niiiiiiiceeeee….

Waking up at around 5:00 PM, the first thing I did, while sipping a cup of tea, was to check up on the emails: no reply from him. Not expected this soon anyway.

Still lingering in the daze of that late lunch and the siesta, idly, I had a second look at the attached document which I had sent. In that problem-document, I had tried to make the comparison as easy for the receiver to see as possible, and so, I had taken care to write down the particular form of the equation that I was looking for:

\dfrac{\partial u}{\partial t} + \dfrac{\partial u^2}{\partial x} + \dfrac{\partial uv}{\partial y} = \nu \; \left(\dfrac{\partial ^2 u}{\partial x^2} + \dfrac{\partial ^2 u}{\partial y^2}\right)  \\  \dfrac{\partial v}{\partial t} + \dfrac{\partial uv}{\partial x} + \dfrac{\partial v^2}{\partial y} = \nu \; \left(\dfrac{\partial ^2 v}{\partial x^2} + \dfrac{\partial ^2 v}{\partial y^2}\right)

“Uh… But why would I keep the product terms u^2 inside the finite difference operator?” I now asked myself, still in the lingering haze of the siesta. “Wouldn’t it complicate, say, specifying boundary conditions and all?” I was trying to pick up my thinking speed. Still yawning, I idly took a piece of paper, and began jotting down the equations.

And suddenly, way before writing down the very brief working-out by hand, the issue had become clear to me.

Immediately, I made myself another cup of tea, and while still sipping it, launched TeXMaker, wrote another document explaining the nature of my mistake, and attached it to a new email to the expert. “I got it” was the subject line of the new email I wrote. Hitting the “Send” button, I noticed what time it was: around 7 PM.

Here is the “development” I had noted in that document:

Start with the equation for momentum along the x-axis, expressed in the Eulerian (conservation) form:

\dfrac{\partial u}{\partial t} + \dfrac{\partial u^2}{\partial x} + \dfrac{\partial uv}{\partial y} = \nu \; \left(\dfrac{\partial ^2 u}{\partial x^2} + \dfrac{\partial ^2 u}{\partial y^2}\right)

Consider only the left hand-side (LHS for short). Instead of treating the product terms u^2 and uv as final variables to be discretized immediately, use the product rule of calculus in the same Eulerian frame, rearrange, and apply the zero-divergence property for the incompressible flow:

\text{LHS} = \dfrac{\partial u}{\partial t} + \dfrac{\partial u^2}{\partial x} + \dfrac{\partial uv}{\partial y}  \\  = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + u\dfrac{\partial u}{\partial x} + u \dfrac{\partial v}{\partial y} + v \dfrac{\partial u}{\partial y}  \\  = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + u \left[\dfrac{\partial u}{\partial x} + \dfrac{\partial v}{\partial y} \right] + v \dfrac{\partial u}{\partial y}  \\  = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + u \left[ 0 \right] + v \dfrac{\partial u}{\partial y}; \qquad\qquad \because \nabla \cdot \vec{u} = 0 \text{~if~} \rho = \text{~constant}  \\  = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + v \dfrac{\partial u}{\partial y}

We have remained in the Eulerian frame throughout these steps, but the final equation which we got in the end, happens to be identical in its terms to that for the Lagrangian frame—when the flow is incompressible.
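Just to be doubly sure, here is a two-minute symbolic check of that development (a sketch of my own; it merely restates the steps above): the conservation-form LHS and the advective-form LHS differ by exactly u times the divergence of the velocity, and so they coincide when the divergence vanishes:

# Check: (conservation-form LHS) - (advective-form LHS) = u * (div u),
# so the two forms coincide exactly when the flow is incompressible.
import sympy as sp

x, y, t = sp.symbols('x y t')
u = sp.Function('u')(x, y, t)
v = sp.Function('v')(x, y, t)

lhs_conservation = sp.diff(u, t) + sp.diff(u**2, x) + sp.diff(u*v, y)
lhs_advective = sp.diff(u, t) + u*sp.diff(u, x) + v*sp.diff(u, y)
divergence = sp.diff(u, x) + sp.diff(v, y)

print(sp.simplify(lhs_conservation - lhs_advective - u*divergence))  # 0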

For a compressible flow, the equations should continue looking different, because \rho would be a variable, and so would have to be accounted for with a further application of the product rule, in evaluating \frac{\partial}{\partial t}(\rho u), \frac{\partial}{\partial x}(\rho u^2) and \frac{\partial}{\partial x}(\rho uv) etc.

But as it so happens, for the current case, even if the final equations look exactly the same, we should not supply the same physical imagination. We don’t imagine the Lagrangian particles at nodes. Our imagination continues remaining Eulerian throughout the development, with our focus not on the advected particles’ positions but on the flow variables u and v at the definite (fixed) points in space.


Sometimes, just expressing your problem to someone else itself pulls you out of your previous mental frame, and that by itself makes the problem disappear—in other words, the problem gets solved without your “solving” it. But to do that, you need someone else to talk to!


But how could I make such stupid and simple a mistake, you ask? This is something even a UG student at an IIT would be expected to know! [Whether they always do, or not, is a separate issue.]

Two reasons:

First: As I said, there are gaps in my knowledge of computational mechanics. More gaps than you would otherwise expect, simply because I had never had class-mates with whom to discuss my learning of computational mechanics, esp. CFD.

Second: I was getting deeper into the SPH in the recent weeks, and thus was biased to read only the Lagrangian framework if I saw that expression.

And a third, more minor reason: One tends to be casual with the online resources. “Hey it is available online already. I could reuse it in a jiffy, if I want.” Saying that always, and indefinitely postponing actually reading through it. That’s the third reason.


And if I could make so stupid a mistake, and hold it for such a long time (a day or so), how could I then see through it, even if only eventually?

One reason: Crucial to that development is the observation that the divergence of velocity is zero for an incompressible flow. My mind was trained to look for it because, even though the Pune University syllabus explicitly states that derivations will not be asked in the examinations, just for the sake of solidity in the students’ understanding, I had worked through all the details of all the derivations in my class. During those routine derivations, you do use this crucial property in simplifying the NS equations, but on the right hand-side, i.e., for the surface-forces term, in simplifying for the Newtonian fluid. Anderson does not work it out fully [see his p. 66], nor do Versteeg and Malalasekera, but I anyway had, in my class… It was easy enough to spot the same pattern—even before jotting it down on paper—once it began appearing on the left hand-side of the same equation.

Hard-work pays off—if not today, tomorrow.


CFD books always emphasize the idea that the 4 combinations produced by (i) differential-vs-integral forms and (ii) Lagrangian-vs-Eulerian forms all look different, and yet, they still are the same. Books like Anderson’s take special pains to emphasize this point. Yes, in a way, all equations are the same: all the four mathematical forms express the same physical principle.

But seen from another perspective, here is an example of two equations which look exactly the same in every respect, but in fact aren’t to be viewed as such. One way of reading this equation is to imagine inter-connected material particles getting advected according to that equation in their local framework. Another way of reading exactly the same equation is to imagine a fluid flowing past those fixed FDM nodes, with only the nodal flow properties changing according to that equation.

Exactly the same maths (i.e. the same equation), and of course, also the same physical principle, but a different physical imagination.
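To make that second reading concrete, here is a bare-bones sketch of one FDM time-step for the u-equation (this is not Prof. Barba’s code; it is an FTCS-style update of my own, with backward differences for the convective terms and central differences for the diffusive ones). Notice that the grid points themselves never move; only the nodal values of u get updated:

# One explicit time-step of the 2-D convection-diffusion u-equation on
# a fixed Cartesian grid; the nodes stay put, only u at the nodes changes.
import numpy as np

nx = ny = 41
dx = dy = 2.0 / (nx - 1)
nu, dt = 0.05, 1.0e-4

u = np.ones((ny, nx)); v = np.ones((ny, nx))
u[10:20, 10:20] = 2.0                  # an arbitrary initial condition

un, vn = u.copy(), v.copy()
u[1:-1, 1:-1] = (un[1:-1, 1:-1]
    - dt/dx * un[1:-1, 1:-1] * (un[1:-1, 1:-1] - un[1:-1, 0:-2])
    - dt/dy * vn[1:-1, 1:-1] * (un[1:-1, 1:-1] - un[0:-2, 1:-1])
    + nu*dt/dx**2 * (un[1:-1, 2:] - 2*un[1:-1, 1:-1] + un[1:-1, 0:-2])
    + nu*dt/dy**2 * (un[2:, 1:-1] - 2*un[1:-1, 1:-1] + un[0:-2, 1:-1]))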

And you want to tell me “math [sic] rules?”


A Song I Like:

(Hindi) “jaag dil-e-deewaanaa, rut jaagee…”
Singer: Mohamad Rafi
Music: Chitragupt
Lyrics: Majrooh Sultanpuri

[As usual, may be another editing pass…]

[E&OE]

A bit about the Dirac delta (and the SPH)

I have been thinking about (and also reading on!) SPH recently.

“SPH” here means: Smoothed Particle Hydrodynamics. Here is the Wiki article on SPH [^] if all you want is to gain some preliminary idea (or better still, if that’s your purpose, just check out some nice YouTube videos after googling on the full form of the term).


If you wish to know the internals of SPH in a better way: The SPH literature is fairly large, but a lot of it also happens to be in the public domain. Here are a few references:

  • A neat presentation by Maneti [^]
  • Micky Kelager’s project report listed here [^]. The PDF file is here [(5.4 MB) ^]
  • Also check out Cossins for a more in-depth working out of the maths [^].
  • The 1992 review by Monaghan himself also is easily traceable on the ‘net
  • The draft of a published book [(large .PDF file, 107 MB) ^] by William Hoover; this link is listed right on his home page [^]. Also check out another book on molecular dynamics which he has written and also put in the public domain.

For gentler introductions to SPH that come with pseudo-code, check out:

  • Browne and Lewinder [(.PDF, 5.2 MB) ^], and
  • David Bindel’s notes [(.PDF, ) ^].

I have left out several excellent introductory articles/slides by others, e.g. by Matthias Müller (and may expand on this list a day or two later).


The SPH theory begins with the identity:

f(x) = \int\limits_{\Omega} \text{d}\Omega_{x'}\,f(x')\,\delta(x- x')

where \delta(x- x') is Dirac’s delta, and x' is not a derivative of x but a dummy variable mimicking x; for a diagrammatic illustration, see Maneti’s slides mentioned above.
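A quick numerical way to see this identity at work (a sketch of my own, not taken from the references above): soften the delta into a narrow, normalized Gaussian kernel W(r, h)—the “smoothing” of SPH—and watch the integral reproduce f(x) ever more closely as the smoothing length h shrinks:

# The SPH starting identity, with the delta softened into a Gaussian
# kernel W(r, h); the quadrature tends to f(x0) as h -> 0.
import numpy as np

def W(r, h):
    # 1-D Gaussian smoothing kernel, normalized to a unit integral
    return np.exp(-(r / h)**2) / (h * np.sqrt(np.pi))

f = lambda s: np.sin(s)                 # any smooth test function
xp = np.linspace(-10.0, 10.0, 20001)    # the dummy variable x'
dx = xp[1] - xp[0]
x0 = 1.3                                # the point where we sample f

for h in (1.0, 0.5, 0.1):
    approx = np.sum(f(xp) * W(x0 - xp, h)) * dx
    print(h, approx, np.sin(x0))        # approx -> sin(1.3) as h -> 0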

It is thus in connection with SPH (and not with QM) that I thought of going a little deeper into Dirac’s delta.

After some searches, I found an article by Balki on this topic [^], and knowing the author, immediately sat reading it. [Explanations and clarifications: 1. “Balki” means: Professor Balakrishnan of the Physics department of IIT Madras. 2. I know the author; the author does not know me. 3. Everyone on the campus calls him Balki (though I don’t know if they do that in his presence, too).] The link given here is to a draft version; the final print version is available for free from the Web site of the journal: [^].

A couple of days later, I was trying to arrange in my mind the material for an introductory presentation on SPH. (I was doing that even if no one has invited me yet to deliver it.) It was in this connection that I did some more searches on Dirac’s delta. (I began by going one step “up” the directory tree of the first result and thus landed at this directory [^] maintained by Dr. Pande of IIT Hyderabad [^]. … There is something to be said about keeping your directories browsable if you are going to share the entire content one way or the other; it just makes searching related contents easier!)

Anyway, thus, starting there, my further Google searches yielded the following articles/essays/notes: [^], [^], [^], [^], [^], [^], [^], and [^] . And, of course, the Wiki [^].

As anyone would expect, some common points were of course repeated in each of these references. However, going through the articles/notes, though quite repetitive, didn’t get all that boring to me: each individual brings his own unique way of explaining a certain material, and Dirac’s delta being a concept that is both so subtle and so abstract, any person who dares attempt explaining it cannot help but bring his own individuality to that explanation. (Yes, the concept is subtle. The gifted Hungarian-American mathematician John von Neumann had spent some time showing how Dirac’s notions were mathematically faulty/untenable/not rigorous/something similar. … Happens.)

Anyway, as I expected, Balki’s article turned out to be the easiest and the most understanding-inducing a read among them all! [No, my attending IIT M had nothing to do with this expectation.]

Yet, there remained one minor point which was not addressed very directly in the above-mentioned references—not even by Balki. (Though his treatment is quite clear about the point, he seems to have skipped one small step I think is necessary.) The point I was looking for, is concerned with a more complete answer to this question:

Why is it that the \delta is condemned to live only under an integral sign? Why can’t it have any life of its own, i.e., outside the integral sign?

The question, of course is intimately related to the other peculiar aspects of Dirac’s delta as well. For instance, as the tutorial at Pande’s site points out [^]:

The delta function should not be considered to be an infinitely high spike of zero width, since it scales as: \int_{-\infty}^{\infty} a\,\delta(x)\,\text{d}x = a .

Coming back to the caged life of the poor \delta, all authors give hints, but none jots down all the details of the physical (“intuitive”) reasoning lying behind this peculiar nature of the delta.

Then, imagining as if I am lecturing to an audience of engineering UG students led me to a clue which answers that question—to the detail I wanted to see. I of course don’t know if this clue of mine is mathematically valid or not. … It’s just that I “day-dreamt” one form of a presentation, found that it wouldn’t be hitting the chord with the audience and so altered it a bit, tried “day-dreaming” again, and repeated the process some 3–4 times over the past week. Finally, this morning, I got to the point where I thought I now have got the right clue which can make the idea clearer to the undergraduates of engineering.

I am going to cover that point (the clue which I have) in my next post, which I expect to write, may be, next week-end or so. (If I thought I could write that post without drawing figures, I would have written the answer right away.) Anyway, in the meanwhile, I would like to share all these references on SPH and on Dirac’s delta, and bring the issue (i.e., the question) to your attention.

… No, the point I have in mind isn’t at all a major one. It’s just that it leads to a presentation of the concept that is more direct than what the above references cover. (I can’t better Balki, but I can fill in the gaps in his explanations—at least once in a while.)

Anyway, if you know of any other direct and mathematically valid answers to that question, please point them out to me. Thanks in advance.

 


A Song I Like:

(Marathi) “mana chimba paavasaaLi, jhaaDaat rang ole…”
Music: Kaushal Inamdar
Lyrics: N. D. Mahanor
Singer: Hamsika Iyer

 

[E&OE]