# The One vs. the Many

This post continues from my last post. In that post, I had presented a series of diagrams depicting the states of the universe over time, and I had then asked you a simple question pertaining to the physics of it: what the series depicted, physically speaking.

I had also given an answer to that question, the one which most people would give. It would run something like this:

There are two blocks/objects/entities which are initially moving towards each other. Following their motions, they come closer, touch each other, and then reverse the directions of their motions. Thus, there is a collision of sorts. (We deliberately didn’t go into the maths of it, e.g., into such narrower or more detailed aspects as whether the motions were uniform or whether there were accelerations/decelerations (implying forces), etc.)

I had then told you that the preceding was not the only answer possible. At least one more answer that captures the physics of it is certainly possible. This other answer in fact leads to an entirely different kind of mathematics! I had asked you to think about such alternative(s).

In this post, let me present the alternative description.

The alternative answer is one that school/early college-level text-books never present to students. Neither do the pop-sci. books. However, the alternative approach has been documented, in some form or another, at least for centuries if not for millennia. The topic is routinely taught in the advanced UG and PG courses in physics. However, the university courses always focus on the maths of it, not the physics. The physical ideas are never explicitly discussed in them. The text-books, too, dive straight into the relevant mathematics. The refusal of physicists (and of mathematicians) to dwell on the physical bases of this alternative description is in part responsible for the endless confusion and debates surrounding such issues as quantum entanglement, action at a distance, etc.

There also is another interesting side to it. Some aspects of this kind of thinking are also evident in philosophical/spiritual/religious/theological thought. I am sure that you would immediately notice the resonance with such broader ideas as we subsequently discuss the alternative approach. However, let me stress that, in this post, we focus only on the physics-related issues. Thus, if I at times just say “universe,” it is to be understood that the word pertains only to the physical universe (i.e. the sum total of the inanimate objects, and also the inanimate aspects of living beings), not to any broader, spiritual or philosophical issue.

OK. Now, on to the alternative description itself. It runs something like this:

There is only one physical object which physically exists, and it is the physical universe. The grey blocks that you see in the series of diagrams are not independent objects, really speaking. In this particular depiction, what look like two independent “objects” are, really speaking, only two spatially isolated parts of what actually is one and only one object. In fact, the “empty” or the “white” space you see in between the objects is not, really speaking, empty at all—it does not represent the literal void or the nought, so to speak. The region of space corresponding to the “empty” portions is actually occupied by a physical something. In fact, since there is only one physical object in all existence, it is that same—singleton—physical object which is present also in the apparently empty portions.

This is not to deny that the distinction between the grey and the white/“empty” parts is real. The physically existing distinction between them—the supposed qualitative difference between them—arises only because of some quantitative differences in some property/properties of the universe-object. In other words, the universe does not exist uniformly across all its parts. There are non-uniformities within it, some quantitative differences existing over different parts of itself. Notice, up to this point, we are talking of parts and variations within the universe. Both these words, “parts” and “within,” are to be taken in the broadest possible sense, as in the sense of “logical parts” and “logically within”.

However, one set of physical attributes that the universe carries pertains to the spatial characteristics such as extension and location. A suitable concept of space can therefore be abstracted from these physically existing characteristics. With the concept of space at hand, the physical universe can then be put into an abstract correspondence with a suitable choice of a space.

Thus, what this approach naturally suggests is the idea that we could use a mathematical field-function—i.e. a function of the coordinates of a chosen space—in order to describe the quantitative variations in the properties of the physical universe. For instance, assuming a $1D$ universe, it could be a function that looks something like what the following diagram shows.

Here, the function shows that a certain property (like mass density) exists with a zero measure in the regions of the supposedly empty space, whereas it exists with a finite measure, say with density of $\rho_{g}$ in the grey regions. Notice that if the formalism of a field-function (or a function of a space) is followed, then the property that captures the variations is necessarily a density. Just the way the mass density is the density of mass, similarly, you can have a density of any suitable quantity that is spread over space.

Now, simply because the density function (shown in blue) goes to zero in certain regions, we cannot therefore claim that nothing exists in those regions. The reason is: we can always construct another function that has some non-zero values everywhere, and yet it shows sufficiently sharp differences between different regions.

For instance, we could say that the graph has $\rho_{0} \neq 0$ value in the “empty” region, whereas it has a $\rho_{g}$ value in the interior of the grey regions.

Notice that in the above paragraph, we have subtly introduced two new ideas: (i) some non-zero value, say $\rho_{0}$, as being assigned even to the “empty” region—thereby assigning a “something”, a matter of positive existence, to the “empty”-ness; and (ii) the interface between the grey and the white regions is now asserted to be only “sufficiently” sharp—which means, the function does not take a totally sharp jump from $\rho_{0}$ to $\rho_{g}$ at a single point $x_i$ which identifies the location of the interface. Notice that if the function were to have such a totally sharp jump at a single point, it would not in fact even be a proper function, because there would be an infinity of density values between and including $\rho_{0}$ and $\rho_{g}$ existing at the same point $x_i$. Since the density would not have a unique value at $x_i$, it won’t be a function.

However, we can always replace the infinitely sharp interface of zero thickness by a sufficiently sharp (and not infinitely sharp) interface of a sufficiently small but finite thickness.
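For concreteness, such a finitely thick but sufficiently sharp interface can be sketched numerically. The numbers below are made up, and the `tanh` ramp is just one convenient choice of a smooth profile:

```python
import numpy as np

# A sketch with made-up numbers: a 1D density field with a finitely
# thick, smooth interface between the "empty" and the grey regions.
rho_0 = 0.1   # small but non-zero density assigned to the "empty" region
rho_g = 1.0   # density of the grey region
w = 0.05      # interface half-thickness: the "sufficiently small" scale
x_i = 0.5     # nominal location of the interface

x = np.linspace(0.0, 1.0, 1001)
# tanh gives an infinitely differentiable ramp from rho_0 up to rho_g
rho = rho_0 + 0.5 * (rho_g - rho_0) * (1.0 + np.tanh((x - x_i) / w))
```

The values far to the left sit at $\rho_0$, those far to the right at $\rho_g$, and the density remains single-valued and infinitely differentiable everywhere, including across the interface.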

Essentially, what this trick does is to introduce three types of spatial regions, instead of two: (i) the region of the “empty” space, (ii) the region of the interface, and (iii) the interior, grey, region.

Of course, what we want are only two regions, not three. After all, we need to make a distinction only between the grey and the white regions. Not an issue. We can always club the interface region with either of the remaining two. Here is the mathematical procedure to do it.

Introduce yet another quantitative measure, viz., $\rho_{c}$, called the critical density. Using it, we can in fact divide the interface region into two further parts: one which has $\rho < \rho_c$ and another which has $\rho \geq \rho_c$. This procedure does give us a point-thick locus for the distinction between the grey and the white regions, and yet, the actual changes in the density always remain fully smooth (i.e. the density can remain an infinitely differentiable function).
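The procedure can be sketched numerically as follows (the smooth profile and all the numbers are made up; only the thresholding against $\rho_c$ matters):

```python
import numpy as np

# A fully smooth density profile (made-up numbers), as before:
rho_0, rho_g = 0.1, 1.0
x = np.linspace(0.0, 1.0, 1001)
rho = rho_0 + 0.5 * (rho_g - rho_0) * (1.0 + np.tanh((x - 0.5) / 0.05))

# The critical density: any choice strictly between rho_0 and rho_g works.
rho_c = 0.5 * (rho_0 + rho_g)

is_grey = rho >= rho_c            # exactly two regions, sharply demarcated
i_cross = int(np.argmax(is_grey)) # first grid index where rho crosses rho_c
```

Even though $\rho(x)$ itself nowhere jumps, the comparison against $\rho_c$ partitions the space into exactly two regions, with a point-thick (here, one-grid-cell-thick) locus between them.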

All in all, the property-variation at the interface looks like this:

Indeed, our previous solution of clubbing the interface region into the grey region is nothing but having $\rho_c = \rho_0$, whereas clubbing the interface into the “empty” space region is tantamount to having $\rho_c = \rho_g$.

In any case, we do have a sharp demarcation of regions, and yet, the density remains a continuous function.

We can now claim that such is what the physical reality is actually like; that the depiction presented in the original series of diagrams, consisting of infinitely sharp interfaces, cannot be taken as the reference standard because that depiction itself was just that: a mere depiction, which means: an idealized description. The actual reality never was like that. Our ultimate standard ought to be reality itself. There is no reason why reality should not actually be like what our latter description shows.

This argument does hold. Mankind has never been able to think of a single solid argument against having the latter kind of a description.

Even Euclid had no argument for the infinitely sharp interfaces his geometry implies. Euclid accepted the point, the line and the plane as the already given entities, as axioms. He did not bother himself with locating their meaning in some more fundamental geometrical or mathematical objects or methods.

What can be granted to Euclid can be granted to us. He had some axioms. We don’t believe them. So we will have our own axioms. As part of our axioms, interfaces are only finitely sharp.

Notice that the perceptual evidence remains the same. The difference between the two descriptions pertains to the question of what it is that we primarily regard as object(s). The consideration of the sharpness or the thickness of the interface is only a detail, in the overall scheme.

In the first description, the grey regions are treated as objects in their own right. And there are many such objects.

In the second description, the grey regions are treated not as objects in their own right, but merely as distinguishable (and therefore different) parts of a single object that is the universe. Thus, there is only one object.

So, we now have two alternative descriptions. Which one is correct? And what precisely should we regard as an object anyway? … That, indeed, is a big question! 🙂

More on that question, and the consequences of the answers, in the next post in this series…. In it, I will touch upon the implications of the two descriptions for such things as (a) causality, (b) the issue of the aether—whether it exists and if yes, what its meaning is, and (c) the issue of the local vs. non-local descriptions (and the implications thereof, in turn, for such issues as quantum entanglement), etc. Stay tuned.

A Song I Like:

(Hindi) “kitni akeli kitni tanha see lagi…”
Singer: Lata Mangeshkar
Music: Sachin Dev Burman
Lyrics: Majrooh Sultanpuri

[Maybe one editing pass, later? Maybe. …]

# Introducing a Very Foundational Issue of Physics (and of Maths)

OK, so I am finally done with moving my stuff, and so, from now on, should be able to find at least some time for ‘net activities, including browsing and blogging (not to mention also picking up writing my position paper on QM from where I left it).

Alright, so let me resume my blogging right away by touching on a very foundational aspect of physics (and also of maths).

Before you can even think of building a theory of physics, you must first adopt, implicitly or explicitly, a viewpoint concerning what kind of physical objects are assumed to exist in the physical universe.

For instance, Newtonian mechanics assumes that the physical universe is made from massive and charge-less solid bodies that experience and exert the inter-body forces of gravity and those arising out of their direct contact. In contrast, the later development of the Maxwellian electrodynamics assumes that there are two types of objects: massive and charged solid bodies, and the electromagnetic and gravitational fields which they set up and with which they interact. Last year, I had written a post spelling out the different kinds of physical objects that are assumed to exist in the Newtonian mechanics, in the classical electrodynamics, etc.; see here [^].

In this post, I want to highlight yet another consideration which enters physics at the most fundamental level. Let me illustrate the issue involved via a simple example.

Consider a 2D universe. The following series of diagrams depicts this universe as it exists at different instants of time, from $t_{1}$ through $t_{9}$. Each diagram in the series represents the entire universe.

Assume that the changes in time actually occur continuously; it’s just that while drawing diagrams, we can depict the universe only at isolated (or “discrete”) instants of time.
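For readers without access to the figures, the series can be mimicked in a few lines of code. This is only a toy sketch (the grid width, the block sizes, and the speeds are all made-up numbers), rendering a 1D cross-section of the universe at the nine instants:

```python
def universe_at(k, width=20, block=4):
    """Render the universe at instant k (0..8): two '#' blocks that
    approach, touch at k=4, and then recede, in a field of '.' cells."""
    offset = abs(4 - k)                  # distance of each block from contact
    cells = ['.'] * width
    left_end = width // 2 - 1 - offset   # rightmost cell of the left block
    right_start = width // 2 + offset    # leftmost cell of the right block
    for i in range(left_end - block + 1, left_end + 1):
        cells[i] = '#'
    for i in range(right_start, right_start + block):
        cells[i] = '#'
    return ''.join(cells)

for k in range(9):
    print(f"t{k + 1}: {universe_at(k)}")
```

Printed out, the two blocks approach over $t_1$ to $t_5$, touch at $t_5$, and recede over $t_6$ to $t_9$.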

Now, consider this seemingly very simple question:

What precisely does the above series of diagrams depict, physically speaking?

Can you provide a brief description (say, running into 2–3 lines) as to what is happening here, physics-wise?

At this point, you may perhaps be thinking that the answer is obvious. The answer is so obvious, you could be thinking, that it is very stupid of me to even think of raising such a question.

“Why, of course, what that series of pictures depicts is this: there are two blocks/objects/entities which are initially moving towards each other. Eventually they come so close to each other that they even touch each other. They thus undergo a collision, and as a result, they begin to move apart. … Plain and simple.”

You could be thinking along some lines like that.

But let me warn you, that precisely is your potential pitfall—i.e., thinking that the question is so simple, and the answer so obvious. Actually, as it turns out, there is no unique answer to that question.

That’s why, no matter how dumb the above question may look to you, let me ask you once again to take a moment to think afresh about it. And then, whatever be your answer, write it down. In your answer, try to be as brief and as precise as possible.

I will continue with this issue in my next post, to be written and posted after a few days. I am deliberately taking a break here because I do want you to give it a shot—writing down a precise answer. Unless you actually try out this exercise for yourself, you won’t come to appreciate either of the following two, separate points:

1. how difficult it can be to write very precise answers to what appear to be the simplest of questions, and
2. how unwittingly and subtly some unwarranted assumptions can so easily creep in, in a physical description—and therefore, in mathematics.

You won’t come to appreciate how deceptive this question really is unless you actually give it a try. And it is to ensure this part that I have to take a break here.

Enjoy!

# “Measure for Measure”—a pop-sci video on QM

This post is about a video on QM for the layman. The title of the video is: “Measure for Measure: Quantum Physics and Reality” [^]. It is also available on YouTube, here [^].

I don’t recall precisely where on the ‘net I saw the video being mentioned. Anyway, even though its running time is 01:38:43 (i.e. 1 hour, 38 minutes, making it something like a full-length feature film), I still went ahead, downloaded it and watched it in full. (Yes, I am that interested in QM!)

The video was shot live at an event called “World Science Festival.” I didn’t know about it beforehand, but here is the Wiki on the festival [^], and here is the organizer’s site [^].

The event in the video is something like a panel discussion done on stage, in front of a live audience, by four professors of physics/philosophy. … Actually five, including the moderator.

Brian Greene of Columbia [^] is the moderator. (Apparently, he co-founded the World Science Festival.) The discussion panel itself consists of: (i) David Albert of Columbia [^]. He speaks like a philosopher but seems inclined towards a specific speculative theory of QM, viz. the GRW theory. (He has that peculiar, nasal, New York accent… Reminds you of Dr. Harry Binswanger—I mean, by the accent.) (ii) Sheldon Goldstein of Rutgers [^]. He is a Bohmian, out and out. (iii) Sean Carroll of CalTech [^]. At least in the branch of the infinity of the universes in which this video unfolds, he acts 100% deterministically as an Everettian. (iv) Ruediger Schack of Royal Holloway (the spelling is correct) [^]. I perceive him as a QBist; guess you would, too.

Though the video is something like a panel discussion, it does not begin right away with dudes sitting on chairs and talking to each other. Even before the panel itself assembles on the stage, there is a racy introduction to the quantum riddles, mainly on the wave-particle duality, presented by the moderator himself. (Prof. Greene would easily make for a competent TV evangelist.) This part runs for some 20 minutes or so. Then, even once the panel discussion is in progress, it is sometimes interwoven with a few short visualizations/animations that try to convey the essential ideas of each of the above viewpoints.

I of course don’t agree with any one of these approaches—but then, that is an entirely different story.

Coming back to the video, yes, I do want to recommend it to you. The individual presentations as well as the panel discussions (and comments) are done pretty well, in an engaging and informal way. I did enjoy watching it.

The parts which I perhaps appreciated the most were (i) the comment (near the end) by David Albert, between 01:24:19–01:28:02, esp. near 1:27:20 (“small potatoes”) and, (ii) soon later, another question by Brian Greene and another answer by David Albert, between 01:33:26–01:34:30.

In this second comment, David Albert notes that “the serious discussions of [the foundational issues of QM] … only got started 20 years ago,” even though the questions themselves do go back to about 100 years ago.

That is so true.

The video was recorded recently. About 20 years ago means: from about mid-1990s onwards. Thus, it is only from mid-1990s, Albert observes, that the research atmosphere concerning the foundational issues of QM has changed—he means for the better. I think that is true. Very true.

For instance, when I was in UAB (1990–93), the resistance to attempting even just a small variation to the entrenched mainstream view (which means the Copenhagen interpretation (CI for short)) was so enormous and all-pervading, I mean even in the US/Europe, that I was dead sure that a graduate student like me would never be able to get his nascent ideas on QM published, ever. It therefore came as a big (and a very joyous) surprise to me when my papers on QM actually got accepted (in 2005). … Yes, the attitudes of physicists have changed. Anyway, my point here is, the mainstream view used to be so entrenched back then—just about 20 years ago. The Copenhagen interpretation still was the ruling dogma in those days. Therefore, that remark by Prof. Albert does carry some definite truth.

Prof. Albert’s observation also prompts me to pose a question to you.

What could be the broad social, cultural, technological, economic, or philosophic reasons behind the fact that people (researchers, graduate students) these days don’t feel the same kind of pressure in pursuing new ideas in the field of Foundations of QM? Is the relatively greater ease of publishing papers in foundations of QM, in your opinion, an indication of some negative trends in the culture? Does it show a lowering of the editorial standards? Or is there something positive about this change? Why has it become easier to discuss foundations of QM? What do you think?

I do have my own guess about it, and I would sure like to share it with you. But before I do that, I would very much like to hear from you.

Any guesses? What could be the reason(s) why the serious discussions on foundations of QM might have begun to occur much more freely only after mid-1990s—even though the questions had been raised as early as in 1920s (or earlier)?

Over to you.

Greetings in advance for the Republic Day. I [^] am still jobless.

[E&OE]

# QM: The physical view it takes—1

So, what exactly is quantum physics like? What is the QM theory all about?

You can approach this question at many levels and from many angles. However, if an engineer were to ask me this question (i.e., an engineer with a sufficiently good grasp of mathematics such as differential equations and linear algebra), today, I would answer it in the following way. (I mean only the non-relativistic QM here; relativistic QM is totally beyond me, at least as of today):

Each physics theory takes a certain physical view of the universe, and unless that view can be spelt out in a brief and illuminating manner, anything else that you say about the theory (e.g. its maths) tends to become floating, even meaningless.

So, when we speak of QM, we have to look for a physical view that is at once both sufficiently accurate and highly meaningful intuitively.

But what do I mean by a physical view? Let me spell it out first in the context of classical mechanics so that you get a sense of that term.

Personally, I like to think of separate stages even within classical mechanics.

Consider first the Newtonian mechanics. We can say that the Newtonian mechanics is all about matter and motion. (Maxwell it was, I think, who characterized it in this beautifully illuminating way.) Newton’s original mechanics was all about the classical bodies. These were primarily discrete—not quite point particles, but finite ones, with each body confined to a finite and isolated region of space. They had no electrical attributes or features (such as charge, current, or magnetic field strength). But they did possess certain dynamical properties, e.g., location, size, density, mass, speed, and most importantly, momentum—which was, using modern terminology, a vector quantity. The continuum (e.g. a fluid) was seen as an extension of the idea of the discrete bodies, and could be studied by regarding an infinitesimal part of the continuum as if it were a discrete body. The freshly invented tools of calculus allowed Newton to make the transition from the discrete bodies (billiard balls) both to the point-particles (via the shells argument) and to the continuum (e.g. the drag force on a submerged body).

The next stage was the Euler-Lagrange mechanics. This stage represents no new physics—only a new physical view. The E-L mechanics essentially was about the same kind of physical bodies, but now with a number (often somewhat wrongly called a scalar), called energy, taken as the truly fundamental dynamical attribute. The maths involved the so-called variations in a global integral expression involving an energy-function (or other expressions similar to energy), but the crucial dynamic variable in the end would be a mere number; the number would be the outcome of evaluating a definite integral. (Historically, the formalism was developed and applied decades before the term energy could be rigorously isolated, and so, the original writings don’t use the expression “energy-function.” In fact, even today, the general practice is to put the theory using only the mathematical and abstract terms of the “Lagrangian” or the “Hamiltonian.”) While Newton’s own mechanics was necessarily about two (or more) discrete bodies locally interacting with each other (think collisions, friction), the Euler-Lagrange mechanics now was about one discrete body interacting with a global field. This global field could be taken to be mass-less. The idea of a global something (it only later on came to be called a field) was already a sharp departure from the original Newtonian mechanics. The motion of the massive body could be predicted using this kind of a formalism—a formalism that probed certain hypothetical variations in the global field (or, more accurately, in the interactions that the global field had with the given body). The body itself was, however, exactly as in the original Newtonian mechanics: discrete (i.e. spread over a definite and delimited region of space), massive, and without any electrical attributes or features.
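The variational idea can be shown in miniature. In the toy sketch below (made-up units and numbers; a body of mass m falling under gravity g), the dynamical story is retold not by integrating F = ma step by step, but by asking which path, among nearby paths with the same endpoints, makes the discretized action stationary:

```python
import numpy as np

# Toy numbers: free fall over time T, discretized into n intervals.
m, g, T, n = 1.0, 9.8, 1.0, 1000
t = np.linspace(0.0, T, n + 1)
dt = t[1] - t[0]

def action(x):
    """Discrete action S = sum of (kinetic - potential) * dt over the path."""
    v = np.diff(x) / dt                          # interval velocities
    kinetic = 0.5 * m * v**2
    potential = m * g * 0.5 * (x[:-1] + x[1:])   # midpoint heights
    return float(np.sum((kinetic - potential) * dt))

x_true = -0.5 * g * t**2                         # the Newtonian trajectory
x_pert = x_true + 0.1 * np.sin(np.pi * t / T)    # a nearby path, same endpoints

S_true, S_pert = action(x_true), action(x_pert)
# The actual trajectory has the smaller (stationary) action.
```

The perturbed path shares both endpoints with the true one, yet its action comes out strictly larger; that is the E-L formalism's way of singling out the actual motion.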

The next stage, that of the classical electrodynamics, was about the Newtonian massive bodies, but now these were also seen as endowed with the electrical attributes in addition to the older dynamical attributes of momentum or energy. The global field now became more complicated than the older gravitational field. The magnetic features, initially regarded as attributes primarily different from the electrical ones, later on came to be understood as a mere consequence of the electrical ones. The field concept was now firmly entrenched in physics, even though not always very well understood for what it actually was: a mathematical abstraction. Hence the proliferation in the number of physical aethers. People rightly sought the physical referents for the mathematical abstraction of the field, but they wrongly made hasty concretizations, and that’s how there came to be a number of aethers: an aether of light, an aether of heat, an aether of EM, and so on. Eventually, when the contradictions inherent in the hasty concretizations became apparent, people threw the baby out with the bathwater, and it was not long before Einstein (and perhaps Poincare before him) would wrongly declare the universe to be devoid of any form of aether.

I need to check the original writings by Newton, but from whatever I gather (or compile, perhaps erroneously), I think that Newton had no idea of the field. He did originate the idea of the universal gravitation, but not that of the field of gravity. I think he would have always taken gravity to be a force that was directly operating between two discrete massive bodies, in isolation from anything else—i.e., without anything intervening between them (including any kind of a field). Gravity, a force (instantaneously) operating at a distance, would be regarded as a mere extension of the idea of the force by direct physical contact. Gravity would thus be, to Newton, the effect of some sort of a stretched spring, a linear element that existed and operated between only two bodies, at its two ends. (The idea of a linear element would become explicit in the lines of force in Faraday’s theorization.) It was just that with gravity, the line-like spring was to be taken as invisible. I don’t know, but that seems like a reasonable implicit view for Newton to have adopted. Thus, the idea of the field, even in its most rudimentary form, probably began only with the advent of the Euler-Lagrange mechanics. It anyway reached its full development in Maxwell’s synthesis of electricity and magnetism into electromagnetism. Remove the notion of the field from Maxwell’s theory, and it is impossible for the theory to even get going. Maxwellian EM cannot at all operate without having a field as an intermediate agency transmitting forces between the interacting massive bodies. On the other hand, Newtonian gravity (at least in its original form and at least for simpler problems) can. In Maxwellian EM, if two bodies suddenly change their relative positions, the rest of the universe comes to feel the change because the field which connects them all has changed. 
In Newtonian gravity, if two bodies suddenly change their relative positions, each of the other bodies in the universe comes to feel it only because its distances from the two bodies have changed—not because there is a field to mediate that change. Thus, there occurs a very definite change in the underlying physical view in this progression from Newton’s mechanics to Euler-Lagrange-Hamilton’s to Maxwell’s.
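The Newtonian half of this contrast is easy to make concrete. In the sketch below (toy units; the masses and positions are made-up numbers), the force on a body is recomputed directly from the instantaneous positions of the other bodies. No field variable is stored or updated anywhere, so "feeling" a change just means re-evaluating distances:

```python
import numpy as np

G = 1.0  # gravitational constant in toy units

def newtonian_force(i, positions, masses):
    """Force on body i: a direct sum over the other bodies, using only
    their instantaneous positions; no intervening field appears."""
    f = np.zeros(2)
    for j, (pos_j, m_j) in enumerate(zip(positions, masses)):
        if j == i:
            continue
        r = pos_j - positions[i]
        dist = np.linalg.norm(r)
        f += G * masses[i] * m_j * r / dist**3   # inverse-square attraction
    return f

positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
masses = np.array([1.0, 2.0, 3.0])

f_before = newtonian_force(0, positions, masses)
positions[1] = [2.0, 0.0]   # suddenly move body 1
f_after = newtonian_force(0, positions, masses)
```

Moving body 1 changes the force on body 0 "immediately", simply because the distances in the sum have changed; there is no field state to propagate the news.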

So, that’s what I mean by the term: a physical view. It is a view of what kind of objects and interactions are first assumed to exist in the universe, before a physics theory can even begin to describe them—i.e., before any postulates can even begin to be formulated. Let me hasten to add that it is a physical view, and not a philosophical view, even though physicists, and worse, mathematicians, often do confuse the issue and call it a (mere) philosophical discussion (if not a digression). (What better can you expect from mathematicians anyway? Or even from physicists?)

Now, what about quantum mechanics? What kind of objects does it deal with, and what kind of a physical view is required in order to appreciate the theory best?

What kind of objects does QM deal with?

QM once again deals with bodies that do have electromagnetic attributes or features—not just the dynamical ones. However, it now seeks to understand and explain how these features come to operate so that certain experimentally observed phenomena such as the cavity radiation and the gas spectra (i.e., the atomic absorption- and emission-spectra) can be predicted with quantitative accuracy. In the process, QM keeps the idea of the field more or less intact. (No, strictly speaking, it doesn’t, but that’s what physicists think anyway.) However, the development of the theory was such that it had to bring the idea of the spatially delimited massive body, occupying a definite place and traveling via definite paths, into question. (In fact, quantum physicists went overboard and threw it out quite gleefully, without a thought.) So, that is the kind of “objects” it must assume before its theorization can at all begin. Physicists didn’t exactly understand what they were dealing with, and that’s how all its mysteries arose.

Now, how about its physical view?

In my (by now revised) opinion, quantum mechanics basically is all about the electronic orbitals and their evolutions (i.e., changes in the orbitals, with time).

(I am deliberately using the term “electronic” orbital, and not “atomic” orbital. When you say “atom,” you must mean something that is localized—else, you couldn’t possibly distinguish this object from that at the gross scale. But not so when it is the electronic orbitals. The atomic nucleus, at least in the non-relativistic QM, can be taken to be a localized and discrete “particle,” but the orbitals cannot be. Since the orbitals are necessarily global, since they are necessarily spread everywhere, there is no point in associating something local with them, something like the atom. Hence the usage: electronic orbitals, not atomic orbitals.)

The electronic orbital is a field whose governing equation is the second-order linear PDE that is Schrodinger’s equation, and the problems in the theory involve the usual kind of initial- and boundary-value (IVBV) problems. But a further complexity arises in QM, because the real-valued orbital density isn’t the primary unknown in Schrodinger’s equation; the primary unknown is the complex-valued wavefunction.
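To make the boundary-value side of this concrete, here is a minimal sketch ($\hbar = \mu = 1$, a made-up grid size, and a plain dense finite-difference discretization, not how production codes do it) of the stationary problem for a 1D "particle in a box": the equation is solved for the wavefunction first, and the real-valued orbital density is only derived from it afterwards. (For the box's stationary states the eigenvectors happen to be real; the complex character of the wavefunction matters for the time evolution.)

```python
import numpy as np

# Stationary Schrodinger equation, -(1/2) psi'' = E psi, on a box of
# length L with psi = 0 at the walls (hbar = mass = 1, toy grid size).
L, n = 1.0, 400
h = L / (n + 1)

main = np.full(n, 1.0 / h**2)        # diagonal of -(1/2) * FD Laplacian
off = np.full(n - 1, -0.5 / h**2)    # off-diagonals
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)           # eigenvalues in ascending order
E_exact = np.pi**2 / (2 * L**2)      # analytic ground-state energy
density = psi[:, 0]**2               # the orbital density, derived afterwards
```

The computed ground-state energy lands close to the analytic value $\pi^2/(2L^2)$, and the derived density is symmetric about the middle of the box, as it should be.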

The Schrodinger equation itself is basically like the diffusion equation, but since the primary unknown is complex-valued, it ends up showing some of the features of the wave equation. (That’s one reason. The other reason is the presence of the potential term. But then, the potential here is the electric potential, and so, once again, indirectly, it has got to do with the complex nature of the wavefunction.) Hence the name “wave equation,” and the term “wavefunction.” (The “wavefunction” could very well have been called the “diffusion-function,” but Schrodinger chose to call it the wavefunction, anyway.) Check it out:

Here is the diffusion equation:

$\dfrac{\partial \phi}{\partial t} = D \nabla^2 \phi$

Here is the Schrodinger equation:

$\dfrac{\partial \Psi}{\partial t} = \dfrac{i\hbar}{2\mu} \nabla^2 \Psi - \dfrac{i}{\hbar} V \Psi$

You can always work with two coupled real-valued equations instead of the single, complex-valued, Schrodinger’s equation, but it is mathematically more convenient to deal with it in the complex-valued form. If you were instead to work with the two coupled real-valued equations, they would still end up giving you exactly the same results as the Schrodinger equation. You would still get the Maxwellian EM after conducting suitable grossing-out (averaging) processes. Yes, Schrodinger’s equation must give rise to Maxwell’s equations. The two coupled real-valued equations would give you that (and also everything else that the complex-valued Schrodinger’s equation does). Now, Maxwell’s equations do have an inherent coupling between the electric and magnetic fields. This, incidentally, is the simplest way to understand why the wavefunction must be complex-valued. [From now on, don’t entertain descriptions like: “Why do the amplitudes have to be complex? I don’t know. No one knows. No one can know.” etc.]
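This equivalence is easy to check numerically. With $\hbar = \mu = 1$ the complex equation reads $\partial \Psi/\partial t = \tfrac{i}{2}\nabla^2 \Psi - iV\Psi$; substituting $\Psi = u + iv$ and separating the real and imaginary parts gives the pair $\partial u/\partial t = -\tfrac{1}{2}\nabla^2 v + Vv$ and $\partial v/\partial t = \tfrac{1}{2}\nabla^2 u - Vu$. The sketch below (a made-up toy problem on a periodic grid, with naive forward-Euler stepping used only so the two forms can be compared step for step; it is not a good integrator) confirms that the real-valued pair tracks the complex form:

```python
import numpy as np

def lap(f, h):
    """Periodic finite-difference Laplacian in 1D."""
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / h**2

# Toy setup: hbar = mu = 1, made-up grid spacing, time step, and potential.
n, h, dt = 64, 0.1, 1e-4
x = np.arange(n) * h
V = 0.5 * (x - x.mean())**2                  # some real-valued potential
psi = np.exp(-(x - x.mean())**2) * (1 + 0j)  # complex-valued wavefunction
u, v = psi.real.copy(), psi.imag.copy()      # the two real unknowns

for _ in range(50):
    # single complex-valued equation: dpsi/dt = (i/2) lap(psi) - i V psi
    psi = psi + dt * (0.5j * lap(psi, h) - 1j * V * psi)
    # the equivalent pair of coupled real-valued equations (psi = u + i v)
    du = dt * (-0.5 * lap(v, h) + V * v)
    dv = dt * (0.5 * lap(u, h) - V * u)
    u, v = u + du, v + dv
```

After any number of identical steps, `u` and `v` coincide with the real and imaginary parts of `psi`: the complex form is a notational convenience, not extra physics.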

But yes, speaking in overall terms, QM is, basically, all about the electronic orbitals and the changes in them. That is the physical view QM takes.

Hold that line in your mind any time you hit QM, and it will save you a lot of trouble.

When it comes to the basics or the core (or the “heart”) of QM, physicists will never give you the above answer. They will give you a lot many other answers, but never this one. For instance, Richard Feynman thought that the wave-particle duality (as illustrated by the single-particle double-slit interference arrangement) was the real key to understanding the QM theory. Bohr and Heisenberg instead believed that the primacy of the observables and the principle of the uncertainty formed the necessary key. Einstein believed that entanglement was the key—and therefore spent his time using this feature of the QM to deny completeness to the QM theory. (He was right; QM is not complete. He was not on the target, however; entanglement is merely an outcome, not a primary feature of the QM theory.)

They were all (at least partly) correct, but none of their approaches is truly illuminating—not to an engineer anyway.

They were correct in the sense, these indeed are valid features of QM—and they do form some of the most mystifying aspects of the theory. But they are mystifying only to an intuition that is developed in the classical mechanical mould. In any case, don’t mistake these mystifying features for the basic nature of the core of the theory. Discussions couched in terms of the more mysterious-appearing features in fact have come to complicate the quantum story unnecessarily; not helped simplify it. The actual nature of the theory is much more simple than what physicists have told you.

Just the way the field in the EM theory is not exactly the same kind of a continuum as in the original Newtonian mechanics (e.g., in EM it is mass-less, unlike water), similarly, neither the field nor the massive object of the QM is exactly as in their classical EM descriptions. It can’t be expected to be.

QM is about some new kinds of the ultimate theoretical objects (or building blocks) that especially (but not exclusively) make their peculiarities felt at the microscopic (or atomic) scale. These theoretical objects carry certain properties such that the theoretical objects go on to constitute the observed classical bodies, and their interactions go on to produce the observed classical EM phenomena. However, the new theoretical objects are such that they themselves do not (and cannot be expected to) possess all the features of the classical objects. These new theoretical objects are to be taken as more fundamental than the objects theorized in the classical mechanics. (The physical entities in the classical mechanics are: the classical massive objects and the classical EM field).

Now, this description is quite handful; it’s not easy to keep in mind. One needs a simpler view so that it can be held and recalled easily. And that simpler view is what I’ve told you already:

To repeat: QM is all about the electronic orbital and the changes it undergoes over time.

Today, most any physics professor would find this view objectionable. He would feel that it is not even a physics-based view, it is a chemistry-based one, even if the unsteady or the transient aspect is present in the formulation. He would feel that the unsteady aspect in the formulation is artificial; it is more or less slapped externally on to the picture of the steady-state orbitals given in the chemistry textbooks, almost as an afterthought of sorts. In any case, it is not physics—that’s what he would be sure of. By that, he would also be sure to mean that this view is not sufficiently mathematical. He might even find it amusing that a physical view of QM can be this intuitively understandable. And then, if you ask him for a sufficiently physics-like view of QM, he would tell you that a certain set of postulates is what constitutes the real core of the QM theory.

Well, the QM postulates indeed are the starting points of QM theory. But they are too abstract to give you an overall feel for what the theory is about. I assert that keeping the orbitals always at the back of your mind helps give you that necessary physical feel.

OK, so, keeping orbitals at the back of the mind, how do we now explain the wave-particle duality in the single-photon double-slit interference experiment?

Let me stop here for this post; I will open my next post on this topic precisely with that question.

A Song I Like:

(Hindi) “ik ajeeb udaasi hai, meraa man_ banawaasi hai…”
Music: Salil Chowdhury
Singer: Sayontoni Mazumdar
Lyrics: (??)

[No, you (very probably) never heard this song before. It comes not from a regular film, but supposedly from a tele-film that goes by the name “Vijaya,” which was produced/directed by one Krishna Raaghav. (I haven’t seen it, but gather that it was based on a novel of the same name by Sharat Chandra Chattopadhyaya. (Bongs, I think, over-estimate this novelist. His other novel is Devadaas. Yes, Devadaas. … Now you know. About the Chattopadhyaya.)) Anyway, as to this song itself, well, Salil-daa’s stamp is absolutely unmistakable. (If the Marathi listener feels that the flute piece appearing at the very beginning somehow sounds familiar, and then recalls the flute in Hridayanath Mangeshkar’s “mogaraa phulalaa,” then I want to point out that it was Hridayanath who once assisted Salil-daa, not the other way around.) IMO, this song is just great. The tune may perhaps sound like the usual ghazal-like tune, but the orchestration—it’s just extraordinary, sensitive, and overall, absolutely superb. This song in fact is one of Salil-daa’s all-time bests, IMO. … I don’t know who penned the lyrics, but they too are great. … Hint: Listen to this song on high-quality head-phones, not on the loud-speakers, and only when you are all alone, all by yourself—and especially as you are nursing your favorite Sundowner—and especially during the times when you are going jobless. … Try it, some such a time…. Take care, and bye for now]

[E&OE]

/