A series of posts on a few series of tweets (by me) on (my research on foundations of) QM—1

0. Initial remarks:

OK. It’s been a little while since I wrote my last post here.

Actually, it so happened that for a while after my last post I didn’t find anything well suited for writing a blog-post. I was also busy studying topics from Data Science. It’s true that during this time I did make a few comments at others’ blogs, but these were pretty context-specific. I couldn’t easily think of making a (more general-purpose) post out of them.

At the same time, some of the things that I read on QM—whether in pop-sci books or at others’ blogs—did prompt me to note a few comments. These were very brief points. They fit better only as tweets—as side-remarks made in passing. So, I tweeted them. My twitter page is here [^].

… I now realize that quite a few such tweets (on QM) have accumulated. So it’s high time these occasional notings got moved here too, together with some explanation to go with them. That’s precisely what I am going to do now, in this series of posts.

Most of these points (from the tweets) refer to my Outline document on QM which was posted at iMechanica about 6 months ago [^]. The tweets wouldn’t make any sense to someone who hasn’t thoroughly gone through this document first. So, I do assume this context here.

In fact, most of these tweets are rather direct implications of what I had already noted in the Outline document. These points (from the tweets) were quite clear to me even back then, when I wrote the document.

However, while writing that document, my purpose was, first and foremost, to state the most salient building blocks and points of the theory and to focus on the overall way in which they connect together. Thus, what I wanted to give, via that document, was a definitive sense of the overall framework—hopefully in a logically complete manner. I was in fact worried a bit that some parts of these complex considerations might slip out of my mind once again, as they had done in the past (before I wrote that document!). [In retrospect, I think that on this count, I did a pretty good job in the Outline document. I haven’t been able to think of a really essential part of the framework which I had in mind and which inadvertently got left out of it.]

Another reason I didn’t go into detailed implications right in that document was this: I also thought that anyone who knows the mainstream QM well, and also “gets” the logic given in my document well, would be able to very easily reach these further inferences completely on his own—for instance, my position on the wave-particle duality. So, I didn’t separately mention such points in that document even if I knew that points like these would be of much greater interest to the layman. The Outline, simple though it looks, was definitely not written for the layman. (I tried to keep the exposition as simple as possible, in part because I didn’t care to be seen as a respectable physicist anyway. All that I was concerned about was QM, and the new conceptual framework.)

So, all in all, it’s not an accident that I should be touching on many points like the wave-particle duality only later on, first via tweets! These really are only implications / consequences.

Anyway, here in this series we now go with these tweets of mine (made over the past month). While reproducing them here, I have expanded the short-forms and abbreviations, and have also added a few additional bits of content, just to get more streamlined sentences. Each tweet is then followed by some explanation, which very rapidly became very long—long enough that I couldn’t possibly compress all the QM-related material (tweets and my explanations of them) into a single post. So, I have no choice but to make a series of them!


1. Schemes for nonlinear QM proposed by others:

I tweeted on 12 July 2019 to this effect:

“‘Schrödinger eqn. revisited’ by Schleich et al. [(.PDF) ^]. Yes, it presents a nonlinearity. But no, it doesn’t even consider the physical fact that all the potentials in reality come about only from superpositions of the singular potentials of individual electrons and protons. See my Outline document.”

Indeed, what I said here applies to each and every nonlinearity-based argument (except for mine!) which has ever been offered by way of attempting a resolution to the riddles of QM—in particular, the measurement problem.

To quote from Ian Stewart’s book “Does God Play Dice?” (2/e), several people have proposed nonlinear theories, including:

“L. Diosi, N. Gisin, G. C. Ghirardi, R. Grassi, P. Pearle, A. Rimini, and I. Percival.”

I had very briefly gone through some of these proposals. Actually, I had mostly got to know about their proposals by reading the descriptions and remarks that other commentators had made on them. However, at times, I also went rapidly browsing through some of their arXiv papers. I had come to the conclusion that what they were putting forth wasn’t anything like my ideas (later mentioned in the Outline document). To quote Stewart here,

“In all of these theories the interaction of a quantum system with its environment produces an irreversible change that turns the quantum state into an eigenstate. However, all of these theories are probabilistic: the initial quantum state undergoes a kind of random diffusion which ultimately leads to an eigenstate.” (ibid.)

To be honest, I am not sure whether all these proposals could be characterized as involving random diffusion or not. I don’t know these theories to the required level of detail to be able to confirm or deny Stewart’s characterization. However, there certainly is this element of an initial quantum state getting collapsed precisely to the measured eigenstate, which appears in all of them—and I don’t accept that idea in the first place (as explicitly put forth in my Outline document).

In a slightly different context, Stewart also notes:

“There is some interest among physicists in what they call `quantum chaos’, but quantum chaos is about the relation between non-chaotic quantum systems and chaotic classical approximations—not chaos as a mechanism for quantum indeterminacy.” (ibid.)

OK, this is one conclusion which I very distinctly remember having reached on my own too. I guess this was in November 2018, when I had googled on “quantum chaos.” Subsequently, I re-checked the matter (just to be sure) in February ’19 (i.e., just days after posting my Outline document).

I agree that Stewart’s characterization is right on target here. IMHO, you don’t need to take recourse to the prior studies of “quantum chaos” very seriously if either the QM foundations or the very feasibility of the quantum computer are your concerns.


2. A bit on my PhD-time research:

I made a series of 4 tweets on 18 July 2019. The first two of these dealt with my old, PhD-time approach to photon propagation. Before coming to that approach to QM (which I will address in the next section), let me note here a clarification regarding all the other work I had performed during my PhD.

The first thing to note is that my work on QM formed only a part (maybe about a fourth or so) of whatever studies and research I had done during my PhD.

The other parts of my PhD thesis were notably related to studies of classical second-order partial differential equations, and their computational modeling using stochastic processes. The equations on which I thus focused my attention were: the Helmholtz equation, the diffusion equation, and the Poisson-Laplace equation. In addition, I had also picked up a study of elasticity, and had added a conjecture about the possible applicability of random-walk-type processes for modeling the classical tensor fields (of stresses and strains as used in engineering). Let me go over all these topics in brief.

2.1 Work on the diffusion equation:

I think I have posted many entries at this blog about my work on the diffusion equation. So let me not go over it all once again. Let me just note that I basically showed that, contrary to what post-graduate texts in maths (published by AMS) say, the diffusion equation does not necessarily imply an instantaneous action at a distance (IAD).

The IAD in diffusion, I pointed out, was an outcome of the features of the solution theory (Fourier’s theory, and also of Einstein’s analysis of the Brownian movement). But IAD was not necessarily implied either by the local physics of diffusion phenomena, or by the partial differential equation that is the diffusion equation. [Here, remember, a differential equation always, invariably, necessarily, etc., is local in nature—it refers to an infinitesimal CV (control volume) or CM (control mass).]

In particular, I pointed out that the compactness of the support of the solution was the crucial issue here—whether the support was infinite (as in Fourier theory and in the 2nd half of Einstein’s c. 1905 paper), or finite (as in any subdomain-based numerical method, or in the Brownian movement, i.e., the first half of the same paper by Einstein). In my view of things, you can always transition from a collection of finite subdomains to an infinity of infinitesimal CVs that are still distributed over only a finite interval, via a suitable limiting process. The finite support, of course, could grow in extent with time.
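By way of illustration only (this is not the thesis argument itself, just the simplest toy demonstration I can think of for the compact-support point): a random-walks model of diffusion keeps the support of the concentration profile strictly finite at every finite time, by construction. A minimal sketch in Python, with made-up numbers:

import numpy as np
rng = np.random.default_rng(0)

# Toy 1D random-walk model of diffusion. Every walker starts at the origin and moves
# one cell left or right per step. After n steps, no walker can possibly be farther than
# n cells away, so the support of the empirical concentration profile is finite and
# grows only at a finite rate, unlike the Fourier/Gaussian solution, which is non-zero
# everywhere for every t > 0.
n_walkers, n_steps = 100_000, 50
x = np.zeros(n_walkers, dtype=int)
for step in range(1, n_steps + 1):
    x += rng.choice((-1, 1), size=n_walkers)
    assert np.abs(x).max() <= step   # finite support: no instantaneous action at a distance

profile = np.bincount(x + n_steps, minlength=2 * n_steps + 1)   # concentration on [-n, n]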

These observations had never been made in about 200 years of the existence of Fourier’s theory. (Go ahead, hunt for the precedents!) You have to make this distinction between a (local) PDE and its (possibly global) solutions obtained after conducting integration operations, and in this entire process, you have to be careful about not elevating a mere ansatz or an integration method to the high pedestal of “the” (provably unique) solution. That’s in effect what I had argued.

2.2 Work on the diffraction phenomenon (Huygens-Fresnel theory):

I also had a neat (though smallish) result concerning the obliquity factor in diffraction. I went through Huygens’, Fresnel’s and Kirchhoff’s analyses of the diffraction phenomenon (involving the Helmholtz equation—i.e., the spatial part of the wave PDE), and then pointed out the reasons why the obliquity factor could not be regarded as an essential characteristic of the diffraction phenomenon itself.

Once again, the obliquity factor turned out to be a feature of how the analysis—specifically, the integration operations—had been set up. It was a feature of the mathematical solution procedure adopted for this problem. In diffraction, there was no fundamental physical process which operated in an anisotropic way, compelling the wavefield to have a greater amplitude in the forward direction and zero in the backward direction.

However, explanations for some 187 years (since Fresnel’s work) had characterized diffraction as an inherently anisotropic phenomenon. Yes, right up to my old copy of Resnick & Halliday. There was a surprise in it for me: while Fresnel was just a civil (roads) engineer who had taught himself maths, Kirchhoff surely was a master of PDEs and their integration techniques. But this fact had still escaped even Kirchhoff.

I pointed out how, even if you do keep the Huygens wavelets isotropic, then, given the geometry of the interaction of the wavelets and the surfaces where the BCs are applied, you would still end up with the same amplitudes as those obtained from Fresnel’s or Kirchhoff’s analyses.

Come to think of it, you could even pick up this line of argument and apply it to any analysis that seeks to derive an expression for a field inside a finite domain by appeal to a pair of forward- and backward-going processes occurring within that domain; e.g., an analysis involving the advanced and retarded waves, or the transactional waves in certain interpretations of QM, etc. You just have to be careful about what BCs and integrals are being set up and how the integration processes are being conducted, that’s all!

2.3 Computational modeling of transient heat conduction:

I then tried to apply the random-walks-based approach (RW) to model transients in heat conduction, as they occur in a moving-boundary problem. Since my focus was on conduction, I grossly simplified all the other aspects of this problem. (Having just come out of an illness, I would get easily tired back then.) The problem I considered was the melting of a snowman.

Consider a snowman in the form of a vertical right-circular cylinder which is placed on a relatively large block of ice below. The snowman absorbs heat from the atmosphere by radiation and convection at its external surfaces. The absorbed thermal energy then flows through the volume of the cylinder to the relatively large block of ice underneath (which was regarded as infinitely large in the simulations). Temperature gradients, of course, come to exist. The heat in the atmosphere brings the external surface to the melting point of ice even as the interior portions remain below it. So, the surface melts—the phase transition ensures a constancy of temperature at the surface. The melting is more pronounced at the sharp corners. The resulting water gradually slips down, forming a thin and continuous layer on the external surface. (I ignored the fluid flow in my simulation.) All in all, the sharp cylindrical snowman slowly acquires a thumb-like shape over a period of time, and then still continues to shrink in size.

I first tried to apply RW for heat conduction in this scenario, but soon found that there was a great deal of noise due to randomness. So, I set up a “conversion” from the particles-based approach of RWs to a local, continuum-based approach, thus ending up with a description which was essentially equivalent to a cellular automata-based one. I then performed the simulations with this CA-based approach (in 3D), compared the changing external contours of the melting snowman with an actual experiment (done at home, for less than Rs. 200/- as the total cost—for thermocouple wires, basically), and presented a paper at an international conference.
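To make concrete what I mean by a CA-based approach here, the following is a minimal 2D sketch of the kind of local, neighbour-exchange update rule involved. (The thesis work was in 3D, with radiation/convection at the exposed surfaces and an actual melting front; the numbers below are made up purely for illustration.)

import numpy as np

alpha, dx, dt = 1.2e-6, 1.0e-3, 0.1   # ice-like diffusivity (m^2/s), cell size (m), time step (s)
r = alpha * dt / dx**2                # explicit stability in 2D needs r <= 0.25; here r = 0.12
T = np.full((60, 60), 263.0)          # interior of the body, initially below the melting point
T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 273.15   # exposed surface pinned at the melting point

for step in range(2000):
    # each interior cell exchanges "heat packets" with its four neighbours; this is the
    # continuum (CA-like) counterpart of many random walkers hopping between cells
    T[1:-1, 1:-1] += r * (T[2:, 1:-1] + T[:-2, 1:-1]
                          + T[1:-1, 2:] + T[1:-1, :-2] - 4.0 * T[1:-1, 1:-1])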

This piece of work added the necessary component of “engineering” and “experimentation” to my thesis. While my guide was always happy with my progress, he also was a bit worried that examiners might look at my thesis and conclude that it was all a useless piece of theoretical, almost scientific work—it had little “practical” component to it, and so, couldn’t qualify for a degree in engineering. So, he was quite relieved when I discussed this idea of the snowman with him—he immediately gave me the go-ahead!

2.4 Conjecture for using RWs for modeling tensor fields:

Then, in addition, I also had this conjecture regarding the feasibility of random walks for simulating tensor fields. Since I haven’t spoken at length about it at this blog, let me note a few things now.

There were certain rigorous mathematical arguments (coming from Ivy League professors of mechanics as well as from seemingly competent but obscure Russian authors) which had purportedly shown that stochastic processes like random walks could provably not be used for simulating the stress/strain fields.

Yet, I was confident of my conjecture, out of some basic considerations which I had in mind. So I gave a conference presentation on it (in an international conference on mathematics), and also included it in my thesis.

Much later on (after my PhD defence), I grew further confident that this conjecture should definitely come to hold; that it could be proved. That is to say, the earlier (intricate) proofs by reputed mechanicians / mathematicians could be shown to have holes in them. (Not that my argument was flawless either. A professor had spotted a weak link in my argument at that conference, and had brought it to my attention in a most gentle, indirect manner.)

Then, some time still later on, I ran into some “simple” but directly useful work by a young Chinese author (perhaps a PhD student). If I remember it right, he had published this paper while working in China itself. His work was similar to an intermediate step I had in mind, but it was much more complete, even neat. No, he was not concerned with the random walks as such. All that he did was to give a working model for constructing stress/strain fields, by starting with a finite 3D unit cell having an internal structure of a truss and treating it as if it were a finite approximation for an infinitesimal CV of the continuum. I had somewhat similar ideas, and had in fact inserted a couple of screen-shots of the truss-based simulations I had conducted for a preliminary study. But he had gone much further. If I recall his paper right, he had even arrived at the right values for the truss-related parameters (like stiffnesses of the members) if this unit cell was to converge to the continuum equations of elasticity in the limit of vanishing size.

Now, by regarding the process of re-distribution of forces along the truss members as an abstract flow, and by randomizing it (discretizing it in the process), it should be easily possible to come to a proof of my conjecture, and also to a neat computational simulation. Of course, the issue is not as simple as it looks on the surface. Free surfaces in a multiply-connected domain pose a tricky issue—they deform freely, and so, uniqueness becomes tricky to handle. Even then, with sufficient care (or an appeal to ideas from CoV), I am sure that it can be done.

OK. I will do it some other time in the future! (This has been a TBD paper on my list for almost a decade by now; I simply don’t run into suitable ME/MTech students to guide on this topic! … Anyway, this blog is under copyright, just in case you didn’t notice…)


3. My PhD-time work on QM (photon propagation):

Alright, finally we come to my PhD-time work on photon propagation. In a series of tweets, I said (on 18 July 2019):

“1/4. My old (PhD-time) approach, then called the “new approach” and also FAQ (Fields As Quanta): I’ve abandoned it; the one in the Outline document replaces it completely. FAQ anyway dealt only with the propagation of photons, not with their generation or absorption (i.e. it didn’t deal with the creation/annihilation operators). FAQ didn’t deal with the propagation of other particles, viz., electrons, protons, or neutrons, either.”

and

“2/4. FAQ still remains valid as an abstract description, as referring to the propagation characteristics of photons in the limit that the medium is continuous (i.e., it is homogenized from discrete and dispersed atomic nuclei), i.e., if the propagation dynamics is diffusive, not ballistic.”

About this second tweet, I had second thoughts soon after, and so, right on the next day (on 19 July 2019), I noted the following comment (a reply) to it:

“Umm… I am not sure precisely what all considerations should enter into taking the limits (for arriving at the propagation characteristics of photons as conceptualized in my older, PhD-time, approach). Would have to work through how the Schrodinger formalism (and hence my new approach) goes from \Psi and photons to the classical, dynamical EM fields. To be done in future. But yes, FAQ dynamics *was* diffusive, that’s for certain.”

Thus, I first said that FAQ still remains valid, when seen as an abstract description. However, just one day later, I also pointed out the more basic and possibly tricky issues there might be—viz., finding the right kind of limiting processes which start from the Schrodinger formalism and end up at Maxwell’s equations.

I feel confident that people must have thrashed out this topic (TDSE \Rightarrow EM) a long time ago. It’s just that I myself have never studied the topic so far (in fact, I haven’t even done a literature search on it), and so, I don’t have a good idea about what all technical issues might get involved in it.

Thus, I will have to first study this topic (from the mainstream QM to EM). Only then would I be able to understand the mapping well enough to understand the Hertzian waves right in the QM settings. It’s only after this stage that I will be able to say something definitive about the manner in which FAQ can really hold, and if yes, how well. Worrying about the right kind of a limiting procedure would be just a part of it, but an important one. … So yes, you can take these particular tweets with a pinch of salt.


4. How did I get to my old PhD-time approach for photons (i.e. FAQ), in the first place?

OK. Now that we are at it, here is a question that might have arisen in your mind: If I didn’t know QM well back then (during my PhD days), then how could I dare propose this approach (viz. FAQ) so confidently?

Ummm… Let’s leave the daring and the confidence parts aside for now. Let’s focus on the “how” of it—how I got to my ideas. This part is much more interesting. At least to me.

How precisely did I end up at the idea of FAQ?

Well, I began with a kind of a “correspondence principle” (not in the Copenhagen sense of the term; read on). Briefly, the “correspondence” which I had in mind was the fact that single photons one-at-a-time mark only isolated dots on the CCD surface, but in the large-flux situations, their density pattern converges to the continuum interference pattern as described by Young.

So, I imagined a point-source emitting photons. Mind you, photons for me were, back then, spatially discrete particles of light, a la Einstein and Feynman—both their ideas held a tremendous sway over my thinking at the time.

I then imagined an ideal absorber in the form of a spherical surface kept at some distance from the source, somewhat like your usual Gaussian surface from electrostatics, except that while the Gaussian surface is purely imaginary and allows anything to move through it freely, here it was taken to act as an actual absorber (even though it, too, was an imagined construct). This spherical surface was centered on the same point source. I asked myself what kind of variations in density light should show, in the continuum description, on this concentric spherical surface if its radius was varied a bit. In essence, I was developing my logic by starting from Gauss’ theorem and the Poisson-Laplace equation.

I then transitioned, in my ideas, to the Helmholtz equation by imagining a time-steady waviness to the field. Now, if the radius of the sphere were constrained to be an integral multiple of the spatial period (i.e. wavelength) of light, then the total quantity of photons being absorbed at the spherical surface should remain the same for a sphere of any such radius. The only rationale which could justify this assumption was: to have a conservation principle in place, by asserting that photons are conserved while they are still in transit through the empty space (i.e. before they get absorbed on the spherical surface). Again, remember, I was using the idea of photons as if they were spatially discrete particles, like grains of mustard seed.

Conservation principles are neat; that much I had learnt mostly from the ample evidence for them in the engineering sciences. (Even if I had known about Noether’s theorem back then, I would have disregarded it—such was, and still is, my temperament. I think that this theorem is merely a reformulation of a very narrow range of physics—one that is restricted to merely 2nd-order linear PDEs. Anyway, read on…)

If the photon number was to be conserved in theory (during propagation) at integral multiples of \lambda for the radius of the sphere, then was there any sound reason to give up conservation when the radius was (n+1/2)\lambda? (Here I am assuming that at zero radius, the light has the maximum amplitude.) Couldn’t we explain the complete darkness at these odd radii by positing that the photon was still there—it’s just that the sphere of that particular radius didn’t absorb it? After all, we could always posit a variable called the absorption fraction which would be related to the local amplitude of the spatial wave, right? That’s how I decided to conserve the photon number, and thereby shift the burden of explaining the variable levels of brightness at the absorber onto a photon-absorption process that varied in efficiency precisely in response to the local wave amplitude associated with the tiny grain that was the photon. (I regarded this grain as a localized condition in the luminiferous aether.)
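In schematic terms (writing it down only after the fact, and only as bookkeeping, not as a derivation I had back then), the scheme amounts to this:

N_absorbed(r) = f(r) \cdot N_incident, with N_incident independent of r (photon-number conservation in transit),

f(r) \propto |A(r)|^2, so that f is largest near r = n\lambda and f \to 0 near r = (n+1/2)\lambda,

where A(r) denotes the local (classical) wave amplitude at the absorber surface.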

Now, the next question was: If the photons had a ballistic dynamics (i.e. a straight-line motion), then the point on the spherical surface where a given photon eventually would land, would have already been determined right at the source point—some internal processes in the emitter material would be responsible for ejecting it at random orientations, which would also determine its landing location. (Dear Bohmians, do you see something familiar? However, please note, this was entirely my own thinking. I had not come across Bohm back then. Please read on.)

I thought that while this was possible, it was also possible that the photons could also undergo random-walks. How did I introduce random walks?

Well, the direct experimental evidence showed that this propagation problem had two essential features: (i) many discrete spots which go in a limit to a continuous pattern of finite densities, and (ii) random locations on the absorber surface where the grainy photons land, i.e., no correlation between the two points where any two successive photons get absorbed.

Since the continuum viewpoint of light (Young’s waves) had to be reached in the limit, it was important to keep it in mind at all times. It was here that I happened to recall Huygens’ principle. I was also quite at home with the idea of randomly intersecting a 3D surface with a linear probe—I had already studied stereology at the University of Alabama at Birmingham (UAB).

Huygens’ principle involved every point of space as if it were some kind of a “source” for the new (Huygens’) wavelets. The Young pattern could be obtained by superposing all the Huygens’ wavelets. The discrete spots could be had by dividing the surface of the Huygens wavelets and taking the individual surface patches to vanishing size (a la mesh refinement). This satisfactorily addressed the first essential feature noted above (viz. discrete spots). As to the second feature (randomness), it could also be satisfied, by randomizing the selection of the spherical patch on the Huygens’ wavelet (a la stereology).

This much of it, in fact, I had already completed while I was still at UAB, completely on my own, though I had never shared the idea with anyone. I guess it was already worked out before 1992 came to an end.

More than a decade later, now in Pune, I started with Gauss’ theorem, touched on the Huygens process and stereology, and also threw in the vector-addition rules to ensure that the right phases appear throughout the propagation (so that the local amplitudes also come out right in the large-flux situation). That is how I could get to my diffusive dynamics for the spatially discrete photons.
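Just to make those two ingredients (the randomized selection of points on Huygens’ wavelets, and the vector, i.e. phasor, addition for the phases) concrete, here is the simplest sketch I can give. Note the hedge: this is not the FAQ scheme itself, which tracked each photon stepping through a succession of wavelets; it is only a Monte Carlo evaluation of the usual Huygens superposition for a two-slit setup, with isotropic wavelets and with all the numbers made up:

import numpy as np
rng = np.random.default_rng(1)

lam = 500e-9                      # wavelength of light (m)
k = 2.0 * np.pi / lam
slit_sep, slit_w = 50e-6, 5e-6    # slit separation and slit width (m)
L = 1.0                           # aperture-to-screen distance (m)
y_screen = np.linspace(-0.03, 0.03, 601)

n_samples = 20_000
amp = np.zeros_like(y_screen, dtype=complex)
for _ in range(n_samples):
    # randomized selection of a point on the wavefront, within one of the two slits
    y_a = rng.choice((-0.5, 0.5)) * slit_sep + (rng.random() - 0.5) * slit_w
    d = np.sqrt(L**2 + (y_screen - y_a)**2)
    amp += np.exp(1j * k * d) / d # isotropic Huygens wavelet; the phases add as vectors

intensity = np.abs(amp)**2        # converges to the familiar two-slit fringe pattern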

I did suspect that this procedure (of randomizing the selection of a point on any of Huygens’ wavelets) meant that the photons would have to be imagined either as (i) getting scattered everywhere during their propagation, or (ii) possibly getting annihilated after travelling even just an infinitesimal distance in empty space, and then, somehow, also getting re-created (the time lag between the annihilation and the subsequent creation being zero), effectively satisfying the conservation principle. On either count, the photon would keep changing its direction randomly, because the point on the surface of the Huygens wavelet was randomized.

Of course, I could not figure out a good physical reason for such a process.

Scattering of one photon by other photons seemed implausible—though I couldn’t figure out any particular reason why it should be. In any case, reliance on scattering led to an impossible situation when there was only one photon inside the interference chamber.

There also was no proper physicist who would even so much as be willing to just listen to me. (I tried more than 15–20 of them.) On the other hand, so many leading ones among them were offering descriptions of QM in terms of a random “quantum foam/froth” which produces and annihilates particles anywhere and at any random time—even massive particles, and even in empty space. So, I thought that my idea of continuous disappearance and re-appearance, albeit in a different direction each time, would not be found too odd.

(Discussion of the foundations of QM has improved by leaps and bounds since engineers started taking an interest in building QC. In fact, recently, a somewhat similar remark also came from Dr. Sabine Hossenfelder on her blog. But I am talking of those days—around 2005.)

Of course, I myself didn’t have even an iota of a physical understanding of such virtual annihilation/creation pairs for photons. Yet they were necessary in my scheme, because I had randomized not the source point but the Huygens surface. So, rather than going full wacko (as most any physicist in my situation would), I did what any graduate student of engineering would do: I simply refrained from mentioning any such implications for a possible physics of it, and instead phrased my description of the process in terms that relied heavily on the well-established, well-reputed, classical principle of Huygens’.

No one ever asked any questions on this part either. Neither at the conference, nor in the PhD defence, nor even after sharing my papers with physicists (some of whom had requested my papers on their own). So, it kindaa went through!

Phewww…. All the hoops that a hapless PhD student has to jump through, just to get to his degree! (In my case, it was even worse: these were the closed surfaces of the Huygens wavelets, not mere closed curves as in the hoops.)

So, that’s how I had arrived at my PhD-time approach. I did it by randomizing the spherical surfaces employed in the Huygens’ process, and by imagining a spatially discrete particle of the photon at all such locations at each one of the subsequent instants. The movement of the photon, as it goes on cutting the respective surfaces of the freshly generated series of Huygens’ wavelets, with the cutting randomized, obviously forms the simplest kind of a Wiener process—it’s the direct counterpart of the random walks, but for wave-fields.

People right from Ulam et al. had proposed and used random walks (aka Monte Carlo) for diffusive and potential fields, for 50+ years. However, none had added just a few more calculations with the wave- and displacement-vectors to account for the phases, and thereby generalized the random walks so as to handle wavefields too. That was another neat thing to know. (Yes, please, do go ahead! Do hunt for the precedents!!)

Anyway, that’s how the FAQ dynamics came to be diffusive.

And all said and done, it did come to reproduce seemingly the same kind of transition from a pattern of random dots to the Young interference pattern as experiments had shown!

One final point. But why did I disregard the ballistic dynamics—which would have all randomness concentrated only in the source and let photons fly straight? Yes, come to think of it, if you do assume a spatially discrete nature for the photon, then there is obviously no good reason to deny such a possibility.

Here, I am not sure, because I don’t remember having written down any note on it. So it’s kindaa hard to tell now, from a distance of years. I will try to reconstruct some possible considerations starting from some indirect points, and purely from memory.

I seem to recall that I was apprehensive that what I called “size effects” might come into the picture and make this approach unsound. I mean to say, a perfectly uniform randomness (distributed over the entire emitter surface) was hard to imagine as the emitter surface became ever smaller and reached the natural limit of a single atom. For one thing, the emitted quantity might get affected, I thought. Secondly, single atoms, acting as emitters, had to have some directionality to their emissions, because their orbitals [whatever that meant—I didn’t have a good idea about them back then] weren’t always spherically symmetric. I think I had considered this point.

Did I consider the delayed-choice kind of considerations? I think I did, but in some simple, indirect ways, not very carefully or systematically. I mean to say, I don’t remember going through write-ups on the delayed-choice experiments at all, and then taking any decision. I rather remember thinking in terms like a camera shutter suddenly coming in the way of a photon while it’s still in mid-flight, and all. If the shutter were to be a perfect sink (one that didn’t re-emit the photon), or if it were to re-emit photons from a different location on the shutter surface (after the internal energy underwent some unpredictable oscillations within the shutter material), then it would adversely affect the final pattern on the screen, I had thought. The real-time changes for the propagating photon might get better handled by distributing the randomness over the entire spatial region of the chamber, I had thought.

But I think that, all in all, it wasn’t any such careful consideration. I chose the randomized Huygens’ process because I thought it gave a good enough explanation.

In the final analysis, there are too many problems with this entire approach—with a spatially discrete photon as such, and all the more so if it comes embedded in a description that has no IAD anywhere in it. Some part or the other of QM will then have to keep getting violated. You just can’t avoid it. So, the best way to understand QM is not to begin with photons but with electrons—and with the Schrodinger formalism. The measurement problem is the only remaining issue then.


5. Homework for the skeptics among you:

Go through my PhD abstract, posted at iMechanica even before the defence [(.PDF) ^], and check whether what I wrote above, purely on the fly and purely from memory, matches what I had officially reported back then. If you find serious discrepancies, please bring them to my notice. Thanks in advance.


Of course, now that I’ve completely abandoned the grainy description of photons as the actual physical reality, all the above doesn’t much matter. FAQ, even if valid, would have to be taken as only a higher-level, abstract description of an entirely different kind of a mechanism.

So, let’s leave this entire PhD-time approach right behind us (forever), and continue with the remaining tweets in this series. They directly deal with aspects of my latest approach (as in the Outline document)… However, I will pick them up in the next post. It’s almost 5900 words already! Give me a break of at least 10–15 days. Until then, take care and goodbye.


A song I like:

(Marathi) “ambaraatalyaa niLyaa ghanaachee”
Singer: Ramdas Kamat
Music and Lyrics: Veena Chitako

 

Determinism, Indeterminism, and the nature of the laws of physics…

The laws of physics are causal, but this fact does not imply that they can be used to determine each and everything that you feel should be determinable using them, in each and every context in which they apply. What matters is the nature of the laws themselves. The laws of physics are not literally boundless; nothing in the universe is. They are logically bounded by the kind of abstractions they are.


Let’s take a concrete example.

Take a bottle, pour a little water and detergent in it, shake well, and have fun watching the Technicolor wonder which results. Bubbles form; they show resplendent colors. Then, some of them shrink, others grow, one or two of them eventually collapse, and the rest of the network of the similar bubbles adjusts itself. The process continues.

Looking at it in an idle way can be fun: those colorful tendrils of water sliding over those thin little surfaces, those fascinating hues and geometric patterns… That dynamics which unfolds at such a leisurely pace. … Just watching it all can make for a neat time-sink—at least for a while.

But merely having fun watching bubbles collapse is not physics. Physics proper begins with a lawful description of the many different aspects of the visually evident spectacle—be it the explanation as to how those unreal-looking colors come about, or be it an explanation of the mechanisms involved in their shrinkage or growth, and eventual collapse, … Or, a prediction of exactly which bubble is going to collapse next.


For now, consider the problem of predicting, given a configuration of some bubbles at a certain time t_0, exactly which bubble is going to collapse next, and why… To solve this problem, we have to study many different processes involved in the bubble dynamics…


Theories do exist to predict various aspects of the bubble collapse process taken individually. Further, it should also be possible to combine them together. The explanation involves such theories as: the Navier-Stokes equations, which govern the flow of soap water in the thin films, and the motion of the air entrapped within each bubble; the phenomenon of film-breakage, which can involve either particles-based approaches to modeling fluids, or, if you insist on a continuum theory, theories of crack initiation and growth in thin lamellae/shells; the propagation of a film-breakage, and the propagation of the stress-strain waves associated with the process; and also, theories concerning how the collapse process gets preferentially localized to only one (or at most a few) bubbles, which again involves nonlinear theories from mechanics of materials and materials science.

All these are causal theories. It should also be possible to “throw them together” in a multi-physics simulation.

But even then, they still are not very useful in predicting which bubble in your particular setup is going to collapse next, and when, because not just the combination of these theories but even each individual theory involved is too complex.

The fact of the matter is, we cannot in practice predict precisely which bubble is going to collapse next.


The reason for our inability to predict, in this context, does not have to do just with the precision of the initial conditions. It’s also their vastness.

Moreover, the known, causal, physical laws tell us how a sensitive dependence on the smallest changes in the initial conditions deterministically leads to such huge changes in the outcomes that using these laws to actually make a prediction lies squarely outside our capacity to calculate.

Even simple (first- or second-order) variations to the initial conditions, specified over a very small part of the network, can have repercussions for the entire evolution, and it is this evolution which ultimately decides which bubble is going to collapse next.


I mention this situation because it is amply illustrative of a special kind of problems which we encounter in physics today. The laws governing the system evolution are known. Yet, in practice, they cannot be applied for performing calculations in every given situation which falls under their purview. The reason for this circumstance is that the very paradigm of formulating physical laws falls short. Let me explain what I mean very briefly here.


All physical laws are essentially quantitative in nature, and can be thought of as “functions,” i.e., as mappings from a specific set of inputs to a specific set of outputs. Since the universe is lawful, given a certain set of values for the inputs, and the specific function (the law) which does the mapping, the output is uniquely determined. Such a nature of the physical laws has come to be known as determinism. (At least that’s what the working physicist understands by the term “determinism.”) The initial conditions together with the governing equation completely determine the final outcome.

However, there are situations in which, even if the laws themselves are deterministic, they still cannot practically be put to use in order to determine the outcomes. One such situation is what we discussed above: the problem of predicting the next bubble which will collapse.

Where is the catch? It is in here:

When you say that a physical law performs a mapping from a set of inputs to a set of outputs, this description is actually vastly more general than what appears at first sight.

Consider another example, the law of Newtonian gravity.

If you have only two bodies interacting gravitationally, i.e., if all other bodies in the universe can be ignored (because their influence on the two bodies is negligibly small in the problem as posed), then the set of the required input data is indeed very small. The system itself is simple because there is only one interaction going on—that between two bodies. The simplicity of the problem design lends a certain simplicity to the system behaviour: If you vary the set of input conditions slightly, then the output changes proportionately. In other words, the change in the output is proportionately small. The system configuration itself is simple enough to ensure that such a linear relation exists between the variations in the input, and the variations in the output. Therefore, in practice, even if you specify the input conditions somewhat loosely, your prediction does err, but not too much. Its error too remains bounded well enough that we can say that the description is deterministic. In other words, we can say that the system is deterministic, only because the input–output mapping is robust under minor changes to the input.

However, if you consider the N-body problem in all its generality, then the very size of the input set itself becomes big. Any two bodies from the N bodies form a simple interacting pair. But the number of pairs is large, and worse, they all are coupled to each other through the positions of the bodies. Further, the nonlinearities involved in such a problem statement work to take away the robustness of the solution procedure. Not only is the size of the input set big, but the end-solution too varies wildly with even a small variation in the input set. If you fail to specify even a single part of the input set to adequate precision, then the predicted end-state can, quite deterministically, turn out to be wildly different. The input–output mapping is deterministic—but it is not robust under minor changes to the input. A small change in the initial angle can lead to an object ending up either on this side of the Sun or that. Small changes produce big variations in predictions.
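A toy sketch of this point (not a production integrator; the initial conditions, the softening, and the step size below are all arbitrary): integrate a planar three-body problem twice, with one coordinate nudged by 10^{-9} in the second run, and see how far apart the two runs drift.

import numpy as np

def accel(pos, masses, G=1.0, eps=1e-6):
    # pairwise Newtonian gravity, slightly softened to avoid blow-ups at close approaches
    a = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                d = pos[j] - pos[i]
                a[i] += G * masses[j] * d / (np.linalg.norm(d)**2 + eps)**1.5
    return a

def run(pos, vel, masses, dt=1e-3, steps=30_000):
    pos, vel = pos.copy(), vel.copy()
    for _ in range(steps):              # crude semi-implicit Euler; good enough for illustration
        vel += accel(pos, masses) * dt
        pos += vel * dt
    return pos

masses = np.array([1.0, 1.0, 1.0])
pos0 = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 0.5]])
vel0 = np.array([[0.0, 0.3], [0.0, -0.3], [0.3, 0.0]])

p_ref = run(pos0, vel0, masses)
p_nudged = run(pos0 + [[1e-9, 0.0], [0.0, 0.0], [0.0, 0.0]], vel0, masses)
print(np.linalg.norm(p_ref - p_nudged))  # the 1e-9 nudge typically gets amplified enormously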

So, even if the mapping is known and is known to work (deterministically), you still cannot use this “knowledge” to actually perform the mapping from the input to the output, because the mapping is not robust to small variations in the input.

Ditto, for the soap-bubble collapse problem. If you change the initial configuration ever so slightly (e.g., if there was just a small air current in one setup and a more perfect stillness in another), it can lead to wildly different predictions as to which bubble will collapse next.

What holds for the N-body problem also holds for the bubble collapse process. The similarity is that these are complex systems. Their parts may be simple, and the physical laws governing such simple parts may be completely deterministic. Yet, there are a great many parts, and they all are coupled together such that a small change in one part—one interaction—gets multiplied and felt in all other parts, making the overall system fragile to small changes in the input specifications.

Let me add: What holds for the N-body problem or the bubble-collapse problems also holds for quantum-mechanical measurement processes. The latter too involves a large number of parts that are nonlinearly coupled to each other, and hence, forms a complex system. It is as futile to expect that you would be able to predict the exact time of the next atomic decay as it is to expect that you will be able to predict which bubble collapses next.

But all the above still does not mean that the laws themselves are indeterministic, or that, therefore, physical theories must be regarded as indeterministic. The complex systems may not be robust. But they still are composed from deterministically operating parts. It’s just that the configuration of these parts is far too complex.


It would be far too naive to think that it should be possible to make exact (non-probabilistic) predictions even in the context of systems that are nonlinear, and whose parts are coupled together in a complex manner. It smacks of harboring irresponsible attitudes to take this naive expectation as the standard by which to judge physical theories, and then, since they don’t come up to your expectations, to jump to the conclusion that physical theories are indeterministic in nature. That’s what has happened to QM.

It should have been clear to the critic of the science that the truth-hood of an assertion (or a law, or a theory) is not subject to whether every complex manner in which it can be recombined with other theoretical elements leads to robust formulations or not. The truth-hood of an assertion is subject only to whether it by itself and in its own context corresponds to reality or not.

The error involved here is similar, in many ways, to expecting that if a substance is good for your health in a certain quantity, then it must be good in every quantity, or that if two medicines are without side-effects when taken individually, they must remain without any harmful effects even when taken in any combination—that there should be no interaction effects. It’s the same error, albeit couched in physicists’ and philosopher’s terms, that’s all.

… Too much emphasis on “math,” and too little an appreciation of the qualitative features, only helps in compounding the error.


A preliminary version of this post appeared as a comment on Roger Schlafly’s blog, here [^]. Schlafly has often wondered about the determinism vs. indeterminism issue on his blog, and often, seems to have taken positions similar to what I expressed here in this post.

The posting of this entry was motivated out of noticing certain remarks in Lee Smolin’s response to The Edge Question, 2013 edition [^], which I recently mentioned at my own blog, here [^].


A song I like:
(Marathi) “kaa re duraavaa, kaa re abolaa…”
Singer: Asha Bhosale
Music: Sudhir Phadke
Lyrics: Ga. Di. Madgulkar


[In the interests of providing better clarity, this post shall undergo further unannounced changes/updates over the due course of time.

Revision history:
2019.04.24 23:05: First published
2019.04.25 14:41: Posted a fully revised and enlarged version.
]

Flames not so old…

The same picture, but two American interpretations, both partly misleading (to varying degrees):

NASA released a photo [^] on Facebook, on 24 August at 14:24, with this note:

The visualization above highlights NASA Earth satellite data showing aerosols on August 23, 2018. On that day, huge plumes of smoke drifted over North America and Africa, three different tropical cyclones churned in the Pacific Ocean, and large clouds of dust blew over deserts in Africa and Asia. The storms are visible within giant swirls of sea salt aerosol (blue), which winds loft into the air as part of sea spray. Black carbon particles (red) are among the particles emitted by fires; vehicle and factory emissions are another common source. Particles the model classified as dust are shown in purple. The visualization includes a layer of night light data collected by the day-night band of the Visible Infrared Imaging Radiometer Suite (VIIRS) on Suomi NPP that shows the locations of towns and cities.

[Emphasis in bold added by me.]

For your convenience, I reproduce the picture here:

Aerosol data by NASA

Aerosol data by NASA. Red means: Carbon emissions. Blue means: Sea Salt. Purple means: Dust particles.

Nicole Sharp writes about it [^] at her blog FYFD, on Aug 29, 2018 10:00 am, with this description:

Aerosols, micron-sized particles suspended in the atmosphere, impact our weather and air quality. This visualization shows several varieties of aerosol as measured August 23rd, 2018 by satellite. The blue streaks are sea salt suspended in the air; the brightest highlights show three tropical cyclones in the Pacific. Purple marks dust. Strong winds across the Sahara Desert send large plumes of dust wafting eastward. Finally, the red areas show black carbon emissions. Raging wildfires across western North America are releasing large amounts of carbon, but vehicle and factory emissions are also significant sources. (Image credit: NASA; via Katherine G.)

[Again, emphasis in bold is mine.]

As of today, Sharp’s post has collected some 281 notes, and almost all of these are “likes.”

I liked it too—except for the last half of the last sentence, viz., the idea that vehicle and factory emissions are significant sources (cf. NASA’s characterization).


My comment:

NASA commits an error of omission. Dr. Sharp compounds it with an error of commission. Let’s see how.

NASA does find it important to mention that the man-made sources of carbon are “common.” However, the statement is ambiguous, perhaps deliberately so. It curiously omits to mention that the quantity from such “common” sources is so small that there is no choice but to regard it as “not critical.” We may not be in a position to call the “common” part an error of commission. But not explaining that the man-made sources play a negligible (even vanishingly small) role in Global Warming is surely an error of omission on NASA’s part.

Dr. Sharp compounds it with an error of commission. She calls man-made sources “significant.”

If I were to have an SE/TE student, I would assign a simple Python script to do a histogram and/or compute the densities of red pixels and have them juxtaposed with areas of high urban population/factory density.
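Something along these lines, say. (The file name below is just a stand-in for a local copy of the NASA visualization, and the threshold for calling a pixel “reddish” is ad hoc.)

import numpy as np
from PIL import Image

img = np.asarray(Image.open("nasa_aerosols_2018-08-23.png").convert("RGB"), dtype=float)
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# "reddish" pixels stand in for the black-carbon channel of the visualization
red_mask = (r > 120) & (r > 1.5 * g) & (r > 1.5 * b)
print("fraction of red pixels over the whole image:", red_mask.mean())

# the next step would be to compute red_mask.mean() over sub-windows, and to juxtapose
# the resulting map against a map of urban population / factory density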


This post may change in future:

BTW, I am only too well aware of the ugly political wars being waged by a lot of people in this area (of Global Warming). Since I do appreciate Dr. Sharp’s blog, I would be willing to delete all references to her writing from this post.

However, I am going to keep NASA’s description and the photo intact. It serves as a good example of how a good visualization can help in properly apprehending big data.

In case I delete references to Sharp’s blog, I will simply add another passage on my own, bringing out how man-made emissions are not the real cause for concern.

But in any case, I would refuse to be drawn into those ugly political wars surrounding the issue of Global Warming. I have neither the interest nor the bandwidth to get into it, and further, I find (though can’t off-hand quote) that several good modelers/scientists have come to offer very good, detailed, and comprehensive perspectives that justify my position (mentioned in the preceding paragraph). [Off-hand, I very vaguely remember an academic, a lady, perhaps from the state of Georgia in the US?]


The value of pictures:

One final point.

But, regardless of it all (related to Global Warming and its politics), this picture does serve to highlight a very important point: the undeniable strength of a good visualization.

Yes, I do find that, in a proper context, a picture is worth a thousand words. The obvious validity of this conclusion is not affected by Aristotle’s erroneous epistemology, in particular, his wrong assertion that man thinks in terms of “images.” No, he does not.

So, sure, a picture is not an argument, as Peikoff argued in the late 90s (without using pictures, I believe). If Peikoff’s statement is taken in its context, you would agree with it, too.

But for a great variety of useful contexts, as the one above, I do think that a picture is worth a thousand words. Without such being the case, a post like this wouldn’t have been possible.


A Song I Like:
(Hindi) “dil sajan jalataa hai…”
Singer: Asha Bhosale
Music: R. D. Burman [actually, Bertha Egnos [^]]
Lyrics: Anand Bakshi


Copying it right:

“itwofs” very helpfully informs us [^] that this song was:

“Inspired in the true sense, by the track ‘Korbosha (Down by the river)’ from the South African stage musical, Ipi Ntombi (1974).”

However, unfortunately, he does not give the name of the original composer. It is: Bertha Egnos (apparently, a white woman from South Africa [^]).

“itwofs” further opines that:

Its the mere few initial bars that seem to have sparked Pancham create the totally awesome track [snip]. The actual tunes are completely different and as original as Pancham can get.

I disagree.

Listen to Korbosha and to this song, once again. You will surely find that it is far more than a “mere few initial bars.” On the contrary, except for a minor twist here or there (and that too only in some parts of the “antaraa”/stanza), Burman’s song is almost completely lifted from Egnos’s, as far as the tune goes. And the tune is one of the most basic—and crucial—elements of a song, perhaps the most crucial one.

However, what Burman does here is to “customize” this song to “suit the Indian road conditions tastes.” This task also can be demanding; doing it right takes a very skillful and sensitive composer, and R. D. certainly shows his talents in this regard, too, here. Further, Asha not only makes it “totally, like, totally” Indian, she also adds a personal chutzpah. The combination of Egnos, RD and Asha is awesome.

If the Indian reader’s “pride” got hurt: For a reverse situation of “phoreenn” people customizing our songs, go see how well Paul Mauriat does it.

One final word: The video here is not recommended. It looks (and is!) too gaudy. So, even if you download a YouTube video, I recommend that you search for good Open Source tools and use one of them to extract just the audio track from the video. … If you are not well conversant with music software, then Audacity would confuse you. However, as far as just converting MP4 to MP3 is concerned, VLC works just as great; use the menu: Media \ Convert/Save. This menu command works independently of the song playing in the “main” VLC window.


Bye for now… Some editing could be done later on.

General update: Will be away from blogging for a while

I won’t come back for some 2–3 weeks or more. The reason is this.


As you know, I had started writing some notes on FVM. I would then convert my earlier, simple, CFD code snippets, from FDM to FVM. Then, I would pursue modeling Schrodinger’s equation using FVM. That was the plan.

But before getting to the nitty-gritty of FVM itself, I thought of jotting down a note, once and for all, putting in writing my thoughts thus far on the concept of flux.


If you remember, it was several years ago that I had mentioned on this blog that I had sort of succeeded in deriving the Navier-Stokes equation in the Eulerian but differential form (d + E for short).

… Not an achievement by any stretch of imagination—there are tomes written on, say, differentiable manifolds and whatnot. I feel sure that deriving the NS equations in the (d + E) form would be less than peanuts for them.

Yet, the fact of the matter is: They actually don’t do that!

Show me a single textbook or a paper that does that. If not at the UG level, then at least at the PG level, but one that is written using the language of only plain calculus, as used by engineers—not that of advanced analysis.

And as to the UG/PG books from engineering:

What people normally do is to derive these equations in their integral form, whether using the Lagrangian or the Eulerian approach. That is, they adopt either the (i + L) approach or the (i + E) approach.

At some rare times, if they at all begin fluid dynamics with a differential form of the NS equations, then they invariably follow the Lagrangian approach, never the Eulerian. That is, they invariably begin with only (d + L)—even in those cases when their objective is to obtain (d + E). Then, after having derived (d + L), they simply invoke some arbitrary-looking vector calculus identities to “transform” those equations from (d + L) to (d + E).
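For the record, the best-known member of that family of identities is, presumably, the expansion of the material (i.e. Lagrangian) time derivative in Eulerian terms; for the velocity field it reads:

Du/Dt = \partial u/\partial t + (u \cdot \nabla) u

This is precisely the kind of step that carries the acceleration term over from (d + L) to (d + E), and it is typically stated without pausing over its physical basis.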

And, worse:

They never discuss the context, meaning, or proofs of those identities. No one from the fluid dynamics or CFD side does that. And neither do the books on maths written for scientists and engineers.

The physical bases of the “transformation” process must remain a mystery.


When I started working through it a few years ago, I realized that the one probable reason why they don’t use the (d + E) form right from the beginning is this: forget the NS equations, no one understands even the much simpler idea of the flux—if it is to be couched entirely in the settings of (d + E). You see, the idea of the flux too always remains couched in the integral form, never the differential. For example, see Narasimhan [^]. Or, any other continuum mechanics book that impresses you.
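Just so that the terms here remain unambiguous, the standard integral-form statements I have in mind are the following textbook relations (reproduced only for reference): the flux of a vector field q through an oriented surface S is

\Phi = \int_S q \cdot \hat{n} \, dA,

and the corresponding local (differential) quantity is the divergence, which can be defined coordinate-free as

\nabla \cdot q = \lim_{V \to 0} \frac{1}{|V|} \oint_{\partial V} q \cdot \hat{n} \, dA,

with the divergence theorem, \oint_{\partial V} q \cdot \hat{n} \, dA = \int_V \nabla \cdot q \, dV, tying the two together. What I am after in the note is a treatment that starts from, and stays on, the local (differential) side of these relations.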

It’s no accident that the Wiki article on Flux [^] says that it

needs attention from an expert in Physics.

And then, more important for us, the text of the article itself admits that the formula it notes, for a definition of flux in differential terms, is

an abuse of notation

See the section here [^].

Also ask yourself: why has a formula that is free of this abuse of notation not been made available, in spite of all those tomes having been written on higher mathematics?
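Just to pin down what the fuss is about, here is the contrast in my own notation (a paraphrase; these are not the Wiki article’s exact symbols). The flux through a surface S gets defined via an integral, while the “differential” statement one usually runs into expresses the flux density as a derivative of the flux with respect to the area:

```latex
% The usual, integral-form definition of the (total) flux through a surface S:
\Phi \;=\; \iint_{S} \vec{q} \cdot \hat{n} \, \mathrm{d}A

% The "differential" statement commonly seen -- the flux density as a
% derivative of the flux with respect to the area -- which is precisely the
% sort of expression that invites the charge of an abuse of notation:
q_n \;=\; \frac{\mathrm{d}\Phi}{\mathrm{d}A}
```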


Further, there were other related things I wanted to write about, like an easy pathway to the idea of tensors in general, and to that of the stress tensor in particular.

So, I thought of writing it all down, once and for all, in one note. I could possibly convert some parts of it into a paper later on. For the time being, though, the note would be more in the nature of a tutorial.


I started writing the note, I guess, on 17 August 2018. However, it kept on growing, and with growth came reorganization of the material for a better hierarchy of presentation. It has already gone through some 4–5 thorough re-orgs (meaning: discarding the earlier LaTeX file entirely and starting completely afresh), and it has already grown past 10 LaTeX pages. Even then, I am nowhere near finishing it. I may be just about half-way through—even though I have been working on it for some 7–8 hours every day for the past fortnight.

Yes, writing something original is a lot of hard work. I mean “original” not in the sense of discovery, but in the sense of a lack of any directly citable material whatsoever on the topic. Forget copy-pasting; you can’t even just gather a gist of the issue from somewhere so that you could cite it.

And, the trouble here is, this topic is otherwise so very mature. (It is some 150+ years old.) So, you know that if you go even partly wrong, the whole world is going to pile on you.

And in my experience, when you write originally, there are at least 5–10 pages of material that you typically end up throwing away for every page that makes it to the final, published version. Yes, the garbage thrown out is some 5–10 times the material retained—no matter how “simple” and “straightforward” the published material might look.

Indeed, I could even make a case that the simpler and the more straightforward a piece of original material looks in its published form, the more strenuous the effort it has taken on the part of the author.

Few people ever come to grasp so simple an observation in their entire life.


As a case in point, I wish to recall here my conference paper on diffusion. [To be added here soon enough.]

I have many times silently watched people as they were going through this paper for the first time.

When engineers read it, they invariably come out with a mild expression which suggests that they were probably thinking something like: “isn’t it all so simple and straightforward?” Sometimes they even explicitly ask: “And, what do you say was the new contribution here?” [Even after having gone through both the abstract and the conclusion of it, that is.]

On the other hand, on the four or five rare occasions when I have had the opportunity to watch professional mathematicians go through this paper of mine, the expression they invariably wore at the end of it suggested that they were still very intently absorbed in it. In particular, they never ask me what was new about it—they just remain deeply engaged in what looks like an exercise in “fault-finding”, i.e., in checking whether any proof, theorem or lemma they had ever come across could be used to demolish the new idea being presented. Invariably, they give the same argument by way of an objection. Invariably, I explain why their argument does not address the issue I have raised in the paper. Invariably, they chuckle and then go back to the paper and to their intent thinking mode, to see if there is any other weakness to my basic argument…

Till date (even after more than a decade), they haven’t come back.

But in all cases, they were very ready to admit that they were coming across this argument for the first time. I didn’t have to explain to them that though the language and the tone of the paper looked simple enough, the argument itself was not easy to derive originally.


No, the notes which I am currently working on are nowhere near as original as that. [But yes, original, these are.]

Yet, let me confess, even as I keep plodding through it for the better part of the day, the way I have done over the past fortnight or so, I find myself dealing with a certain doubt: wouldn’t they just dismiss it all as being too obvious? As if all the time and effort I spent on it was, more or less, ill spent? As if it was all meaningless to begin with?


Anyway, I want to finish this task before resuming blogging—simply because I’ve got into a groove about it by now… I am in a complete and pure state of anti-procrastination.

… Well, as they say: make hay while the Sun shines…


A Song I Like:
(Marathi) “dnyaandev baaL maajhaa…”
Singer: Asha Bhosale
Lyrics: P. Savalaram
Music: Vasant Prabhu

 

Links…

Here are a few interesting links I browsed recently, listed in no particular order:


“Mathematicians Tame Turbulence in Flattened Fluids” [^].

The operative word here, of course, is: “flattened.” But even then, it’s an interesting read. Another thing: though the essay is pop-sci, the author gives the Navier-Stokes equations, complete with fairly OK explanatory remarks about each term in the equation.

(But I don’t understand why every pop-sci write-up gives the NS equations only in the Lagrangian form, never Eulerian.)


“A Twisted Path to Equation-Free Prediction” [^]. …

“Empirical dynamic modeling.” Hmmm….


“Machine Learning’s `Amazing’ Ability to Predict Chaos” [^].

Click-bait: They use data science ideas to predict chaos!

8 Lyapunov times is impressive. But ignore the other, usual kind of hype: “…the computer tunes its own formulas in response to data until the formulas replicate the system’s dynamics.” [italics added.]


“Your Simple (Yes, Simple) Guide to Quantum Entanglement” [^].

Click-bait: “Entanglement is often regarded as a uniquely quantum-mechanical phenomenon, but it is not. In fact, it is enlightening, though somewhat unconventional, to consider a simple non-quantum (or “classical”) version of entanglement first. This enables us to pry the subtlety of entanglement itself apart from the general oddity of quantum theory.”

Don’t dismiss the description in the essay as being too simplistic; the author is Frank Wilczek.


“A theoretical physics FAQ” [^].

Click-bait: Check your answers with those given by an expert! … Do spend some time here…


Tensor product versus Cartesian product.

If you are an engineer and you get interested in quantum entanglement, beware of two easily confused terms: the tensor product and the Cartesian product.

The tensor product, you might think, is like the Cartesian product. But it is not. See mathematicians’ explanations. Essentially, the basis sets (and the operations) are different. [^] [^].

But what the mathematicians don’t do is to take some simple but non-trivial examples and actually work everything out in detail. Instead, they just jump from this definition to that definition. For example, see: “How to conquer tensorphobia” [^] and “Tensorphobia and the outer product” [^]. Read either of these two articles; any one is sufficient to give you tensorphobia even if you never had it!

You will never run into a mathematician who explains the difference between the two concepts by first giving you a rough feel for it: by giving you a good, fully worked-out example in the context of finite sets (including an enumeration of all the set elements) that illustrates the key difference, viz., the addition vs. the multiplication of the numbers of the unit vectors (i.e., of the members of the basis sets).

A third-class epistemology when it comes to explaining, mathematicians typically have.
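Just to illustrate what I mean by a fully worked-out finite example, here is a minimal sketch in NumPy. (The choice of R^2 and R^3, and all the names in the script, are mine, purely for illustration.)

```python
import numpy as np

# Fully enumerated basis sets of R^2 and R^3.
e = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]        # basis of R^2 (2 members)
f = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]                          # basis of R^3 (3 members)

# Cartesian product (taken as a vector space, i.e. the direct sum): basis sizes ADD.
# Each basis member is a concatenated pair (e_i, 0) or (0, f_j); 2 + 3 = 5 in all.
cartesian_basis = [np.concatenate([ei, np.zeros(3)]) for ei in e] \
                + [np.concatenate([np.zeros(2), fj]) for fj in f]
print(len(cartesian_basis))     # 5

# Tensor product: basis sizes MULTIPLY.
# Each basis member is the outer product of e_i with f_j; 2 * 3 = 6 in all.
tensor_basis = [np.outer(ei, fj) for ei in e for fj in f]
print(len(tensor_basis))        # 6 (each member is a 2 x 3 array)
print(tensor_basis[1])          # e_1 "times" f_2: a single 1 at position (0, 1)
```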


A Song I Like:

(Marathi) “he gard niLe megha…”
Singers: Shailendra Singh, Anuradha Paudwal
Music: Rushiraj
Lyrics: Muralidhar Gode

[As usual, a little streamlining may occur later on.]

Some suggested time-pass (including ideas for Python scripts involving vectors and tensors)

Actually, I am busy writing down some notes on scalars, vectors and tensors, which I will share once they are complete. No, nothing great or very systematic; these are just a few notings here and there, taken down mainly for myself. More like a formulae cheat-sheet, but the topic is complicated enough that it was necessary that I have them all in one place. (They may also get distributed as extra material for my upcoming FDP (faculty development program) on CFD.)

While I remain busy in this activity, and thus stay away from blogging, you can do a few things:


1.

Think about it: You can always build a unique tensor field from any given vector field, say by taking its gradient. (Or, you can build yet another unique tensor field, by taking the Kronecker product of the vector field variable with itself. Or, yet another one by taking the Kronecker product with some other vector field, even just the position field!). And, of course, as you know, you can always build a unique vector field from any scalar field, say by taking its gradient.

So, you can write a Python script to load a B&W image file (or load a color .PNG/.BMP/even .JPEG, and convert it into a gray-scale image). You can then interpret the gray-scale intensities of the individual pixels as the local scalar field values existing at the centers of cells of a structured (squares) mesh, and numerically compute the corresponding gradient vector and tensor fields.

Alternatively, you can also interpret the RGB (or HSL/HSV) values of a color image as the x-, y-, and z-components of a vector field, and then proceed to calculate the corresponding gradient tensor field.

Write the output in XML format.
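Here, by way of illustration, is a minimal sketch of the kind of script I have in mind for point 1. (It assumes NumPy and Pillow are available; “input.png”, “fields.xml”, and the XML layout are just my placeholders.)

```python
import numpy as np
import xml.etree.ElementTree as ET
from PIL import Image

# Load an image, convert it to gray-scale, and read the pixel intensities as a
# scalar field phi sampled at the cell centers of a uniform structured (squares) mesh.
phi = np.asarray(Image.open("input.png").convert("L"), dtype=float)

# Gradient of the scalar field -> a vector field (dphi/dx, dphi/dy).
# np.gradient returns the derivative along axis 0 (rows, ~y) first, then axis 1 (cols, ~x).
dphi_dy, dphi_dx = np.gradient(phi)

# Gradient of that vector field -> a (2 x 2) tensor field at every pixel.
d2_yx, d2_xx = np.gradient(dphi_dx)      # derivatives of dphi/dx along y and x
d2_yy, d2_xy = np.gradient(dphi_dy)      # derivatives of dphi/dy along y and x
T = np.stack([np.stack([d2_xx, d2_xy], axis=-1),
              np.stack([d2_yx, d2_yy], axis=-1)], axis=-2)

# Dump everything into a simple, ad hoc XML layout.
root = ET.Element("fields", shape="x".join(map(str, phi.shape)))
vector = np.stack([dphi_dx, dphi_dy], axis=-1)
for name, arr in (("scalar", phi), ("vector", vector), ("tensor", T)):
    ET.SubElement(root, name).text = " ".join(map(str, arr.ravel()))
ET.ElementTree(root).write("fields.xml")
```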


2.

Think about it: You can always build a unique vector field from a given tensor field, say by taking its divergence. Similarly, you can always build a unique scalar field from a vector field, say by taking its divergence.

So, you can write a Python script to load a color image, and interpret the RGB (or HSL/HSV) values now as the xx-, xy-, and yy-components of a symmetrical 2D tensor, and go on to write the code to produce the corresponding vector and scalar fields.
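And here is a minimal sketch for point 2, under the same assumptions (NumPy + Pillow; “input.png” is again just a placeholder file name):

```python
import numpy as np
from PIL import Image

# Read the R, G and B channels of a color image as the xx-, xy- and yy-components
# of a symmetric 2D tensor field sampled at the cell centers.
rgb = np.asarray(Image.open("input.png").convert("RGB"), dtype=float)
s_xx, s_xy, s_yy = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Divergence of the tensor field -> a vector field:
#   v_x = d(s_xx)/dx + d(s_xy)/dy,   v_y = d(s_xy)/dx + d(s_yy)/dy.
# np.gradient returns the derivative along axis 0 (~y) first, then axis 1 (~x).
ds_xx_dy, ds_xx_dx = np.gradient(s_xx)
ds_xy_dy, ds_xy_dx = np.gradient(s_xy)
ds_yy_dy, ds_yy_dx = np.gradient(s_yy)
v_x = ds_xx_dx + ds_xy_dy
v_y = ds_xy_dx + ds_yy_dy

# Divergence of that vector field -> a scalar field.
dv_x_dy, dv_x_dx = np.gradient(v_x)
dv_y_dy, dv_y_dx = np.gradient(v_y)
scalar = dv_x_dx + dv_y_dy
print(scalar.shape, scalar.mean())
```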


Yes, as my resume shows, I was going to write a paper on a simple, interactive, pedagogical software tool called “ToyDNS” (from Toy + Displacements, Strains, Stresses). I had written an extended abstract, and it had even got accepted at a renowned international conference. However, at that time, I was in an industrial job and didn’t get the time to write the software or the paper. Even later on, the matter kept slipping.

I now plan to surely take this up on priority, as soon as I am done with (i) the notes currently in progress, and immediately thereafter, (ii) my upcoming stress-definition paper (see my last couple of posts here and the related discussion at iMechanica).

Anyway, the ideas in the points 1. and 2. above were, originally, a part of my planned “ToyDNS” paper.


3.

You can induce a “zen-like” state in yourself, or if not that, then at least a “TV-watching” state (actually, something better than that), simply by pursuing this URL [^] and pouring all your valuable hours into it. … Or who knows, you might also turn into a closet meteorologist, just like me. [And don’t tell anyone, but what they show here is actually a vector field.]


4.

You can listen to the song in the next section…. It’s one of those flowy things which have come to us from that great old Grand-Master, viz., SD Burman himself! … Other songs falling in this same sub-sub-genre include “yeh kisine geet chheDaa” and “ThanDi hawaaein,” both of which I have run before. So, now, you go enjoy yet another one of the same kind—and quality. …


A Song I Like:

[It’s impossible to figure out whose contribution is greater here: SD’s, Sahir’s, or Lata’s. So, this is one of those happy circumstances in which the order of the listing of the credits is purely incidental … Also recommended is the video of this song. Mona Singh (aka Kalpana Kartik (i.e. Dev Anand’s wife, for the new generation)) is sooooo magical here, simply because she is so… natural here…]

(Hindi) “phailee huyi hai sapanon ki baahen”
Music: S. D. Burman
Lyrics: Sahir
Singer: Lata Mangeshkar


But don’t forget to write those Python scripts….

Take care, and bye for now…