# Are the recent CS graduates from India that bad?

Over the past couple of weeks, I had not found much time to check out blogs on a regular basis. But today I did find some free time, and so I did a routine round-up of the blogs. In the process, I came across a couple of interesting posts by Prof. Dheeraj Sanghi of IIIT Delhi. (Yes, it’s IIIT Delhi, not IIT Delhi.)

The latest post by Prof. Sanghi is about achieving excellence in Indian universities [^]. He offers valuable insights by taking a specific example, viz., that of IIIT Delhi. I would like to leave this post for the attention of (who else?) the education barons in Pune and the SPPU authorities. [Addendum: Also see this post [^] by Prof. Pankaj Jalote, Director of IIIT Delhi.]

Prof. Sanghi’s second (i.e. earlier) post is about the current (dismal) state of CS education in this country [^].

As someone who has direct work experience both in the IT industry and in teaching in mechanical engineering departments in “private” engineering colleges in India, the general impression I have developed seems to be a bit at odds with what was being reported in this post by Prof. Sanghi (and by his readers, in its comments section). Of course, Prof. Sanghi was restricting himself only to the CS graduates, but still, the comments did hint at the overall trend, too.

So, I began writing a comment at Prof. Sanghi’s blog, but, as usual, my comment soon grew too big. It became big enough that I finally had to convert it into a separate post here. Let me share these thoughts of mine, below.

As compared to the CS graduates in India, and speaking in strictly relative terms, the mechanical engineering students seem to be doing better, much better, as far as the actual learning done over the 4 UG years is concerned. Not just the top 1–2%, but even the top 15–20% of the mechanical engineering students, perhaps even the top quarter, do seem to be doing fairly OK—even if it could be, perhaps, only at a minimally adequate level when compared to international standards.

… No, even for the top quarter of the total student population (in mechanical engineering, in “private” colleges), their fundamental concepts aren’t always as clear as they need to be. More important, excepting the top (maybe) 2–5%, others within the top quarter don’t seem to be learning the art of conceptual analysis of mathematics, as such. They probably would not always be able to figure out the meaning of even the simplest variation on an equation they have already studied.

For instance, even after completing a course (or one-half part of a semester-long course) on vibrations, if they are shown the following equation for the classical transverse waves on a string:

$\dfrac{\partial^2 \psi(x,t)}{\partial x^2} + U(x,t) = \dfrac{1}{c^2}\dfrac{\partial^2 \psi(x,t)}{\partial t^2}$,

most of them wouldn’t be able to tell the physical meaning of the second term on the left-hand side—not even if they are asked to work on it purely at their own convenience, at home, and not on the fly and under pressure, say during a job interview or a viva voce examination.

However, change the notation used for the second term from $U(x,t)$ to $S(x,t)$ or $F(x,t)$, and then, suddenly, the bulb might flash on, but only for some of the top quarter—not all. … This would be the case even if, in their course on heat transfer, they have been taught the detailed derivation of a somewhat analogous equation: the heat conduction equation in its most general form, including possibly non-uniform and unsteady internal heat generation. … I am talking about the top 25% of the graduating mechanical engineers from private engineering colleges in SPPU and the University of Mumbai. Which means, after leaving aside a lot of other top people who go to IITs and other reputed colleges like BITS Pilani, COEP, VJTI, etc.
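For reference, the analogous equation I have in mind here is the standard 1-D heat conduction equation with internal heat generation (spelled out here only for explicitness, in the usual textbook notation: $T$ is the temperature, $k$ the thermal conductivity, $\alpha$ the thermal diffusivity, and $\dot{q}(x,t)$ the volumetric rate of heat generation):

$\dfrac{\partial^2 T(x,t)}{\partial x^2} + \dfrac{\dot{q}(x,t)}{k} = \dfrac{1}{\alpha}\dfrac{\partial T(x,t)}{\partial t}$.

The second term on the left plays a role structurally similar to that of $U(x,t)$ in the wave equation above: it is a source (or forcing) term distributed over the domain.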

IMO, their professors are more responsible for the failure to develop such skills than are the students themselves. (I was talking of the top quarter of the students.)

Yet, I also think that these students (the top quarter) are at least “passable” as engineers, in some sense of the term, if not better. I mean to say, looking at their seminars (i.e. the independent but guided special studies, mostly on student-selected topics, for which they have to produce a small report and make a 10–15 minutes’ presentation) and also looking at how they work during their final-year projects, sure, they do seem to have picked up some definite competencies in mechanical engineering proper. In their projects, most of the time, these students may only be reproducing some already reported results, or trying out minor variations on existing machine designs, which is what is expected at the UG level in our university system anyway. But still, my point is, they often are seen taking some good efforts in actually fabricating machines on their own, and sometimes they even come up with some good, creative, or cost-effective ideas in their design or fabrication activities.

Once again, let me remind you: I was talking about only the top quarter or so of the total students in private colleges (and from mechanical engineering).

The bottom half is overall quite discouraging. The bottom quarter of the degree holders are mostly not even worth giving a post-X-standard, 3-year diploma certificate. They wouldn’t be able to write even a 5-page report on their own. They wouldn’t even be able to use routine metrological instruments/gauges correctly. … Let’s leave them aside for now.

But the top quarter in the mechanical departments certainly seems to be doing relatively better, as compared to those from the CS departments. … I mean to say: if these CS folks are unable to write on their own even just a linked-list program in C (using pointers and memory allocation on the heap), or if their final-year projects wouldn’t exceed 100+ lines of independently written code… Well, what then is left on this side for making comparisons anyway? … Contrast: At COEP, my 3rd-year mechanical engineering students were asked to write a total of more than 100 lines of C code, as part of their routine course assignments, during a single semester-long course on FEM.
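Just to make the benchmark concrete, here is roughly the kind of minimal program I have in mind when I say “a linked-list program in C.” This is only an illustrative sketch of mine (not taken from any syllabus or any student’s submission): it builds a short list on the heap, traverses it, and frees it.

```c
#include <stdio.h>
#include <stdlib.h>

/* A singly linked list node, allocated on the heap. */
struct node {
    int data;
    struct node *next;
};

/* Prepend a new node carrying `data`; returns the new head. */
static struct node *push(struct node *head, int data)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL) {
        fprintf(stderr, "out of memory\n");
        exit(EXIT_FAILURE);
    }
    n->data = data;
    n->next = head;
    return n;
}

int main(void)
{
    struct node *head = NULL;

    /* Build the list 4 -> 3 -> 2 -> 1 -> 0. */
    for (int i = 0; i < 5; ++i)
        head = push(head, i);

    /* Traverse and print. */
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->data);
    printf("\n");

    /* Free every node. */
    while (head != NULL) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}
```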

… Continuing with the mechanical engineering students: why, even in the decidedly average (or below-average) colleges in Mumbai and Pune, some kids (admittedly, maybe only about 10% or 15% of them) can be found taking some extra efforts to learn some extra skills from outside our pathetic university system. Learning CAD/CAM/CAE software by attending private training institutes has become a pretty widespread practice by now.

No, with these courses, they aren’t expected to become FEM/CFD experts, and they don’t. But at least they do learn to push buttons and put mouse-clicks in, say, ProE/SolidWorks or Ansys. They do learn to deal with conversions between different file formats. They do learn that meshes generated even in the best commercial software can sometimes be of insufficient quality, or that importing mesh data into a different analysis program may render the mesh inconsistent and crash the analysis. Sometimes, they even come to master setting the various boundary-condition options right—even if only in that particular version of that particular software. However, they wouldn’t be able to use a research-level software like OpenFOAM on their own—and, frankly, it is not expected of them, not at their level, anyway.

They sometimes are also seen taking efforts on their own in finding sponsorships for their BE projects (small-scale or big ones), sometimes even at good research institutions (like BARC). In fact, as far as the top quarter of the BE student projects (in the mechanical departments, in private engineering colleges) go, I often do get the definite sense that any lacunae coming up in these projects are not attributable so much to the students themselves as to the professors who guide these projects. The stories of a professor shooting down a good project idea proposed by a student, simply because the professor himself wouldn’t have any clue of what’s going on, are neither unheard of nor entirely without basis.

So, yes, the overall trend even in the mechanical engineering stream is certainly dipping downwards, that’s for sure. Yet, the actual fall—its level—does not seem to be as bad as what is being reported about CS.

My two cents.

Today is India’s National Science Day. Greetings!

Will stay busy in moving and getting settled in the new job. … Don’t look for another post for another couple of weeks. … Take care, and bye for now.

[Finished doing minor editing touches on 28 Feb. 2017, 17:15 hrs.]

# The goals are clear, now

This one blog post is actually a combo-pack of some 3 different posts, addressed to three different audiences: (i) to my general readers, (ii) to the engineering academics esp. in India, and (iii) to the QM experts. Let me cover it all in that order.

(I) To the general reader of this blog:

I have a couple of neat developments to report about.

I.1. First, and of immediate importance: I have received, and accepted, a job offer. Of course, the college is from a different university, not SPPU (Savitribai Phule Pune University). Just before attending this interview (in which I accepted the offer), I had also had discussions with the top management of another college, from yet another university (in another city). They too have, since then, confirmed that they are going to invite me once the dates for the upcoming UGC interviews at their college are finalized. I guess I will attend this second interview only if my approvals (the university and the AICTE approvals) for the job I have already accepted and will be joining soon, don’t go through, for whatever reason.

If you ask me, my own gut feel is that the approvals at both these universities should go through. Historically, neither of these two universities has ever had any issue with a mixed metallurgy-and-mechanical background, and especially after the new (mid-2014) GR by the Maharashtra State government (by now 2.5+ years old), the approval at these universities should be more or less only a formality, not a cause for excessive worry as such.

I told you, SPPU is the worst university in Maharashtra. And Pune has become a really filthy, obnoxious place, speaking of its academic-intellectual atmosphere. I don’t know why the outside world still insists on calling both (the university and the city) great. I can only guess. And my guess is that brand values of institutions tend to have a long shelf life—an unrealistically long shelf life when the economy is mixed, not completely free. That is the broad reason. There is another, more immediate and practical reason to it, too—I mean, regarding how it all actually has come to work.

Almost every engineer who graduates from SPPU these days goes into the IT field. They have been doing so for almost two decades by now. Now, in the IT field, the engineering knowledge acquired at the college/university is hardly of any direct relevance. Hence, no one cares for what academically goes on during those four years of UG engineering—not in India, I mean—not even in IITs, speaking in comparison to what used to be the case some 3 decades ago. (For PG engineering, in most cases, the best of them go abroad or to IITs anyway.) By “no one” I mean: first and foremost, the parents of the students; then the students themselves; and then, also the recruiting companies (by which, I mostly mean those from the IT field).

Now, once in the IT industry and thus making a lot of money, these people of course find it necessary to keep the brand value of “Pune University” intact. … Notice that the graduates of IITs and of COEP/VJTI etc. specifically mention their college on their LinkedIn profiles. But none from the other colleges in SPPU do. They always mention only “University of Pune”. The reason is, their colleges didn’t have as much of a brand value as did the university, when all this IT industry trend began. Now, if these SPPU-graduated engineers themselves begin to say that the university they attended was in fact bad (or had gone bad at least when they attended it), it will affect their own career growth, salaries and promotions. So, they never find it convenient to spell out the truth—who would do that? Now, the Pune education barons (not to mention the SPPU authorities) certainly are smart enough to simply latch on to this artificially inflated brand value. The system works, even though the quality of engineering education as such has very definitely gone down. (In some respects, due to the expansion of the engineering education market, the quality has actually gone up—even though my IIT/COEP classmates often find this part difficult to believe. But yes, there have been improvements too. The improvements pertain to such things as syllabi and systems (in the “ISO” sense of the term). But not to the actual delivery—not to the actually imparted education. And that’s my point.)

When parents and recruiting companies themselves don’t care for the quality of education imparted within the four years of UG engineering, it is futile to expect that mere academicians, as a group, would do much to help the matters.

That’s why, though SPPU has become so bad, it still manages to keep its high reputation of the past—and all its current whimsies (e.g. such stupid issues as the Metallurgy-vs-Mechanical branch jumping, etc.)—completely intact.

Anyway, I am too small to fight the entire system. In any case, I was beyond the end of all my resources.

All in all, yes, I have accepted the job offer.

But despite the complaining/irritating tone that has slipped in the above write-up, I would be lying to you if I said that I was not enthusiastic about my new job. I am.

I.2. Second, and from the long-term viewpoint, the much more important development I have to report (to my general readers) is this.

I now realize that I have come to develop a conceptually consistent physical viewpoint for the maths of quantum mechanics.

(I won’t call it an “interpretation,” let alone a “philosophical interpretation.” I would call it a physics theory or a physical viewpoint.)

This work was in progress for almost a year and a half or more—since October 2015, if I go by my scribblings in the margins of my copy of Griffiths’ text-book. I still have to look up the scribblings I made in the small pocket notebooks I maintain (I have already finished more than 10 of them for QM alone). I also have yet to systematically gather and order all those other scribblings I made on paper napkins in restaurants. Yes, in my case, notings on napkins are not just a metaphor; I have often actually made such notings, simply because sometimes I do forget to carry my pocket notebooks. At such times, these napkins (or those rough papers from the waiter’s order-pad) do come in handy. I have been storing them in a plastic bag, in a drawer. Once I look up all such notings systematically, I will be able to sequence the progression of my thoughts better. But yes, as a rough and ready estimate, thinking along this new line has been going on for some 1.5 years or more by now.

But it’s only recently, in December 2016 or January 2017, that I slowly grew fully confident that my new viewpoint is correct. I took about a month to verify it, checking it from different angles, and this process still continues. … But, what the heck, let me be candid about it: the more I think about it, all that the process does is to add more conceptual integrations to the viewpoint. But the basic conceptual scheme, or framework, or the basic line of thought, stays the same. So, it’s it and that’s that.

Of course, detailed write-ups, (at least rough) calculations, and some (rough) simulations still have to be worked out, but I am working on them.

I have already written more than 30 pages in the main article (which I should now be converting into a multi-chapter book), and more than 50 pages in the auxiliary material (which I plan to insert in the main text, eventually).

Yes, I have implemented a source control system (SVN), and have been taking regular backups too, though I need to now implement a system of backups to two different external hard-disks.

But all this on-going process of writing will now get interrupted due to my move to the new job, in another city. My blogging too would get interrupted. So, please stay away from this blog for a while. I will try to resume both ASAP, but as of today, can’t tell when—may be a month or so.

(II) To the engineering academics among my readers, esp. the Indian academics:

I have changed my stance regarding publications. All along thus far, I had maintained that I would not publish anything in one of those “new” journals in which almost every Indian engineering professor publishes these days.

However, I now realize that one of the points in the approvals (by universities, AICTE, UGC, NAAC, NBA, etc.) concerns journal papers. I have only one journal paper on my CV. Keeping the potential IPR issues in mind, all my other papers were written in only a schematic way (the only exception is the diffusion paper), and for that reason, they were published only in conference proceedings. (I had explicitly discussed this matter not just with my guide, but also with my entire PhD committee.) Of course, I made sure that all these were international conferences, pretty reputed ones, with pretty low acceptance rates (though these days the acceptance rates at these same conferences have gone up significantly (which, incidentally, should be a “good” piece of news to my new students)). But still, as a result, all but one of my papers have been only conference papers, not journal papers.

After suffering through UGC panel interviews at three different colleges (all in SPPU), I now realize that it’s futile to plead your case in front of them. They are insufferable in every sense; they stick to their guns. You can’t beat their sense of “quality,” as it were.

So, I have decided to follow their (I mean my UGC panel interviewers’) lead, and thus have now decided to publish at least three papers in such journals, right over the upcoming couple of months or so.

Forgive me if I report the same old things (which I had reported in those international conferences about a decade ago). I have been assured that conference papers are worthless and that no one reads them. Reporting the same things in journal papers should enhance, I guess, their readability. So, the investigations I report on will be the same, but now they will appear in the Microsoft Word format, and in international journals.

That’s another reason why my blogging will be sparser in the upcoming months.

That way, in the world of science and research, it has always been a generally accepted practice, all over the world, to first report your findings at conferences, seek your peers’ opinions on your work or your ideas, and then expand on (or correct) the material/study, and then send it to journals. There is nothing wrong in it. Even the topmost physicists have followed precisely the same policy. … Why, come to think of it, the very first paper that ushered humanity into the quantum era was itself only a conference talk. In fact, it was just a local conference, albeit in an advanced country. I mean Planck’s very first announcement regarding quantization. … So, it’s a perfectly acceptable practice.

The difference this time (I mean, in my present case) will be: I will compress (and hopefully also dumb down) the contents of my conference papers, so as to match the level of the journals in which my UGC panel interviewers themselves publish.

No, the above was not a piece of sarcasm—at least I didn’t mean it that way when I wrote it. I merely meant to highlight an objective fact. Given the typical length, font size, gaps between sections, and the overall treatment of the contents of these journals, I will have to both compress and dumb down my write-ups. … I will of course also add some new sentences here and there to escape the no-previous-publication clause, but believe me, in my case, that is a very minor worry. The important thing would be to match the level of the treatment, to use Microsoft Word’s equation editor, and to cut down on the length. Those are my worries.

Another one of my worries is how to publish two journal papers—one good, and one bad—based on the same idea. I mean, suppose I want to publish something on the nature of the $\delta$ of the calculus of variations in one of these journals. … Incidentally, I do think that what I wrote on this idea right here on this blog a little while ago is worth publishing even in a good journal, say in Am. J. Phys., or at least in the Indian journal “Resonance.” So, I would like to eventually publish it in one of these two journals, too. But for immediately enhancing the number of journal papers on my CV, I should immediately publish a shorter version of the same in one of these new international journals too, on an urgent basis. Now the question is: which aspects should I withhold for now? That is my worry. That’s why, the way my current thinking goes, instead of publishing any new material (say on the $\delta$ of CoV), I should simply recycle the already conference-published material.

One final point. Actually, I never did think that it was immoral to publish in such journals (I mean the ones in which my interviewers from SPPU publish). These journals do have ISSNs, and they always are indexed in Google Scholar (which is an acceptable indexing service even to NBA), and sometimes even in Scopus/PubMed etc. Personally, I had refrained from publishing in them not because I thought that it was immoral to do so, but rather because I thought it was plain stupid. I have been treating the invitations from such journals with a sense of humour all along.

But then, the way our system works, it does have the ways and the means to dumb down one and all. Including me. When my very career is at stake, I will very easily and smoothly go along, toss away my sense of quality and propriety, and join the crowd. (But at least I will be open and forthright about it—admitting it publicly, the way I have already done, here.)

So, that’s another reason why my blogging would be sparser over the upcoming few months, esp. this month and the next. I will be publishing in (those) journals, on a high priority.

(III) To the QM experts:

Now, a bit to QM experts. By “experts,” I mean those who have studied QM through university courses (or text-books, as in my case) to PG or PhD level. I mean, the QM as it is taught at the UG level, i.e., the non-relativistic version of it.

If you are curious about the exact nature of my ideas, well, you will have to be patient. Months, perhaps even a year, is what it’s going to take before I come to write about it on my blog(s). It will take time. I have been engaged in writing about it for about a month by now, and I speak from this experience. And further, the matter of having to immediately publish journal papers in engineering will also interfere with the task of writing.

However, if you are an academic in India (say a professor or even just a serious and curious PhD student of physics/chemistry/engg physics program, say at an IIT/IISc/IISER/similar) and are curious to know about my ideas… Well, just give me a call and let’s decide on a mutually convenient time to meet in person. Ditto, for academics/serious students of physics from abroad visiting India.

No, I don’t at all expect any academics in (or visiting) India to be that curious about my work. But still, theoretically speaking, assuming that someone is interested: just send me an email or call me to fix an appointment, and we will discuss my ideas, in person. We will work out at the black-board (better than working on paper, in my experience).

I am not at all hung up about maintaining secrecy until publication. It’s just that writing takes time.

One part of it is that when you write, people also expect a higher level of precision from you, and ensuring that takes time. Making general but precise statements or claims on the most fundamental topic of physics—QM itself—is difficult, very difficult. Talking to experts is, in contrast, easy—provided you know what you are talking about.

In a direct personal talk, there is a lot of room for going back and forth, jumping around the topics, hand-waving, which is not available in the mode of writing-by-one-then-reading-by-another. And, talking with experts would be easier for me because they already know the context. That’s why I specified PhD physicists/professors at this stage, and not, say, students of engineering or CS folks merely enthusiastic about QM. (Coming to humanities folks, say philosophers, I think that via this work, I have nothing—or next to nothing—to offer to their specialty.)

Personally, I am not comfortable with video-conferencing, though if the person in question is a serious academic or a reputed corporate/national-lab researcher, I would sure give it a thought. For instance, if some professor from the US/UK that I had already interacted with (say at iMechanica, or at his blog, or via emails) wants to now know about my new ideas and wants a discussion via Skype, I could perhaps go in for it—even though I would not be quite comfortable with the video-conferencing mode as such. The direct, in-person talk, working together at the black-board, works best for me. I don’t find Skype comfortable enough even with my own class-mates or close personal relations. It just doesn’t work for me. So, try to keep it out.

For the same reason—the planning and the precision required in writing—I would mostly not be able to even blog about my new ideas. Interactions on blogs tend to introduce too many bifurcations in the discussion, and therefore, even though the different PoVs could be valuable, such interactions should be introduced only after the first cut in the writing is already over. That’s why, the most I would be able to manage on this blog would be some isolated aspects—granted that some inconsistencies or contradictions could still easily slip in. I am not sure, but I will try to cover at least some isolated aspects from time to time.

Here’s an instance. (Let me remind you: I am addressing this part to those who have already studied QM through text-books, esp. to PhD physicists. I am not only referring to equations, but more importantly, I am assuming the context of a direct knowledge of how topics like the one below are generally treated in various books and references.)

Did you ever notice just how radical de Broglie’s idea was? I mean, for the electron, the equations de Broglie used were:

$E = \hbar \omega$ (i.e. $h\nu$) and $p = \hbar k$.

Routine stuff, do you say? But notice: in special relativity (i.e. in the mechanics consistent with classical electrodynamics), the expression for the energy of a massive particle is:
$E^2 = (pc)^2 + (m_0 c^2)^2$

In arriving at the relation $p = \hbar k$, Einstein had dropped the second term ($m_0^2 c^4$) from this expression for energy because radiation has no rest mass, and so his hypothetical light quanta also would carry no rest mass.

When de Broglie assumed that this same expression holds also for the electron—its matter waves—what he basically was doing was: to take an expression derived for a massless particle (Einstein’s quantum of light) as is, and to assume that it would apply also for the massive particle (i.e. the electron).
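To make that step explicit (a standard bit of textbook algebra, spelled out here only for completeness): setting $m_0 = 0$ in the relativistic energy relation gives

$E^2 = (pc)^2 \;\Rightarrow\; E = pc \;\Rightarrow\; p = \dfrac{E}{c} = \dfrac{\hbar \omega}{c} = \hbar k$,

using $\omega = ck$ for light. De Broglie then carried the end-result $p = \hbar k$ over, unchanged, to the electron, for which $m_0 \neq 0$ and hence $E \neq pc$.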

In effect, what de Broglie had ended up asserting was that the matter-waves of the electron had a massless nature.

Got it? See how radical—and how subtly (indirectly, implicitly) slipped in—is that suggestion? Have you seen this aspect highlighted or discussed this way in a good university course or a text-book on modern physics or QM? …

…QM is subtle, very subtle. That’s why working out a conceptually consistent scheme for it is (and has been) such fun.

The above observation was one of my clues in working out my new scheme. The other was the presence of classical features in QM. Not only the pop-science books but also the text-books on modern physics (and QM) had led me to believe that what QM represented was a completely radical break from classical physics. Uh-oh. Not quite.

Radical ideas, QM does have. But completely radical? Not quite.

QM, actually, is hybrid. It does have a lot of classical elements built into it, right in its postulates. I had come to notice this part and was uncomfortable with it—I didn’t have the confidence in my own observation; I used to think that when I study more of QM, I would be shown how these classical features fall away. That part never happened, not even as my further studies of QM progressed, and so, I slowly became more confident about it. QM is hybrid, full stop. It does have classical features built right in its postulates, even in its maths. It does not represent a complete break from the classical physics—not as complete a break as physicists lead you to believe. That was my major clue.

Other clues came as my grasp of the nature of the QM maths became better and firmer, which occurred over a period of time. I mean the nature of the maths of: the Fourier theory, the variational calculus, the operator theory, and the higher-dimensional spaces.

I had come to understand the Fourier theory via my research on diffusion, and the variational calculus, via my studies (and teaching!) of FEM. The operator theory, I had come to suspect (simply comparing the way people used to write in the early days of QM, and the way they now write) was not essential to the physics of the QM theory. So I had begun mentally substituting the operators acting on the wavefunction by just a modified wavefunction itself. … Hell, do you express a classical problem—say a Poisson equation problem or a Navier-Stokes problem—via operators? These days people do, but, thankfully, the trend has not yet made it to the UG text-books to a significant extent. The whole idea of the operator theory is irrelevant to physics—its only use and relevance is in maths. … Soon enough, I then realized that the wavefunction itself is a curious construct. It’s pointless debating whether the wavefunction is ontic or epistemic, primarily because the damn thing is dimensionless. Physicists always take care to highlight the fact that its evolution is unitary, but what they never tell you, never ever highlight, is the fact that the damn thing has no dimensions. Qua a dimensionless quantity, it is merely a way of organizing some other quantities that do have a physical existence. As to its unitary evolution, well, all that this idea says is that it is merely a weighting function, so to speak. But it was while teaching thermodynamics (in Mumbai in 2014 and in Pune in 2015) that I finally connected the variational principles with the operator theory, the thermodynamic system with the quantum system, and this way then got my breakthroughs (or at least my clues).

Yet another clue was the appreciation of the fact that the world is remarkably stable. When you throw a ball, it goes through the space as a single object. The part of the huge Hilbert space of the entire universe which represents the ball—all the quantum particles in it—somehow does not come to occupy a bigger part of that space. Their relations to each other somehow stay stable. That was another clue.

As to the higher-dimensional function spaces, again, my climb was slow but steady. I had begun writing my series of posts on the idea of space. It helped. Then I worked through higher-dimensional space. A rough-and-ready version of my understanding was done right on this blog. It was then that my inchoate suspicions about the nature of the Hilbert space finally began to fall in place. There is an entrenched view, viz., that the wavefunction is a “vector” that “lives” only in a higher-dimensional abstract space, and that the existence of the tensor product over the higher-dimensional space makes it in principle impossible to visualize the wavefunction for a multi-particle quantum system, which means, any quantum system which is more complex than the hydrogen atom (i.e. a single electron). Schrodinger didn’t introduce this idea, but when Lorentz pointed out that a higher-dimensional space was implied by Schrodinger’s procedure, Schrodinger first felt frustrated, and later on, in any case, he was unable to overcome this objection. And so, this part got entrenched—and became a part of the mathematicians’ myths of QM. As my own grasp of this part of the maths became better (and it was engineers’ writings on linear algebra that helped me improve my grasp, not physicists’ or mathematicians’ (which I did attempt valiantly, and which didn’t help at all)) I got my further clues. For a clue, see my post on the CoV; I do mention, first, the Cartesian product, and then, a tensor product, in it.
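For concreteness (standard notation, added only to make the claim tangible): even for just two particles, the wavefunction is a single function on a six-dimensional configuration space,

$\Psi = \Psi(x_1, y_1, z_1, x_2, y_2, z_2, t), \qquad \Psi \in L^2(\mathbb{R}^3) \otimes L^2(\mathbb{R}^3) \cong L^2(\mathbb{R}^6)$,

which is the precise sense in which the entrenched view says that it cannot be pictured as a field over the ordinary 3-D space in the way a classical field can.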

Another clue was a better understanding of the nonlinear vs. linear distinction in maths. It too happened slowly.

As to others’ writings, the most helpful clue came from the “anti-photon” paper by (the Nobel laureate) W. E. Lamb. Among the bloggers, I found some of the write-ups by Lubos Motl to be really helpful; also a few by Schlafly. Discussions on Scott Aaronson’s blog were useful to check out the different perspectives on the quantum problems.

The most stubborn problem for me perhaps was the measurement problem, i.e. the collapse postulate. But to say anything more about it right away would be premature—it would be too premature, in fact. I want to do it right—even though I will surely follow the adage that a completed document is better than a perfect document. Perfection may get achieved only on collapse, but I happily don’t carry the notion that a good treatment of the collapse postulate has to be preceded by a collapse.

Though the conceptual framework I now have in mind is new, once it is published, it would not be found, I think, to be very radically new—not by the working physicists or the QM experts themselves anyway. …

… I mean, personally speaking, when I for the first time thought of this new way of thinking about the QM maths, it was radically new (and radically clarifying) to me. (As I said, it happened slowly, over a period of time, starting maybe from the second half of 2015 or so, if not earlier.)

But since then, through my regular searches on the ‘net, I have found that other people have been suggesting somewhat similar ideas for quite some time, though they have been, IMO, not as fully consistent as they could have been. For example, see Philip Wallace[^]’s work (which I came across only recently, right this month). Or, see Martin Ligare[^]’s papers (which I ran into just last month, on the evening of 25th January, to be precise). … Very close to my ideas, but not quite the same. And, not as conceptually comprehensive, if that’s the right word to use for it.

My tentative plan as of now is to first finish writing the document (already 30+ pages, as I mentioned above in the first section). This document is in the nature of a conceptual road-map, or a position/research-program paper. Call it a white-paper sort of a document, say. I want to finish it first. Simultaneously, I will also try to do some simulations or so, and only then go for writing papers for (good) journals. … Sharing of ideas on this blog wouldn’t have to wait until the papers though; it could begin much earlier than that, in fact as soon as the position paper is done, which should be after a few months—say by June-July at the earliest. I will try to keep this position paper as brief as possible, say under 100 pages.

Let’s see how it all goes. I will keep you updated. But yes, the goals are clear now.

I wrote this lengthy a post (almost 5000 words) because I did want to get all these things off my mind and onto the blog. But since in the immediate future I will be busy organizing the move (right from hunting for a house/flat to rent, to deciding on what all stuff to leave in Pune for the time being and what all to take with me), and then with the actual move (the actual packing, moving, and unpacking etc.), I wouldn’t get the time to blog over the next 2–3 weeks, maybe until it’s March already. Realizing it, I decided to just gather all this material, which is worth 3 posts, and to dump it all together in this single post. So, there.

Bye for now.

[As usual, a minor revision or two may be done later.]

# See, how hard I am trying to become an Approved (Full) Professor of Mechanical Engineering in SPPU?—4

In this post, I provide my answer to the question which I had raised last time, viz., about the differences between the $\Delta$, the $\text{d}$, and the $\delta$ (the first two, of the usual calculus, and the last one, of the calculus of variations).

Some pre-requisite ideas:

A system is some physical object chosen (or isolated) for study. For continua, it is convenient to select a region of space for study, in which case that region of space (holding some physical continuum) may also be regarded as a system. The system boundary is an abstraction.

A state of a system denotes a physically unique and reproducible condition of that system. State properties are the properties or attributes that together uniquely and fully characterize a state of a system, for the chosen purposes. The state is an axiom, and state properties are its corollary.

State properties for continua are typically expressed as functions of space and time. For instance, pressure, temperature, volume, energy, etc. of a fluid are all state properties. Since state properties uniquely define the condition of a system, they represent definite points in an appropriate, abstract, (possibly) higher-dimensional state space. For this reason, state properties are also called point functions.

A process (synonymous to system evolution) is a succession of states. In classical physics, the succession (or progression) is taken to be continuous. In quantum mechanics, there is no notion of a process; see later in this post.

A process is often represented as a path in a state space that connects the two end-points of the starting and ending states. A parametric function defined over the length of a path is called a path function.

A cyclic process is one that has the same start and end points.

During a cyclic process, a state function returns to its initial value. However, a path function does not necessarily return to the same value over every cyclic change—it depends on which particular path is chosen. For instance, if you take a round trip from point $A$ to point $B$ and back, you may spend some amount of money $m$ if you take one route but another amount $n$ if you take another route. In both cases you do return to the same point viz. $A$, but the amount you spend is different for each route. Your position is a state function, and the amount you spend is a path function.
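To put the same point in thermodynamic terms, here is a minimal numerical sketch (my own illustrative numbers, assuming one mole of an ideal gas): between the same two end states, the change in internal energy (a state function) is the same along every path, but the work done (a path function) depends on which path is taken.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* One mole of an ideal gas; both paths start at (p1, V1, T)
       and end at (p2, V2, T), i.e. the end-point temperature is T. */
    const double nR = 8.314;   /* J/K, for n = 1 mol */
    const double T  = 300.0;   /* K, temperature at both end states */
    const double V1 = 1.0;     /* m^3 */
    const double V2 = 2.0;     /* m^3 */
    const double p1 = nR * T / V1;

    /* Path A: reversible isothermal expansion, W = nRT ln(V2/V1). */
    double W_A = nR * T * log(V2 / V1);

    /* Path B: isobaric expansion at p1 from V1 to V2 (W = p1*(V2 - V1)),
       followed by constant-volume cooling back to T (W = 0).          */
    double W_B = p1 * (V2 - V1);

    /* For an ideal gas, U depends on T alone; the same end-point T
       means the same change in the state function U along either path. */
    double dU = 0.0;

    printf("Work along path A  : %.1f J\n", W_A);  /* about 1729 J */
    printf("Work along path B  : %.1f J\n", W_B);  /* about 2494 J */
    printf("Change in U (both) : %.1f J\n", dU);
    return 0;
}
```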

[I may make the above description a bit more rigorous later on (by consulting a certain book which I don’t have handy right away (and my notes of last year are gone in the HDD crash)).]

The $\Delta$, the $\text{d}$, and the $\delta$:

The $\Delta$ denotes a sufficiently small but finite, and locally existing, difference between different parts of a system. Typically, since state properties are defined as (continuous) functions of space and time, what the $\Delta$ represents is a finite change in some state property function that exists across two different but adjacent points in space (or two nearby instants in time), for a given system.

The $\Delta$ is a local quantity, because it is defined and evaluated around a specific point of space and/or time. In other words, an instance of $\Delta$ is evaluated at a fixed $x$ or $t$. The $\Delta x$ simply denotes a change of position; it may or may not mean a displacement.

The $\text{d}$ (i.e. the infinitesimal) is nothing but the $\Delta$ taken in some appropriate limiting process to the vanishingly small limit.

Since $\Delta$ is locally defined, so is the infinitesimal (i.e. $\text{d}$).
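In symbols (the usual textbook definition, restated here only for completeness), for a state property $f(x)$:

$\dfrac{\text{d}f}{\text{d}x} = \lim_{\Delta x \to 0} \dfrac{f(x + \Delta x) - f(x)}{\Delta x}$,

with everything evaluated at one fixed point $x$ of one and the same system; the infinitesimal $\text{d}f = \dfrac{\text{d}f}{\text{d}x}\,\text{d}x$ thus remains just as local a quantity as the $\Delta$ from which it is obtained.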

The $\delta$ of CoV is completely different from the above two concepts.

The $\delta$ is a sufficiently small but global difference between the states (or paths) of two different, abstract, but otherwise identical views of the same physically existing system.

Considering the fact that an abstract view of a system is itself a system, $\delta$ also may be regarded as a difference between two systems.

Though differences in paths are not only possible but also routinely used in CoV, in this post, to keep matters simple, we will mostly consider differences in the states of the two systems.

In CoV, the two states (of the two systems) are so chosen as to satisfy the same Dirichlet (i.e. field) boundary conditions separately in each system.

The state function may be defined over an abstract space. In this post, we shall not pursue this line of thought. Thus, the state function will always be a function of the physical, ambient space (defined in reference to the extensions and locations of concretely existing physical objects).

Since a state of a system of nonzero size can only be defined by specifying its values for all parts of a system (of which it is a state), a difference between states (of the two systems involved in the variation $\delta$) is necessarily global.

In defining $\delta$, both the systems are considered only abstractly; it is presumed that at most one of them may correspond to an actual state of a physical system (i.e. a system existing in the physical reality).

The idea of a process, i.e. the very idea of a system evolution, necessarily applies only to a single system.

What the $\delta$ represents is not an evolution because it does not represent a change in a system, in the first place. The variation, to repeat, represents a difference between two systems satisfying the same field boundary conditions. Hence, there is no evolution to speak of. When compressed air is passed into a rubber balloon, its size increases. This change occurs over certain time, and is an instance of an evolution. However, two rubber balloons already inflated to different sizes share no evolutionary relation with each other; there is no common physical process connecting the two; hence no change occurring over time can possibly enter their comparative description.

Thus, the “change” denoted by $\delta$ is incapable of representing a process or a system evolution. In fact, the word “change” itself is something of a misnomer here.

Text-books often stupidly try to capture the aforementioned idea by saying that $\delta$ represents a small and possibly finite change that occurs without any elapse of time. Apart from the mind-numbing idea of a finite change occurring over no time (or equally stupefying ideas which it suggests, viz., a change existing at literally the same instant of time, or, alternatively, a process of change that somehow occurs to a given system but “outside” of any time), what they, in a way, continue to suggest also is the erroneous idea that we are working with only a single, concretely physical system, here.

But that is not the idea behind $\delta$ at all.

To complicate the matters further, no separate symbol is used when the variation $\delta$ is made vanishingly small.

In the primary sense of the term variation (or $\delta$), the difference it represents is finite in nature. The variation is basically a function of space (and time), and at every value of $x$ (and $t$), the value of $\delta$ is finite, in the primary sense of the word. Yes, these values can be made vanishingly small, though the idea of the limits applied in this context is different. (Hint: Expand each of the two state functions in a power series and relate each pair of corresponding power terms via a separate parameter. Then, put the difference in each parameter through a limiting process so that it vanishes. You may also use the Fourier expansion.)
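One standard construction (the usual textbook device, stated here only to make the hint concrete) is to write the second, varied state function in terms of the first as

$\tilde{y}(x) = y(x) + \epsilon\, \eta(x)$, with $\eta(x) = 0$ on the Dirichlet boundary,

so that the variation is $\delta y(x) = \tilde{y}(x) - y(x) = \epsilon\, \eta(x)$: at every $x$ it is a finite difference between the two state functions, and it is made vanishingly small not by shrinking any region of space or interval of time, but by letting the single parameter $\epsilon \to 0$ (or, with a power-series or Fourier expansion of $\eta$, by letting each coefficient’s parameter go to zero separately).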

The difference represented by $\delta$ is between two abstract views of a system. The two systems are related only in an abstract view, i.e., only in (the mathematical) thought. In the CoV, they are supposed as connected, but the connection between them is not concretely physical because there are no two separate physical systems concretely existing, in the first place. Both the systems here are mathematical abstractions—they first have been abstracted away from the real, physical system actually existing out there (of which there is only a single instance).

But, yes, there is a sense in which we can say that $\delta$ does have a physical meaning: it carries the same physical units as for the state functions of the two abstract systems.

An example from biology:

Here is an example of the differences between two different paths (rather than two different states).

Plot the height $h(t)$ of a growing sapling at different times, and connect the dots to yield a continuous graph of the height as a function of time. The difference in the heights of the sapling at two different instants is $\Delta h$. But if you consider two different saplings planted at the same time, and assuming that they grow to the same final height at the end of some definite time period (just pick some moment where their graphs cross each other), and then, abstractly regarding them as some sort of imaginary plants, if you plot the difference between the two graphs, that is the variation or $\delta h(t)$ in the height-function of either. The variation itself is a function (here of time); it has the units, of course, of m.
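Here is a minimal numerical sketch of the same example (the two growth curves below are purely hypothetical, chosen only so that they agree at the initial and the final instants): the pointwise difference between the two height functions is the variation $\delta h(t)$, itself a function of time, carrying the units of m.

```c
#include <stdio.h>
#include <math.h>

/* Two hypothetical growth curves that share the same end-points:
   both are 0 m at t = 0 and H m at t = T_end.                   */
static double h1(double t, double T_end, double H)
{
    return H * (t / T_end);               /* linear growth       */
}

static double h2(double t, double T_end, double H)
{
    return H * pow(t / T_end, 2.0);       /* slower early growth */
}

int main(void)
{
    const double T_end = 100.0;  /* days   */
    const double H     = 2.0;    /* metres */

    printf("   t      h1       h2    delta_h = h2 - h1  (heights in m)\n");
    for (int i = 0; i <= 10; ++i) {
        double t  = T_end * i / 10.0;
        double dh = h2(t, T_end, H) - h1(t, T_end, H);
        /* delta_h is zero at both end-points and finite in between:
           a function of t, carrying the same units (m) as h itself. */
        printf("%6.1f  %6.3f  %6.3f  %12.3f\n",
               t, h1(t, T_end, H), h2(t, T_end, H), dh);
    }
    return 0;
}
```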

Summary:

The $\Delta$ is a local change inside a single system, and $\text{d}$ is its limiting value, whereas the $\delta$ is a difference across two abstract systems differing in their global states (or global paths), and there is no separate symbol to capture this object in the vanishingly small limit.

Exercises:

Consider one period of the function $y = A \sin(x)$, say over the interval $[0,2\pi]$; $A = a$ is a small, real-valued, constant. Now, set $A = 1.1a$. Is the change/difference here a $\delta$ or a $\Delta$? Why or why not?

Now, take the derivative, i.e., $y' = A \cos(x)$, with $A = a$ once again. Is the change/difference here a $\delta$ or a $\Delta$? Why or why not?

Which one of the above two is a bigger change/difference?

Also consider this angle: Taking the derivative did affect the whole function. If so, why is it that we said that $\text{d}$ was necessarily a local change?

An important and special note:

The above exercises, I am sure, many (though not all) of the Officially Approved Full Professors of Mechanical Engineering at the Savitribai Phule Pune University and COEP would be able to do correctly. But the question I posed last time was: Would it therefore be possible for them to spell out the physical meaning of the variation, i.e. $\delta$? I continue to think not. And, importantly, even among those who do solve the above exercises successfully, they wouldn’t be too sure about their own answers. Upon just a little deeper probing, they would just throw up their hands. [Ditto, for many American physicists.] Even though such conceptual clarity is required in applications.

(I am ever willing and ready to change my mind about it, but doing so would need some actual evidence—just the way my (continuing) position had been derived, in the first place, from actual observations of them.)

The reason I made this special note was because I continue to go jobless, and nearly bank balance-less (and also, nearly cashless). And it all is basically because of folks like these (and the Indians like the SPPU authorities). It is their fault. (And, no, you can’t try to lift what is properly their moral responsibility off their shoulders and then, in fact, go even further, and attempt to place it on mine. Don’t attempt doing that.)

A Song I Like:

[Maybe I have run this song before. If yes, I will replace it with some other song tomorrow or so. No, I had not.]

Hindi: “Thandi hawaa, yeh chaandani suhaani…”
Music and Singer: Kishore Kumar
Lyrics: Majrooh Sultanpuri

[A quick ‘net search on plagiarism tells me that the tune of this song was lifted from Julius La Rosa’s 1955 song “Domani.” I heard that song for the first time only today. I think that the lyrics of the Hindi song are better. As to renditions, I like Kishore Kumar’s version better.]

[Minor editing may be done later on and the typos may be corrected, but the essentials of my positions won’t be. Mostly done right today, i.e., on 06th January, 2017.]

[E&OE]


# See, how hard I am trying to become an Approved (Full) Professor of Mechanical Engineering in SPPU?—3

I was looking for a certain book on heat transfer which I had (as usual) misplaced somewhere, and while searching for that book at home, I accidentally ran into another book I had—the one on Classical Mechanics by Rana and Joag [^].

After dusting this book a bit, I spent some time in one typical way, viz. by going over some fond memories associated with a suddenly re-found book…. The memories of how enthusiastic I once was when I had bought that book; how I had decided to finish that book right within weeks of buying it several years ago; the number of times I might have picked it up, and soon later on, kept it back aside somewhere, etc.  …

Yes, that’s right. I have not yet managed to finish this book. Why, I have not even managed to begin reading this book the way it should be read—with a paper and pencil at hand to work through the equations and the problems. That was the reason why I now felt a bit guilty. … It just so happened that it was only the other day (or so) that I was happily mentioning the Poisson brackets on Prof. Scott Aaronson’s blog, at this thread [^]. … To remove (at least some part of) my sense of guilt, I then decided to browse at least through this part (viz., Poisson’s brackets) in this book. … Then, reading a little through this chapter, I decided to browse through the preceding chapters on the Lagrangian mechanics on which it depends, and then, in general, also on the calculus of variations.

It was at this point that I suddenly happened to remember the reason why I had never been able to finish (even the portions relevant to engineering from) this book.

The thing was, the explanation of the $\delta$—the delta of the variational calculus.

The explanation of what the $\delta$ basically means, I had found right back then (many, many years ago), was not satisfactorily given in this book. The book did talk of all those things like the holonomic vs. the nonholonomic constraints, the functionals, integration by parts, etc. etc. etc. But without ever really telling me, in a forthright and explicit manner, what the hell this $\delta$ was basically supposed to mean! How this $\delta y$ was different from the finite changes ($\Delta y$) and the infinitesimal changes ($\text{d}y$) of the usual calculus, for instance. In terms of its physical meaning, that is. (Hell, this book was supposed to be on physics, wasn’t it?)

Here, I of course fully realize that describing Rana and Joag’s book as “unsatisfactory” is making a rather bold statement, a very courageous one, in fact. This book is extraordinarily well-written. And yet, there I was, many, many years ago, trying to understand the delta, and not getting anywhere, not even with this book in my hand. (OK, a confession. The current copy which I have is not all that old. My old copy is gone by now (i.e., permanently misplaced or so), and so, the current copy is the one which I had bought once again, in 2009. As to my old copy, I think, I had bought it sometime in the mid-1990s.)

It was many years later, I guess some time while teaching FEM to the undergraduates in Mumbai, that the concept had finally become clear enough to me. Most especially, while I was going through P. Seshu’s and J. N. Reddy’s books. [Reflected Glory Alert! Professor P. Seshu was my class-mate for a few courses at IIT Madras!] However, even then, even at that time, I remember, I still had this odd feeling that the physical meaning was still not clear to me—not as clear as it should be. The matter eventually became “fully” clear to me only later on, while musing about the differences between the perspective of Thermodynamics on the one hand and that of Heat Transfer on the other. That was some time last year, while teaching Thermodynamics to the PG students here in Pune.

Thermodynamics deals with systems at equilibria, primarily. Yes, its methods can be extended to handle also the non-equilibrium situations. However, even then, the basis of the approach ultimately lies only in the equilibrium states. Heat Transfer, on the other hand, necessarily deals with the non-equilibrium situations. Remove the temperature gradient, and there is no more heat left to speak of. There does remain the thermal energy (as a form of the internal energy), but not heat. (Remember, heat is the thermal energy in transit that appears on a system boundary.) Heat transfer necessarily requires an absence of thermal equilibrium. … Anyway, it was while teaching thermodynamics last year, and only incidentally pondering about its differences from heat transfer, that the idea of the variations (of CoV) had finally become (conceptually) clear to me. (No, CoV does not necessarily deal only with the equilibrium states; it’s just that it was while thinking about the equilibrium vs. the transient that the matter about CoV had suddenly “clicked” for me.)

In this post, let me now note down something on the concept of the variation, i.e., towards understanding the physical meaning of the symbol $\delta$.

Please note, I have made an inline update on 26th December 2016. It makes the presentation of the calculus of variations a bit less dumbed down. The updated portion is clearly marked as such, in the text.

The Problem Description:

The concept of variations is abstract. We would be better off considering a simple, concrete, physical situation first, and only then try to understand the meaning of this abstract concept.

Accordingly, consider a certain idealized system. See its schematic diagram below:

There is a long, rigid cylinder made from some transparent material like glass. The left hand-side end of the cylinder is hermetically sealed with a rigid seal. At the other end of the cylinder, there is a friction-less piston which can be driven by some external means.

Further, there also are a couple of thin, circular, piston-like disks ($D_1$ and $D_2$) placed inside the cylinder, at some $x_1$ and $x_2$ positions along its length. These disks thus divide the cylindrical cavity into three distinct compartments. The disks are assumed to be impermeable, and fitting snugly, they in general permit no movement of gas across their plane. However, they also are assumed to be able to move without any friction.

Initially, all the three compartments are filled with a compressible fluid to the same pressure in each compartment, say 1 atm. Since all the three compartments are at the same pressure, the disks stay stationary.

Then, suppose that the piston on the extreme right end is moved, say from position $P_1$ to $P_2$. The final position $P_2$ may be to the left or to the right of the initial position $P_1$; it doesn’t matter. For the current description, however, let’s suppose that the position $P_2$ is to the left of $P_1$. The effect of the piston movement thus is to increase the pressure inside the system.

The problem is to determine the nature of the resulting displacements that the two disks undergo as measured from their respective initial positions.

There are essentially two entirely different paradigms for conducting an analysis of this problem.

The “Vector Mechanics” Paradigm:

The first paradigm is based on an approach that was put to use so successfully by Newton. Usually, it is called the paradigm of vector analysis.

In this paradigm, we focus on the fact that the forced displacement of the piston with time, $x(t)$, may be described using some function of time that is defined over the interval lying between two instants $t_i$ and $t_f$.

For example, suppose the function is:
$x(t) = x_0 + v t$,
where $v$ is a constant. In other words, the motion of the piston is steady, with a constant velocity, between the initial and final instants. Since the velocity is constant, there is no acceleration over the open interval $(t_i, t_f)$.

However, notice that before the instant $t_i$, the piston velocity was zero. Then, the velocity suddenly became a finite (constant) value. Therefore, if you extend the interval to include the end-instants as well, i.e., if you consider the semi-closed interval $[t_i, t_f)$, then there is an acceleration at the instant $t_i$. Similarly, since the piston comes to a position of rest at $t = t_f$, there also is another acceleration, equal in magnitude and opposite in direction, which appears at the instant $t_f$.

The existence of these two instantaneous accelerations implies that jerks or pressure waves are sent through the system. We may model them as vector quantities, as impulses. [Side Exercise: Work out what happens if we consider only the open interval $(t_i, t_f)$.]

We can now apply Newton’s three laws, based on the idea that shock-waves must have begun at the piston at the instant $t = t_i$. They must have got transmitted through the gas kept under pressure, and they must have affected the disk $D_1$ lying closest to the piston, thereby setting this disk into motion. This motion must have passed through the gas in the middle compartment of the system as another pulse in the pressure (generated at the disk $D_1$), thereby setting also the disk $D_2$ into motion a little while later. Finally, the pulse must have got bounced off the seal on the left-hand side and, in turn, come back to affect the motion of the disk $D_2$, and then of the disk $D_1$. Continuing their travels to and fro, the pulses, and hence the disks, would thus be put into a back-and-forth motion.

After a while, these transients would move back and forth, superpose, and some of their constituent frequencies would get cancelled out, leaving only those frequencies operative under which the three compartments settle into some kind of stationary states.

In case the gas is not ideal, there would be damping anyway, and after a sufficiently long while, the disks would move through such small displacements that we could easily ignore the ever-decreasing displacements in a limiting argument.

Thus, assume that, after an elapse of a sufficiently long time, the disks become stationary. Of course, their new positions are not the same as their original positions.

The problem thus can be modeled as basically a transient one. The new equilibrium state is seen primarily as an effect, or an end-result, of a couple of transient processes which occur in the forward and backward directions. The equilibrium is seen not as a primarily existing state, but as the result of two equal and opposite transient causes.

Notice that throughout this process, Newton’s laws can be applied directly. The nature of the analysis is such that the quantities in question—viz. the displacements of the disks—are always real, i.e., they correspond to what actually is supposed to exist in the reality out there.

The (values of) displacements are real in the sense that the mathematical analysis procedure itself involves only those (values of) displacements which can actually occur in reality. The analysis does not concern itself with some other displacements that might have been possible but don’t actually occur. The analysis begins with the forced displacement condition, translates it into pressure waves, which in turn are used in order to derive the predicted displacements in the gas in the system, at each instant. Thus, at any arbitrary instant of time $t > t_i$ (in fact, the analysis here runs for times $t \gg t_f$), the analysis remains concerned only with those displacements that are actually taking place at that instant.
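
If you care to see this “follow the entire transient” viewpoint in a working form, here is a minimal toy sketch in Python. It is not the analysis described above; it merely mimics it under made-up assumptions: each gas compartment is treated as an isothermal ideal-gas “spring” (pressure inversely proportional to the compartment length), the disks are given unit masses, and a purely artificial damping coefficient stands in for the non-ideal dissipation mentioned earlier. All the numbers are hypothetical.

```python
# A toy, purely illustrative simulation of the "vector mechanics" paradigm:
# follow the entire transient, instant by instant, and read off the final
# disk positions once the motion has damped out.

p0, A  = 1.0, 1.0          # initial pressure and cylinder cross-section
m, c   = 1.0, 2.0          # disk mass and a (made-up) damping coefficient
x1, x2 = 1.0, 2.0          # initial disk positions (all compartment lengths = 1)
L0, L1 = 3.0, 2.7          # piston: initial and final positions
t_ramp = 1.0               # the piston moves at constant speed over this time
v1 = v2 = 0.0
dt, T  = 0.001, 200.0

def pressure(length, length0=1.0):
    # isothermal ideal gas: pressure * length = p0 * length0 (per unit area)
    return p0 * length0 / length

t = 0.0
while t < T:
    L = L0 + (L1 - L0) * min(t / t_ramp, 1.0)   # forced piston motion x(t)
    p_1 = pressure(x1)                          # sealed end up to D1
    p_2 = pressure(x2 - x1)                     # between D1 and D2
    p_3 = pressure(L - x2)                      # between D2 and the piston
    a1 = (A * (p_1 - p_2) - c * v1) / m         # Newton's second law for D1
    a2 = (A * (p_2 - p_3) - c * v2) / m         # Newton's second law for D2
    v1 += a1 * dt; x1 += v1 * dt                # semi-implicit Euler step
    v2 += a2 * dt; x2 += v2 * dt
    t += dt

print(x1, x2)   # settles near 0.9 and 1.8 with these made-up parameters
```

With these made-up parameters, the two disks settle near $0.9$ and $1.8$, i.e., the three compartments end up with equal lengths, which is exactly what the equal-pressure equilibrium requires. Note how the procedure has to march through every instant of the transient in order to arrive at that answer.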

The Method of Calculus of Variations:

The second paradigm follows the energetics program. This program was initiated by Newton himself as well as by Leibniz. However, it was pursued vigorously not by Newton but rather by Leibniz, and then by a series of gifted mathematician-physicists: the Bernoulli brothers, Euler, Lagrange, Hamilton, and others. This paradigm is essentially based on the calculus of variations. The idea here is something like the following.

We do not care for a local description at all. Thus, we do not analyze the situation in terms of the local pressure pulses, their momenta/forces, etc. All that we focus on are just two sets of quantities: the initial positions of the disks, and their final positions.

For instance, focus on the disk $D_1$. It initially is at the position $x_{1_i}$. It is found, after a long elapse of time (i.e., at the next equilibrium state), to have moved to $x_{1_f}$. The question is: how to relate this change in $x_1$, on the one hand, to the displacement that the piston itself undergoes from $P_{x_i}$ to $P_{x_f}$, on the other?

To analyze this question, the energetics program (i.e., the calculus of variations) adopts a seemingly strange methodology.

It begins by saying that there is nothing unique to the specific value of the position $x_{1_f}$ as assumed by the disk $D_1$. The disk could have come to a halt at any other (nearby) position, e.g., at some other point $x_{1_1}$, or $x_{1_2}$, or $x_{1_3}$, … etc. In fact, since there are an infinity of points lying in a finite segment of line, there could have been an infinity of positions where the disk could have come to a rest, when the new equilibrium was reached.

Of course, in reality, the disk $D_1$ comes to a halt at none of these other positions; it comes to a halt only at $x_{1_f}$.

Yet, the theory says, we need to be “all-inclusive,” in a way. We need not, just for the aforementioned reason, deny a place in our analysis to these other positions. The analysis must include all such possible positions—even if they be purely hypothetical, imaginary, or unreal. What we do in the analysis, this paradigm says, is to initially include these merely hypothetical, unrealistic positions too on exactly the same footing as that enjoyed by that one position which is realistic, which is given by $x_{1_f}$.

Thus, we take the set of all possible positions for each disk. Then, for each such position, we calculate the “impact” it would make on the energy of the system taken as a whole.

The energy of the system can be additively decomposed into the energies carried by each of its sub-parts. Thus, focusing on the disk $D_1$, for each one of its possible (hypothetical) final positions, we should calculate the energies carried by both its adjacent compartments. Since a change in $D_1$’s position does not affect compartment 3, we need not include it. However, for the disk $D_1$, we do need to include the energies carried by both the compartments 1 and 2. Similarly, for each of the possible positions occupied by the disk $D_2$, we should include the energies of the compartments 2 and 3, but not of compartment 1.

At this point, to bring simplicity (and thereby better clarity) to this entire procedure, let us further assume that the possible positions of each disk form a finite set. For instance, each disk can occupy only a position that is some $-5, -4, -3, -2, -1, 0, +1, +2, +3, +4$ or $+5$ distance-units away from its initial position. Thus, a disk is not allowed to come to a rest at, say, $2.3$ units; it must do so either at $2$ or at $3$ units. (We will thus perform the initial analysis in terms of only the integer positions, and only later on extend it to any real-valued positions.) (If you are a mechanical engineering student, suggest a suitable mechanism that can ensure only integer relative displacements.)

The change in energy $E$ of a compartment is given by
$\Delta E = P A \Delta x$,
where $P$ is the pressure, $A$ is the cross-sectional area of the cylinder, and $\Delta x$ is the change in the length of the compartment.
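
Just to put (purely illustrative) numbers to this formula: with, say, $P \approx 1.013 \times 10^{5}$ Pa (i.e. 1 atm), a cross-sectional area of $A = 0.01 \ \text{m}^2$, and a change of $\Delta x = 0.02$ m in the compartment length, we would get $\Delta E = (1.013 \times 10^{5})(0.01)(0.02) \approx 20.3$ J. The numbers themselves carry no special significance here; only the form of the formula matters for what follows.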

Now, observe that the energy of the middle compartment depends on the distance between the two disks lying on its sides, i.e., on the positions of both the disks. Hence, we must take a Cartesian product of the relative displacements undergone by both the disks, and only then calculate the system energy for each such permutation (i.e. ordered pair) of their positions. Let us go over the details of this Cartesian product.

The Cartesian product of the two positions may be stated as a row-by-row listing of ordered pairs of the relative positions of $D_1$ and $D_2$, e.g., as follows: the ordered pair $(-5, +2)$ means that the disk $D_1$ is $5$ units to the left of its initial position, and the disk $D_2$ is $+2$ units to the right of its initial position. Since each of the two positions forming an ordered pair can range over any of the above-mentioned $11$ different values, there are, in all, $11 \times 11 = 121$ such possible ordered pairs in the Cartesian product.

For each one of these $121$ different pairs, we use the above-given formula to determine the energy of each compartment. Then, we add the three energies (of the three compartments) together to get the value of the energy of the system as a whole.

In short, we get a set of $121$ possible values for the energy of the system.

You must have noticed that we have admitted every possible permutation into the analysis—all $121$ of them.

Of course, out of all these $121$ permutations of positions, it should turn out that $120$ of them have to be discarded, because they would be merely hypothetical, i.e. unreal. That, in turn, is because the relative positions of the disks contained in one and only one ordered pair would actually correspond to the final, equilibrium position. After all, if you conduct this experiment in reality, you would always get a very definite pair of disk-positions, and it is this same pair of relative positions that would be observed every time you conducted the experiment (for the same piston displacement). Real experiments are reproducible, and give rise to the same, unique result. (Even if the system were to be probabilistic, it would have to give rise to an exactly identical probability distribution function.) It can’t be this result today and that result tomorrow, or this result in this lab and that result in some other lab. That simply isn’t science.

Thus, out of all those $121$ different ordered-pairs, one and only one ordered-pair would actually correspond to reality; all the rest would be merely hypothetical.

The question now is: which particular pair corresponds to reality, and which ones are unreal? How to tell the real from the unreal? That is the question.

Here, the variational principle says that the pair of relative positions that actually occurs in reality carries a certain definite, distinguishing attribute.

The system-energy calculated for this pair (of relative displacements) happens to carry the lowest magnitude from among all the possible $121$ pairs. In other words, any hypothetical or unreal pair has a higher amount of system energy associated with it. (If two pairs give rise to the same lowest value, both would be equally likely to occur. However, that is not what provably happens in the current example, so let us leave this kind of a “degeneracy” aside for the purposes of this post.)
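
If you prefer to see this “dumbed down” selection rule spelt out as a procedure, here is a minimal Python sketch of it. The energy function used below is purely hypothetical (it is not derived from the gas model above; it is simply a made-up function with a single minimum); the only point being illustrated is the bookkeeping: enumerate all the $11 \times 11 = 121$ ordered pairs, attach a system-energy value to each, and pick out the pair carrying the lowest value.

```python
from itertools import product

# Candidate relative displacements for each disk, in distance-units.
POSITIONS = range(-5, 6)   # -5, -4, ..., +4, +5

def system_energy(d1, d2):
    """A purely hypothetical, made-up system energy E(d1, d2).

    In the actual problem this value would come from adding up the
    energies of the three compartments for the given pair of disk
    displacements; any function with a single minimum will do for
    illustrating the selection procedure itself.
    """
    return 100.0 + (d1 - 2) ** 2 + (d2 - 3) ** 2

# The Cartesian product: all 11 x 11 = 121 ordered pairs, with their energies.
table = {(d1, d2): system_energy(d1, d2)
         for d1, d2 in product(POSITIONS, POSITIONS)}

# The "dumbed down" criterion: pick the pair with the lowest system energy.
best_pair = min(table, key=table.get)
print(best_pair, table[best_pair])   # with this made-up function: (2, 3) 100.0
```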

(The update on 26 December 2016 begins here:)

Actually, the description given in the immediately preceding paragraph was a bit too dumbed down. The variational principle is more subtle than that. Explaining it makes this post even longer, but let me give it a shot anyway, at least today.

To follow the actual idea of the variational principle (in a not dumbed-down manner), the procedure you have to follow is this.

First, make a table of all possible relative-position pairs, and their associated energies. The table has the following columns: a relative-position pair, the associated energy $E$ as calculated above, and one more column which for the time being would be empty. The table may look something like what the following (partial) listing shows:

(0,0) -> say, 115 Joules
(-1,0) -> say, 101 Joules
(-2,0) -> say, 110 Joules
…
(2,2) -> say, 102 Joules
(2,3) -> say, 100 Joules
(2,4) -> say, 101 Joules
(2,5) -> say, 120 Joules
…
(5,0) -> say, 135 Joules
…
(5,5) -> say, 117 Joules.

Having created this table (of $121$ rows), you then pick up each row one by one, and for the picked-up $n$-th row, you ask a question: which other rows in this table have their relative-distance pairs lying closest to the relative-distance pair of this given row? Let me illustrate this question with a concrete example. Consider the row which has the relative-distance pair given as (2,3). The relative-distance pairs closest to this one are obtained by adding or subtracting a distance of 1 to one member of the pair at a time. Thus, the relative-distance pairs closest to this one would be: (3,3), (1,3), (2,4), and (2,2). So, you have to pick up those rows which have these four entries in the relative-distance-pairs column. Each of these four pairs represents a variation $\delta$ on the chosen state, viz. the state (2,3).

In symbolic terms, suppose that for the $n$-th row being considered, the rows closest to it in terms of the differences in their relative-distance pairs are the $a$-th, $b$-th, $c$-th and $d$-th rows. (Notice that the rows which are closest to a given row in this sense would not necessarily be found listed just above or below that given row, because the scheme followed while creating the list or the vector that is the table would not necessarily honor the closest-lying criterion (which necessarily involves two numbers)—not at least for all the rows in the table.)

OK. Then, in the next step, you find the differences in the energies of the $n$-th row from each of these closest rows, viz., the $a$-th, $b$-th, $c$-th and $d$-th rows. That is to say, you find the absolute magnitudes of the energy differences. Let us denote these magnitudes as: $\delta E_{na} = |E_n - E_a|$, $\delta E_{nb} = |E_n - E_b|$, $\delta E_{nc} = |E_n - E_c|$, and $\delta E_{nd} = |E_n - E_d|$. Suppose the minimum among these values is $\delta E_{nc}$. So, against the $n$-th row, in the last column of the table, you write the value $\delta E_{nc}$.

Having done this exercise separately for each row in the table, you then ask: which row has the smallest entry in the last column (the one for $\delta E$)? You pick up that row. That is the distinguished (or the physically occurring) state.

In other words, the variational principle asks you to select not the row with the lowest absolute value of energy, but that row which shows the smallest difference of energy from one of its closest neighbours—and these closest neighbours are to be selected according to the differences in each number appearing in the relative-distance pair, and not according to the vertical placement of the rows in the tabular listing. (It so turns out that in this example, the row selected following both criteria—the lowest energy as well as the lowest variation in energy—is the same, though that would not necessarily always be the case. In short, we can’t always get away with the first, too dumbed-down, version.)

Thus, the variational principle is about that change in the relative positions for which the corresponding change in the energy vanishes (or has the minimum possible absolute magnitude, in case the positions form a discretely varying, finite set).
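
And here, again purely as an illustration of the bookkeeping (and not of any physics), is the less dumbed-down rule sketched in Python. It re-uses the same made-up `system_energy` function as in the earlier sketch; the neighbours of a pair are taken, as described above, by changing one member of the pair by $\pm 1$ at a time. One practical wrinkle, which I add on my own here: with such coarse integer steps, several rows can tie on the smallest $|\delta E|$, so the sketch breaks such ties in favour of the lower energy.

```python
from itertools import product

POSITIONS = range(-5, 6)

def system_energy(d1, d2):
    # The same purely hypothetical energy function as in the earlier sketch.
    return 100.0 + (d1 - 2) ** 2 + (d2 - 3) ** 2

table = {(d1, d2): system_energy(d1, d2)
         for d1, d2 in product(POSITIONS, POSITIONS)}

def neighbours(pair):
    """Pairs obtained by changing one member of `pair` by +/- 1,
    kept only if they lie within the allowed range of positions."""
    d1, d2 = pair
    candidates = [(d1 - 1, d2), (d1 + 1, d2), (d1, d2 - 1), (d1, d2 + 1)]
    return [c for c in candidates if c in table]

# The last column of the table: for each row, the smallest |delta E| to
# one of its closest neighbours.
delta_E = {pair: min(abs(table[pair] - table[nbr]) for nbr in neighbours(pair))
           for pair in table}

# The variational criterion: the row with the smallest entry in that column
# (ties broken by the lower energy, as noted in the lead-in above).
selected = min(delta_E, key=lambda pair: (delta_E[pair], table[pair]))
print(selected, delta_E[selected])   # with this made-up function: (2, 3) 1.0
```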

(The update on 26th December 2016 gets over here.)

And, it turns out that this approach, too, is indeed able to perfectly predict the final disk-positions—precisely as they actually are observed in reality.

If you allow a continuum of positions (instead of the discrete set of only $11$ different final positions for one disk, or $121$ ordered pairs), then instead of taking a Cartesian product of positions, what you have to do is take into account a tensor product of the position functions. The maths involved is a little more advanced, but the underlying algebraic structure—and the predictive principle which is fundamentally involved in the procedure—remains essentially the same. This principle—the variational principle—says:

Among all possible variations in the system configurations, that system configuration corresponds to reality which has the least variation in energy associated with it.

(This is a very rough statement, but it will do for this post and for a general audience. In particular, we don’t look into the issues of what constitute the kinematically admissible constraints, why the configurations must satisfy the field boundary conditions, the idea of the stationarity vs. of a minimum or a maximum, i.e., the issue of convexity-vs.-concavity, etc. The purpose of this post—and our example here—are both simple enough that we need not get into the whole she-bang of the variational theory as such.)
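
(For those who already know the notation: in the usual continuum treatment, the “least variation in energy” statement above is written as the stationarity condition $\delta E = 0$, i.e., the first variation of the energy with respect to any admissible variation of the configuration vanishes for the actually occurring configuration. The discrete, tabular rule given earlier is just a finite-difference stand-in for this condition.)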

Notice that in this second paradigm, (i) we did not restrict the analysis to only those quantities that actually occur in reality; we also included a host (possibly an infinity) of purely hypothetical combinations of quantities; (ii) we worked with energy, a scalar quantity, rather than with momentum, a vector quantity; and finally, (iii) in the variational method, we didn’t bother about the local details. We took into account the displacements of the disks, but not any displacement at any other point, say in the gas. We did not look into the presence or absence of a pulse at one point in the gas as contrasted with any other point in it. In short, we did not discuss the details local to the system either in space or in time. We did not follow the system evolution, at all—not at least in a detailed, local way. If we were to do that, we would be concerned about what happens in the system at the instants and at spatial points other than the initial and final disk positions. Instead, we looked only at a global property—viz. the energy—whether at the sub-system level of the individual compartments, or at the level of the overall system.

The Two Paradigms Contrasted with Each Other:

If we were to follow Newton’s method, it would be impossible—impossible in principle—to predict the final disk positions unless all their motions over all the intermediate transient dynamics (occurring at each moment of time and at each place in the system) were traced. Newton’s (or the vectorial) method would require us to follow all the details of the entire evolution of all parts of the system at each point on its evolution path. In the variational approach, the latter is not of any primary concern.

Yet, in following the energetics program, we are able to predict the final disk positions. We are able to do that without worrying about all that happened before the equilibrium gets established. We remain concerned only with certain global quantities (here, the system-energy) at each of the hypothetical positions.

The upside of the energetics program, as just noted, is that we don’t have to look into every detail at every stage of the entire transient dynamics.

Its downside is that we are able to talk only of the differences between certain isolated (hypothetical) configurations or states. The formalism is unable to say anything at all about any of the intermediate states—even if these do actually occur in reality. This is a very, very important point to keep in mind.

The Question:

Now, back to the question with which we began this post: namely, what does the delta of the variational calculus mean?

Referring to the above discussion, note that the delta of the variational calculus is, here, nothing but a change in the position-pair, and also the corresponding change in the energy.

Thus, in the above example, the difference of the state (2,3) from the other close states such as (3,3), (1,3), (2,4), and (2,2) represents a variation in the system configuration (or state), and for each such variation in the system configuration (or state), there is a corresponding variation $\delta E_{ni}$ in the energy of the system. That is what the delta refers to, in this example.

Now, with all this discussion and clarification, would it be possible for you to clearly state what the physical meaning of the delta is? To what precisely does the concept refer? How does the variation in energy $\delta E$ differ from both the finite changes ($\Delta E$) as well as the infinitesimal changes ($\text{d}E$) of the usual calculus?

Note, the question is conceptual in nature. And, no, not a single one of the very best books on classical mechanics manages to give a very succinct and accurate answer to it. Not even Rana and Joag (or Goldstein, or Feynman, or…)

I will give my answer in my next post, next year. I will also try to apply it to a couple of more interesting (and somewhat more complicated) physical situations—one from engineering sciences, and another from quantum mechanics!

In the meanwhile, think about it—the delta—the concept itself, its (conceptual) meaning. (If you already know the calculus of variations, note that in my above write-up, I have already supplied the answer, in a way. You just have to think a bit about it, that’s all!)

An Important Note: Do bring this post to the notice of the Officially Approved Full Professors of Mechanical Engineering in SPPU, and the SPPU authorities. I would like to know if the former would be able to state the meaning—at least now that I have already given the necessary context in such great detail.

Ditto, to the Officially Approved Full Professors of Mechanical Engineering at COEP, esp. D. W. Pande, and others like them.

After all, this topic—Lagrangian mechanics—is at the core of Mechanical Engineering; even they would agree. In fact, it comes from a subject that is not taught to the metallurgical engineers, viz., the Theory of Machines. But it is taught to the Mechanical Engineers. That’s why they should be able to crack it, in no time.

(Let me continue to be honest. I do not expect them to be able to crack it. But I do wish to know if they are able at least to give a try that is good enough!)

Even though I am jobless (and also nearly bank balance-less, and also cashless), what the hell! …

…Season’s greetings and best wishes for a happy new year!

A Song I Like:

[With jobless-ness and all, my mood isn’t likely to stay this upbeat, but anyway, while it lasts, listen to this song… And, yes, this song is like, it’s like, slightly more than 60 years old!]

(Hindi) “yeh raat bhigee bhigee”
Music: Shankar-Jaikishan
Singers: Manna De and Lata Mangeshkar
Lyrics: Shailendra

[E&OE]
