Micro-level water-resources engineering—8: Measure that water evaporation! Right now!!

It’s past the middle of May—the hottest time of the year in India.

The day-time is still lengthening, and it will continue doing so up to the summer solstice in late June. However, once the monsoon arrives, some time in the first half of June, the solar flux in this part of the world will get reduced due to the cloud cover, and so any further lengthening of the day will not matter.

In the place where I live these days, the day-time temperature easily goes up to 42–44 deg. C. This high a temperature is, that way, not at all unusual for most parts of Maharashtra; sometimes Pune, which is supposed to be a city of a pretty temperate climate (mainly because of the nearby Sahyaadris), also registers max. temperatures in the early 40s. But what makes the region where I currently live worse than Pune are these two factors: (i) the minimum temperature too stays as high as 30–32 deg. C here, whereas in Pune it can easily fall to 26–27 deg. C even during May, and (ii) the fall of temperature at night proceeds very gradually here. On a hot day, it can easily be as high as 38 deg. C even after sunset, and still 36–37 deg. C by midnight; the drop below 35 deg. C occurs only for the 3–4 hours in the early morning, between 4 and 7 AM. In comparison, Pune is way cooler. The max. temperatures Pune registers may be similar, but the evening- and night-time temperatures fall much more rapidly there.

There is a lesson for the media here. Media obsesses over the max. temperature (and its record, etc.). That’s because the journos mostly are BAs. (LOL!) But anyone who has studied physics and calculus knows that it’s the integral of temperature with respect to time that really matters, because it is this quantity which scales with the total thermal energy transferred to a body. So, the usual experience common people report is correct. Despite similar max. temperatures, this place is hotter, much hotter than Pune.


And, speaking of my own personal constitution, I can handle a cold weather way better than I can handle—if at all I can handle—a hot weather. [Yes, in short, I’ve been in a bad shape for the past month or more. Lethargic. Lackadaisical. Enervated. You get the idea.]


But why is it that the temperature does not matter as much as the thermal energy does?

Consider a body, say a cube of metal. Think of some hypothetical apparatus that keeps this body at the same cool temperature at all times, say at 20 deg. C. Here, choose the target temperature to be lower than the minimum temperature in the day. Assume that the atmospheric temperature at two different places varies between the same limits, say 30 to 42 deg. C. Since the target temperature is lower than the minimum ambient temperature, you would have to take heat out of the cube at all times.

The question is: at which of the two places does the apparatus have to work harder? To answer that, you have to calculate the total thermal energy that has to be drained out of the cube over a single day. And to answer this second question, you need the data of not just the lower and upper limits of the temperature, but also of how it varies with time between the two limits.
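To make the point concrete, here is a minimal numerical sketch. The two hourly temperature profiles below are made-up illustrative numbers (not measurements), chosen so that both places share the same 42 deg. C maximum; only the night-time behaviour differs.

```python
def excess_degree_hours(hourly_temps, target=20.0):
    """Trapezoidal integral of (T - target) over one day, in deg C * hours.

    This integral scales with the total heat the apparatus must drain
    from the cube to hold it at the target temperature.
    """
    total = 0.0
    for a, b in zip(hourly_temps, hourly_temps[1:]):
        total += ((a - target) + (b - target)) / 2.0  # time step = 1 hour
    return total

# 25 hourly readings (midnight to midnight), hypothetical numbers.
# Place A cools off quickly after sunset; Place B stays hot all night.
place_a = [27, 26, 26, 26, 27, 28, 30, 33, 36, 39, 41, 42,
           42, 41, 39, 37, 34, 32, 30, 29, 28, 28, 27, 27, 27]
place_b = [33, 32, 31, 31, 31, 32, 34, 36, 38, 40, 41, 42,
           42, 42, 41, 40, 39, 38, 38, 37, 36, 35, 34, 33, 33]

print(excess_degree_hours(place_a))  # 295.0 deg C * hours
print(excess_degree_hours(place_b))  # 396.0 deg C * hours
```

Same maximum temperature, but the place that stays hot through the night transfers roughly a third more heat over the day; that difference is exactly what a fixation on the daily maximum misses.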


The humidity here is also lower than in Pune (and, of course, than in Mumbai). So it feels comparatively much drier, which only adds to the real feel of a really hot weather.

One does not realize it, but the prolonged high temperature makes the atmosphere here dehydrating: imperceptibly slowly, yet absolutely insurmountably.

Unlike in Mumbai, one does not notice much perspiration here, and that’s because the air is so dry that any perspiration that does occur also dries up very fast. Shirts getting drenched by perspiration is not a very common sight here. Overall, desiccating would be the right word to describe this kind of an air.

So, yes, it’s bad, but you can always take precautions. Make sure to drink a couple of glasses of cool water (better still, fresh lemonade) before you step out, whether you are thirsty or not. And take an onion with you when you go out; if you begin to feel too much of the heat, you can always crush the onion by hand and apply the juice onto the top of your head. [Addendum: A colleague just informed me that it’s even better to actually cut the onion and keep its cut portion touching your body, say inside your shirt. He has spent summers in eastern Maharashtra, where temperatures can reach 47 deg. C. … Oh well!]

Also, eat a lot more onions than you normally do.

And, once you return home, make sure not to drink water immediately. Wait for 5–10 minutes. Otherwise, the body goes into a shock, and the ensuing transient spikes in your metabolism can, at times, even trigger a sun-stroke, which can be fatal. A simple precaution helps avoid it.

For the same reason, take care to sit down in the shade of a tree for a few minutes before you eat that slice of water-melon. Water-melon is nothing but more than 95% water, thrown in with a little sugar, some fiber, and a good measure of minerals. All in all, it is good for your body, because even if the perspiration is imperceptible in hot and dry regions, it is still occurring, and with it, the body is being drained of the necessary electrolytes and minerals. … Lemonades and water-melons replenish those electrolytes and minerals. People do take care not to drink lemonade in the Sun, but they don’t always take the same precaution with water-melon. Yet, precisely because a water-melon has so much water, you should take care not to expose your body to a shock. [And, oh, BTW, just in case you didn’t know already, the doctor-recommended alternative to Electral powder is: your humble lemonade! Works exactly equivalently!!]


The very low levels of humidity also imply that in places like this, the desert-cooler is effective, very effective. The city shops are full of them. Some of these air-coolers sport a very bare-bones design. Nothing fancy like the Symphony Diet cooler (which I did buy last year in Pune!). The air-coolers locally made here can be as simple as just an open tray at the bottom to hold the water, a cube made of a coarse wire-mesh padded with a khus/wood-sheathings curtain, and a robust fan operating [[very] noisily]. But it works wonderfully. And these locally made air-coolers also are very inexpensive. You can get one for just Rs. 2,500 or 3,000; I mean the ones which have the capacity to keep at least 3–4 people cool. (Branded coolers like the one I bought in Pune—and it does work even in Pune—often go above Rs. 10,000. [I bought that cooler last year because I didn’t have a job, thanks to the Mechanical Engineering Professors in the Savitribai Phule Pune University.])


That way, I also try to think of the better things this kind of an air brings. How the table salt stays so smoothly flowing, how the instant coffee powder or Bournvita never turns into a glue, how an opened packet of potato chips stays so crisp for days, how washed clothes dry up in no time…

Which, incidentally, brings me to the topic of this post.


The middle—or the second half—of May is also the ideal time to conduct evaporation experiments.

If you are looking for a summer project, here is one: to determine the evaporation rate in your locality.

Take a couple of transparent plastic jars of uniform cross section. The evaporation rate is not very highly sensitive to the cross-sectional area, but it does help to take a vessel or a jar of sizeable diameter.

Affix a mm scale on the outside of each jar, say using cello-tape. Fill the plastic jars to some convenient level, almost to the full.

Keep one jar out in the open (exposed to the Sun), and the other inside your home, in the shade. For the jar kept outside, make sure that birds don’t come and drink the water, thereby messing up your measurements. For this purpose, you may surround the jar with an enclosure having a coarse mesh. The mesh must be coarse; else it will reduce the solar flux. The “reduction in the solar flux” is just a fancy [mechanical [thermal] engineering] term for saying that the mesh, if too fine, might cast too significant a shadow.

Take measurements of the heights of the water daily at a fixed time of the day, say at 6:00 PM. Conduct the experiment for a week or 10 days.

Then, plot a graph of the daily water level vs. the time elapsed, for each jar.

Realize, the rate of evaporation is measured in terms of the fall in the height, and not in terms of the volume of water lost. That’s because once the exposed area is bigger than some limit, the evaporation rate (the loss in height) is more or less independent of the cross-sectional area.

Now figure out:

Does the evaporation rate stay the same every day? If there is any significant departure from a straight-line graph, how do you explain it? Was there a measurement error? Was there an unusually strong wind on a certain day? a cloud cover?
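The analysis step can be sketched as follows. The daily readings below are hypothetical numbers of my own, not actual measurements; a least-squares straight-line fit through them gives the evaporation rate as the magnitude of the slope, and a large residual flags a day that departed from the straight line (a windy day, a cloud cover, or a measurement error).

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    return slope, mean_y - slope * mean_x

# Water level (mm on the affixed scale) read at 6:00 PM each day;
# hypothetical readings for the jar kept out in the open.
days = [0, 1, 2, 3, 4, 5, 6, 7]
levels_mm = [150, 141, 133, 124, 116, 109, 100, 92]

slope, intercept = fit_line(days, levels_mm)
rate_mm_per_day = -slope  # the level falls, so the slope is negative
residuals = [y - (slope * x + intercept) for x, y in zip(days, levels_mm)]

print(round(rate_mm_per_day, 2))       # about 8.23 mm/day
print(max(abs(r) for r in residuals))  # worst daily departure from the line
```

Running the same fit on the indoor jar's readings lets you separate the effect of the direct solar flux from that of the dry air alone.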

Repeat the experiment next winter (around the new year), and determine the rate of evaporation at that time.

Later on, also make some calculations. If you are building a check-dam or a farm-pond, how much would the evaporation loss be over the five months from January to May-end? Is the height of your water storage system enough to make it practically useful? economically viable?
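A rough back-of-the-envelope version of that calculation, with assumed (hypothetical) monthly evaporation rates in mm/day; plug in the rates you actually measure in summer and in winter:

```python
# Assumed monthly-average evaporation rates (mm/day); hypothetical values,
# to be replaced with your own measured rates.
rate_mm_per_day = {"Jan": 4.0, "Feb": 5.0, "Mar": 7.0, "Apr": 9.0, "May": 11.0}
days_in_month = {"Jan": 31, "Feb": 28, "Mar": 31, "Apr": 30, "May": 31}

total_loss_mm = sum(rate_mm_per_day[m] * days_in_month[m]
                    for m in rate_mm_per_day)

print(total_loss_mm)           # 1092.0 mm
print(total_loss_mm / 1000.0)  # i.e., about 1.1 m of water depth lost
```

With these assumed rates, a check-dam holding, say, 2 m of water would lose roughly half its depth to evaporation alone by May-end; that is the kind of number which decides practical usefulness and economic viability.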


A Song I Like:

(Hindi) “mausam aayegaa, jaayegaa, pyaar sadaa muskuraayegaa…”
Music: Manas Mukherjee
Singers: Manna Dey and Asha Bhosale
Lyrics: Vithalbhai Patel


The goals are clear, now

This one blog post is actually a combo-pack of some 3 different posts, addressed to three different audiences: (i) to my general readers, (ii) to the engineering academics esp. in India, and (iii) to the QM experts. Let me cover it all in that order.


(I) To the general reader of this blog:

I have a couple of neat developments to report about.

I.1. First, and of immediate importance: I have received, and accepted, a job offer. Of course, the college is from a different university, not SPPU (Savitribai Phule Pune University). Just before attending this interview (in which I accepted the offer), I had also had discussions with the top management of another college, from yet another university (in another city). They too have, since then, confirmed that they are going to invite me once the dates for the upcoming UGC interviews at their college are finalized. I guess I will attend this second interview only if my approvals (the university and the AICTE approvals) for the job I have already accepted and will be joining soon, don’t go through, for whatever reason.

If you ask me, my own gut feel is that the approvals at both these universities should go through. Historically, neither of these two universities has ever had any issue with a mixed metallurgy-and-mechanical background, and especially after the (mid-2014) GR by the Maharashtra State government (by now 2.5+ years old), the approval at these universities should be more or less only a formality, not a cause for excessive worry as such.

I told you, SPPU is the worst university in Maharashtra. And Pune has become a really filthy, obnoxious place, speaking of its academic-intellectual atmosphere. I don’t know why the outside world still insists on calling both (the university and the city) great. I can only guess. And my guess is that the brand values of institutions tend to have a long shelf life, and an unrealistically longer one when the economy is mixed, not completely free. That is the broad reason. There is another, more immediate and practical reason too; I mean, regarding how it all actually has come to work.

Most every engineer who graduates from SPPU these days goes into the IT field. They have been doing so for almost two decades by now. Now, in the IT field, the engineering knowledge as acquired at the college/university is hardly of any direct relevance. Hence, none cares for what academically goes on during those four years of the UG engineering—not in India, I mean—not even in IITs, speaking in comparison to what used to be the case some 3 decades ago. (For PG engineering, in most cases, the best of them go abroad or to IITs anyway.) By “none” I mean: first and foremost, the parents of the students; then the students themselves; and then, also the recruiting companies (by which, I mostly mean those from the IT field).

Now, once in the IT industry and thus making a lot of money, these people of course find it necessary to keep the brand value of “Pune University” intact. … Notice that the graduates of IITs and of COEP/VJTI etc. specifically mention their college on their LinkedIn profiles. But none from the other colleges in SPPU do; they always mention only “University of Pune.” The reason is, their colleges didn’t have as much of a brand value as did the university, when all this IT industry trend began. Now, if these SPPU-graduated engineers themselves begin to say that the university they attended was in fact bad (or had gone bad at least when they attended it), it will affect their own career growth, salaries and promotions. So, they never find it convenient to spell out the truth—who would do that? The Pune education barons (not to mention the SPPU authorities) certainly are smart enough to simply latch on to this artificially inflated brand-value. The system works, even though the quality of engineering education as such has very definitely gone down. (In some respects, due to the expansion of the engineering education market, the quality has actually gone up, even though my IIT/COEP classmates often find this part difficult to believe. But yes, there have been improvements too. The improvements pertain to such things as syllabi and systems (in the “ISO” sense of the term), but not to the actual delivery, not to the actually imparted education. And that’s my point.)

When parents and recruiting companies themselves don’t care for the quality of education imparted within the four years of UG engineering, it is futile to expect that mere academicians, as a group, would do much to help the matters.

That’s why, though SPPU has become so bad, it still manages to keep its high reputation of the past—and all its current whimsies (e.g. such stupid issues as the Metallurgy-vs-Mechanical branch jumping, etc.)—completely intact.

Anyway, I am too small to fight the entire system. In any case, I was beyond the end of all my resources.

All in all, yes, I have accepted the job offer.

But despite the complaining/irritating tone that has slipped in the above write-up, I would be lying to you if I said that I was not enthusiastic about my new job. I am.

I.2. Second, and from the long-term viewpoint, the much more important development I have to report (to my general readers) is this.

I now realize that I have come to develop a conceptually consistent physical viewpoint for the maths of quantum mechanics.

(I won’t call it an “interpretation,” let alone a “philosophical interpretation.” I would call it a physics theory or a physical viewpoint.)

This work was in progress for almost a year and a half or more: since October 2015, if I go by my scribblings in the margins of my copy of Griffiths’ text-book. I still have to look up the scribblings I made in the small pocket notebooks I maintain (I’ve already finished more than 10 of them for QM alone). I also have yet to systematically gather and order all those other scribblings I made on paper napkins in restaurants. Yes, in my case, notes on napkins are not just a metaphor; I have often actually made such notes, simply because sometimes I forget to carry my pocket notebooks. At such times, these napkins (or rough papers from the waiter’s order-pad) do come in handy. I have been storing them in a plastic bag, and a drawer. Once I look up all such notes systematically, I will be able to sequence the progression of my thoughts better. But yes, as a rough and ready estimate, thinking along this new line has been going on for some 1.5 years or more by now.

But it’s only recently, in December 2016 or January 2017, that I slowly grew fully confident that my new viewpoint is correct. I took about a month to verify the same, checking it from different angles, and this process still continues. … But, what the heck, let me be candid about it: the more I think about it, all that it does is to add more conceptual integrations to it. But the basic conceptual scheme, or framework, or the basic line of thought, stays the same. So, it’s it and that’s that.

Of course, detailed write-ups, (at least rough) calculations, and some (rough) simulations still have to be worked out, but I am working on them.

I have already written more than 30 pages in the main article (which I should now be converting into a multi-chapter book), and more than 50 pages in the auxiliary material (which I plan to insert in the main text, eventually).

Yes, I have implemented a source control system (SVN), and have been taking regular backups too, though I need to now implement a system of backups to two different external hard-disks.

But all this on-going process of writing will now get interrupted due to my move to the new job, in another city. My blogging too would get interrupted. So, please stay away from this blog for a while. I will try to resume both ASAP, but as of today, can’t tell when—may be a month or so.


(II) To the engineering academics among my readers, esp. the Indian academics:

I have changed my stance regarding publications. All along thus far, I had maintained that I will not publish anything in one of those “new” journals in which most every Indian engineering professor publishes these days.

However, I now realize that one of the points in the approvals (by universities, AICTE, UGC, NAAC, NBA, etc.) concerns journal papers. I have only one journal paper on my CV. Keeping potential IPR issues in mind, all my other papers were written in an only schematic way (the only exception is the diffusion paper), and for that reason, they were published only in conference proceedings. (I had explicitly discussed this matter not just with my guide, but also with my entire PhD committee.) Of course, I made sure that all these were international conferences, pretty reputed ones, with pretty low acceptance rates (though these days the acceptance rates at these same conferences have gone up significantly (which, incidentally, should be a “good” piece of news for my new students)). But still, as a result, all but one of my papers have been only conference papers, not journal papers.

After suffering through UGC panel interviews at three different colleges (all in SPPU) I now realize that it’s futile to plead your case in front of them. They are insufferable in every sense; they stick to their guns. You can’t beat their sense of “quality,” as it were.

So, I have decided to follow their (I mean my UGC panel interviewers’) lead, and thus have now decided to publish at least three papers in such journals, right over the upcoming couple of months or so.

Forgive me if I report the same old things (which I had reported in those international conferences about a decade ago). I have been assured that conference papers are worthless and that no one reads them. Reporting the same things in journal papers should enhance, I guess, their readability. So, the investigations I report on will be the same, but now they will appear in the Microsoft Word format, and in international journals.

That’s another reason why my blogging will be sparser in the upcoming months.

That way, in the world of science and research, it has always been a generally accepted practice, all over the world, to first report your findings in conferences, seek your peers’ opinions on your work or your ideas, then expand (or correct) the material/study, and then send it to journals. There is nothing wrong in it. Even the topmost physicists have followed precisely the same policy. … Why, come to think of it, the very first paper that ushered humanity into the quantum era was itself only a conference talk. In fact, it was just a local conference, albeit in an advanced country. I mean Planck’s very first announcement regarding quantization. … So, it’s a perfectly acceptable practice.

The difference this time (I mean, in my present case) will be: I will contract (and hopefully also dumb down) the contents of my conference papers, so as to match the level of the journals in which my UGC panel interviewers themselves publish.

No, the above was not a piece of sarcasm—at least I didn’t mean it that way when I wrote it. I merely meant to highlight an objective fact. Given the typical length, font size, gaps between sections, and the overall treatment of the contents of these journals, I will have to both contract and dumb down my write-ups. … I will of course also add some new sentences here and there to escape the no-previous-publication clause, but believe me, in my case, that is a very minor worry. The important things would be to match the level of the treatment, to use Microsoft Word’s equation editor, and to cut down on the length. Those are my worries.

Another one of my worries is how to publish two journal papers—one good, and one bad—based on the same idea. I mean, suppose I want to publish something on the nature of the \delta of the calculus of variations in one of these journals. … Incidentally, I do think that what I wrote on this idea right here on this blog a little while ago is worth publishing even in a good journal, say in Am. J. Phys., or at least in the Indian journal “Resonance.” So, I would like to eventually publish it in one of these two journals, too. But for immediately enhancing the number of journal papers on my CV, I should immediately publish a shorter version of the same in one of these new international journals, on an urgent basis. Now the question is: what all aspects should I withhold for now? That is my worry. That’s why, the way my current thinking goes, instead of publishing any new material (say on the \delta of CoV), I should simply recycle the already conference-published material.

One final point. Actually, I never did think that it was immoral to publish in such journals (I mean the ones in which my interviewers from SPPU publish). These journals do have ISSN, and they always are indexed in the Google Scholar (which is an acceptable indexing service even to NBA), and sometimes even in Scopus/PubMed etc. Personally, I had refrained from publishing in them not because I thought that it was immoral to do so, but rather because I thought it was plain stupid. I have been treating the invitations from such journals with a sense of humour all along.

But then, the way our system works, it does have the ways and the means to dumb down one and all. Including me. When my very career is at stake, I will very easily and smoothly go along, toss away my sense of quality and propriety, and join the crowd. (But at least I will be open and forthright about it—admitting it publicly, the way I have already done, here.)

So, that’s another reason why my blogging would be sparser over the upcoming few months, esp. this month and the next. I will be publishing in (those) journals, on a high priority.


(III) To the QM experts:

Now, a bit to QM experts. By “experts,” I mean those who have studied QM through university courses (or text-books, as in my case) to PG or PhD level. I mean, the QM as it is taught at the UG level, i.e., the non-relativistic version of it.

If you are curious about the exact nature of my ideas, well, you will have to be patient. It is going to take months, perhaps even a year, before I come to write about it on my blog(s). It will take time. I have been engaged in writing about it for about a month by now, and I speak from this experience. And further, the matter of having to immediately publish journal papers in engineering will also interfere with the task of writing.

However, if you are an academic in India (say a professor or even just a serious and curious PhD student of physics/chemistry/engg physics program, say at an IIT/IISc/IISER/similar) and are curious to know about my ideas… Well, just give me a call and let’s decide on a mutually convenient time to meet in person. Ditto, for academics/serious students of physics from abroad visiting India.

No, I don’t at all expect any academics in (or visiting) India to be that curious about my work. But still, theoretically speaking, assuming that someone is interested: just send me an email or call me to fix an appointment, and we will discuss my ideas in person. We will work it out at the black-board (better than working on paper, in my experience).

I am not at all hung up about maintaining secrecy until publication. It’s just that writing takes time.

One part of it is that when you write, people also expect a higher level of precision from you, and ensuring that takes time. Making general but precise statements or claims, on a most fundamental topic of physics—it’s QM itself—is difficult, very difficult. Talking to experts is, in contrast, easy—provided you know what you are talking about.

In a direct personal talk, there is a lot of room for going back and forth, jumping around topics, and hand-waving, which is not available in the mode of writing-by-one-then-reading-by-another. And, talking with experts would be easier for me because they already know the context. That’s why I specified PhD physicists/professors at this stage, and not, say, students of engineering or CS folks merely enthusiastic about QM. (Coming to humanities folks, say philosophers, I think that via this work I have nothing—or next to nothing—to offer to their specialty.)

Personally, I am not comfortable with video-conferencing, though if the person in question is a serious academic or a reputed corporate/national-lab researcher, I would sure give it a thought. For instance, if some professor from the US/UK that I had already interacted with (say at iMechanica, or at his blog, or via emails) wants to now know about my new ideas and wants a discussion via Skype, I could perhaps go in for it—even though I would not be quite comfortable with the video-conferencing mode as such. The direct, in-person talk, working together at the black-board, works best for me. I don’t find Skype comfortable enough even with my own class-mates or close personal relations. It just doesn’t work for me. So, try to keep it out.

For the same reason—the planning and the precision required in writing—I would mostly not be able even to blog about my new ideas. Interactions on blogs tend to introduce too many bifurcations in the discussion, and therefore, even though the different PoVs could be valuable, such interactions should be introduced only after the first cut of the writing is already over. That’s why the most I would be able to manage on this blog would be some isolated aspects, granted that some inconsistencies or contradictions could still easily slip in. I am not sure, but I will try to cover at least some isolated aspects from time to time.

Here’s an instance. (Let me remind you: I am addressing this part to those who have already studied QM through text-books, esp. to PhD physicists. I am not only referring to equations, but more importantly, I am assuming the context of a direct knowledge of how topics like the one below are generally treated in various books and references.)

Did you ever notice just how radical was de Broglie’s idea? I mean, for the electron, the equations de Broglie used were:

E = \hbar \omega and p = \hbar k (equivalently, E = h\nu and p = h/\lambda).

Routine stuff, do you say? But notice: in special relativity, i.e. in relativistic classical mechanics, the equation for the energy of a particle of rest mass m_0 is:

E^2 = (pc)^2 + (m_0 c^2)^2

In arriving at the relation p = \hbar k, Einstein had dropped the second term ((m_0 c^2)^2) from the expression for energy, which gives E = pc. Radiation has no rest mass, and so his hypothetical particles of light also would carry no rest mass.

When de Broglie assumed that this same expression holds also for the electron—its matter waves—what he basically was doing was: to take an expression derived for a massless particle (Einstein’s quantum of light) as is, and to assume that it would apply also for the massive particle (i.e. the electron).

In effect, what de Broglie had ended up asserting was that the matter-waves of the electron had a massless nature.
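To spell the step out explicitly (this is just the standard textbook algebra, written in one place):

```latex
% For light quanta, set m_0 = 0 in the relativistic energy relation:
E^2 = (pc)^2 + (m_0 c^2)^2 \;\xrightarrow{\; m_0 = 0 \;}\; E = pc
% Combine with E = h\nu and c = \nu\lambda for light:
p = \frac{E}{c} = \frac{h\nu}{c} = \frac{h}{\lambda} = \hbar k
% De Broglie's hypothesis: the same relation holds for the electron,
% even though its rest mass m_0 is decidedly nonzero:
\lambda_{\text{electron}} = \frac{h}{p}
```

The chain of equalities in the middle line is valid only because the first line had m_0 set to zero; carrying the result over to the electron is precisely the radical step being pointed out above.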

Got it? See how radical—and how subtly (indirectly, implicitly) slipped in—is that suggestion? Have you seen this aspect highlighted or discussed this way in a good university course or a text-book on modern physics or QM? …

…QM is subtle, very subtle. That’s why working out a conceptually consistent scheme for it is (and has been) such a fun.

The above observation was one of my clues in working out my new scheme. The other was the presence of classical features in QM. Not only the pop-science books but also the text-books on modern physics (and QM) had led me to believe that QM represented a completely radical break from classical physics. Uh-oh. Not quite.

Radical ideas, QM does have. But completely radical? Not quite.

QM, actually, is hybrid. It does have a lot of classical elements built into it, right in its postulates. I had come to notice this part and was uncomfortable with it—I didn’t have the confidence in my own observation; I used to think that when I study more of QM, I would be shown how these classical features fall away. That part never happened, not even as my further studies of QM progressed, and so, I slowly became more confident about it. QM is hybrid, full stop. It does have classical features built right in its postulates, even in its maths. It does not represent a complete break from the classical physics—not as complete a break as physicists lead you to believe. That was my major clue.

Other clues came as my grasp of the nature of the QM maths became better and firmer, which occurred over a period of time. I mean the nature of the maths of: the Fourier theory, the variational calculus, the operator theory, and the higher-dimensional spaces.

I had come to understand the Fourier theory via my research on diffusion, and the variational calculus via my studies (and teaching!) of FEM. The operator theory, I had come to suspect (simply comparing the way people used to write in the early days of QM with the way they now write), was not essential to the physics of the QM theory. So I had begun mentally substituting the operators acting on the wavefunction by just a modified wavefunction itself. … Hell, do you express a classical problem—say a Poisson equation problem or a Navier-Stokes problem—via operators? These days people do, but, thankfully, the trend has not yet made it to the UG text-books to a significant extent. The whole idea of the operator theory is irrelevant to physics—its only use and relevance is in maths.

Soon enough, I then realized that the wavefunction itself is a curious construct. It’s pointless debating whether the wavefunction is ontic or epistemic, primarily because the damn thing is dimensionless. Physicists always take care to highlight the fact that its evolution is unitary, but what they never tell you, never ever highlight, is the fact that the damn thing has no dimensions. Qua a dimensionless quantity, it is merely a way of organizing some other quantities that do have a physical existence. As to its unitary evolution, well, all that this idea says is that it is merely a weighting function, so to speak.

But it was while teaching thermodynamics (in Mumbai in 2014 and in Pune in 2015) that I finally connected the variational principles with the operator theory, and the thermodynamic system with the quantum system, and this way then got my breakthroughs (or at least my clues).

Yet another clue was the appreciation of the fact that the world is remarkably stable. When you throw a ball, it goes through the space as a single object. The part of the huge Hilbert space of the entire universe which represents the ball—all the quantum particles in it—somehow does not come to occupy a bigger part of that space. Their relations to each other somehow stay stable. That was another clue.

As to the higher-dimensional function spaces, again, my climb was slow but steady. I had begun writing my series of posts on the idea of space. It helped. Then I worked through the idea of higher-dimensional spaces. A rough-and-ready version of my understanding was done right on this blog. It was then that my inchoate suspicions about the nature of the Hilbert space finally began to fall in place. There is an entrenched view, viz., that the wavefunction is a “vector” that “lives” only in a higher-dimensional abstract space, and that the existence of the tensor product over the higher-dimensional space makes it in principle impossible to visualize the wavefunction for a multi-particle quantum system—which means, any quantum system more complex than the hydrogen atom (i.e. a single electron). Schrodinger didn’t introduce this idea himself, but when Lorentz pointed out that a higher-dimensional space was implied by Schrodinger’s procedure, Schrodinger first felt frustrated, and later on, in any case, he was unable to overcome this objection. And so, this part got entrenched—and became a part of the mathematicians’ myths of QM. As my own grasp of this part of the maths became better (and it was engineers’ writings on linear algebra that helped me improve my grasp, not physicists’ or mathematicians’ (which I did attempt valiantly, and which didn’t help at all)), I got my further clues. For a clue, see my post on the CoV; I do mention, first, the Cartesian product, and then, a tensor product, in it.

Another clue was a better understanding of the nonlinear vs. linear distinction in maths. It too happened slowly.

As to others’ writings, the most helpful clue came from the “anti-photon” paper by (the Nobel laureate) W. E. Lamb. Among the bloggers, I found some of the write-ups by Lubos Motl to be really helpful; also a few by Schlafly. Discussions on Scott Aaronson’s blog were useful to check out the different perspectives on the quantum problems.

The most stubborn problem for me perhaps was the measurement problem, i.e. the collapse postulate. But to say anything more about it right away would be premature—it would be too premature, in fact. I want to do it right—even though I will surely follow the adage that a completed document is better than a perfect document. Perfection may get achieved only on collapse, but I happily don’t carry the notion that a good treatment of the collapse postulate has to be preceded by a collapse.

Though the conceptual framework I now have in mind is new, once it is published, it would not be found, I think, to be very radically new—not by the working physicists or the QM experts themselves anyway. …

… I mean, personally speaking, when I for the first time thought of this new way of thinking about the QM maths, it was radically new (and radically clarifying) to me. (As I said, it happened slowly, over a period of time, starting, may be, from the second half of 2015 or so, if not earlier.)

But since then, through my regular searches on the ‘net, I have found that other people have been suggesting somewhat similar ideas for quite some time, though they have been, IMO, not as fully consistent as they could have been. For example, see Philip Wallace[^]’s work (which I came across only recently, right this month). Or, see Martin Ligare[^]’s papers (which I ran into just last month, on the evening of 25th January, to be precise). … Very close to my ideas, but not quite the same. And, not as conceptually comprehensive, if that’s the right word to use for it.

My tentative plan as of now is to first finish writing the document (already 30+ pages, as I mentioned above in the first section). This document is in the nature of a conceptual road-map, or a position/research-program paper. Call it a white-paper sort of a document, say. I want to finish it first. Simultaneously, I will also try to do some simulations or so, and only then go for writing papers for (good) journals. … Sharing of ideas on this blog wouldn’t have to wait until the papers though; it could begin much earlier than that, in fact as soon as the position paper is done, which should be after a few months—say by June-July at the earliest. I will try to keep this position paper as brief as possible, say under 100 pages.

Let’s see how it all goes. I will keep you updated. But yes, the goals are clear now.


I wrote this lengthy a post (almost 5000 words) because I did want to get all these things off my mind and on to the blog. But since, in the immediate future, I would be busy organizing for the move (right from hunting for a house/flat to rent, to deciding on what stuff to leave in Pune for the time being and what to take with me), and then with the actual move itself (the actual packing, moving, and unpacking, etc.), I wouldn’t get the time to blog over the next 2–3 weeks, may be until it’s March already. Realizing it, I decided to just gather all this material, which is worth 3 posts, and to dump it all together in this single post. So, there.


Bye for now.


[As usual, a minor revision or two may be done later.]

What am I thinking about? …and what should it be?

What am I thinking about?

It’s the “derivation” of the Schrodinger equation. Here’s how the simplest presentation of it goes:

The kinetic energy T of a massive particle is given, in classical mechanics, as
T = \dfrac{1}{2}mv^2 = \dfrac{p^2}{2m}
where v is the velocity, m is the mass, and p is the momentum. (We deal with only the scalar magnitudes, in this rough-and-ready “analysis.”)

If the motion of the particle additionally occurs under the influence of a potential field V, then its total energy E is given by:
E = T + V = \dfrac{p^2}{2m} + V

In classical electrodynamics, it can be shown that for a light wave, the following relation holds:
E = pc
where E is the energy of light, p is its momentum, and c is its speed. Further, for light in vacuum:
\omega = ck
where k = \frac{2\pi}{\lambda} is the wavenumber (i.e. the magnitude of the wavevector).

Planck hypothesized that in the problem of the cavity radiation, the energy-levels of the electromagnetic oscillators in the metallic cavity walls maintained at thermal equilibrium are quantized, somehow:
E = h \nu = \hbar \omega
where \hbar = \frac{h}{2\pi} and \omega = 2\pi\nu is the angular frequency. Making this vital hypothesis, he could successfully predict the power spectrum of the cavity radiation (getting rid of the ultraviolet catastrophe).

In explaining the photoelectric effect, Einstein hypothesized that light consists of massless particles. He took Planck’s relation E = \hbar \omega as is, and then, substituted on its left-hand side the classical expression for the energy of the radiation, E = pc. On the right-hand side he substituted the relation which holds for light in vacuum, viz. \omega = c k. He thus arrived at the expression for the quantized momentum of the hypothetical particles of light:
p = \hbar k
With the hypothesis of the quanta of light, he successfully explained all the known experimentally determined features of the photoelectric effect.
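A quick numerical cross-check of these relations (a toy sketch of mine; the 500 nm wavelength is an arbitrary pick, and the constants are the standard CODATA values):

```python
import math

# Cross-check of E = h*nu = p*c and p = h/lambda = hbar*k for light.
# The 500 nm wavelength is an arbitrary, illustrative choice.
h = 6.62607015e-34        # Planck's constant, J*s
c = 2.99792458e8          # speed of light in vacuum, m/s
lam = 500e-9              # wavelength, m (green light)

hbar = h / (2 * math.pi)
k = 2 * math.pi / lam     # wavenumber, 1/m
nu = c / lam              # frequency, Hz

E = h * nu                # Planck: E = h*nu
p = E / c                 # classical electrodynamics: E = p*c

# Einstein's quantized momentum: the same p also equals hbar*k = h/lambda
assert abs(p - hbar * k) / p < 1e-12
assert abs(p - h / lam) / p < 1e-12

print(E / 1.602176634e-19)   # photon energy in eV, about 2.48
```

The two routes to the momentum—via E/c and via \hbar k—agree, which is all that Einstein’s substitution amounts to, numerically speaking.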

Whereas Planck had quantized the equilibrium energy of the charged oscillators in the metallic cavity wall, Einstein quantized the electromagnetic radiation within the cavity itself, via spatially discrete particles of light—an assumption that remains questionable till this day (see “Anti-photon”).

Bohr hypothesized a planetary model of the atom. It had negatively charged and massive point-particles of electrons orbiting around the positively charged and massive point-particle of the nucleus. The model carried a physically unexplained feature of the stationarity of the electronic orbits—i.e. orbits travelling in which an electron, somehow, does not emit/absorb any radiation, in contradiction to classical electrodynamics. However, this way, Bohr could successfully predict the hydrogen atom spectrum. (Later, Sommerfeld made some minor corrections to Bohr’s model.)

de Broglie hypothesized that the relations E = \hbar \omega and p = \hbar k hold not just for the massless particles of light as proposed by Einstein, but, by analogy, also for massive particles like electrons. Since light had both wave and particle characters, so must, by analogy, the electrons. He hypothesized that the stationarity of the Bohr orbits (and the quantization of the angular momentum of the Bohr electron) may be explained by assuming that the matter waves associated with the electrons somehow form a standing-wave pattern in the stationary orbits.

Schrodinger assumed that de Broglie’s hypothesis for massive particles holds true. He generalized de Broglie’s model by recasting the problem from that of the standing waves in the (more or less planar) Bohr orbits, to an eigenvalue problem of a differential equation over the entirety of space.

The scheme of the “derivation” of Schrodinger’s differential equation is “simple” enough. First assuming that the electron is a complex-valued wave, we work out the expressions for its partial differentiations in space and time. Then, assuming that the electron is a particle, we invoke the classical expression for the total energy of a classical massive particle, for it. Finally, we mathematically relate the two—somehow.

Assume that the electron’s state is given by a complex-valued wavefunction having the complex-exponential form:
\Psi(x,t) = A e^{i(kx -\omega t)}

Partially differentiating twice w.r.t. space, we get:
\dfrac{\partial^2 \Psi}{\partial x^2} = -k^2 \Psi
Partially differentiating once w.r.t. time, we get:
\dfrac{\partial \Psi}{\partial t} = -i \omega \Psi

Assume a time-independent potential. Then, the classical expression for the total energy of a massive particle like the electron is:
E = T + V = \dfrac{p^2}{2m} + V
Note, this is not a statement of conservation of energy. It is merely a statement that the total energy has two and only two components: kinetic energy, and potential energy.

Now in this—classical—equation for the total energy of a massive particle of matter, we substitute the de Broglie relations for the matter-wave, viz. the relations E = \hbar \omega and p = \hbar k. We thus obtain:
\hbar \omega = \dfrac{\hbar^2 k^2}{2m} + V
which is the new, hybrid form of the equation for the total energy. (It’s hybrid, because we have used de Broglie’s matter-wave postulates in a classical expression for the energy of a classical particle.)

Multiply both sides by \Psi(x,t) to get:
\hbar \omega \Psi(x,t) = \dfrac{\hbar^2 k^2}{2m}\Psi(x,t) + V(x)\Psi(x,t)

Now using the implications for \Psi obtained via its partial differentiations, namely:
k^2 \Psi = - \dfrac{\partial^2 \Psi}{\partial x^2}
and
\omega \Psi = i \dfrac{\partial \Psi}{\partial t}
and substituting them into the hybrid equation for the total energy, we get:
i \hbar \dfrac{\partial \Psi(x,t)}{\partial t} = - \dfrac{\hbar^2}{2m}\dfrac{\partial^2\Psi(x,t)}{\partial x^2} + V(x)\Psi(x,t)

That’s what the time-dependent Schrodinger equation is.
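Just to satisfy myself, I checked this numerically (a sketch with \hbar = m = 1; the values of k and V are my arbitrary picks): the assumed plane wave does satisfy the equation, once \omega is fixed by the hybrid energy relation.

```python
import cmath

# Natural units (hbar = 1, m = 1); the values of k and V are arbitrary picks.
hbar, m = 1.0, 1.0
k, V = 2.0, 0.5
w = hbar * k**2 / (2 * m) + V / hbar   # omega fixed by the hybrid energy relation

def psi(x, t):
    # The assumed plane-wave form: Psi = A exp(i(kx - wt)), with A = 1
    return cmath.exp(1j * (k * x - w * t))

x, t, h = 0.7, 0.3, 1e-4

# Central finite differences for the two partial derivatives
dpsi_dt   = (psi(x, t + h) - psi(x, t - h)) / (2 * h)
d2psi_dx2 = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / h**2

lhs = 1j * hbar * dpsi_dt                             # i*hbar dPsi/dt
rhs = -hbar**2 / (2 * m) * d2psi_dx2 + V * psi(x, t)  # -(hbar^2/2m) d2Psi/dx2 + V*Psi

assert abs(lhs - rhs) < 1e-5   # the two sides agree, to finite-difference accuracy
```

Note that the check works only because w was computed from the hybrid relation \hbar\omega = \hbar^2 k^2/2m + V; pick any other w and the assertion fails—which is the whole content of the “derivation.”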

And that—the “derivation” of the Schrodinger equation thus presented—is what I have been thinking of.

Apart from the peculiar mixture of the wave and particle paradigms followed in this “derivation,” the other few noteworthy points, to my naive mind, seem to be: (i) the use of a complex-valued wavefunction, (ii) the step of multiplying the hybrid equation for the total energy by this wavefunction, and (iii) the step of replacing \omega \Psi(x,t) by i \dfrac{\partial \Psi}{\partial t}, and also replacing k^2 \Psi by - \dfrac{\partial^2 \Psi}{\partial x^2}. Pretty rare, that last step seems, doesn’t it? I mean to say, just because a good and honest field variable happens to get multiplied by \omega (or by k^2), you are replacing it by a partial time-derivative (or a partial space-derivative) of that same field variable! Pretty rare, a step like that is, in physics or engineering, don’t you think? Do you remember any other place in physics or engineering where we do something like that?


What should I think about?

Is there any mechanical engineering topic that you want me to explain to you?

If so, send me your suggestions. If I find them suitable, I will begin thinking about them. May be, I will even answer them for you, here on this blog.


If not…

If not, there is always this one, involving the calculus of variations, again!:

Derbes, David (1996) “Feynman’s derivation of the Schrodinger equation,” Am. J. Phys., vol. 64, no. 7, July 1996, pp. 881–884

I’ve already found that I don’t agree with how Derbes uses the term “local” in this article. His article makes it seem as if the local is nothing but a smallish segment of what essentially is a globally determined path. I don’t agree with that implication. …

However, here, although this issue is of relevance to the mechanical engineering proper, in the absence of a proper job (an Officially Approved Full Professor in Mechanical Engineering’s job), I don’t feel motivated to explain myself.

Instead, I find the following article by a Mechanical Engineering professor interesting: [^]

And, oh, BTW, if you are a blind follower of Feynman’s, do check out this one:

Briggs, John S. and Rost, Jan M. (2001) “On the derivation of the time-dependent equation of Schrodinger,” Foundations of Physics, vol. 31, no. 4, pp. 693–712.

I was delighted to find a mention of a system and an environment (so close to the heart of an engineer), even in this article on physics. (I have not yet finished reading it. But, yes, it too invokes the variational principles.)


OK then, bye for now.


[As usual, may be I will come back tomorrow and correct the write-up or streamline it a bit, though not a lot. Done on 2017.01.19.]


[E&OE]

See, how hard I am trying to become an Approved (Full) Professor of Mechanical Engineering in SPPU?—4

In this post, I provide my answer to the question which I had raised last time, viz., about the differences between the \Delta, the \text{d}, and the \delta (the first two, of the usual calculus, and the last one, of the calculus of variations).


Some pre-requisite ideas:

A system is some physical object chosen (or isolated) for study. For continua, it is convenient to select a region of space for study, in which case that region of space (holding some physical continuum) may also be regarded as a system. The system boundary is an abstraction.

A state of a system denotes a physically unique and reproducible condition of that system. State properties are the properties or attributes that together uniquely and fully characterize a state of a system, for the chosen purposes. The state is an axiom, and state properties are its corollary.

State properties for continua are typically expressed as functions of space and time. For instance, pressure, temperature, volume, energy, etc. of a fluid are all state properties. Since state properties uniquely define the condition of a system, they represent definite points in an appropriate, abstract, (possibly) higher-dimensional state space. For this reason, state properties are also called point functions.

A process (synonymous to system evolution) is a succession of states. In classical physics, the succession (or progression) is taken to be continuous. In quantum mechanics, there is no notion of a process; see later in this post.

A process is often represented as a path in a state space that connects the two end-points of the starting and ending states. A parametric function defined over the length of a path is called a path function.

A cyclic process is one that has the same start and end points.

During a cyclic process, a state function returns to its initial value. However, a path function does not necessarily return to the same value over every cyclic change—it depends on which particular path is chosen. For instance, if you take a round trip from point A to point B and back, you may spend some amount of money m if you take one route but another amount n if you take another route. In both cases you do return to the same point viz. A, but the amount you spend is different for each route. Your position is a state function, and the amount you spend is a path function.
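The round-trip example can be put into code, too (a quick toy sketch; the route names and fares are made up):

```python
# Round trip A -> B -> A by two different routes; the fares are made up.
route_1 = [("A", "B", 120.0), ("B", "A", 120.0)]
route_2 = [("A", "B", 150.0), ("B", "A", 180.0)]

def end_point(route, start="A"):
    # Position: a state function; it depends only on where you end up
    pos = start
    for _, dest, _ in route:
        pos = dest
    return pos

def money_spent(route):
    # Money spent: a path function; it accumulates along the route taken
    return sum(fare for _, _, fare in route)

# Both cycles return the state function to its initial value...
assert end_point(route_1) == end_point(route_2) == "A"
# ...but the path function takes different values over the two cycles
assert money_spent(route_1) == 240.0
assert money_spent(route_2) == 330.0
```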

[I may make the above description a bit more rigorous later on (by consulting a certain book which I don’t have handy right away (and my notes of last year are gone in the HDD crash)).]


The \Delta, the \text{d}, and the \delta:

The \Delta denotes a sufficiently small but finite, and locally existing difference in different parts of a system. Typically, since state properties are defined as (continuous) functions of space and time, what the \Delta represents is a finite change in some state property function that exists across two different but adjacent points in space (or two nearby instants in times), for a given system.

The \Delta is a local quantity, because it is defined and evaluated around a specific point of space and/or time. In other words, an instance of \Delta is evaluated at a fixed x or t. The \Delta x simply denotes a change of position; it may or may not mean a displacement.

The \text{d} (i.e. the infinitesimal) is nothing but the \Delta taken in some appropriate limiting process to the vanishingly small limit.

Since \Delta is locally defined, so is the infinitesimal (i.e. \text{d}).
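A small numerical sketch of the point (f(x) = x^2 is my arbitrary stand-in for a state-property function):

```python
# f(x) = x^2 stands in, arbitrarily, for some state-property function.
# The Delta is local: the finite difference is taken around a fixed x0,
# and d is its value in the vanishingly small limit (here, the derivative).
def f(x):
    return x * x

x0 = 1.5

for dx in (0.1, 0.01, 0.001):
    delta_f = f(x0 + dx) - f(x0)   # the finite, local Delta
    print(dx, delta_f / dx)        # the ratio approaches 2*x0 = 3.0

# In the limit, Delta f / Delta x goes over into df/dx = 2*x0
assert abs((f(x0 + 1e-8) - f(x0)) / 1e-8 - 3.0) < 1e-6
```

Notice that the whole computation happens around the single, fixed point x0; that is the sense in which both \Delta and \text{d} are local.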

The \delta of CoV is completely different from the above two concepts.

The \delta is a sufficiently small but global difference between the states (or paths) of two different, abstract, but otherwise identical views of the same physically existing system.

Considering the fact that an abstract view of a system is itself a system, \delta also may be regarded as a difference between two systems.

Though differences in paths are not only possible but also routinely used in CoV, in this post, to keep matters simple, we will mostly consider differences in the states of the two systems.

In CoV, the two states (of the two systems) are so chosen as to satisfy the same Dirichlet (i.e. field) boundary conditions separately in each system.

The state function may be defined over an abstract space. In this post, we shall not pursue this line of thought. Thus, the state function will always be a function of the physical, ambient space (defined in reference to the extensions and locations of concretely existing physical objects).

Since a state of a system of nonzero size can only be defined by specifying its values for all parts of a system (of which it is a state), a difference between states (of the two systems involved in the variation \delta) is necessarily global.

In defining \delta, both the systems are considered only abstractly; it is presumed that at most one of them may correspond to an actual state of a physical system (i.e. a system existing in the physical reality).

The idea of a process, i.e. the very idea of a system evolution, necessarily applies only to a single system.

What the \delta represents is not an evolution because it does not represent a change in a system, in the first place. The variation, to repeat, represents a difference between two systems satisfying the same field boundary conditions. Hence, there is no evolution to speak of. When compressed air is passed into a rubber balloon, its size increases. This change occurs over certain time, and is an instance of an evolution. However, two rubber balloons already inflated to different sizes share no evolutionary relation with each other; there is no common physical process connecting the two; hence no change occurring over time can possibly enter their comparative description.

Thus, the “change” denoted by \delta is incapable of representing a process or a system evolution. In fact, the word “change” itself is something of a misnomer here.

Text-books often stupidly try to capture the aforementioned idea by saying that \delta represents a small and possibly finite change that occurs without any elapse of time. Apart from the mind-numbing idea of a finite change occurring over no time (or equally stupefying ideas which it suggests, viz., a change existing at literally the same instant of time, or, alternatively, a process of change that somehow occurs to a given system but “outside” of any time), what they, in a way, continue to suggest also is the erroneous idea that we are working with only a single, concretely physical system, here.

But that is not the idea behind \delta at all.

To complicate the matters further, no separate symbol is used when the variation \delta is made vanishingly small.

In the primary sense of the term variation (or \delta), the difference it represents is finite in nature. The variation is basically a function of space (and time), and at every value of x (and t), the value of \delta is finite, in the primary sense of the word. Yes, these values can be made vanishingly small, though the idea of the limits applied in this context is different. (Hint: Expand each of the two state functions in a power series and relate each of the corresponding power terms via a separate parameter. Then, put the difference in each parameter through a limiting process to vanish. You may also use the Fourier expansion.)
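Here is a toy sketch in code of the idea (my own construction; \sin x and \sin 2x are arbitrary picks which happen to satisfy the same zero Dirichlet conditions at both ends of the domain): two abstract state functions, their globally defined, finite difference, and a parameter through which that difference can be made vanishingly small.

```python
import math

# Two abstract "systems" y1 and y2 over the same domain [0, pi], both
# satisfying the same Dirichlet BCs y(0) = y(pi) = 0. Their difference
# delta_y is the variation: finite, and defined globally over the domain.
# (sin x and sin 2x are arbitrary picks, not anything canonical.)
def y1(x):
    return math.sin(x)

def y2(x, eps=0.1):
    # eta(x) = sin(2x) vanishes at both ends, so the BCs are preserved;
    # shrinking eps makes the variation vanishingly small.
    return math.sin(x) + eps * math.sin(2 * x)

xs = [i * math.pi / 10 for i in range(11)]
delta_y = [y2(x) - y1(x) for x in xs]   # the variation, itself a function of x

assert abs(delta_y[0]) < 1e-12 and abs(delta_y[-1]) < 1e-12  # same BCs
assert max(abs(d) for d in delta_y) > 0.05                   # finite in between
```

The point to note: delta_y is not a change occurring to either function over time; it is a difference between two co-existing abstractions, evaluated over the whole domain at once.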

The difference represented by \delta is between two abstract views of a system. The two systems are related only in an abstract view, i.e., only in (the mathematical) thought. In the CoV, they are supposed as connected, but the connection between them is not concretely physical because there are no two separate physical systems concretely existing, in the first place. Both the systems here are mathematical abstractions—they first have been abstracted away from the real, physical system actually existing out there (of which there is only a single instance).

But, yes, there is a sense in which we can say that \delta does have a physical meaning: it carries the same physical units as for the state functions of the two abstract systems.


An example from biology:

Here is an example of the differences between two different paths (rather than two different states).

Plot the height h(t) of a growing sapling at different times, and connect the dots to yield a continuous graph of the height as a function of time. The difference in the heights of the sapling at two different instants is \Delta h. But now consider two different saplings planted at the same time, and assume that they grow to the same final height at the end of some definite time period (just pick a moment where their graphs cross each other). Abstractly regarding them as some sort of imaginary plants, plot the difference between the two graphs: that difference is the variation \delta h(t) in the height-function of either. The variation itself is a function (here, of time); it has the units, of course, of m.
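The sapling example, put numerically (the growth laws below are made up, chosen only so that the two heights agree at t = 0 and at t = T):

```python
# Two made-up growth histories h1(t) and h2(t), planted at t = 0 and
# reaching the same height (5 m) at t = T; the growth laws are invented
# only so that the two graphs agree at both end instants.
T = 10.0

def h1(t):
    return 0.5 * t          # metres; linear growth

def h2(t):
    return 0.05 * t * t     # metres; quadratic growth, with h2(T) = h1(T)

ts = [i * T / 10 for i in range(11)]
delta_h = [h2(t) - h1(t) for t in ts]   # the variation: a function of t, in m

assert abs(delta_h[0]) < 1e-12 and abs(delta_h[-1]) < 1e-12  # same endpoints
assert min(delta_h) < -1.0   # the two histories differ in between
```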


Summary:

The \Delta is a local change inside a single system, and \text{d} is its limiting value, whereas the \delta is a difference across two abstract systems differing in their global states (or global paths), and there is no separate symbol to capture this object in the vanishingly small limit.


Exercises:

Consider one period of the function y = A \sin(x), say over the interval [0, 2\pi]; A = a is a small, real-valued constant. Now, set A = 1.1a. Is the change/difference here a \delta or a \Delta? Why or why not?

Now, take the derivative, i.e., y' = A \cos(x), with A = a once again. Is the change/difference here a \delta or a \Delta? Why or why not?

Which one of the above two is a bigger change/difference?

Also consider this angle: Taking the derivative did affect the whole function. If so, why is it that we said that \text{d} was necessarily a local change?


An important and special note:

The above exercises, I am sure, many (though not all) of the Officially Approved Full Professors of Mechanical Engineering at the Savitribai Phule Pune University and COEP would be able to do correctly. But the question I posed last time was: Would it therefore be possible for them to spell out the physical meaning of the variation, i.e. \delta? I continue to think not. And, importantly, even those who do solve the above exercises successfully wouldn’t be too sure about their own answers. Upon just a little deeper probing, they would just throw up their hands. [Ditto, for many American physicists.] And this, even though conceptual clarity is required in applications.

(I am ever willing and ready to change my mind about it, but doing so would need some actual evidence—just the way my (continuing) position had been derived, in the first place, from actual observations of them.)

The reason I made this special note was because I continue to go jobless, and nearly bank balance-less (and also, nearly cashless). And it all is basically because of folks like these (and the Indians like the SPPU authorities). It is their fault. (And, no, you can’t try to lift what is properly their moral responsibility off their shoulders and then, in fact, go even further, and attempt to place it on mine. Don’t attempt doing that.)


A Song I Like:

[May be I have run this song before. If yes, I will replace it with some other song tomorrow or so. No, I had not.]

Hindi: “Thandi hawaa, yeh chaandani suhaani…”
Music and Singer: Kishore Kumar
Lyrics: Majrooh Sultanpuri

[A quick ‘net search on plagiarism tells me that the tune of this song was lifted from Julius La Rosa’s 1955 song “Domani.” I heard that song for the first time only today. I think that the lyrics of the Hindi song are better. As to renditions, I like Kishore Kumar’s version better.]


[Minor editing may be done later on and the typos may be corrected, but the essentials of my positions won’t be. Mostly done right today, i.e., on 06th January, 2017.]

[E&OE]