# What am I thinking about? …and what should it be?

What am I thinking about?

It’s the “derivation” of the Schrodinger equation. Here’s how the simplest presentation of it goes:

The kinetic energy $T$ of a massive particle is given, in classical mechanics, as
$T = \dfrac{1}{2}mv^2 = \dfrac{p^2}{2m}$
where $v$ is the velocity, $m$ is the mass, and $p$ is the momentum. (We deal with only the scalar magnitudes, in this rough-and-ready “analysis.”)

If the particle additionally moves under the influence of a potential field $V$, then its total energy $E$ is given by:
$E = T + V = \dfrac{p^2}{2m} + V$

In classical electrodynamics, it can be shown that for a light wave, the following relation holds:
$E = pc$
where $E$ is the energy of light, $p$ is its momentum, and $c$ is its speed. Further, for light in vacuum:
$\omega = ck$
where $k = \frac{2\pi}{\lambda}$ is the wavevector.

Planck hypothesized that in the problem of the cavity radiation, the energy-levels of the electromagnetic oscillators in the metallic cavity walls maintained at thermal equilibrium are quantized, somehow:
$E = h \nu = \hbar \omega$
where $\hbar = \frac{h}{2\pi}$  and $\omega = 2 \pi \nu$ is the angular frequency. Making this vital hypothesis, he could successfully predict the power spectrum of the cavity radiation (getting rid of the ultraviolet catastrophe).
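To see concretely what “getting rid of the ultraviolet catastrophe” means, here is a minimal numerical sketch (my own illustration, not part of the historical argument) comparing Planck’s spectral energy density against the classical Rayleigh–Jeans expression. The temperature and frequencies below are arbitrary illustrative choices:

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck(nu, T):
    """Planck spectral energy density u(nu, T) of cavity radiation."""
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (kB * T))

def rayleigh_jeans(nu, T):
    """Classical density: grows as nu^2 without bound (the UV catastrophe)."""
    return 8 * math.pi * nu**2 * kB * T / c**3

T = 5000.0  # an illustrative temperature, K
# At low frequency the two agree; at high frequency Planck's law cuts off.
for nu in (1e12, 1e14, 1e15):
    print(f"nu = {nu:.0e} Hz: Planck = {planck(nu, T):.3e}, "
          f"Rayleigh-Jeans = {rayleigh_jeans(nu, T):.3e}")
```

The quantization hypothesis enters only through the $e^{h\nu/k_B T} - 1$ denominator; setting $h \to 0$ recovers the divergent classical expression.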

In explaining the photoelectric effect, Einstein hypothesized that light consists of massless particles. He took Planck’s relation $E = \hbar \omega$ as is, and then substituted on its left-hand side the classical expression for the energy of the radiation, $E = pc$. On the right-hand side he substituted the relation which holds for light in vacuum, viz. $\omega = c k$. He thus arrived at the expression for the quantized momentum of the hypothetical particles of light:
$p = \hbar k$
With the hypothesis of the quanta of light, he successfully explained all the known experimentally determined features of the photoelectric effect.

Whereas Planck had quantized the equilibrium energy of the charged oscillators in the metallic cavity wall, Einstein quantized the electromagnetic radiation within the cavity itself, via spatially discrete particles of light—an assumption that remains questionable to this day (see “Anti-photon”).

Bohr hypothesized a planetary model of the atom. It had negatively charged, massive point particles (the electrons) orbiting around the positively charged, massive point particle of the nucleus. The model carried a physically unexplained feature: the stationarity of the electronic orbits, i.e. orbits travelling in which an electron, somehow, does not emit or absorb any radiation, in contradiction to classical electrodynamics. However, this way, Bohr could successfully predict the hydrogen atom spectrum. (Later, Sommerfeld made some minor corrections to Bohr’s model.)

de Broglie hypothesized that the relations $E = \hbar \omega$ and $p = \hbar k$ hold not just for the massless particles of light as proposed by Einstein, but, by analogy, also for massive particles like electrons. Since light had both wave and particle characters, so must, by analogy, the electrons. He hypothesized that the stationarity of the Bohr orbits (and the quantization of the angular momentum for the Bohr electron) may be explained by assuming that the matter waves associated with the electrons somehow form a standing-wave pattern for the stationary orbits.
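de Broglie’s standing-wave explanation of the stationary orbits can be made explicit in a single line. Fitting a whole number $n$ of wavelengths around a circular orbit of radius $r$, and using $\lambda = \frac{2\pi}{k} = \frac{h}{p}$ (which follows from $p = \hbar k$), we get:
$n \lambda = 2 \pi r \quad \Rightarrow \quad n \dfrac{h}{p} = 2 \pi r \quad \Rightarrow \quad p r = n \hbar$
i.e., the angular momentum $L = p r$ of the orbiting electron comes out quantized in integer multiples of $\hbar$, which is precisely Bohr’s quantization condition.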

Schrodinger assumed that de Broglie’s hypothesis for massive particles holds true. He generalized de Broglie’s model by recasting the problem from that of the standing waves in the (more or less planar) Bohr orbits, to an eigenvalue problem of a differential equation over the entirety of space.

The scheme of the “derivation” of Schrodinger’s differential equation is “simple” enough. First, assuming that the electron is a complex-valued wave, we work out the expressions for its partial derivatives in space and time. Then, assuming that the electron is a particle, we invoke the classical expression for the total energy of a classical massive particle, for it. Finally, we mathematically relate the two—somehow.

Assume that the electron’s state is given by a complex-valued wavefunction having the complex-exponential form:
$\Psi(x,t) = A e^{i(kx -\omega t)}$

Partially differentiating twice w.r.t. space, we get:
$\dfrac{\partial^2 \Psi}{\partial x^2} = -k^2 \Psi$
Partially differentiating once w.r.t. time, we get:
$\dfrac{\partial \Psi}{\partial t} = -i \omega \Psi$
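These two derivative relations are easy to check numerically. Here is a small sketch (the values of $A$, $k$, $\omega$, $x$, $t$ are purely illustrative) that verifies them with central finite differences:

```python
import cmath

A, k, w = 1.0, 2.0, 3.0   # illustrative amplitude, wavevector, angular frequency

def Psi(x, t):
    """The assumed plane-wave form: A * exp(i(kx - wt))."""
    return A * cmath.exp(1j * (k * x - w * t))

# Check the two derivative relations via central finite differences.
h = 1e-4
x0, t0 = 0.7, 0.3
d2x = (Psi(x0 + h, t0) - 2 * Psi(x0, t0) + Psi(x0 - h, t0)) / h**2
dt = (Psi(x0, t0 + h) - Psi(x0, t0 - h)) / (2 * h)

print(abs(d2x - (-k**2) * Psi(x0, t0)))    # ~0: d2Psi/dx2 = -k^2 Psi
print(abs(dt - (-1j * w) * Psi(x0, t0)))   # ~0: dPsi/dt = -i omega Psi
```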

Assume a time-independent potential. Then, the classical expression for the total energy of a massive particle like the electron is:
$E = T + V = \dfrac{p^2}{2m} + V$
Note, this is not a statement of conservation of energy. It is merely a statement that the total energy has two and only two components: kinetic energy, and potential energy.

Now in this—classical—equation for the total energy of a massive particle of matter, we substitute the de Broglie relations for the matter-wave, viz. the relations $E = \hbar \omega$ and $p = \hbar k$. We thus obtain:
$\hbar \omega = \dfrac{\hbar^2 k^2}{2m} + V$
which is the new, hybrid form of the equation for the total energy. (It’s hybrid, because we have used de Broglie’s matter-wave postulates in a classical expression for the energy of a classical particle.)

Multiply both sides by $\Psi(x,t)$ to get:
$\hbar \omega \Psi(x,t) = \dfrac{\hbar^2 k^2}{2m}\Psi(x,t) + V(x)\Psi(x,t)$

Now using the implications for $\Psi$ obtained via its partial differentiations, namely:
$k^2 \Psi = - \dfrac{\partial^2 \Psi}{\partial x^2}$
and
$\omega \Psi = i \dfrac{\partial \Psi}{\partial t}$
and substituting them into the hybrid equation for the total energy, we get:
$i \hbar \dfrac{\partial \Psi(x,t)}{\partial t} = - \dfrac{\hbar^2}{2m}\dfrac{\partial^2\Psi(x,t)}{\partial x^2} + V(x)\Psi(x,t)$

That’s what the time-dependent Schrodinger equation is.
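As a sanity check on the “derivation,” one can verify numerically that the assumed plane wave does satisfy this equation, once $\omega$ and $k$ are tied together by the hybrid energy relation. A minimal sketch in natural units ($\hbar = m = 1$, $V = 0$; all values illustrative):

```python
import cmath

hbar, m = 1.0, 1.0          # natural units, purely illustrative
k = 2.0
w = hbar * k**2 / (2 * m)   # the dispersion relation the derivation enforces (V = 0)

def Psi(x, t):
    return cmath.exp(1j * (k * x - w * t))

# Compare i*hbar dPsi/dt against -(hbar^2/2m) d2Psi/dx2 via central differences.
h = 1e-4
x0, t0 = 0.4, 0.9
lhs = 1j * hbar * (Psi(x0, t0 + h) - Psi(x0, t0 - h)) / (2 * h)
rhs = -(hbar**2 / (2 * m)) * \
    (Psi(x0 + h, t0) - 2 * Psi(x0, t0) + Psi(x0 - h, t0)) / h**2
print(abs(lhs - rhs))  # ~0: the plane wave satisfies the free-particle equation
```

Change `w` away from the dispersion relation, and the two sides no longer agree, which is exactly the point: the equation encodes the hybrid energy relation.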

And that—the “derivation” of the Schrodinger equation thus presented—is what I have been thinking of.

Apart from the peculiar mixture of the wave and particle paradigms followed in this “derivation,” the other few points which, to my naive mind, seem noteworthy are: (i) the use of a complex-valued wavefunction, (ii) the step of multiplying the hybrid equation for the total energy by this wavefunction, and (iii) the step of replacing $\omega \Psi(x,t)$ by $i \dfrac{\partial \Psi}{\partial t}$, and also replacing $k^2 \Psi$ by $- \dfrac{\partial^2 \Psi}{\partial x^2}$. Pretty rare, that step, doesn’t it seem? I mean to say: just because it is multiplied by a variable, you replace a good and honest field variable by a partial time-derivative (or a partial space-derivative) of that same field variable! Pretty rare, a step like that, in physics or engineering, don’t you think? Do you remember any other place in physics or engineering where we do something like that?

What should I think about?

Is there any mechanical engineering topic that you want me to explain to you?

If so, send me your suggestions. If I find them suitable, I will begin thinking about them. Maybe I will even answer them for you, here on this blog.

If not…

If not, there is always this one, involving the calculus of variations, again:

Derbes, David (1996) “Feynman’s derivation of the Schrodinger equation,” Am. J. Phys., vol. 64, no. 7, July 1996, pp. 881–884

I’ve already found that I don’t agree with how Derbes uses the term “local”, in this article. His article makes it seem as if the local is nothing but a smallish segment on what essentially is a globally determined path. I don’t agree with that implication. …

However, here, although this issue is of relevance to mechanical engineering proper, in the absence of a proper job (an Officially Approved Full Professor in Mechanical Engineering’s job), I don’t feel motivated to explain myself.

Instead, I find the following article by a Mechanical Engineering professor interesting: [^]

And, oh, BTW, if you are a blind follower of Feynman’s, do check out this one:

Briggs, John S. and Rost, Jan M. (2001) “On the derivation of the time-dependent equation of Schrodinger,” Foundations of Physics, vol. 31, no. 4, pp. 693–712.

I was delighted to find a mention of a system and an environment (so close to the heart of an engineer), even in this article on physics. (I have not yet finished reading it. But, yes, it too invokes the variational principles.)

OK then, bye for now.

[As usual, maybe I will come back tomorrow and correct the write-up or streamline it a bit, though not a lot. Done on 2017.01.19.]

[E&OE]

# See, how hard I am trying to become an Approved (Full) Professor of Mechanical Engineering in SPPU?—4

In this post, I provide my answer to the question which I had raised last time, viz., about the differences between the $\Delta$, the $\text{d}$, and the $\delta$ (the first two, of the usual calculus, and the last one, of the calculus of variations).

Some pre-requisite ideas:

A system is some physical object chosen (or isolated) for study. For continua, it is convenient to select a region of space for study, in which case that region of space (holding some physical continuum) may also be regarded as a system. The system boundary is an abstraction.

A state of a system denotes a physically unique and reproducible condition of that system. State properties are the properties or attributes that together uniquely and fully characterize a state of a system, for the chosen purposes. The state is an axiom, and state properties are its corollary.

State properties for continua are typically expressed as functions of space and time. For instance, pressure, temperature, volume, energy, etc. of a fluid are all state properties. Since state properties uniquely define the condition of a system, they represent definite points in an appropriate, abstract, (possibly) higher-dimensional state space. For this reason, state properties are also called point functions.

A process (synonymous to system evolution) is a succession of states. In classical physics, the succession (or progression) is taken to be continuous. In quantum mechanics, there is no notion of a process; see later in this post.

A process is often represented as a path in a state space that connects the two end-points of the starting and ending states. A parametric function defined over the length of a path is called a path function.

A cyclic process is one that has the same start and end points.

During a cyclic process, a state function returns to its initial value. However, a path function does not necessarily return to the same value over every cyclic change—it depends on which particular path is chosen. For instance, if you take a round trip from point $A$ to point $B$ and back, you may spend some amount of money $m$ if you take one route but another amount $n$ if you take another route. In both cases you do return to the same point viz. $A$, but the amount you spend is different for each route. Your position is a state function, and the amount you spend is a path function.
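The round-trip example can be sketched in a few lines of code. Here the “cost” is simply the distance travelled (a path function), while the position is the state function; the routes and coordinates below are entirely hypothetical:

```python
import math

# Two different round trips from A = (0, 0) and back to A.
# The "cost" here is the path length travelled -- a path function.
def length(points):
    """Total length of a piecewise-straight path through the given points."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

route_1 = [(0, 0), (1, 1), (0, 0)]           # straight to B = (1, 1) and back
route_2 = [(0, 0), (1, 0), (1, 1), (0, 0)]   # to B via the corner (1, 0)

# The end state (position) is the same for both round trips...
assert route_1[-1] == route_2[-1] == (0, 0)
# ...but the path function (distance travelled) differs:
print(length(route_1))  # 2*sqrt(2), about 2.83
print(length(route_2))  # 2 + sqrt(2), about 3.41
```

Any state function, evaluated at the end of either round trip, necessarily returns to its starting value; the path function does not.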

[I may make the above description a bit more rigorous later on (by consulting a certain book which I don’t have handy right away (and my notes of last year are gone in the HDD crash)).]

The $\Delta$, the $\text{d}$, and the $\delta$:

The $\Delta$ denotes a sufficiently small but finite, and locally existing difference in different parts of a system. Typically, since state properties are defined as (continuous) functions of space and time, what the $\Delta$ represents is a finite change in some state property function that exists across two different but adjacent points in space (or two nearby instants in times), for a given system.

The $\Delta$ is a local quantity, because it is defined and evaluated around a specific point of space and/or time. In other words, an instance of $\Delta$ is evaluated at a fixed $x$ or $t$. The $\Delta x$ simply denotes a change of position; it may or may not mean a displacement.

The $\text{d}$ (i.e. the infinitesimal) is nothing but the $\Delta$ taken in some appropriate limiting process to the vanishingly small limit.

Since $\Delta$ is locally defined, so is the infinitesimal (i.e. $\text{d}$).

The $\delta$ of CoV is completely different from the above two concepts.

The $\delta$ is a sufficiently small but global difference between the states (or paths) of two different, abstract, but otherwise identical views of the same physically existing system.

Considering the fact that an abstract view of a system is itself a system, $\delta$ also may be regarded as a difference between two systems.

Though differences in paths are not only possible but also routinely used in CoV, in this post, to keep matters simple, we will mostly consider differences in the states of the two systems.

In CoV, the two states (of the two systems) are so chosen as to satisfy the same Dirichlet (i.e. field) boundary conditions separately in each system.

The state function may be defined over an abstract space. In this post, we shall not pursue this line of thought. Thus, the state function will always be a function of the physical, ambient space (defined in reference to the extensions and locations of concretely existing physical objects).

Since a state of a system of nonzero size can only be defined by specifying its values for all parts of a system (of which it is a state), a difference between states (of the two systems involved in the variation $\delta$) is necessarily global.

In defining $\delta$, both the systems are considered only abstractly; it is presumed that at most one of them may correspond to an actual state of a physical system (i.e. a system existing in the physical reality).

The idea of a process, i.e. the very idea of a system evolution, necessarily applies only to a single system.

What the $\delta$ represents is not an evolution because it does not represent a change in a system, in the first place. The variation, to repeat, represents a difference between two systems satisfying the same field boundary conditions. Hence, there is no evolution to speak of. When compressed air is passed into a rubber balloon, its size increases. This change occurs over certain time, and is an instance of an evolution. However, two rubber balloons already inflated to different sizes share no evolutionary relation with each other; there is no common physical process connecting the two; hence no change occurring over time can possibly enter their comparative description.

Thus, the “change” denoted by $\delta$ is incapable of representing a process or a system evolution. In fact, the word “change” itself is something of a misnomer here.

Text-books often stupidly try to capture the aforementioned idea by saying that $\delta$ represents a small and possibly finite change that occurs without any elapse of time. Apart from the mind-numbing idea of a finite change occurring over no time (or the equally stupefying ideas which it suggests, viz., a change existing at literally the same instant of time, or, alternatively, a process of change that somehow occurs to a given system but “outside” of any time), what they also continue to suggest, in a way, is the erroneous idea that we are working with only a single, concretely physical system here.

But that is not the idea behind $\delta$ at all.

To complicate the matters further, no separate symbol is used when the variation $\delta$ is made vanishingly small.

In the primary sense of the term variation (or $\delta$), the difference it represents is finite in nature. The variation is basically a function of space (and time), and at every value of $x$ (and $t$), the value of $\delta$ is finite, in the primary sense of the word. Yes, these values can be made vanishingly small, though the idea of the limits applied in this context is different. (Hint: Expand each of the two state functions in a power series, and relate each pair of corresponding power terms via a separate parameter. Then, put the difference in each parameter through a limiting process so that it vanishes. You may also use the Fourier expansion.)
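The simplest concrete form of such a parametrization is the standard one: write the varied state as $y(x) + \epsilon\,\eta(x)$, with $\eta$ vanishing at the boundary, so that both states satisfy the same Dirichlet conditions. A minimal sketch (the particular functions $y$ and $\eta$ are chosen purely for illustration):

```python
import math

# Two abstract "systems": a state function y(x), and a varied state
# y_eps(x) = y(x) + eps * eta(x), where eta vanishes at the boundary,
# so that both satisfy the same Dirichlet conditions y(0) = y(L) = 0.
L = 1.0

def y(x):
    return math.sin(math.pi * x / L)

def eta(x):
    return math.sin(2 * math.pi * x / L)   # eta(0) = eta(L) = 0

def delta_y(x, eps):
    """The variation: a finite function of x, scaled by the parameter eps."""
    return eps * eta(x)

# The boundary values agree whatever eps is; interior values shrink with eps.
for eps in (0.5, 0.1, 0.01):
    assert abs(delta_y(0.0, eps)) < 1e-12 and abs(delta_y(L, eps)) < 1e-12
    print(eps, delta_y(0.25, eps))
```

Note that at each fixed `eps`, the variation `delta_y` is a finite, global function of $x$; only the limiting process `eps -> 0` makes it vanishingly small everywhere at once.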

The difference represented by $\delta$ is between two abstract views of a system. The two systems are related only in an abstract view, i.e., only in (the mathematical) thought. In the CoV, they are supposed as connected, but the connection between them is not concretely physical because there are no two separate physical systems concretely existing, in the first place. Both the systems here are mathematical abstractions—they first have been abstracted away from the real, physical system actually existing out there (of which there is only a single instance).

But, yes, there is a sense in which we can say that $\delta$ does have a physical meaning: it carries the same physical units as the state functions of the two abstract systems.

An example from biology:

Here is an example of the differences between two different paths (rather than two different states).

Plot the height $h(t)$ of a growing sapling at different times, and connect the dots to yield a continuous graph of the height as a function of time. The difference in the heights of the sapling at two different instants is $\Delta h$. But now consider two different saplings planted at the same time, and assume that they grow to the same final height at the end of some definite time period (just pick some moment where their graphs cross each other). Abstractly regarding them as some sort of imaginary plants, if you plot the difference between the two graphs, that is the variation, or $\delta h(t)$, in the height-function of either. The variation itself is a function (here, of time); it has units, of course, of m.
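The sapling example can be put in numbers. The heights below are made up, chosen only so that the two graphs share both end values:

```python
# Heights (in m) of two saplings planted at the same time, sampled monthly;
# hypothetical numbers, chosen so that the two graphs meet at both ends.
t = list(range(7))                                # months 0..6
h1 = [0.00, 0.10, 0.22, 0.36, 0.52, 0.70, 0.90]  # sapling 1
h2 = [0.00, 0.14, 0.30, 0.44, 0.58, 0.74, 0.90]  # sapling 2

# Delta h: local, within one system -- the height change of sapling 1
# between month 2 and month 3.
Delta_h = h1[3] - h1[2]

# delta h(t): global, across the two systems -- itself a function of time, in m.
delta_h = [b - a for a, b in zip(h1, h2)]

print(Delta_h)   # ~0.14
print(delta_h)   # zero at both shared end points, nonzero in between
```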

Summary:

The $\Delta$ is a local change inside a single system, and $\text{d}$ is its limiting value, whereas the $\delta$ is a difference across two abstract systems differing in their global states (or global paths), and there is no separate symbol to capture this object in the vanishingly small limit.

Exercises:

Consider one period of the function $y = A \sin(x)$, say over the interval $[0,2\pi]$; $A = a$ is a small, real-valued, constant. Now, set $A = 1.1a$. Is the change/difference here a $\delta$ or a $\Delta$? Why or why not?

Now, take the derivative, i.e., $y' = A \cos(x)$, with $A = a$ once again. Is the change/difference here a $\delta$ or a $\Delta$? Why or why not?

Which one of the above two is a bigger change/difference?

Also consider this angle: Taking the derivative did affect the whole function. If so, why is it that we said that $\text{d}$ was necessarily a local change?

An important and special note:

The above exercises, I am sure, many (though not all) of the Officially Approved Full Professors of Mechanical Engineering at the Savitribai Phule Pune University and COEP would be able to do correctly. But the question I posed last time was: Would they therefore be able to spell out the physical meaning of the variation, i.e. of $\delta$? I continue to think not. Importantly, even those who do solve the above exercises successfully wouldn’t be too sure about their own answers. Upon just a little deeper probing, they would simply throw up their hands. [Ditto, for many American physicists.] Even though conceptual clarity is required in applications.

(I am ever willing and ready to change my mind about it, but doing so would need some actual evidence—just the way my (continuing) position had been derived, in the first place, from actual observations of them.)

The reason I made this special note was because I continue to go jobless, and nearly bank balance-less (and also, nearly cashless). And it all is basically because of folks like these (and the Indians like the SPPU authorities). It is their fault. (And, no, you can’t try to lift what is properly their moral responsibility off their shoulders and then, in fact, go even further, and attempt to place it on mine. Don’t attempt doing that.)

A Song I Like:

[May be I have run this song before. If yes, I will replace it with some other song tomorrow or so. No I had not.]

Hindi: “Thandi hawaa, yeh chaandani suhaani…”
Music and Singer: Kishore Kumar
Lyrics: Majrooh Sultanpuri

[A quick ‘net search on plagiarism tells me that the tune of this song was lifted from Julius La Rosa’s 1955 song “Domani.” I heard that song for the first time only today. I think that the lyrics of the Hindi song are better. As to the renditions, I like Kishore Kumar’s version better.]

[Minor editing may be done later on and the typos may be corrected, but the essentials of my positions won’t be. Mostly done right today, i.e., on 06th January, 2017.]

[E&OE]

# See, how hard I am trying to become an Approved (Full) Professor of Mechanical Engineering in SPPU?—3

I was looking for a certain book on heat transfer which I had (as usual) misplaced somewhere, and while searching for that book at home, I accidentally ran into another book I had—the one on Classical Mechanics by Rana and Joag [^].

After dusting this book a bit, I spent some time in one typical way, viz. by going over some fond memories associated with a suddenly re-found book…. The memories of how enthusiastic I once was when I had bought that book; how I had decided to finish that book right within weeks of buying it several years ago; the number of times I might have picked it up, and soon later on, kept it back aside somewhere, etc.  …

Yes, that’s right. I have not yet managed to finish this book. Why, I have not even managed to begin reading this book the way it should be read—with a paper and pencil at hand to work through the equations and the problems. That was the reason why, I now felt a bit guilty. … It just so happened that it was just the other day (or so) when I was happily mentioning the Poisson brackets on Prof. Scott Aaronson’s blog, at this thread [^]. … To remove (at least some part of) my sense of guilt, I then decided to browse at least through this part (viz., Poisson’s brackets) in this book. … Then, reading a little through this chapter, I decided to browse through the preceding chapters from the Lagrangian mechanics on which it depends, and then, in general, also on the calculus of variations.

It was at this point that I suddenly happened to remember the reason why I had never been able to finish (even the portions relevant to engineering from) this book.

The thing was, the explanation of the $\delta$—the delta of the variational calculus.

The explanation of what the $\delta$ basically means, I had found right back then (many, many years ago), was not satisfactorily given in this book. The book did talk of all those things like the holonomic constraints vs. the nonholonomic constraints, the functionals, integration by parts, etc., etc. But without ever really telling me, in a forthright and explicit manner, what the hell this $\delta$ was basically supposed to mean! How this $\delta y$ was different from the finite changes ($\Delta y$) and the infinitesimal changes ($\text{d}y$) of the usual calculus, for instance. In terms of its physical meaning, that is. (Hell, this book was supposed to be on physics, wasn’t it?)

Here, I of course fully realize that describing Rana and Joag’s book as “unsatisfactory” is making a rather bold statement, a very courageous one, in fact. This book is extraordinarily well-written. And yet, there I was, many, many years ago, trying to understand the delta, and not getting anywhere, not even with this book in my hand. (OK, a confession. The current copy which I have is not all that old. My old copy is gone by now (i.e., permanently misplaced or so), and so, the current copy is the one which I had bought once again, in 2009. As to my old copy, I think, I had bought it sometime in the mid-1990s.)

It was many years later, I guess some time while teaching FEM to the undergraduates in Mumbai, that the concept had finally become clear enough to me. Most especially, while I was going through P. Seshu’s and J. N. Reddy’s books. [Reflected Glory Alert! Professor P. Seshu was my class-mate for a few courses at IIT Madras!] However, even then, even at that time, I remember, I still had this odd feeling that the physical meaning was still not clear to me—not as clear as it should be. The matter eventually became “fully” clear to me only later on, while musing about the differences between the perspective of Thermodynamics on the one hand and that of Heat Transfer on the other. That was some time last year, while teaching Thermodynamics to the PG students here in Pune.

Thermodynamics deals with systems at equilibria, primarily. Yes, its methods can be extended to handle also the non-equilibrium situations. However, even then, the basis of the approach summarily lies only in the equilibrium states. Heat Transfer, on the other hand, necessarily deals with the non-equilibrium situations. Remove the temperature gradient, and there is no more heat left to speak of. There does remain the thermal energy (as a form of the internal energy), but not heat. (Remember, heat is the thermal energy in transit that appears on a system boundary.) Heat transfer necessarily requires an absence of thermal equilibrium. … Anyway, it was while teaching thermodynamics last year, and only incidentally pondering about its differences from heat transfer, that the idea of the variations (of CoV) had finally become (conceptually) clear to me. (No, CoV does not necessarily deal only with the equilibrium states; it’s just that it was while thinking about the equilibrium vs. the transient that the matter about CoV had suddenly “clicked” for me.)

In this post, let me now note down something on the concept of the variation, i.e., towards understanding the physical meaning of the symbol $\delta$.

Please note, I have made an inline update on 26th December 2016. It makes the presentation of the calculus of variations a bit less dumbed down. The updated portion is clearly marked as such, in the text.

The Problem Description:

The concept of variations is abstract. We would be better off considering a simple, concrete, physical situation first, and only then try to understand the meaning of this abstract concept.

Accordingly, consider a certain idealized system. See its schematic diagram below:

There is a long, rigid cylinder made from some transparent material like glass. The left hand-side end of the cylinder is hermetically sealed with a rigid seal. At the other end of the cylinder, there is a friction-less piston which can be driven by some external means.

Further, there also are a couple of thin, circular, piston-like disks ($D_1$ and $D_2$) placed inside the cylinder, at some $x_1$ and $x_2$ positions along its length. These disks thus divide the cylindrical cavity into three distinct compartments. The disks are assumed to be impermeable, and fitting snugly, they in general permit no movement of gas across their plane. However, they also are assumed to be able to move without any friction.

Initially, all the three compartments are filled with a compressible fluid to the same pressure in each compartment, say 1 atm. Since all the three compartments are at the same pressure, the disks stay stationary.

Then, suppose that the piston on the extreme right end is moved, say from position $P_1$ to $P_2$. The final position $P_2$ may be to the left or to the right of the initial position $P_1$; it doesn’t matter. For the current description, however, let’s suppose that the position $P_2$ is to the left of $P_1$. The effect of the piston movement thus is to increase the pressure inside the system.

The problem is to determine the nature of the resulting displacements that the two disks undergo as measured from their respective initial positions.

There are essentially two entirely different paradigms for conducting an analysis of this problem.

The “Vector Mechanics” Paradigm:

The first paradigm is based on an approach that was put to use so successfully by Newton. Usually, it is called the paradigm of vector analysis.

In this paradigm, we focus on the fact that the forced displacement of the piston with time, $x(t)$, may be described using some function of time that is defined over the interval lying between two instants $t_i$ and $t_f$.

For example, suppose the function is:
$x(t) = x_0 + v t$,
where $v$ is a constant. In other words, the motion of the piston is steady, with a constant velocity, between the initial and final instants. Since the velocity is constant, there is no acceleration over the open interval $(t_i, t_f)$.

However, notice that before the instant $t_i$, the piston velocity was zero. Then, the velocity suddenly became a finite (constant) value. Therefore, if you extend the interval to include the end-instants as well, i.e., if you consider the semi-closed interval $[t_i, t_f)$, then there is an acceleration at the instant $t_i$. Similarly, since the piston comes to a position of rest at $t = t_f$, there also is another acceleration, equal in magnitude and opposite in direction, which appears at the instant $t_f$.

The existence of these two instantaneous accelerations implies that jerks or pressure waves are sent through the system. We may model them as vector quantities, as impulses. [Side Exercise: Work out what happens if we consider only the open interval $(t_i, t_f)$.]
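The impulsive accelerations at $t_i$ and $t_f$ can be seen in a small finite-difference sketch (all numbers below are illustrative): the central-difference acceleration vanishes inside the open interval, but blows up, with equal magnitudes and opposite signs, at the two end instants as the step size is reduced.

```python
# Finite-difference sketch of the piston's motion: at rest before t_i,
# constant velocity v over [t_i, t_f], at rest after t_f.
t_i, t_f, v, x0 = 1.0, 3.0, 0.5, 0.0
dt = 1e-3

def x(t):
    """Piecewise-linear ramp: the forced displacement of the piston."""
    if t < t_i:
        return x0
    if t <= t_f:
        return x0 + v * (t - t_i)
    return x0 + v * (t_f - t_i)

def accel(t):
    """Central-difference estimate of the acceleration."""
    return (x(t + dt) - 2 * x(t) + x(t - dt)) / dt**2

# Inside the open interval (t_i, t_f), the acceleration is ~0...
print(accel(2.0))
# ...but straddling the end instants it scales as v/dt: impulses of
# equal magnitude and opposite sign.
print(accel(t_i), accel(t_f))
```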

We can now apply Newton’s 3 laws, based on the idea that shock-waves must have begun at the piston at the instant $t = t_i$. They must have got transmitted through the gas kept under pressure, and they must have affected the disk $D_1$ lying closest to the piston, thereby setting this disk into motion. This motion must have passed through the gas in the middle compartment of the system as another pulse in the pressure (generated at the disk $D_1$), thereby setting also the disk $D_2$ in a state of motion a little while later. Finally, the pulse must have got bounced off the seal on the left hand side, and in turn, come back to affect the motion of the disk $D_2$, and then of the disk $D_1$. Continuing their travels to and fro, the pulses, and hence the disks, would thus be put in a back and forth motion.

After a while, these transients would move forth and back, superpose, and some of their constituent frequencies would get cancelled out, leaving operative only those frequencies which put the three compartments into some kind of stationary state.

In case the gas is not ideal, there would be damping anyway, and after a sufficiently long while, the disks would move through such small displacements that we could easily ignore the ever-decreasing displacements in a limiting argument.

Thus, assume that, after an elapse of a sufficiently long time, the disks become stationary. Of course, their new positions are not the same as their original positions.

The problem thus can be modeled as basically a transient one. The new equilibrium state is thus primarily seen as an effect, or an end-result, of a couple of transient processes which occur in the forward and backward directions. The equilibrium is seen not as a primarily existing state, but as a result of two equal and opposite transient causes.

Notice that throughout this process, Newton’s laws can be applied directly. The nature of the analysis is such that the quantities in question—viz. the displacements of the disks—always are real, i.e., they correspond to what actually is supposed to exist in the reality out there.

The (values of) displacements are real in the sense that the mathematical analysis procedure itself involves only those (values of) displacements which can actually occur in reality. The analysis does not concern itself with some other displacements that might have been possible but don’t actually occur. The analysis begins with the forced displacement condition, translates it into pressure waves, which in turn are used in order to derive the predicted displacements in the gas in the system, at each instant. Thus, at any arbitrary instant of time $t > t_i$ (in fact, the analysis here runs for times $t \gg t_f$), the analysis remains concerned only with those displacements that are actually taking place at that instant.

The Method of Calculus of Variations:

The second paradigm follows the energetics program. This program was initiated by Newton himself as well as by Leibniz. However, it was pursued vigorously not by Newton but rather by Leibniz, and then by a series of gifted mathematician-physicists: the Bernoulli brothers, Euler, Lagrange, Hamilton, and others. This paradigm is essentially based on the calculus of variations. The idea here is something like the following.

We do not care for a local description at all. Thus, we do not analyze the situation in terms of the local pressure pulses, their momenta/forces, etc. All that we focus on are just two sets of quantities: the initial positions of the disks, and their final positions.

For instance, focus on the disk $D_1$. It initially is at the position $x_{1_i}$. It is found, after a long elapse of time (i.e., at the next equilibrium state), to have moved to $x_{1_f}$. The question is: how to relate this change in $x_1$, on the one hand, to the displacement that the piston itself undergoes from $P_{x_i}$ to $P_{x_f}$, on the other.

To analyze this question, the energetics program (i.e., the calculus of variations) adopts a seemingly strange methodology.

It begins by saying that there is nothing unique about the specific value of the position $x_{1_f}$ assumed by the disk $D_1$. The disk could have come to a halt at any other (nearby) position, e.g., at some other point $x_{1_1}$, or $x_{1_2}$, or $x_{1_3}$, … etc. In fact, since there is an infinity of points lying in a finite segment of a line, there could have been an infinity of positions where the disk could have come to rest when the new equilibrium was reached.

Of course, in reality, the disk $D_1$ comes to a halt at none of these other positions; it comes to a halt only at $x_{1_f}$.

Yet, the theory says, we need to be “all-inclusive,” in a way. We need not, just for the aforementioned reason, deny a place in our analysis to these other positions. The analysis must include all such possible positions—even if they be purely hypothetical, imaginary, or unreal. What we do in the analysis, this paradigm says, is to initially include these merely hypothetical, unrealistic positions too on exactly the same footing as that enjoyed by that one position which is realistic, which is given by $x_{1_f}$.

Thus, we take a set of all possible positions for each disk. Then, for each such position, we calculate the “impact” it would make on the energy of the system taken as a whole.

The energy of the system can be additively decomposed into the energies carried by each of its sub-parts. Thus, focusing on disk $D_1$, for each one of its possible (hypothetical) final positions, we should calculate the energies carried by both its adjacent compartments. Since a change in $D_1$‘s position does not affect compartment 3, we need not include it. However, for the disk $D_1$, we do need to include the energies carried by both the compartments 1 and 2. Similarly, for each of the possible positions occupied by the disk $D_2$, we should include the energies of the compartments 2 and 3, but not of 1.

At this point, to bring simplicity (and thereby better clarity) to this entire procedure, let us further assume that the possible positions of each disk form a finite set. For instance, each disk can occupy only one of the positions that is $-5, -4, -3, -2, -1, 0, +1, +2, +3, +4$ or $+5$ distance-units away from its initial position. Thus, a disk is not allowed to come to rest at, say, $2.3$ units; it must do so either at $2$ or at $3$ units. (We will thus perform the initial analysis in terms of only the integer positions, and only later on extend it to any real-valued positions.) (If you are a mechanical engineering student, suggest a suitable mechanism that can ensure only integer relative displacements.)

The change in energy $E$ of a compartment is given by
$\Delta E = P A \Delta x$,
where $P$ is the pressure, $A$ is the cross-sectional area of the cylinder, and $\Delta x$ is the change in the length of the compartment.
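To fix the units of this formula, here is a tiny Python sketch with purely hypothetical numbers (the pressure, area, and length change below are placeholders, not values from the example):

```python
# Hypothetical numbers, only to illustrate dE = P * A * dx with SI units.
P = 100e3        # pressure, Pa (assumed)
A = 0.01         # cross-sectional area of the cylinder, m^2 (assumed)
dx = 0.002       # change in the compartment's length, m (assumed)

dE = P * A * dx  # energy change, J
print(dE)        # -> 2.0
```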

Now, observe that the energy of the middle compartment depends on the relative distance between the two disks lying on its sides; for that very reason, it depends on both their positions. Hence, we must take a Cartesian product of the relative displacements undergone by the two disks, and only then calculate the system energy for each such permutation (i.e., ordered pair) of their positions. Let us go over the details of the Cartesian product.

The Cartesian product of the two positions may be stated as a row-by-row listing of ordered pairs of the relative positions of $D_1$ and $D_2$, e.g., as follows: the ordered pair $(-5, +2)$ means that the disk $D_1$ is $5$ units to the left of its initial position, and the disk $D_2$ is $2$ units to the right of its initial position. Since each of the two positions forming an ordered pair can range over any of the above-mentioned $11$ different values, there are, in all, $11 \times 11 = 121$ possible ordered pairs in the Cartesian product.

For each one of these $121$ different pairs, we use the above-given formula to determine the energy of each compartment. Then, we add the three energies (of the three compartments) together to get the value of the energy of the system as a whole.

In short, we get a set of $121$ possible values for the energy of the system.

You must have noticed that we have admitted every possible permutation into the analysis—all $121$ of them.

Of course, out of all these $121$ permutations of positions, it should turn out that $120$ of them have to be discarded because they would be merely hypothetical, i.e., unreal. That, in turn, is because the relative positions of the disks contained in one and only one ordered pair would actually correspond to the final, equilibrium position. After all, if you conduct this experiment in reality, you would always get a very definite pair of the disk-positions, and it is this same pair of relative positions that would be observed every time you conducted the experiment (for the same piston displacement). Real experiments are reproducible, and give rise to the same, unique result. (Even if the system were probabilistic, it would have to give rise to an exactly identical probability distribution function.) It can’t be this result today and that result tomorrow, or this result in this lab and that result in some other lab. That simply isn’t science.

Thus, out of all those $121$ different ordered pairs, one and only one would actually correspond to reality; all the rest would be merely hypothetical.

The question now is: which particular pair corresponds to reality, and which ones are unreal? How to tell the real from the unreal? That is the question.

Here, the variational principle says that the pair of relative positions that actually occurs in reality carries a certain definite, distinguishing attribute.

The system-energy calculated for this pair (of relative displacements) happens to be the lowest among all the $121$ possible pairs. In other words, any hypothetical or unreal pair has a higher amount of system energy associated with it. (If two pairs gave rise to the same lowest value, both would be equally likely to occur. However, that provably does not happen in the current example, so let us leave this kind of a “degeneracy” aside for the purposes of this post.)
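To make this (dumbed-down) version of the procedure concrete, here is a minimal Python sketch: enumerate all $121$ ordered pairs, attach a system energy to each, and pick the lowest-energy pair. The energy function below is a purely hypothetical placeholder (a quadratic whose minimum I have put, by assumption, at the pair (2, 3)); in a real analysis the energies would come from the $\Delta E = P A \Delta x$ bookkeeping for the three compartments.

```python
from itertools import product

def system_energy(d1, d2):
    # Hypothetical placeholder model (in joules): quadratic in the two
    # disk displacements, with its minimum assumed at the pair (2, 3).
    return 100.0 + 0.5 * ((d1 - 2) ** 2 + (d2 - 3) ** 2)

positions = range(-5, 6)  # the 11 allowed integer displacements per disk

# The Cartesian product: all 121 ordered pairs (d1, d2), each with an energy.
energies = {(d1, d2): system_energy(d1, d2)
            for d1, d2 in product(positions, repeat=2)}

realistic = min(energies, key=energies.get)  # the lowest-energy pair
print(len(energies), realistic)              # -> 121 (2, 3)
```

With this assumed model, the pair (2, 3) is the one distinguished pair; the other 120 remain merely hypothetical.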

(The update on 26 December 2016 begins here:)

Actually, the description given in the immediately preceding paragraph was a bit too dumbed down. The variational principle is more subtle than that. Explaining it makes this post even longer, but let me give it a shot anyway, at least today.

To grasp the actual idea of the variational principle (in a not-dumbed-down manner), the procedure you have to follow is this.

First, make a table of all possible relative-position pairs, and their associated energies. The table has the following columns: a relative-position pair, the associated energy $E$ as calculated above, and one more column which for the time being would be empty. The table may look something like what the following (partial) listing shows:

(0,0) -> say, 115 Joules
(-1,0) -> say, 101 Joules
(-2,0) -> say, 110 Joules

(2,2) -> say, 102 Joules
(2,3) -> say, 100 Joules
(2,4) -> say, 101 Joules
(2,5) -> say, 120 Joules

(5,0) -> say, 135 Joules

(5,5) -> say 117 Joules.

Having created this table (of $121$ rows), you then pick up each row one by one, and for the picked-up $n$-th row, you ask: which other row(s) in this table have relative-distance pairs that lie closest to the pair of this given row? Let me illustrate this question with a concrete example. Consider the row which has the relative-distance pair (2,3). The pairs closest to this one are obtained by adding or subtracting a distance of 1 to one member of the pair at a time. Thus, the relative-distance pairs closest to this one would be: (3,3), (1,3), (2,4), and (2,2). So, you have to pick up those rows which have these four entries in the relative-distance-pair column. Each of these four pairs represents a variation $\delta$ on the chosen state, viz. the state (2,3).

In symbolic terms, suppose for the $n$-th row being considered, the rows closest to it in terms of the differences in their relative-distance pairs are the $a$-th, $b$-th, $c$-th and $d$-th rows. (Notice that the rows which are closest to a given row in this sense would not necessarily be found listed just above or below that given row, because the scheme followed while creating the list or the vector that is the table would not necessarily honor the closest-lying criterion (which necessarily involves two numbers)—not at least for all rows in the table.)

OK. Then, in the next step, you find the differences in the energies of the $n$-th row from each of these closest rows, viz., the $a$-th, $b$-th, $c$-th and $d$-th rows. That is to say, you find the absolute magnitudes of the energy differences. Let us denote these magnitudes as: $\delta E_{na} = |E_n - E_a|$, $\delta E_{nb} = |E_n - E_b|$, $\delta E_{nc} = |E_n - E_c|$, and $\delta E_{nd} = |E_n - E_d|$. Suppose the minimum among these values is $\delta E_{nc}$. So, against the $n$-th row, in the last column of the table, you write the value $\delta E_{nc}$.

Having done this exercise separately for each row in the table, you then ask: which row has the smallest entry in the last column (the one for $\delta E$)? You pick that row up. That is the distinguished (or the physically occurring) state.

In other words, the variational principle asks you to select not the row with the lowest absolute value of energy, but that row which shows the smallest difference of energy from one of its closest neighbours—and these closest neighbours are to be selected according to the differences in each number appearing in the relative-distance pair, and not according to the vertical place of rows in the tabular listing. (It so turns out that in this example, the row selected by either criterion—lowest energy as well as lowest variation in energy—is the same, though that would not necessarily always be the case. In short, we can’t always get away with the first, too dumbed-down, version.)
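The neighbour-based selection can also be sketched in a few lines of Python. The energy model below is a hypothetical placeholder (a quadratic whose minimum is assumed at (2, 3), echoing the sample row “(2,3) -> say, 100 Joules” above). One caveat of the sketch: on a coarse integer grid, several rows inevitably tie for the smallest neighbour-wise $|\delta E|$, so the code breaks such ties by the energy itself, consistent with the remark that both criteria select the same row in this example.

```python
from itertools import product

def system_energy(d1, d2):
    # Hypothetical placeholder model (J): minimum assumed at (2, 3).
    return 100.0 + 0.5 * ((d1 - 2) ** 2 + (d2 - 3) ** 2)

positions = range(-5, 6)
energies = {p: system_energy(*p) for p in product(positions, repeat=2)}

def min_neighbour_delta(pair):
    """The last-column entry: smallest |dE| to a pair differing by
    +/- 1 in exactly one of its two coordinates."""
    d1, d2 = pair
    neighbours = [(d1 + 1, d2), (d1 - 1, d2), (d1, d2 + 1), (d1, d2 - 1)]
    return min(abs(energies[pair] - energies[n])
               for n in neighbours if n in energies)  # skip off-grid pairs

# Select the row with the smallest min-|dE| entry; ties (inevitable on a
# discrete grid) are broken by the energy itself.
selected = min(energies, key=lambda p: (min_neighbour_delta(p), energies[p]))
print(selected)   # -> (2, 3)
```

In the continuum limit the tie-breaking becomes unnecessary: the condition “smallest $|\delta E|$” turns into the stationarity condition $\delta E = 0$.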

Thus, the variational principle is about that change in the relative positions for which the corresponding change in the energy vanishes (or has the minimum possible absolute magnitude, in case the positions form a discretely varying, finite set).

(The update on 26th December 2016 gets over here.)

And, it turns out that this approach, too, is indeed able to perfectly predict the final disk-positions—precisely as they actually are observed in reality.

If you allow a continuum of positions (instead of the discrete set of only $11$ different final positions for one disk, or $121$ ordered pairs), then instead of taking a Cartesian product of positions, what you have to do is take into account a tensor product of the position functions. The maths involved is a little more advanced, but the underlying algebraic structure—and the predictive principle which is fundamentally involved in the procedure—remains essentially the same. This principle—the variational principle—says:

Among all possible variations in the system configurations, that system configuration corresponds to reality which has the least variation in energy associated with it.

(This is a very rough statement, but it will do for this post and for a general audience. In particular, we don’t look into the issues of what constitute the kinematically admissible constraints, why the configurations must satisfy the field boundary conditions, the idea of the stationarity vs. of a minimum or a maximum, i.e., the issue of convexity-vs.-concavity, etc. The purpose of this post—and our example here—are both simple enough that we need not get into the whole she-bang of the variational theory as such.)

Notice that in this second paradigm, (i) we did not restrict the analysis to only those quantities that actually occur in reality; we also included a host (possibly an infinity) of purely hypothetical combinations of quantities; (ii) we worked with energy, a scalar quantity, rather than with momentum, a vector quantity; and finally, (iii) in the variational method, we didn’t bother about the local details. We took into account the displacements of the disks, but not any displacement at any other point, say in the gas. We did not look into the presence or absence of a pulse at one point in the gas as contrasted with any other point in it. In short, we did not discuss the details local to the system either in space or in time. We did not follow the system evolution, at all—not at least in a detailed, local way. If we were to do that, we would be concerned about what happens in the system at the instants and at spatial points other than the initial and final disk positions. Instead, we looked only at a global property—viz. the energy—whether at the sub-system level of the individual compartments, or at the level of the overall system.

The Two Paradigms Contrasted from Each Other:

If we were to follow Newton’s method, it would be impossible—impossible in principle—to predict the final disk positions unless all their motions over the entire intermediate transient dynamics (occurring at each moment of time and at each place in the system) were traced. Newton’s (or the vectorial) method would require us to follow all the details of the entire evolution of all parts of the system at each point on its evolution path. In the variational approach, the latter is not of any primary concern.

Yet, in following the energetics program, we are able to predict the final disk positions. We are able to do that without worrying about what all happened before the equilibrium gets established. We remain concerned only with certain global quantities (here, system-energy) at each of the hypothetical positions.

The upside of the energetics program, as just noted, is that we don’t have to look into every detail at every stage of the entire transient dynamics.

Its downside is that we are able to talk only of the differences between certain isolated (hypothetical) configurations or states. The formalism is unable to say anything at all about any of the intermediate states—even if these do actually occur in reality. This is a very, very important point to keep in mind.

The Question:

Now, the question with which we began this post. Namely, what does the delta of the variational calculus mean?

Referring to the above discussion, note that the delta of the variational calculus is, here, nothing but a change in the position-pair, and also the corresponding change in the energy.

Thus, in the above example, the difference of the state (2,3) from the other close states such as (3,3), (1,3), (2,4), and (2,2) represents a variation in the system configuration (or state), and for each such variation in the system configuration (or state), there is a corresponding variation in the energy $\delta E_{ni}$ of the system. That is what the delta refers to, in this example.

Now, with all this discussion and clarification, would it be possible for you to clearly state what the physical meaning of the delta is? To what precisely does the concept refer? How does the variation in energy $\delta E$ differ from both the finite changes ($\Delta E$) as well as the infinitesimal changes ($\text{d}E$) of the usual calculus?

Note, the question is conceptual in nature. And, no, not a single one of the very best books on classical mechanics manages to give a very succinct and accurate answer to it. Not even Rana and Joag (or Goldstein, or Feynman, or…)

I will give my answer in my next post, next year. I will also try to apply it to a couple of more interesting (and somewhat more complicated) physical situations—one from engineering sciences, and another from quantum mechanics!

In the meanwhile, think about it—the delta—the concept itself, its (conceptual) meaning. (If you already know the calculus of variations, note that in my above write-up, I have already supplied the answer, in a way. You just have to think a bit about it, that’s all!)

An Important Note: Do bring this post to the notice of the Officially Approved Full Professors of Mechanical Engineering in SPPU, and the SPPU authorities. I would like to know if the former would be able to state the meaning—at least now that I have already given the necessary context in such great detail.

Ditto, to the Officially Approved Full Professors of Mechanical Engineering at COEP, esp. D. W. Pande, and others like them.

After all, this topic—Lagrangian mechanics—is at the core of Mechanical Engineering, even they would agree. In fact, it comes from a subject that is not taught to the metallurgical engineers, viz., the topic of Theory of Machines. But it is taught to the Mechanical Engineers. That’s why, they should be able to crack it, in no time.

(Let me continue to be honest. I do not expect them to be able to crack it. But I do wish to know if they are able at least to give a try that is good enough!)

Even though I am jobless (and also nearly bank balance-less, and also cashless), what the hell! …

…Season’s greetings and best wishes for a happy new year!

A Song I Like:

[With jobless-ness and all, my mood isn’t likely to stay this upbeat, but anyway, while it lasts, listen to this song… And, yes, this song is like, it’s like, slightly more than 60 years old!]

(Hindi) “yeh raat bhigee bhigee”
Music: Shankar-Jaikishan
Singers: Manna De and Lata Mangeshkar
Lyrics: Shailendra

[E&OE]

/

# See, how hard I am trying to become an Approved (Full) Professor of Mechanical Engineering in SPPU?—2

Remember the age-old decade-old question, viz.:

“Stress or strain: which one is more fundamental?”

I myself had posed it at iMechanica about a decade ago [^]. Specifically, on 8th March 2007 (US time, maybe EST or something).

The question had generated quite a bit of discussion at that time. Even as of today, this thread remains within the top 5 most-hit posts at iMechanica.

In fact, as of today, with about 1.62 lakh reads (i.e. 162 k hits), I think, it is the second most hit post at iMechanica. The only post with more hits, I think, is Nanshu Lu’s, providing a tutorial for the Abaqus software [^]; it beats mine like hell, with about 5 lakh (500 k) hits! The third most hit post, I think, again is about sharing scripts for the Abaqus software [^]; as of today, it lags mine very closely, but could overtake mine anytime, with about 1.48 lakh (148 k) hits already. There used to be a general thread on Open Source FEM software that used to be very close to my post. As of today, it has fallen behind a bit, with about 1.42 lakh (142 k) hits [^]. (I don’t know, but there could be other widely read posts, too.)

Of course, the attribute “most hit” is in no fundamental way related to “most valuable,” “most relevant,” or even “most interesting.”

Yet, the fact of the matter also is that mine is the only one among the top 5 posts which probes a fundamental theoretical aspect. All others seem to be on software. Not very surprising, in a way.

Typically, hits get registered for topics providing some kind of a practical service. For instance, tips and tutorials on software—how to install a software, how to deal with a bug, how to write a sub-routine, how to produce visualizations, etc. Topics like these tend to get more hits. These are all practical matters, important right in the day-to-day job or studies, and people search the ‘net more for such practically useful services. Precisely for this reason—and especially given the fact that iMechanica is a forum for engineers and applied scientists—it was unexpected (at least to me) that a “basically useless” and “theoretical” discussion could still end up being so popular. It certainly came as a surprise. … But that’s just one part.

The second, more interesting part (i.e., more interesting to me) has been that, despite all these reads, and despite the simplicity of the concepts involved (stress and strain), the issue went unresolved for such a long time—almost a decade!

Students begin to get taught these two concepts right when they are in their XI/XII standard. In my XI/XII standard, I remember, we even had a practical about it: there was a steel wire suspended from a cantilever near the ceiling, and there was a hook with a supporting plate at the bottom of this wire. The experiment consisted of adding weights, and measuring extensions. … Thus, the learning of these concepts begins right around the same time that students are learning calculus and Newton’s 3 laws… Students then complete the acquisition of these two concepts in their “full” generality, right by the time they are just in the second- or third-year of undergraduate engineering. The topic is taught in a great many branches of engineering: mechanical, civil, aerospace, metallurgical, chemical, naval architecture, and often-times (and certainly in our days and in COEP) also electrical. (This level of generality would be enough to discuss the question as posed at iMechanica.)

In short, even if the concepts are so “simple” that UG students are routinely taught them, a simple conceptual question involving them could go unresolved for such a long time.

It is this fact which was (honestly) completely unexpected to me, at least at the time when I had posed the question.

I had actually thought that there would surely be some reference text/paper somewhere that must have considered this aspect already, and answered it. But I was afraid that the answer (or the reference in which it appears) could perhaps be outside of my reach, my understanding of continuum mechanics. (In particular, I knew only a little bit of tensor calculus—only that as given in Malvern, and in Schaum’s series, basically. (I still don’t know much more about tensor calculus; my highest reach for tensor calculus remains limited to the book by Prof. Allan Bower of Brown [^].)) Thus, the reason I wrote the question in such a great detail (and in my replies, insisted on discussing the issues in conceptual details) was only to emphasize the fact that I had no hi-fi tensor calculus in mind; only the simplest physics-based and conceptual explanation was what I was looking for.

And that’s why, the fact that the question went unresolved for so long has also been (actually) fascinating to me. I (actually) had never expected it.

And yes, “dear” Officially Approved Mechanical Engineering Professors at the Savitribai Phule Pune University (SPPU), and authorities at SPPU, as (even) you might have noticed, it is a problem concerning the very core of the Mechanical Engineering proper.

I had thought once, may be last year or so, that I had finally succeeded in nailing down the issue right. (I might have written about it on this blog or somewhere else.) But, still, I was not so sure. So, I decided to wait.

I now have come to realize that my answer should be correct.

I, however, will not share my answer right away. There are two reasons for it.

First, I would like it if someone else gives it a try, too. It would be nice to see someone else crack it, too. A little bit of a wait is nothing to trade in for that. (As far as I am concerned, I’ve got enough “popularity” etc. just out of posing it.)

Second, I also wish to see if the Officially Approved Mechanical Engineering Professors at the Savitribai Phule Pune University (SPPU) would be willing and able to give it a try.

(Let me continue to be honest. I do not expect them to crack it. But I do wish to know whether they are able to give it a try.)

In fact, come to think of it, let me do one thing. Let me share my answer only after one of the following happens:

• either I get the Official Approval (and also a proper, paying job) as a Full Professor of Mechanical Engineering at SPPU,
• or, an already Officially Approved Full Professor of Mechanical Engineering at SPPU (especially one of those at COEP, especially D. W. Pande, and/or one of those sitting on the Official COEP/UGC Interview Panels for faculty interviews at SPPU) gives it at least a try that is good enough. [Please note, the number of hits on the international forum of iMechanica, and the nature of the topic, once again.]

I will share my answer as soon as either of the above two happens—i.e., in the Indian government lingo: “whichever is earlier” happens.

But, yes, I am happy that I have come up with a very good argument to finally settle the issue. (I am fairly confident that my eventual answer should also be more or less satisfactory to those who had participated on this iMechanica thread. When I share my answer, I will of course make sure to note it also at iMechanica.)

This time round, there is not just one song but quite a few of them competing for inclusion on the “A Song I Like” section. Perhaps, some of these, I have run already. Though I wouldn’t mind repeating a song, I anyway want to think a bit about it before finalizing one. So, let me add the section when I return to do some minor editing later today or so. (I certainly want to get done with this post ASAP, because there are other theoretical things that beckon my attention. And yes, with this announcement about the stress-and-strain issue, I am now going to resume my blogging on topics related to QM, too.)

Update at 13:40 hrs (right on 19 Dec. 2016): Added the section on a song I like; see below.

A Song I Like:

(Marathi) “soor maagoo tulaa mee kasaa? jeevanaa too tasaa, mee asaa!”
Lyrics: Suresh Bhat
Music: Hridaynath Mangeshkar
Singer: Arun Date

It’s a very beautiful and a very brief poem.

As a song, it has got fairly OK music and singing. (The music composer could have done better, and if he were to do that, so would the singer. The song is not in a bad shape in its current form; it is just that given the enormously exceptional talents of this composer, Hridaynath Mangeshkar, one does get a feel here that he could have done better, somehow—don’t ask me how!) …

I will try to post an English translation of the lyrics if I find time. The poem is in a very, very simple Marathi, and for that reason, it would also be very, very easy to give a rough sense of it—i.e., if the translation is to be rather loose.

The trouble is, if you want to keep the exact shade of the words, it then suddenly becomes very difficult to translate. That’s why, I make no promises about translating it. Further, as far as I am concerned, there is no point unless you can convey the exact shades of the original words. …

Unless you are a gifted translator, a translation of a poem almost always ends up losing the sense of rhythm. But even if you keep a more modest aim, viz., only of offering an exact translation without bothering about the rhythm part, the task still remains difficult. And it is more difficult if the original words happen to be of the simple, day-to-day usage kind. A poem using complex words (say composite, Sanskrit-based words) would be easier to translate precisely because of its formality, precisely because of the distance it keeps from the mundane life… An ordinary poet’s poem also would be easy to translate regardless of what kind of words he uses. But when the poet in question is great, and uses simple words, it becomes a challenge, because it is difficult, if not impossible, to convey the particular sense of life he pours into that seemingly effortless composition. That’s why translation becomes difficult. And that’s why I make no promises, though a try, I would love to give it—provided I find time, that is.

Second Update on 19th Dec. 2016, 15:00 hrs (IST):

A Translation of the Lyrics:

I offer below a rough translation of the lyrics of the song noted above. However, before we get to the translation, a few notes giving the context of the words are absolutely necessary.

Notes on the Context:

Note 1:

Unlike Western classical music, Indian classical music is not written down. Its performance, therefore, does not have to conform to a pre-written (or a pre-established) scale of tones. Particularly in an Indian vocal performance, the singer is completely free to choose any note as the starting note of his middle octave.

Typically, before the actual singing begins, the lead singer (or the main instrument player) thinks of some tone that he thinks might best fit how he is feeling that day, how his throat has been doing lately, the particular settings at that particular time, the emotional interpretation he wishes to emphasize on that particular day, etc. He, therefore, tentatively picks up a note that might serve as the starting tone for the middle octave, for that particular performance. He makes this selection not in advance of the show and in private, but right on the stage, right in front of the audience, right after the curtain has already gone up. (He might select different octaves for two successive songs, too!)

Then, to make sure that his rendition is going to come out right if he were to actually use that key, that octave, what he does is to ask a musician companion (himself on the stage beside the singer) to play and hold that note on some previously well-tuned instrument, for a while. The singer then uses this key as the reference, and tries out a small movement or so. If everything is OK, he will select that key.

All this initial preparation is called (Hindi) “soor lagaanaa.” The part where the singer turns to the trusted companion and asks for the reference note to be played is called (Hindi) “soor maanganaa.” The literal translation of the latter is: “asking for the tone” or “seeking the pitch.”

After thus asking for the tone and trying it out, if the singer thinks that singing in that specific key is going to lead to a good concert performance, he selects it.

At this point, both—the singer and that companion musician—exchange glances, and with that indicate that the tone/pitch selection is OK, that this part is done. No words are exchanged; only the glances. Indian performances depend a great deal on impromptu variations, on improvisations, and therefore, the mutual understanding between the companion and the singer is of crucial importance. In fact, so great is their understanding that they hardly ever exchange any words—just glances are enough. Asking for the reference key is just a simple ritual that assures both that the mutual understanding does exist.

And after that brief glance, begins the actual singing.

Note 2:

Whereas the Sanskrit and Marathi word “aayuShya” means life-span (the number of years, or the finite period that is life), the Sanskrit and Marathi word “jeevan” means Life—with a capital L. The meaning of “jeevan” thus is something like a slightly abstract outlook on the concrete facts of life. It is like the schema of life. The word is not so abstract as to mean the very Idea of Life or something like that. It is life in the usual, day-to-day sense, but with a certain added emphasis on the thematic part of it.

Note 3:

Here, the poet addresses this poem to “jeevan,” i.e., to Life with a capital L (or life taken in its more abstract, thematic sense). The poet addresses Life as if it were a companion in an Indian singing concert. Life is going to help him select the note—the note which would define the whole scale in which to sing during the imminent live performance. Life is also his companion during the improvisations. The poem is addressed using this metaphor.

Now, my (rough) translation:

The Refrain:
[Just] How do I ask you for the tone,
Life, you are that way [or you follow some other way], and I [follow] this way [or, I follow mine]

Stanza 1:
You glanced at me, I glanced at you,
[We] looked full well at each other,
Pain is my mirror [or the reference instrument], and [so it is] yours [too]

Stanza 2:
Even once, to [my] mind’s satisfaction,
You [oh, Life] did not ever become my [true] mate
[And so,] I played [on this actual show of life, just whatever] the way the play happened [or unfolded]

And, finally, Note 4 (Yes, one is due):

There is one place where I failed in my translation, and almost anyone who does not know both the Marathi language and the poetry of Suresh Bhat would fail there too.

In Marathi, “tu tasaa, [tar] mee asaa,” is an expression of a firm, almost final, acknowledgement of (irritating kind of) differences. “If you must insist on being so unreasonable, then so be it—I am not going to stop following my mind either.” That is the kind of sense this brief Marathi expression carries.

And the poet, Suresh Bhat, is peculiar: despite being a poet, despite showing exquisite sensitivity, he never stops being manly at the same time. Pain and sorrow and suffering might enter his poetry; he might acknowledge their presence through some very sensitively selected words. And yet, the underlying sense of life which he somehow manages to convey is as if he is going to dismiss pain, sorrow, suffering, etc., as simply an affront—a summarily minor affront—to his royal dignity. (This kind of a “royal” sense of life is often very well conveyed by ghazals. This poem is a Marathi ghazal.) Thus, in this poem, when Suresh Bhat agrees to use pain as a reference point, the words still appear in such a sequence that it is clear the agreement is being conceded merely in order to close a minor and irritating part of an argument, that pain etc. is not meant to be important even in this poem, let alone in life. Since the refrain follows immediately after this line, the stress gets shifted to the courteous question raised following the affronts made by a fickle, unfaithful, even idiotic Life—the question: “Just how do I treat you as a friend? Just how do I ask you for the tone?” (The form of “jeevan” or Life used by Bhat in this poem is masculine, not neuter the way it is in normal Marathi.)

I do not know how to arrange the words in the translation so that this same sense of life still comes through. I simply don't have that kind of command over languages—any of them, whether Marathi or English. Hence this (4th) note. [OK. Now I am (really) done with this post.]

Anyway, take care, and bye for now…

Update on 21st Dec. 2016, 02:41 AM (IST):

Realized a mistake in Stanza 1, and corrected it—the exchange between yours and mine (or vice versa).

[E&OE]
