# An interesting problem from the classical mechanics of vibrations

Update on 18 June 2017:
Added three diagrams depicting the mathematical abstraction of the problem; see near the end of the post. Also added one more consideration by way of an additional question.

TL;DR: A very brief version of this post is now posted at iMechanica; see here [^].

How I happened to come to formulate this problem:

As mentioned in my last post, I had started writing down my answers to the conceptual questions from Eisberg and Resnick’s QM text. However, as soon as I began doing that (typing out my answer to the first question from the first chapter), almost predictably, something else happened.

Since it was QM that I was engaged with anyway, another issue from QM—one which I had thought about a bit some time ago—happened to surface in my mind just then. And it was an interesting issue. Back then, I had not thought of reaching an answer, and even now, I realized, I did not have a very satisfactory answer to it, not even in just conceptual terms. Naturally, my mind remained engaged in thinking about this second QM problem for a while.

In trying to come to terms with this QM problem (of my own making, not E&R’s), I now tried to think of some simple model problem from classical mechanics that might capture at least some aspects of this QM issue. Thinking a bit about it, I realized that I had not read anything about this classical mechanics problem during my [very] limited studies of classical mechanics.

But since it appeared simple enough—heck, it was just classical mechanics—I now tried to reason through it. I thought I “got” it. But then, right the next day, I began doubting my own answer—with very good reasons.

… By now, I had no option but to set aside the more scholarly task of writing down answers to the E&R questions. The classical problem of my own making had become interesting in its own right. Naturally, even though I was not procrastinating, I still got away from E&R—I got diverted.

I made some false starts even in the classical version of the problem, but finally, today, I could find some way through it—one which I think is satisfactory. In this post, I am going to share this classical problem. See if it interests you.

Background:

Consider an idealized string tautly held between two fixed end supports that are a distance $L$ apart; see the figure below. The string can be put into a state of vibrations by plucking it. There is a third support exactly at the middle; it can be removed at will.

Assume all the ideal conditions. For instance, assume perfectly rigid and unyielding supports, and a string that is massive (i.e., one which has a linear mass density; for simplicity, assume this density to be constant over the entire string length) but of zero thickness. The string also is perfectly elastic and has zero internal friction of any sort. Assume that the string is surrounded by vacuum (so that the vibrational energy of the string does not leak out of the system). Assume the absence of any other forces such as gravitational, electrical, etc. Also assume that the middle support, while it remains touching the string, does not allow any leakage of the vibrational energy from one part of the string to the other. Feel free to make further suitable assumptions as necessary.

The overall system here consists of the string (sans the supports, whose only role is to provide the necessary boundary conditions).

Initially, the string is stationary. Then, with the middle support touching the string, the left-half of the string is made to undergo oscillations by plucking it somewhere in the left-half only, and immediately releasing it. Denote the instant of the release as, say $t_R$. After the lapse of a sufficiently long time period, assume that the left-half of the system settles down into a steady-state standing wave pattern. Given our assumptions, the right-half of the system continues to remain perfectly stationary.

The internal energy of the system at the initial instant $t_0$ (i.e., before the pluck at $t_R$) is $0$. Energy is put into the system only once, at $t_R$, and never again. Thus, for all times $t > t_R$, the system behaves as a thermodynamically isolated system.

For simplicity, assume that the standing waves in the left-half form the fundamental mode for that portion (i.e. for the length $L/2$). Denote the frequency of this fundamental mode as $\nu_H$, and its max. amplitude (measured from the central line) as $A_H$.
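In terms of the standard string formula, the fundamental frequency of an ideal string fixed at both ends is $\nu = \dfrac{1}{2\ell}\sqrt{T/\mu}$, where $\ell$ is the vibrating length, $T$ the tension, and $\mu$ the mass per unit length. A minimal sketch in Python (the numerical values below are purely illustrative; they are not from the post):

```python
import math

def fundamental_frequency(length, tension, mu):
    """Fundamental frequency of an ideal string fixed at both ends:
    nu = (1 / (2 * length)) * sqrt(tension / mu)."""
    return math.sqrt(tension / mu) / (2.0 * length)

# Purely illustrative (made-up) values: T = 100 N, mu = 0.01 kg/m, L = 1 m.
T, mu, L = 100.0, 0.01, 1.0
nu_H = fundamental_frequency(L / 2, T, mu)  # only the left half vibrates
nu_full = fundamental_frequency(L, T, mu)   # fundamental of the full length
print(nu_H, nu_full)  # 100.0 50.0
```

Halving the vibrating length doubles the fundamental frequency; this is why, later in the post, the fundamental of the full length comes out as $\nu_H/2$.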

Next, at some instant of time $t = t_1$, suppose that the support in the middle is suddenly removed, taking care not to disturb the string in any way in the process. That is to say, we neither put any more energy into the system nor take any out of it in the process of removing the middle support.

Once the support is thus removed, the waves from the left-half can now travel to the right-half, get reflected from the right end-support, travel all the way to the left end-support, get reflected there, etc. Thus, they will travel back and forth, in both the directions.

Modeled as a two-point boundary-value/initial-condition (BV/IC) problem, assume that the system settles down into a steadily repeating pattern of some kind of standing waves.

The question now is:

What would be the pattern of the standing waves formed in the system at a time $t_F \gg t_1$?

The theory suggests that there is no unique answer!:

Since the support in the middle was exactly at the midpoint, removing it has the effect of suddenly doubling the vibrating length available to the string.

Now, simple maths of the normal modes tells you that the string can vibrate in the fundamental mode for the entire length, which means: the system should show standing waves of the frequency $\nu_F = \nu_H/2$.

However, there also are other, theoretically conceivable, answers.

For instance, it is also possible that the system settles into the first higher-harmonic mode. In this first higher-harmonic mode, it maintains the same frequency as earlier, i.e., $\nu_F = \nu_H$, but being an isolated system, it has to conserve its energy, and so, in this higher-harmonic mode, it must vibrate with a lower max. amplitude $A_F < A_H$. Thermodynamically speaking, since the energy is conserved in such a mode too, it certainly should be possible.

In fact, you can take the argument further, and say that any one or all of the higher harmonics (potentially an infinity of them) would be possible. After all, the system does not have to maintain a constant frequency or a constant max. amplitude; it only has to maintain the same energy.
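Incidentally, the "linear combination" option can be made quite concrete with plain linear-theory bookkeeping (which, of course, does not by itself settle the selection question raised below). If one projects the displacement profile existing at $t_1$—the fundamental of the half length on the left, with the right half quiescent—onto the normal modes of the full length, many modes come out nonzero. A sketch, assuming the profile is taken at its maximum-displacement instant with unit amplitude:

```python
import math

L = 1.0
N = 20000  # number of midpoint-rule subintervals

def y_initial(x):
    """Displacement profile at t1: fundamental of the left half-length,
    normalized to amplitude 1; the right half is quiescent."""
    return math.sin(2 * math.pi * x / L) if x <= L / 2 else 0.0

def mode_coefficient(n):
    """Coefficient b_n of the full-length mode sin(n*pi*x/L):
    b_n = (2/L) * integral_0^L y(x) sin(n*pi*x/L) dx (midpoint rule)."""
    dx = L / N
    total = sum(
        y_initial((i + 0.5) * dx) * math.sin(n * math.pi * ((i + 0.5) * dx) / L)
        for i in range(N)
    )
    return (2.0 / L) * total * dx

for n in range(1, 6):
    print(n, round(mode_coefficient(n), 4))
# Roughly: b_1 ~ 0.4244, b_2 = 0.5, b_3 ~ 0.2546, b_4 ~ 0, b_5 ~ -0.0606
```

So a naive linear decomposition excites an infinity of modes at once, which only sharpens the puzzle of why a real guitar ends up in just one.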

OK. That was the idealized model and its maths. Now let’s turn to reality.

Relevant empirical observations show that only a certain answer gets selected:

What do you actually observe in reality for systems that come close enough to the above mentioned idealized description? Let’s take a range of examples to get an idea of what kind of a show the real world puts up….

Consider, say, a violinist’s performance. He can continuously alter the vibrating length of the string with his finger, and thereby produce a continuous spectrum of frequencies. However, at any instant, for any given length of the vibrating part, the most dominant of all such frequencies is, actually, only the fundamental mode for that length.

A real violin does not come very close to our idealized example above. A flute is better, because its spectrum happens to be the purest among all musical instruments. What do we mean by a “pure” tone here? It means this: When a flutist plays a certain tone, say the middle “saa” (i.e. the middle “C”), the sound actually produced by the instrument does not significantly carry any higher harmonics. That is to say, when a flutist plays the middle “saa,” unlike the other musical instruments, the flute does not inadvertently also produce the “saa”s from any of the higher octaves. Its energy remains very strongly concentrated in a single tone, here, the middle “saa”. Thus, it is said to be a “pure” tone; it is not “contaminated” by any of the higher harmonics. (As to the lower harmonics for a given length, well, they are ruled out by the basic physics and maths.)

Now, if you take a flute of a variable length (something like a trombone, whose slide continuously varies the vibrating length) and try very suddenly doubling the length of the vibrating air column, you will find that, instead of producing a fainter sound of the same middle “saa”, the flute produces the next lower “saa”. (If you want, you can try it out more systematically in the laboratory by taking a telescopic assembly of cylinders and a tuning fork.)

Of course, really speaking, despite its pure tones, even the flute does not come close enough to our idealized description above. For instance, notice that in our idealized description, energy is put into the system only once, at $t_R$, and never again. On the other hand, in playing a violin or a flute we are continuously pumping in some energy; the system is also continuously dissipating its energy to its environment via the sound waves produced in the air. A flute, thus, is an open system, not an isolated one. Yet, despite the additional complexity introduced by its being an open system—and therefore, perhaps, a greater chance of being drawn into higher harmonic(s)—a variable-length flute is in reality always observed to “select” only the fundamental harmonic for a given length.

How about an actual guitar? Same thing. In fact, the guitar comes closest to our idealized description. And if you try plucking the string once and then, after a while, suddenly removing the finger from a fret, you will find that the guitar too “prefers” to settle down immediately into the fundamental harmonic for the new length. (Take an electric guitar so that, even as the sound grows fainter and fainter due to damping, you can still easily make out the change in the dominant tone.)

OK. Enough of empirical observations. Back to the connection of these observations with the theory of physics (and maths).

The question:

Thermodynamically, an infinity of tones is perfectly possible. Maths tells you that this infinity of tones is nothing but the set of the higher harmonics (and nothing else). Yet, in reality, only one tone gets selected. What gives?

What is the missing physics which makes the system settle into one and only one option—indeed an extreme option—out of an infinity of options which are, energetically speaking, equally possible?

Update on 18 June 2017:

Here is a statement of the problem in certain essential mathematical terms. See the three figures below:

The initial state of the string is what the following figure (Case 1) depicts. The max. amplitude is 1.0. Though the quiescent part looks longer than half the length, that is just an illusion of perception:

Case 1: Fundamental tone for the half length, extended over a half-length

The following figure (Case 2) is the mathematical idealization of the state in which an actual guitar string tends to settle. Note that the max. amplitude is greater (it’s $\sqrt{2}$) so as to make the energy of this state the same as that of Case 1.

Case 2: Fundamental tone for the full length, extended over the full length

The following figure (Case 3) depicts what is also mathematically possible for the final system state. However, it is not observed with actual guitars. Note that here the frequency and the wavelength are the same as in Case 1 (the first overtone of the full length coincides with the fundamental of the half length). The max. amplitude for this state is less than 1.0 (it’s $\dfrac{1}{\sqrt{2}}$) so that this state too carries exactly the same energy as in Case 1.

Case 3: The first overtone for the full length, extended over the full length
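As a cross-check on the amplitudes $\sqrt{2}$ and $\dfrac{1}{\sqrt{2}}$ quoted above: the total energy of a standing mode $y = A \sin(kx)\cos(\omega t)$ over a vibrating length $\ell$ is $E = \tfrac{1}{4}\mu\,\ell\,\omega^2 A^2$. A sketch with made-up values of $\mu$ and $\omega_H$ (only the ratios matter):

```python
import math

def mode_energy(mu, length, omega, amplitude):
    """Total energy of the standing mode y = A sin(k x) cos(omega t)
    over a vibrating length ell: E = (1/4) * mu * ell * omega^2 * A^2."""
    return 0.25 * mu * length * omega**2 * amplitude**2

mu = 0.01                      # mass per unit length (made-up value)
L = 1.0                        # full string length (made-up value)
omega_H = 2 * math.pi * 100.0  # angular frequency in Case 1 (made-up value)

E1 = mode_energy(mu, L / 2, omega_H, 1.0)               # Case 1
E2 = mode_energy(mu, L, omega_H / 2, math.sqrt(2.0))    # Case 2
E3 = mode_energy(mu, L, omega_H, 1.0 / math.sqrt(2.0))  # Case 3
print(E1, E2, E3)  # all three agree (up to float rounding)
```

All three states indeed carry the same energy, as the figures claim.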

Thus, the problem, in short, is:

The transition observed in reality is: $T1:$ Case 1 $\rightarrow$ Case 2.

However, the transition $T2:$ Case 1 $\rightarrow$ Case 3 also is possible by the mathematics of standing waves and thermodynamics (or more basically, by that bedrock on which all modern physics rests, viz., the calculus of variations). Yet, it is not observed.

Why does only $T1$ occur? Why not $T2$, or even a linear combination of both? That’s the problem, in essence.

While attempting to answer it, also consider this: Can an isolated system like the one depicted in Case 1 at all undergo a transition of modes?

Enjoy!

Update on 18th June 2017 is over.

That was the classical mechanics problem I said I happened to think of, recently. (And it was the one which took me away from the program of answering the E&R questions.)

Find it interesting? Want to give it a try?

If you do give it a try and if you reach an answer that seems satisfactory to you, then please do drop me a line. We can then cross-check our notes.

And of course, if you find this problem (or something similar) already solved somewhere, then my request to you would be stronger: do let me know about the reference!

In the meanwhile, I will try to go back to (or at least towards) completing the task of answering the E&R questions. [I do, however, also plan to post a slightly edited version of this post at iMechanica.]

Update History:

07 June 2017: Published on this blog

8 June 2017, 12:25 PM, IST: Added the figure and the section headings.

8 June 2017, 15:30 hrs, IST: Added the link to the brief version posted at iMechanica.

18 June 2017, 12:10 hrs, IST: Added the diagrams depicting the mathematical abstraction of the problem.

A Song I Like:

(Marathi) “olyaa saanj veli…”
Music: Avinash-Vishwajeet
Singers: Swapnil Bandodkar, Bela Shende
Lyrics: Ashwini Shende

# See, how hard I am trying to become an Approved (Full) Professor of Mechanical Engineering in SPPU?—4

In this post, I provide my answer to the question which I had raised last time, viz., about the differences between the $\Delta$, the $\text{d}$, and the $\delta$ (the first two, of the usual calculus, and the last one, of the calculus of variations).

Some pre-requisite ideas:

A system is some physical object chosen (or isolated) for study. For continua, it is convenient to select a region of space for study, in which case that region of space (holding some physical continuum) may also be regarded as a system. The system boundary is an abstraction.

A state of a system denotes a physically unique and reproducible condition of that system. State properties are the properties or attributes that together uniquely and fully characterize a state of a system, for the chosen purposes. The state is an axiom, and state properties are its corollary.

State properties for continua are typically expressed as functions of space and time. For instance, pressure, temperature, volume, energy, etc. of a fluid are all state properties. Since state properties uniquely define the condition of a system, they represent definite points in an appropriate, abstract, (possibly) higher-dimensional state space. For this reason, state properties are also called point functions.

A process (synonymous with system evolution) is a succession of states. In classical physics, the succession (or progression) is taken to be continuous. In quantum mechanics, there is no notion of a process; see later in this post.

A process is often represented as a path in a state space that connects the two end-points of the starting and ending states. A parametric function defined over the length of a path is called a path function.

A cyclic process is one that has the same start and end points.

During a cyclic process, a state function returns to its initial value. However, a path function does not necessarily return to the same value over every cyclic change—it depends on which particular path is chosen. For instance, if you take a round trip from point $A$ to point $B$ and back, you may spend some amount of money $m$ if you take one route but another amount $n$ if you take another route. In both cases you do return to the same point viz. $A$, but the amount you spend is different for each route. Your position is a state function, and the amount you spend is a path function.

[I may make the above description a bit more rigorous later on (by consulting a certain book which I don’t have handy right away (and my notes of last year are gone in the HDD crash)).]

The $\Delta$, the $\text{d}$, and the $\delta$:

The $\Delta$ denotes a sufficiently small but finite, locally existing difference between different parts of a system. Typically, since state properties are defined as (continuous) functions of space and time, what the $\Delta$ represents is a finite change in some state property function across two different but adjacent points in space (or two nearby instants in time), for a given system.

The $\Delta$ is a local quantity, because it is defined and evaluated around a specific point of space and/or time. In other words, an instance of $\Delta$ is evaluated at a fixed $x$ or $t$. The $\Delta x$ simply denotes a change of position; it may or may not mean a displacement.

The $\text{d}$ (i.e. the infinitesimal) is nothing but the $\Delta$ taken in some appropriate limiting process to the vanishingly small limit.

Since $\Delta$ is locally defined, so is the infinitesimal (i.e. $\text{d}$).
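A tiny numerical illustration of this relationship (the function $x^3$ and the point $x_0 = 2$ are arbitrary choices, not from the post): the finite, local ratio $\Delta f / \Delta x$, evaluated around a fixed point, tends to the derivative as $\Delta x$ is shrunk.

```python
def f(x):
    """An arbitrary smooth state-property function (illustrative only)."""
    return x ** 3

x0 = 2.0  # the Delta is evaluated locally, around this fixed point
for dx in (0.1, 0.01, 0.001):
    delta_f = f(x0 + dx) - f(x0)  # finite, local difference: the Delta
    print(dx, delta_f / dx)       # ratio tends to f'(2) = 12 as dx -> 0
```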

The $\delta$ of CoV is completely different from the above two concepts.

The $\delta$ is a sufficiently small but global difference between the states (or paths) of two different, abstract, but otherwise identical views of the same physically existing system.

Considering the fact that an abstract view of a system is itself a system, $\delta$ also may be regarded as a difference between two systems.

Though differences in paths are not only possible but also routinely used in CoV, in this post, to keep matters simple, we will mostly consider differences in the states of the two systems.

In CoV, the two states (of the two systems) are so chosen as to satisfy the same Dirichlet (i.e. field) boundary conditions separately in each system.

The state function may be defined over an abstract space. In this post, we shall not pursue this line of thought. Thus, the state function will always be a function of the physical, ambient space (defined in reference to the extensions and locations of concretely existing physical objects).

Since a state of a system of nonzero size can only be defined by specifying its values for all parts of a system (of which it is a state), a difference between states (of the two systems involved in the variation $\delta$) is necessarily global.

In defining $\delta$, both the systems are considered only abstractly; it is presumed that at most one of them may correspond to an actual state of a physical system (i.e. a system existing in the physical reality).

The idea of a process, i.e. the very idea of a system evolution, necessarily applies only to a single system.

What the $\delta$ represents is not an evolution because it does not represent a change in a system, in the first place. The variation, to repeat, represents a difference between two systems satisfying the same field boundary conditions. Hence, there is no evolution to speak of. When compressed air is passed into a rubber balloon, its size increases. This change occurs over certain time, and is an instance of an evolution. However, two rubber balloons already inflated to different sizes share no evolutionary relation with each other; there is no common physical process connecting the two; hence no change occurring over time can possibly enter their comparative description.

Thus, the “change” denoted by $\delta$ is incapable of representing a process or a system evolution. In fact, the word “change” itself is something of a misnomer here.

Text-books often stupidly try to capture the aforementioned idea by saying that $\delta$ represents a small and possibly finite change that occurs without any elapse of time. Apart from the mind-numbing idea of a finite change occurring over no time (or equally stupefying ideas which it suggests, viz., a change existing at literally the same instant of time, or, alternatively, a process of change that somehow occurs to a given system but “outside” of any time), what they, in a way, continue to suggest also is the erroneous idea that we are working with only a single, concretely physical system, here.

But that is not the idea behind $\delta$ at all.

To complicate the matters further, no separate symbol is used when the variation $\delta$ is made vanishingly small.

In the primary sense of the term variation (or $\delta$), the difference it represents is finite in nature. The variation is basically a function of space (and time), and at every value of $x$ (and $t$), the value of $\delta$ is finite, in the primary sense of the word. Yes, these values can be made vanishingly small, though the idea of the limits applied in this context is different. (Hint: Expand each of the two state functions in a power series and relate each pair of corresponding terms via a separate parameter. Then put the difference in each parameter through a limiting process so that it vanishes. You may also use the Fourier expansion.)
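Here is one way to make the hint concrete in a sketch: take two abstract states satisfying the same Dirichlet conditions $y(0) = y(L) = 0$, relate them through a parameter $\epsilon$, and observe that the variation $\delta y$ is a finite function of $x$ for each $\epsilon$, shrinking everywhere at once as $\epsilon \to 0$. (The specific sine modes below are just an illustrative choice.)

```python
import math

L = 1.0

def y_actual(x):
    """One abstract state, satisfying y(0) = y(L) = 0."""
    return math.sin(math.pi * x / L)

def y_varied(x, eps):
    """A second abstract state with the same Dirichlet boundary conditions;
    eps parameterizes the (finite) admixture of the second sine mode."""
    return math.sin(math.pi * x / L) + eps * math.sin(2 * math.pi * x / L)

def delta_y(x, eps):
    """The variation: itself a function of x, defined globally over [0, L],
    and vanishing at both supports for every eps."""
    return y_varied(x, eps) - y_actual(x)

# The variation is finite at each x; shrinking eps shrinks it everywhere at once.
for eps in (0.5, 0.05, 0.005):
    print(eps, delta_y(L / 4, eps))
```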

The difference represented by $\delta$ is between two abstract views of a system. The two systems are related only in an abstract view, i.e., only in (the mathematical) thought. In the CoV, they are supposed as connected, but the connection between them is not concretely physical because there are no two separate physical systems concretely existing, in the first place. Both the systems here are mathematical abstractions—they first have been abstracted away from the real, physical system actually existing out there (of which there is only a single instance).

But, yes, there is a sense in which we can say that $\delta$ does have a physical meaning: it carries the same physical units as for the state functions of the two abstract systems.

An example from biology:

Here is an example of the differences between two different paths (rather than two different states).

Plot the height $h(t)$ of a growing sapling at different times, and connect the dots to yield a continuous graph of the height as a function of time. The difference in the heights of the sapling at two different instants is $\Delta h$. Now consider two different saplings planted at the same time, and assume that they grow to the same final height at the end of some definite time period (just pick some moment where their graphs cross each other). Abstractly regarding them as some sort of imaginary plants, if you plot the difference between their two graphs, that is the variation, or $\delta h(t)$, in the height-function of either. The variation itself is a function (here, of time); it has the units, of course, of m.
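The same example in a sketch (the growth laws below are made up; the only requirement is that the two curves meet at the chosen final instant):

```python
# Hypothetical growth curves for two (imaginary) saplings, both planted at
# t = 0 and reaching the same height at t = 10 (where the graphs cross).
def h1(t):
    return 0.1 * t         # height in metres; linear growth law (made up)

def h2(t):
    return 0.01 * t ** 2   # quadratic growth law (made up); h2(10) == h1(10)

# Delta h: difference in ONE sapling's height across two nearby instants (local).
delta_h = h1(5.1) - h1(5.0)

# delta h(t), the variation: the difference BETWEEN the two height-functions
# at the same t -- itself a function of time, carrying the units of metres.
def var_h(t):
    return h2(t) - h1(t)

print(delta_h)      # about 0.01 m
print(var_h(5.0))   # about -0.25 m
print(var_h(10.0))  # 0.0 m: the two graphs cross at the final instant
```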

Summary:

The $\Delta$ is a local change inside a single system, and $\text{d}$ is its limiting value, whereas the $\delta$ is a difference across two abstract systems differing in their global states (or global paths), and there is no separate symbol to capture this object in the vanishingly small limit.

Exercises:

Consider one period of the function $y = A \sin(x)$, say over the interval $[0,2\pi]$, where $A = a$ is a small, real-valued constant. Now, set $A = 1.1a$. Is the change/difference here a $\delta$ or a $\Delta$? Why or why not?

Now, take the derivative, i.e., $y' = A \cos(x)$, with $A = a$ once again. Is the change/difference here a $\delta$ or a $\Delta$? Why or why not?

Which one of the above two is a bigger change/difference?

Also consider this angle: Taking the derivative did affect the whole function. If so, why is it that we said that $\text{d}$ was necessarily a local change?

An important and special note:

The above exercises, I am sure, many (though not all) of the Officially Approved Full Professors of Mechanical Engineering at the Savitribai Phule Pune University and COEP would be able to do correctly. But the question I posed last time was: Would it therefore be possible for them to spell out the physical meaning of the variation, i.e., $\delta$? I continue to think not. And, importantly, even those who do solve the above exercises successfully wouldn’t be too sure about their own answers. Upon just a little deeper probing, they would simply throw up their hands. [Ditto, for many American physicists.] And this, even though conceptual clarity is required in applications.

(I am ever willing and ready to change my mind about it, but doing so would need some actual evidence—just the way my (continuing) position had been derived, in the first place, from actual observations of them.)

The reason I made this special note was because I continue to go jobless, and nearly bank balance-less (and also, nearly cashless). And it all is basically because of folks like these (and the Indians like the SPPU authorities). It is their fault. (And, no, you can’t try to lift what is properly their moral responsibility off their shoulders and then, in fact, go even further, and attempt to place it on mine. Don’t attempt doing that.)

A Song I Like:

[Maybe I have run this song before. If yes, I will replace it with some other song tomorrow or so. No, I had not.]

Hindi: “Thandi hawaa, yeh chaandani suhaani…”
Music and Singer: Kishore Kumar
Lyrics: Majrooh Sultanpuri

[A quick ‘net search on plagiarism tells me that the tune of this song was lifted from Julius La Rosa’s 1955 song “Domani.” I heard that song for the first time only today. I think that the lyrics of the Hindi song are better. As to renditions, I like Kishore Kumar’s version better.]

[Minor editing may be done later on and the typos may be corrected, but the essentials of my positions won’t be. Mostly done right today, i.e., on 06th January, 2017.]

[E&OE]
