Exactly what does this script show?

Update on 02 March 2018, 15:34 IST: I have now added another, hopefully better, version of the script (but have also kept the old one intact); see below in this post. The new script, too, comes without comments.


Here is a small Python script which helps you visualize something about a state of stress in 2D.

If you are interested in understanding the concept of stress, then do run it, read it, try to understand what it does, and then, if you are still interested, try to answer this “simple” little question:

Exactly what does this script show? What exactly is it that you are visualizing here?


I had written a few more notes and inline comments in the script, but have deliberately deleted most of them—or at least the ones which might have given you a clue towards answering the above question. I didn’t want to spoil your fun, that’s why.

Once you all finish giving it a try, I will then post another blog-entry here, giving my answer to that question (and in the process, bringing back all the deleted notes and comments).

Anyway, here is the script:


'''
A simple script to help visualize *something* about
a 2D stress tensor.

--Ajit R. Jadhav. Version: 01 March 2018, 21:39 HRS IST.
'''

import math
import numpy as np
import matplotlib.pyplot as plt

# Specifying the input stress
# Note:
# While plotting, we set the x- and y-limits to -150 to +150,
# and enforce the aspect ratio of 1. That is to say, we do not
# allow MatPlotLib to automatically scale the axes, because we
# want to appreciate the changes in the shapes as well as the sizes in
# the plot.
#
# Therefore, all the input stress-components should be kept
# to within the -100 to +100 (both inclusive) range.
#
# Specify the stress state in this order: xx, xy; yx, yy
# The commas and the semicolon are necessary.

sStress = "-100, 45; 90, 25"

axes = plt.axes()
axes.set_xlim((-150, 150))
axes.set_ylim((-150, 150))
axes.set_aspect('equal', 'datalim')
plt.title(
    "A visualization of *something* about\n" \
    "the 2D stress-state [xx, xy; yx, yy] = [%s]" \
    % sStress)

mStress = np.matrix(sStress)
mStressT = np.transpose(mStress)

mUnitNormal = np.zeros((2, 1))
mTraction = np.zeros((2, 1))

nOrientations = 18
dIncrement = 360.0 / float(nOrientations)
for i in range(0, nOrientations):
    dThetaDegrees = float(i) * dIncrement
    dThetaRads = dThetaDegrees * math.pi / 180.0
    mUnitNormal = [round(math.cos(dThetaRads), 6), round(math.sin(dThetaRads), 6)]
    mTraction = mStressT.dot(mUnitNormal)
    if i == 0:
        plt.plot((0, mTraction[0, 0]), (0, mTraction[0, 1]), 'black', linewidth=1.0)
    else:
        plt.plot((0, mTraction[0, 0]), (0, mTraction[0, 1]), 'gray', linewidth=0.5)
    plt.plot(mTraction[0, 0], mTraction[0, 1], marker='.',
             markeredgecolor='gray', markerfacecolor='gray', markersize=5)
    plt.text(mTraction[0, 0], mTraction[0, 1], '%d' % dThetaDegrees)
    plt.pause(0.05)

plt.show()


Update on 02 March 2018, 15:34 IST:

Here is a second version of the script that does something similar (but continues to lack explanatory comments). One advantage with this version is that you can copy-paste the script into some file, say, MyScript.py, and invoke it from the command line, giving the stress components and the number of orientations as command-line inputs, e.g.,

python MyScript.py "100, 0; 0, 50" 12

which makes it easier to try out different states of stress.

The revised code is here:


'''
A simple script to help visualize *something* about
a 2D stress tensor.

--Ajit R. Jadhav. 
History: 
06 March 2018, 10:43 IST: 
In computeTraction(), changed the mUnitNormal code to make it np.matrix() rather than python array
02 March 2018, 15:39 IST; Published the code
'''

import sys
import math
import numpy as np
import matplotlib.pyplot as plt

# Specifying the input stress
# Note:
# While plotting, we set the x- and y-limits to -150 to +150,
# and enforce the aspect ratio of 1. That is to say, we do not
# allow MatPlotLib to automatically scale the axes, because we
# want to appreciate the changes in the shapes as well as the sizes in
# the plot.
#
# Therefore, all the input stress-components should be kept
# to within the -100 to +100 (both inclusive) range.
#
# Specify the stress state in this order: xx, xy; yx, yy
# The commas and the semicolon are necessary.
# If you run the program from a command-line, you can also
# specify the input stress string in quotes as the first
# command-line argument, and no. of orientations, as the
# second. e.g.:
# python MyScript.py "100, 50; 50, 0" 12
##################################################

gsStress = "-100, 45; 90, 25"
gnOrientations = 18


##################################################

def plotArrow(vTraction, dThetaDegs, clr, axes):
    dx = round(vTraction[0], 6)
    dy = round(vTraction[1], 6)
    if not (math.fabs(dx) < 10e-6 and math.fabs(dy) < 10e-6):
        axes.arrow(0, 0, dx, dy, head_width=3, head_length=9.0, length_includes_head=True, fc=clr, ec=clr)
    axes.annotate('%d' % dThetaDegs, xy=(dx, dy), color=clr)


##################################################

def computeTraction(mStressT, dThetaRads):
    vUnitNormal = [round(math.cos(dThetaRads), 6), round(math.sin(dThetaRads), 6)]
    mUnitNormal = np.reshape(vUnitNormal, (2,1))
    mTraction = mStressT.dot(mUnitNormal)
    vTraction = np.squeeze(np.asarray(mTraction))
    return vTraction


##################################################

def main():
    axes = plt.axes()
    axes.set_label("label")
    axes.set_xlim((-150, 150))
    axes.set_ylim((-150, 150))
    axes.set_aspect('equal', 'datalim')
    plt.title(
        "A visualization of *something* about\n" \
        "the 2D stress-state [xx, xy; yx, yy] = [%s]" \
        % gsStress)

    mStress = np.matrix(gsStress)
    mStressT = np.transpose(mStress)
    vTraction = computeTraction(mStressT, 0)
    plotArrow(vTraction, 0, 'red', axes)
    dIncrement = 360.0 / float(gnOrientations)
    for i in range(1, gnOrientations):
        dThetaDegrees = float(i) * dIncrement
        dThetaRads = dThetaDegrees * math.pi / 180.0
        vTraction = computeTraction(mStressT, dThetaRads)
        plotArrow(vTraction, dThetaDegrees, 'gray', axes)
        plt.pause(0.05)
    plt.show()


##################################################

if __name__ == "__main__":
    nArgs = len(sys.argv)
    if nArgs > 1:
        gsStress = sys.argv[1]
    if nArgs > 2:
        gnOrientations = int(sys.argv[2])
    main()
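
A small aside about the code itself: newer releases of NumPy discourage the np.matrix class that both of the above scripts lean on (including its convenient parsing of the "xx, xy; yx, yy" string). If you ever want to avoid it, a plain-ndarray equivalent of the parsing and of computeTraction() might look something like the following. (This is only my sketch, not a part of the scripts above; the function names here are my own.)

import math
import numpy as np

def parseStress(sStress):
    # Parse a string like "-100, 45; 90, 25" into a 2x2 ndarray.
    rows = [sRow.split(',') for sRow in sStress.split(';')]
    return np.array([[float(s) for s in sRow] for sRow in rows])

def computeTraction(mStressT, dThetaRads):
    # The same computation as in the scripts, but with plain ndarrays.
    vUnitNormal = np.array([math.cos(dThetaRads), math.sin(dThetaRads)])
    return mStressT @ vUnitNormal   # a (2,) traction vector

if __name__ == "__main__":
    mStress = parseStress("-100, 45; 90, 25")
    mStressT = mStress.T
    print(computeTraction(mStressT, math.radians(30.0)))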



OK, have fun, and if you care to, do let me know your answers, guesses, etc…..


Oh, BTW, I have already taken a version of my last post also to iMechanica, which led to a bit of an interaction there too… However, I had to abruptly cut short all the discussions on the topic because I unexpectedly got way too busy in the affiliation- and accreditation-related work. It was only today that I’ve got a bit of a breather, and so could write this script and this post. Anyway, if you are interested in the concept of stress—issues like what it actually means and all that—then do check out my post at iMechanica, too, here [^].


… Happy Holi, take care to use only safe colors—and also take care not to bother those people who do not want to be bothered by you—by your “play”, esp. the complete strangers…

OK, take care and bye for now. ….


A Song I Like:

(Marathi [Am I right?]) “rang he nave nave…”
Music: Aditya Bedekar
Singer: Shasha Tirupati
Lyrics: Yogesh Damle


Machine “Learning”—An Entertainment [Industry] Edition

Yes, “Machine ‘Learning’,” too, has been one of my “research” interests for some time now. … Machine learning, esp. ANN (Artificial Neural Networks), esp. Deep Learning. …

Yesterday, I wrote a comment about it at iMechanica. Though it was made in a certain technical context, today I thought that the comment could, perhaps, make sense to many of my general readers, too, if I supply a bit of context to it. So, let me report it here (after a bit of editing). But before coming to my comment, let me first give you the context in which it was made:


Context for my iMechanica comment:

It all began with a fellow iMechanician, one Mingchuan Wang, writing a post titled “Is machine learning a research priority now in mechanics?” at iMechanica [^]. Biswajit Banerjee responded by pointing out that

“Machine learning includes a large set of techniques that can be summarized as curve fitting in high dimensional spaces. [snip] The usefulness of the new techniques [in machine learning] should not be underestimated.” [Emphasis mine.]

Then Biswajit had pointed out an arXiv paper [^] in which machine learning was reported as having produced some good DFT-like results for quantum mechanical simulations, too.

A word about DFT for those who (still) don’t know about it:

DFT, i.e. Density Functional Theory, is a “formally exact description of a many-body quantum system through the density alone. In practice, approximations are necessary” [^]. DFT thus is a computational technique; it is used for simulating the electronic structure of quantum mechanical systems involving several hundreds of electrons (i.e. hundreds of atoms). Here is the obligatory link to the Wiki [^], though a better introduction perhaps appears here [(.PDF) ^]. Here is a StackExchange page on its limitations [^].

Trivia: Walter Kohn received a Nobel (the 1998 Chemistry Prize, shared with John Pople) for the development of DFT. It was a rather rare instance of a Nobel being awarded essentially for an invention—not a discovery. But the Nobel committee, once again, turned out to have put old Nobel’s money in the right place. Even if the work itself was only an invention, it directly led to a lot of discoveries in condensed matter physics! That was because DFT was fast—it was fast enough that it could bring the physics of the larger quantum systems within the scope of (any) study at all!

And now, it seems, Machine Learning has advanced enough to be able to produce results similar to those of DFT, but without using any QM theory at all! The computer does have to “learn” its “art” (i.e. “skill”), but it does so from the results of previous DFT-based simulations, not from the theory at the base of DFT. But once the computer does that—“learning”—and the paper shows that it is possible for a computer to do that—it is able to compute very similar-looking simulations much, much faster than even the rather fast technique of DFT itself.
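
An aside, just to make this idea of “learning” from the results of previous simulations concrete in the crudest possible terms. (The little sketch below is entirely mine; it has nothing to do with the actual method used in the paper, and the function names and numbers in it are all made up.) The recipe is: treat the expensive simulator as a black box, sample it at a modest number of points, fit a cheap surrogate through those samples, and from then on query only the surrogate:

import numpy as np
from numpy.polynomial import Polynomial

# A stand-in for an "expensive" simulation. In reality this would be a
# DFT run (or any other costly solver); here it is just a made-up smooth
# function of a single input parameter.
def expensive_simulation(x):
    return np.sin(3.0 * x) * np.exp(-0.5 * x) + 0.1 * x

rng = np.random.default_rng(0)

# Step 1: run the expensive simulator at a modest number of sample points.
x_train = np.sort(rng.uniform(0.0, 4.0, size=60))
y_train = expensive_simulation(x_train)

# Step 2: "learn" a cheap surrogate -- nothing fancier here than a
# least-squares polynomial fit, i.e., curve fitting, literally.
surrogate = Polynomial.fit(x_train, y_train, deg=10)

# Step 3: from now on, query only the surrogate.
for x in np.linspace(0.0, 4.0, 9):
    print("x = %.2f   expensive = %+.4f   surrogate = %+.4f"
          % (x, expensive_simulation(x), surrogate(x)))

The point, of course, is only that the surrogate is dumb but fast; whether such “dumbness” is good enough or not depends entirely on the application area, which is exactly Biswajit’s point.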

OK. Context over. Now, here in the next section, is my comment from yesterday at iMechanica. (Also note that the previous exchange on this thread at iMechanica had occurred almost a year ago.) Since it has been edited quite a bit, I will not format it using a quotation block.


[An edited version of my comment begins]

A very late comment, but still, just because something struck me only this late… May as well share it….

I think that, as Biswajit points out, it’s a question of matching a technique to an application area where it is likely to be a “good enough” fit.

I mean to say, consider fluid dynamics, and contrast it to QM.

In (C)FD, the nonlinearity present in the advective term is a major headache. As far as I can gather, this nonlinearity has all but been “proved” as the basic cause behind the phenomenon of turbulence. If so, using machine learning in CFD would be, by the simple-minded “analysis”, a basically hopeless endeavour. The very idea of using a potential presupposes differential linearity. Therefore, machine learning may be thought of as viable in computational Quantum Mechanics (viz. DFT), but not in the more mundane, classical mechanical, CFD.

But then, consider the role of the BCs and the ICs in any simulation. It is true that if you don’t handle nonlinearities right, then as the simulation time progresses, errors are soon enough going to multiply (sort of), and lead to a blowup—or at least a dramatic departure from a realistic simulation.

But then, also notice that there still is some small but nonzero interval of time which has to pass before a really bad amplification of the errors actually begins to occur. Now what if a new “BC-IC” gets imposed right within that time-interval, i.e., while the accuracy still remains “good enough”? In this case, you can expect the simulation to remain “sufficiently” realistic-looking for a long, very long time!

Something like that seems to have been the line of thought implicit in the results reported by this paper: [(.PDF) ^].

Machine learning seems to work even in CFD, because in an interactive session, a new “modified BC-IC” is, every now and then, manually introduced by none other than the end-user himself! And, the location of the modification is precisely the region from where the flow in the rest of the domain would get most dominantly affected during the subsequent, small, time evolution.

It’s somewhat like an electron rushing through a cloud chamber. By the uncertainty principle, the electron “path” sure begins to get hazy immediately after it is “measured” (i.e. absorbed and re-emitted) by a vapor molecule at a definite point in space. The uncertainty in the position grows quite rapidly. However, what actually happens in a cloud chamber is that, before this cone of haziness becomes too big, comes along another vapor molecule, and “zaps” i.e. “measures” the electron back on to a classical position. … After a rapid succession of such going-hazy-getting-zapped process, the end result turns out to be a very, very classical-looking (line-like) path—as if the electron always were only a particle, never a wave.

Conclusion? Be realistic about how smart the “dumb” “curve-fitting” involved in machine learning can at all get. Yet, at the same time, also remain open to all the application areas where it can be made to work—even including those areas where, “intuitively”, you wouldn’t expect it to have any chance to work!

[An edited version of my comment is over. Original here at iMechanica [^]]



“Boy, we seem to have covered a lot of STEM territory here… Mechanics, DFT, QM, CFD, nonlinearity. … But where is either the entertainment or the industry you had promised us in the title?”

You might be saying that….

Well, the CFD paper I cited above was about the entertainment industry. It was, in particular, about the computer games industry. Go check out SoHyeon Jeong’s Web site for more cool videos and graphics [^], all using machine learning.


And, here is another instance connected with entertainment, even though now I am going to make it (mostly) explanation-free.

Check out the following piece of art—a watercolor landscape of a monsoon-time but placid sea-side, in fact. Let me just say that a certain famous artist produced it; in any case, the style is plain unmistakable. … Can you name the artist simply by looking at it? See the picture below:

A sea beach in the monsoons. Watercolor.

If you are unable to name the artist, then check out this story here [^], and a previous story here [^].


A Song I Like:

And finally, to those who have always loved Beatles’ songs…

Here is one song which, I am sure, most of you had never heard before. In any case, it came to be distributed only recently. When and where was it recorded? For both the song and its recording details, check out this site: [^]. Here is another story about it: [^]. And, if you liked what you read (and heard), here is some more stuff of the same kind [^].


Endgame:

I am of the Opinion that 99% of the “modern” “artists” and “music composers” ought to be replaced by computers/robots/machines. Whaddya think?

[Credits: “Endgame” used to be the way Mukul Sharma would end his weekly Mindsport column in the yesteryears’ Sunday Times of India. (The column perhaps also used to appear in The Illustrated Weekly of India before ToI began running it; at least I have a vague recollection of something of that sort, though can’t be quite sure. … I would be a school-boy back then, when the Weekly perhaps ran it.)]


An interesting problem from the classical mechanics of vibrations

Update on 18 June 2017:
Added three diagrams depicting the mathematical abstraction of the problem; see near the end of the post. Also added one more consideration by way of an additional question.


TL;DR: A very brief version of this post is now posted at iMechanica; see here [^].


How I happened to come to formulate this problem:

As mentioned in my last post, I had started writing down my answers to the conceptual questions from Eisberg and Resnick’s QM text. However, as soon as I began doing that (typing out my answer to the first question from the first chapter), almost predictably, something else happened.

Since it anyway was QM that I was engaged with, somehow, another issue from QM—one which I had thought about a bit some time ago—happened to now just surface up in my mind. And it was an interesting issue. Back then, I had not really tried to reach an answer, and even now, I realized, I did not have a very satisfactory answer to it, not even in purely conceptual terms. Naturally, my mind remained engaged in thinking about this second QM problem for a while.

In trying to come to terms with this QM problem (of my own making, not E&R’s), I now tried to think of some simple model problem from classical mechanics that might capture at least some aspects of this QM issue. Thinking a bit about it, I realized that I had not read anything about this classical mechanics problem during my [very] limited studies of classical mechanics.

But since it appeared simple enough—heck, it was just classical mechanics—I now tried to reason through it. I thought I “got” it. But then, right the next day, I began doubting my own answer—with very good reasons.

… By now, I had no option but to keep aside the more scholarly task of writing down answers to the E&R questions. The classical problem of my own making had begun to become interesting all by itself. Naturally, even though I was not procrastinating, I still got away from E&R—I got diverted.

I made some false starts even in the classical version of the problem, but finally, today, I could find some way through it—one which I think is satisfactory. In this post, I am going to share this classical problem. See if it interests you.


Background:

Consider an idealized string tautly held between two fixed end supports that are a distance L apart; see the figure below. The string can be put into a state of vibrations by plucking it. There is a third support exactly at the middle; it can be removed at will.

[Figure: a taut string held between two fixed end supports a distance L apart, with a removable third support exactly at the middle.]

Assume all the ideal conditions. For instance, assume perfectly rigid and unyielding supports, and a string that is massive (i.e., one which has a lineal mass density; for simplicity, assume this density to be constant over the entire string length) but of zero thickness. The string also is perfectly elastic and has zero internal friction of any sort. Assume that the string is surrounded by vacuum (so that the vibrational energy of the string does not leak outside the system). Assume the absence of any other forces such as gravitational, electrical, etc. Also assume that the middle support, when it remains touching the string, does not allow any leakage of the vibrational energy from one part of the string to the other. Feel free to make further suitable assumptions as necessary.

The overall system here consists of the string (sans the supports, whose only role is to provide the necessary boundary conditions).

Initially, the string is stationary. Then, with the middle support touching the string, the left-half of the string is made to undergo oscillations by plucking it somewhere in the left-half only, and immediately releasing it. Denote the instant of the release as, say t_R. After the lapse of a sufficiently long time period, assume that the left-half of the system settles down into a steady-state standing wave pattern. Given our assumptions, the right-half of the system continues to remain perfectly stationary.

The internal energy of the system at the initial instant, say t_0 (i.e., before the plucking), is 0. Energy is put into the system only once, at t_R, and never again. Thus, for all times t > t_R, the system behaves as a thermodynamically isolated system.

For simplicity, assume that the standing waves in the left-half form the fundamental mode for that portion (i.e. for the length L/2). Denote the frequency of this fundamental mode as \nu_H, and its max. amplitude (measured from the central line) as A_H.

Next, at some instant of time t = t_1, suppose that the support in the middle is suddenly removed, taking care not to disturb the string in any way in the process. That is to say, in removing the middle support, we neither put any more energy into the system nor take any out of it.

Once the support is thus removed, the waves from the left-half can now travel to the right-half, get reflected from the right end-support, travel all the way to the left end-support, get reflected there, etc. Thus, they will travel back and forth, in both the directions.

Modeled as a two-point BV/IC problem, assume that the system settles down into a steadily repeating pattern of some kind of standing waves.

The question now is:

What would be the pattern of the standing waves formed in the system at a time t_F \gg t_1?


The theory suggests that there is no unique answer!:

Here is one obvious answer:

Since the support in the middle was exactly at the midpoint, removing it has the effect of suddenly doubling the length of the string.

Now, simple maths of the normal modes tells you that the string can vibrate in the fundamental mode for the entire length, which means: the system should show standing waves of the frequency \nu_F = \nu_H/2.

However, there also are other, theoretically conceivable, answers.

For instance, it is also possible that the system gets settled into the first higher-harmonic mode. In the very first higher-harmonic mode, it will maintain the same frequency as earlier, i.e., \nu_F = \nu_H, but being an isolated system, it has to conserve its energy, and so, in this higher harmonic mode, it must vibrate with a lower max. amplitude A_F < A_H. Thermodynamically speaking, since the energy is conserved also in such a mode, it also should certainly be possible.

In fact, you can take the argument further, and say that any one or all of the higher harmonics (potentially an infinity of them) would be possible. After all, the system does not have to maintain a constant frequency or a constant max. amplitude; it only has to maintain the same energy.

OK. That was the idealized model and its maths. Now let’s turn to reality.


Relevant empirical observations show that only a certain answer gets selected:

What do you actually observe in reality for systems that come close enough to the above mentioned idealized description? Let’s take a range of examples to get an idea of what kind of a show the real world puts up….

Consider, say, a violinist’s performance. He can continuously alter the vibrating length of a string with his finger, and thereby produce a continuous spectrum of frequencies. However, at any instant, for any given length of the vibrating part, the most dominant of all such frequencies is, actually, only the fundamental mode for that length.

A real violin does not come very close to our idealized example above. A flute is better, because its spectrum happens to be the purest among all musical instruments. What do we mean by a “pure” tone here? It means this: When a flutist plays a certain tone, say the middle “saa” (i.e. the middle “C”), the sound actually produced by the instrument does not significantly carry any higher harmonics. That is to say, when a flutist plays the middle  “saa,” unlike the other musical instruments, the flute does not inadvertently go on to produce also the “saa”s from any of the higher octaves. Its energy remains very strongly concentrated in only a single tone, here, the middle “saa”. Thus, it is said to be a “pure” tone; it is not “contaminated” by any of the higher harmonics. (As to the lower harmonics for a given length, well, they are ruled out because of the basic physics and maths.)

Now, if you take a flute of a variable length (something like a trumpet) and try very suddenly doubling the length of the vibrating air column, you will find that instead of producing a fainter sound of the same middle “saa”, the flute instead produces the next lower “saa”. (If you want, you can try it out more systematically in the laboratory by taking a telescopic assembly of cylinders and a tuning fork.)

Of course, really speaking, despite its pure tones, even the flute does not come close enough to our idealized description above. For instance, notice that in our idealized description, energy is put into the system only once, at t_R, and never again. On the other hand, in playing a violin or a flute we are continuously pumping in some energy; the system is also continuously dissipating its energy to its environment via the sound waves produced in the air. A flute, thus, is an open system; it is not an isolated system. Yet, despite the additional complexity introduced because of an open system, and therefore, perhaps, a greater chance of being drawn into higher harmonic(s), in reality, a variable length flute is always observed to “select” only the fundamental harmonic for a given length.

How about an actual guitar? Same thing. In fact, the guitar comes closest to our idealized description. And if you try plucking the string once and then, after a while, suddenly removing the finger from a fret, you will find that the guitar too “prefers” to immediately settle down into the fundamental harmonic for the new length. (Take an electric guitar so that even as the sound turns fainter and still fainter due to damping, you could still easily make out the change in the dominant tone.)

OK. Enough of empirical observations. Back to the connection of these observations with the theory of physics (and maths).


The question:

Thermodynamically, an infinity of tones are perfectly possible. Maths tells you that this infinity of tones is nothing but the set of the higher harmonics (and nothing else). Yet, in reality, only one tone gets selected. What gives?

What is the missing physics which makes the system settle into one and only one option—indeed an extreme option—out of an infinity of them, all of which are, energetically speaking, equally possible?


Update on 18 June 2017:

Here is a statement of the problem in certain essential mathematical terms. See the three figures below:

The initial state of the string is what the following figure (Case 1) depicts. The max. amplitude is 1.0. Though the quiescent part looks longer than half the length, that is just an illusion of perception:

Case 1: Fundamental tone for the half length, extended over a half-length

The following figure (Case 2) is the mathematical idealization of the state into which an actual guitar string tends to settle. Note that the max. amplitude is greater (it is \sqrt{2}), so as to have the energy of this state the same as that of Case 1.

Case 2: Fundamental tone for the full length, extended over the full length


The following figure (Case 3) depicts what mathematically is also possible for the final system state. However, it is not observed with actual guitars. Note that, here, the frequency is the same as in Case 1 (and hence so is the wavelength); it is only the spatial extent of the mode that has doubled. The max. amplitude for this state is less than 1.0 (it is \dfrac{1}{\sqrt{2}}) so as to have this state too carry exactly the same energy as in Case 1.

Case 3: The first overtone for the full length, extended over the full length


Thus, the problem, in short, is:

The transition observed in reality is: T1: Case 1 \rightarrow Case 2.

However, the transition T2: Case 1 \rightarrow Case 3 also is possible by the mathematics of standing waves and thermodynamics (or more basically, by that bedrock on which all modern physics rests, viz., the calculus of variations). Yet, it is not observed.
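
For concreteness, here is a quick back-of-the-envelope check of the energy bookkeeping behind the above figures. (This is my own sketch; \mu denotes the constant lineal mass density of the string.) For a standing-wave mode y(x,t) = A \sin(kx) \cos(\omega t) sustained over a length \ell (with k\ell an integer multiple of \pi), the total mechanical energy works out to E = \dfrac{1}{4} \mu \ell \omega^2 A^2. Writing \omega_H = 2\pi\nu_H and taking A_H = 1:

Case 1 (length L/2, frequency \nu_H, amplitude 1): E_1 = \dfrac{1}{4} \mu \dfrac{L}{2} \omega_H^2 = \dfrac{1}{8} \mu L \omega_H^2.

Case 2 (length L, frequency \nu_H/2, amplitude \sqrt{2}): E_2 = \dfrac{1}{4} \mu L \left(\dfrac{\omega_H}{2}\right)^2 (\sqrt{2})^2 = \dfrac{1}{8} \mu L \omega_H^2 = E_1.

Case 3 (length L, frequency \nu_H, amplitude \dfrac{1}{\sqrt{2}}): E_3 = \dfrac{1}{4} \mu L \omega_H^2 \left(\dfrac{1}{\sqrt{2}}\right)^2 = \dfrac{1}{8} \mu L \omega_H^2 = E_1.

So all the three states do carry exactly the same energy, which is precisely why the energy argument alone cannot pick between T1 and T2.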

Why does only T1 occur? Why not T2? Or even a linear combination of both? That’s the problem, in essence.

While attempting to answer it, also consider this: Can an isolated system like the one depicted in Case 1 undergo a transition of modes at all?

Enjoy!

Update on 18th June 2017 is over.


That was the classical mechanics problem I said I happened to think of, recently. (And it was the one which took me away from the program of answering the E&R questions.)

Find it interesting? Want to give it a try?

If you do give it a try and if you reach an answer that seems satisfactory to you, then please do drop me a line. We can then cross-check our notes.

And of course, if you find this problem (or something similar) already solved somewhere, then my request to you would be stronger: do let me know about the reference!
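
In case you feel like poking at the setup numerically before (or while) attacking it on paper, here is a minimal finite-difference sketch of the idealized string. (It is a toy model entirely of my own making; the grid size, the time-step, the number of steps and the mode-projection at the end are all my own choices, and the sketch claims nothing about what the right answer is.)

import numpy as np

# A toy, explicit finite-difference model of the idealized string.
# All the names and parameter values below are my own choices; they are
# not a part of the original problem statement.

L = 1.0              # total length of the string
N = 199              # number of interior grid points (chosen odd so that
                     # one grid node falls exactly at x = L/2)
c = 1.0              # wave speed, in arbitrary units
dx = L / (N + 1)
dt = 0.5 * dx / c    # comfortably within the CFL stability limit

x = np.linspace(0.0, L, N + 2)
mid = (N + 1) // 2   # index of the node at x = L/2 (the removable support)

# Initial condition: the fundamental mode of the left half only;
# the right half starts out quiescent. Zero initial velocity.
u_prev = np.where(x <= L / 2.0, np.sin(2.0 * np.pi * x / L), 0.0)
u_curr = u_prev.copy()

n_steps_pinned = 4000      # phase 1: the middle support is in place
n_steps_released = 20000   # phase 2: the middle support has been removed

for step in range(n_steps_pinned + n_steps_released):
    u_next = np.zeros_like(u_curr)
    u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2
                    * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
    u_next[0] = 0.0          # the two fixed end supports
    u_next[-1] = 0.0
    if step < n_steps_pinned:
        u_next[mid] = 0.0    # the middle support pins the string
    u_prev, u_curr = u_curr, u_next

# Project the final displacement and velocity onto the normal modes of the
# full length, and estimate how the energy is split among those modes
# (taking the lineal mass density as 1).
v_curr = (u_curr - u_prev) / dt
print("Approximate energy carried by the first few full-length modes:")
for n in range(1, 7):
    mode = np.sin(n * np.pi * x / L)
    a_n = 2.0 * np.sum(u_curr * mode) * dx / L
    adot_n = 2.0 * np.sum(v_curr * mode) * dx / L
    omega_n = n * np.pi * c / L
    E_n = 0.25 * L * (adot_n ** 2 + (omega_n * a_n) ** 2)
    print("  mode %d: E ~ %.4f" % (n, E_n))

The projection at the end simply reports how the vibrational energy ends up being distributed over the normal modes of the full length; what, if anything, that tells you about the question above is left to you.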


In the meanwhile, I will try to go back to (or at least towards) completing the task of answering the E&R questions. [I do, however, also plan to post a slightly edited version of this post at iMechanica.]


Update History:

07 June 2017: Published on this blog

8 June 2017, 12:25 PM, IST: Added the figure and the section headings.

8 June 2017, 15:30 hrs, IST: Added the link to the brief version posted at iMechanica.

18 June 2017, 12:10 hrs, IST: Added the diagrams depicting the mathematical abstraction of the problem.


A Song I Like:

(Marathi) “olyaa saanj veli…”
Music: Avinash-Vishwajeet
Singers: Swapnil Bandodkar, Bela Shende
Lyrics: Ashwini Shende