# How time flies…

I plan to conduct a smallish FDP (Faculty Development Program) for junior faculty, covering the basics of CFD, sometime soon (maybe starting in the second half of February or early March).

During my course, I plan to give out some simple, pedagogical code that even non-programmers could easily run, and hopefully find easy to comprehend.

Don’t raise difficult questions right away!

Don’t ask me why I am doing it at all—especially given the fact that I myself never learnt my CFD in a classroom/university course setting. And especially given the fact that excellent course materials and codes already exist on the ‘net (e.g. Prof. Lorena Barba’s course, and Prof. Atul Sharma’s book and Web site, to pick just two of the so many resources already available).

But, yes, come to think of it, your question, by itself, is quite valid. It’s just that I am not going to entertain it.

Instead, I am going to ask you to recall that I am both a programmer and a professor.

As a programmer, you write code. You want to write code, and you do it. Whether better code already exists or not is not a consideration. You just write code.

As a professor, you teach. You want to teach, and you just do it. Whether better teachers or course-ware already exist or not is not a consideration. You just teach.

Admittedly, however, teaching is more difficult than coding. The difference here is that coding requires only a computer (plus software-writing software, of course!). But teaching requires other people! People who are willing to sit in front of you, at least feigning a rapt sort of attention.

But just the way, as a programmer, you don’t worry whether you know the algorithm or not when you fire up your favorite IDE, similarly, as a professor, you don’t worry whether you will get students or not.

And then, one big advantage of being a senior professor is that you can always “c” your more junior colleagues, where “c” stands for {convince, confuse, cajole, coax, compel, …}, to attend. That’s why I am not worried—not at least for the time being—about whether I will get students for my course or not. Students will come, if you just begin teaching. That’s my working mantra for now…

But of course, right now, we are busy with our accreditation-related work. However, by February/March, I will become free—or at least free enough—to be able to begin conducting this FDP.

As my material for the course progressively gets ready, I will post some parts of it here. Eventually, by the time the FDP gets over, I will have uploaded all the material together at some place or the other. (Maybe I will create another blog just for that course material.)

This blog post was meant to note something on the coding side. But then, as usual, I ended up having this huge preface at the beginning.

When I was doing my PhD in the mid-noughties, I wanted a good public domain (preferably open source) mesh generator. There were several of them, but mostly on the Unix/Linux platform.

I had nothing basically against Unix/Linux as such. My problem was that I found it tough to remember the line commands. My working memory is relatively poor, very poor. And that’s a fact; I don’t say it out of any (false or true) modesty. So, I found it difficult to remember all those shell and system commands and their options. Especially painful for me was to climb up and down a directory hierarchy, just to locate a damn file and open it already! Given my poor working memory, I had to have the entire structure laid out in front of me, instead of remembering commands or file names from memory. Only then could I work fast enough to be effective enough a programmer. And so, I found it difficult to use Unix/Linux. Ergo, it had to be Windows.

But, most of this Computational Science/Engineering code was not available (or even compilable) on Windows, back then. Often, they were buggy. In the end, I ended up using Bjorn Niceno’s code, simply because it was in C (which I converted into C++), and because it was compilable on Windows.

Then, a few years later, when I was doing my industrial job in an FEM-software company, once again there was this requirement of an integrable mesh generator. It had to be: on Windows; open source; small enough, with not too many external dependencies (such as the Boost library or others); compilable using “the not really real” C++ compiler (viz. VC++ 6); one that was not very buggy or still was under active maintenance; and, one more important point: the choice had to be respectable enough to be acceptable to the team and the management. I ended up using Jonathan Shewchuk’s Triangle.

Of course, all this along, I already knew about Gmsh, CGAL, and others (purely through my ‘net searches; none told me about any of them). But for some or the other reason, they were not “usable” by me.

Then, during the mid-teens (2010s), I went into teaching, and software development naturally took a back-seat.

A lot of things changed in the meanwhile. We all moved to 64-bit. I moved to Ubuntu for several years, and as the Idea NetSetter stopped working on the latest Ubuntu, I had no choice but to migrate back to Windows.

I then found that a lot of the platform wars had already disappeared. Windows (and Microsoft in general) had become not only better but also more accommodating of the open source movement; the Linux movement had become mature enough not to look down upon GUI users as mere script-kiddies; etc. In general, inter-operability had improved by leaps and bounds. Open Source projects were being not only released but also now developed on Windows, not just on Unix/Linux. One possible reason why both the camps might suddenly have begun showing so much love to each other was perhaps that the mobile platform had come to replace the PC platform as the avant garde choice of software development. I don’t know, because I was away from the s/w world, but I am simply guessing that that could also be an important reason. In any case, code could now easily flow back and forth between the two platforms.

Another thing to happen during my absence was: the wonderful development of the Python eco-system. It was always available on Ubuntu, and had made my life easier over there. After all, Python had a less whimsical syntax than many other alternatives (esp. the shell scripts); it carried all the marks of a real language. There were areas of discomfort. The one thing about Python which I found whimsical (and still do) is the lack of the braces for defining scopes. But such areas were relatively easy to overlook.

At least in the area of Computational Science and Engineering, Python had made it enormously easier to write ambitious codes. Just check out a C++ code for MPI for cluster computing, vs. the same code written in Python. Or, think of not having to write ridiculously fast vector classes (or having to compile disparate C++ libraries using their own make systems and compiler options, and then to make them all work together). Or, think of using libraries like LAPACK. No more clumsy wrappers, and no more having to keep on repeating scope-resolution operators and namespaces bundled into ridiculously complex template classes. Just import NumPy or SciPy, and proceed to your work.
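To make that point concrete with a tiny sketch of my own (it is not taken from any of the codes mentioned here): assembling and solving a small tridiagonal system, of the sort that shows up in 1D FVM/FDM discretizations, takes just a few lines of NumPy.

```python
import numpy as np

# A small Poisson-like tridiagonal system, A x = b, assembled directly
# with NumPy -- no wrappers, no makefiles, no template classes.
n = 5
A = np.zeros((n, n))
np.fill_diagonal(A, 2.0)          # main diagonal
np.fill_diagonal(A[1:], -1.0)     # sub-diagonal
np.fill_diagonal(A[:, 1:], -1.0)  # super-diagonal

b = np.ones(n)
x = np.linalg.solve(A, b)         # LAPACK under the hood

# The residual should be essentially zero.
print(np.allclose(A @ x, b))  # True
```

And that really is the whole program; the LAPACK call hides behind `np.linalg.solve`.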

So, yes, I had come to register in my mind the great success story being forged by Python, in the meanwhile. (BTW, in case you don’t know, the name of the language comes from a British comedy TV serial, not from the whole-animal swallowing creep.) But as I said, I was now into academia, into core engineering, and there simply wasn’t much occasion to use any language, C++, Python or any other.

One more hindrance went away when I “discovered” that the PyCharm IDE existed! It not only was free, but also had VC++ key-bindings already bundled in. W o n d e r f u l ! (I would have no working memory to relearn yet another set of key-bindings, you see!)

In the meanwhile, VC++ anyway had become very big, very slow and lethargic, taking forever for the intelli-sense ever to get to produce something, anything. The older, lightweight, lightning-fast, and overall so charming IDE, i.e. the VC++ 6, had given way, because of the .NET platform, to this new IDE which behaved as if it were designed to kill the C++ language. My forays into using Eclipse CDT (with VC++ key-bindings) were only partially successful. Eclipse was no longer buggy; it had begun working really well. The major trouble here was: there was no integrated help at the press of the “F1” key. Remember my poor working memory? I had to have that F1 key opening up the .chm help file at just the right place. But that was not happening. And, debug-stepping through the code still was not as seamless as I had gotten used to, in the VC++ 6.

But with PyCharm + Visual Studio key bindings, most of my concerns evaporated. Being an interpreted language, Python always would have an advantage as far as debug-stepping through the code is concerned. That’s the straightforward part. But the real game-changer for me was the maturation of the entire Python eco-system.

Every library you could possibly wish for was there, already available, like Aladdin’s genie standing with folded hands.

OK. Let me give you an example. You think of doing some good visualization. You have MatPlotLib. And a very helpful help file, complete with neat examples. No, you want more impressive graphics, like, say, volume rendering (voxel visualization)? You have the entire VTK wrapped in; what more could you possibly want? (Windows vs. Linux didn’t matter.) But you instead want to write some custom code, say for animation? You have not just one, not just two, but literally tens of libraries covering everything: from OpenGL, to scene-graphs, to computational geometry, to physics engines, to animation, to games-writing, and what not. Windowing? You have the MFC-style WxWidgets, already put into a Python avatar as WxPython. (OK, OpenGL still gives trouble with WxPython for anything ambitious. But such things are rather isolated instances when it comes to the overall Python eco-system.)

And, closer to my immediate concerns, I was delighted to find that, by now, both OpenFOAM and Gmsh had become neatly available on Windows. That is, not just “available,” i.e., not just as sources that can be read, but also working as if the libraries were some shrink-wrapped software!

Availability on Windows was important to me, because, at least in India, it’s the only platform of familiarity (and hence of choice) for almost all of the faculty members from any of the e-school departments other than CS/IT.

Hints: For OpenFOAM, check out blueCFD instead of running it through Docker. It’s clean, and indeed works as advertised. As to Gmsh, ditto. And, it also comes with Python wrappers.

While the availability of OpenFOAM on Windows was only too welcome, the fact is, its code is guaranteed to be completely inaccessible to a typical junior faculty member from, say, a mechanical or a civil or a chemical engineering department. First, OpenFOAM is written in real (“templated”) C++. Second, it is very bulky (millions of lines of code, maybe?). Clearly beyond the comprehension of a guy who has never seen more than 50 lines of C code at a time in his life before. Third, it requires the GNU compiler, a special make environment, and a host of dependencies. You simply cannot open OpenFOAM and show how those FVM algorithms from Patankar’s/Versteeg & Malalasekera’s books do the work under its hood. Neither can you ask your students to change a line here or there, maybe add a line to produce an additional file output, just for bringing out the actual working of an FVM algorithm.

In short, OpenFOAM is out.

So, I have decided to use OpenFOAM only as a “backup.” My primary teaching material will only be Python snippets. The students will also get to learn how to install OpenFOAM and run the simplest tutorials. But the actual illustrations of the CFD ideas will be done using Python. I plan to cover only FVM and only simpler aspects of that. For instance, I plan to use only structured rectangular grids, not non-orthogonal ones.

I will write code that (i) generates a mesh, (ii) reads a mesh generated by the blockMesh utility of OpenFOAM, (iii) implements one or two simple BCs, (iv) implements the SIMPLE algorithm, and (v) uses MatPlotLib or ParaView to visualize the output (including any intermediate outputs of the algorithms).
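To indicate the intended level of simplicity (this is only my own preliminary sketch, not the actual course code, which is still being written): the mesh-generation part, for a uniform structured rectangular grid, can be as small as this.

```python
import numpy as np

def make_structured_mesh(lx, ly, nx, ny):
    """Uniform rectangular FVM mesh: face positions, cell-center positions,
    and cell sizes, for a domain of size lx-by-ly with nx-by-ny cells."""
    xf = np.linspace(0.0, lx, nx + 1)   # face positions along x
    yf = np.linspace(0.0, ly, ny + 1)   # face positions along y
    xc = 0.5 * (xf[:-1] + xf[1:])       # cell-center positions along x
    yc = 0.5 * (yf[:-1] + yf[1:])       # cell-center positions along y
    dx, dy = lx / nx, ly / ny
    return xf, yf, xc, yc, dx, dy

xf, yf, xc, yc, dx, dy = make_structured_mesh(1.0, 0.5, 10, 5)
print(len(xc), len(yc))  # 10 5
```

Everything else in the course code would build on arrays of exactly this kind; a non-programmer can read it line by line.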

I may then compare the outputs of these Python snippets with similar output produced by OpenFOAM, for one or two of the simplest cases, like a simple laminar flow over a step. (I don’t think I will be covering VOF or any other multi-phase technique. My course is meant to cover only the basics.)

But not having checked Gmsh recently, and thus still carrying my old impressions, I was almost sure I would have to write something quick in Python to convert BMP files (showing geometry) into mesh files (with each pixel turning into a finite volume cell). The trouble with this approach was, the ability to impose boundary conditions would be seriously limited. So, I was a bit worried about it.
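For the record, the pixel-to-cell idea itself is almost trivial to sketch (my own illustration, with a synthetic array standing in for an actual BMP file):

```python
import numpy as np

# A synthetic 6x8 "image": 1 = fluid pixel, 0 = solid pixel.
# (In the actual scheme, this array would come from reading a BMP file.)
img = np.ones((6, 8), dtype=int)
img[0, :] = 0          # top row of pixels is solid
img[:, 0] = 0          # left column of pixels is solid

# Each fluid pixel becomes one finite-volume cell: assign running cell ids
# to fluid pixels, and -1 to solid pixels.
cell_id = -np.ones_like(img)
fluid = img == 1
cell_id[fluid] = np.arange(fluid.sum())

print(fluid.sum())  # 35
```

The mesh part is that easy; as noted above, it is the imposition of boundary conditions on such a pixel-derived mesh that would remain seriously limited.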

But then, last week, I just happened to check Gmsh, just to be sure, you know! And, WOW! I now “discovered” that Gmsh is already all Python-ed in. Great! I just tried it, and found that it works, as bundled. Even on Windows. (Yes, even on Win7 (64-bit), SP1.)
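As a quick indication of the kind of input Gmsh takes (a from-memory sketch, worth double-checking against the current Gmsh documentation), here is a minimal .geo file for a unit square meshed with triangles:

```
// square.geo -- a unit square, meshed with triangles
lc = 0.1;                     // target mesh size near each point
Point(1) = {0, 0, 0, lc};
Point(2) = {1, 0, 0, lc};
Point(3) = {1, 1, 0, lc};
Point(4) = {0, 1, 0, lc};
Line(1) = {1, 2};
Line(2) = {2, 3};
Line(3) = {3, 4};
Line(4) = {4, 1};
Line Loop(1) = {1, 2, 3, 4};
Plane Surface(1) = {1};
```

Running `gmsh square.geo -2` writes out the 2D mesh; and since the recent builds bundle Python wrappers, the same geometry can equally be driven from a Python script via the `gmsh` module.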

I was delighted, excited, even thrilled.

And then, I began “reflecting.” (Remember I am a professor?)

I remembered the times when I used to sit in a cyber-cafe, painfully downloading source code libraries over a single 64 kbps connection which would be shared in that cyber-cafe over 6–8 PCs, without any UPS or backups in case the power went out. I would download the sources that way at the cyber-cafe, take them home to a Pentium machine running Win2K, try to open and read the source, only to find that I had forgotten to do the CRLF conversion first! And then, the sources wouldn’t compile because the make environment wouldn’t be available on Windows. Or something or the other of that sort. But still, I fought on. I remember having downloaded not only the OpenFOAM sources (with the hope of finding some way to compile them on Windows), but also MPICH2, PetSc 2.x, CGAL (some early version), and what not. Ultimately, after my valiant tries at the machine for a week or two, “nothing is going to work here,” I would eventually admit to myself.

And here is the contrast. I have a 4G connection, so I can comfortably sit at home and use the Python pip (or PyCharm’s Project Interpreter) to download or automatically update all the required libraries, even the heavy-weights like what they bundle inside SciPy and NumPy, or the VTK. I no longer have to manually sort out version incompatibilities or platform incompatibilities. I know I could develop on Ubuntu if I want to, and the students would be able to run the same thing on Windows.

Gone are those days. And how swiftly, it seems now.

How time flies…

I will be able to come back only next month, because our accreditation-related documentation work has now gone into its final, culminating phase, which occupies the rest of this month. So, excuse me until sometime in February, say until the 11th or so. I will surely try to post a snippet or two on using Gmsh in the meanwhile, but it doesn’t really look at all feasible. So, there.

Bye for now, and take care…

A Song I Like:

[Tomorrow is (Sanskrit, Marathi) “Ganesh Jayanti,” the birth-day of Lord Ganesha, which also happens to be the auspicious (Sanskrit, Marathi) “tithee” (i.e. lunar day) on which my mother passed away, five years ago. In her fond remembrance, I here run one of those songs which both of us liked. … Music is strange. I mean, a song as mature as this one, but I remember, I still had come to like it even as a school-boy. May be it was her absent-minded humming of this song which had helped? … may be. … Anyway, here’s the song.]

(Hindi) “chhup gayaa koi re, door se pukaarake”
Singer: Lata Mangeshkar
Music: Hemant Kumar
Lyrics: Rajinder Kishan

/

# Blog-Filling—Part 3

Note: A long Update was added on 23 November 2017, at the end of the post.

Today I got just a little bit of respite from what has been a very tight schedule, which has been running into my weekends, too.

But at least for today, I do have a bit of a respite. So, I could at least think of posting something.

But for precisely the same reason, I don’t have any blogging material ready in the mind. So, I will just note something interesting that passed by me recently:

1. Catastrophe Theory: Check out Prof. Zhigang Suo’s recent blog post at iMechanica on catastrophe theory, here [^]; it’s marked by Suo’s trademark simplicity. He also helpfully provides a copy of Zeeman’s 1976 SciAm article, too. Regular readers of this blog will know that I am a big fan of the catastrophe theory; see, for instance, my last post mentioning the topic, here [^].
2. Computational Science and Engineering, and Python: If you are into computational science and engineering (which is The Proper And The Only Proper long-form of “CSE”), and wish to have fun with Python, then check out Prof. Hans Petter Langtangen’s excellent books, all under Open Source. Especially recommended is his “Finite Difference Computing with PDEs—A Modern Software Approach” [^]. What impressed me immediately was the way the author begins this book with the wave equation, and not with the diffusion or potential equation as is the routine practice in the FDM (or CSE) books. He also provides the detailed mathematical reason for his unusual choice of ordering the material, but apart from his reason(s), let me add in a comment here: wave $\Rightarrow$ diffusion $\Rightarrow$ potential (Poisson-Laplace) precisely was the historical order in which the maths of PDEs (by which I mean both the formulations of the equations and the techniques for their solutions) got developed—even though the modern trend is to reverse this order in the name of “simplicity.” The book comes with Python scripts; you don’t have to copy-paste code from the PDF (and then keep correcting the errors of characters or indentations). And, the book covers nonlinearity too.
3. Good Notes/Teachings/Explanations of UG Quantum Physics: I ran across Dan Schroeder’s “Entanglement isn’t just for spin.” Very true. And it needed to be said [^]. BTW, if you want a more gentle introduction to the UG-level QM than is presented in Allan Adams (et al.)’s MIT OCW 8.04–8.06 [^], then make sure to check out Schroeder’s course at Weber [^] too. … Personally, though, I keep on fantasizing about going through all the videos of Adams’ course, taking out notes, and posting them at my Web site. [… sigh]
4. The Supposed Spirituality of the “Quantum Information” Stored in the “Protein-Based Micro-Tubules”: OTOH, if you are more into philosophy of quantum mechanics, then do check out Roger Schlafly’s latest post, not to mention my comment on it, here [^].

The point no. 4 above was added in lieu of the usual “A Song I Like” section. The reason is, though I could squeeze in the time to write this post, I still remain far too rushed to think of a song—and to think/check whether I have already run it here or not. But I will try to add one later on, either to this post or, if there is a big delay, as the next “blog filler” post, the next time round.

[Update on 23 Nov. 2017 09:25 AM IST: Added the Song I Like section; see below]

OK, that’s it! … Will catch you at some indefinite time in future here, bye for now and take care…

A Song I Like:

(Western, Instrumental) “Theme from ‘Come September’”
Credits: Bobby Darin (?) [+ Billy Vaughn (?)]

[I grew up in what were absolutely rural areas in Maharashtra, India. All my initial years till my 9th standard were limited, at its upper end in the continuum of urbanity, to Shirpur, which still is only a taluka place. And, back then, it was a decidedly far more backward + adivasi region. The population of the main town itself hadn’t reached more than 15,000 or so by the time I left it in my X standard; the town didn’t have a single traffic light; most of the houses (including the one we lived in) were load-bearing structures, not RCC; all the roads in the town were of single lanes; etc.

Even that being the case, I happened to listen to this song—a Western song—right when I was in Shirpur, in my 2nd/3rd standard. I first heard the song at my Mama’s place (an engineer, he was back then posted in the “big city” of the nearby Jalgaon, a district place).

As to this song, as soon as I listened to it, I was “into it.” I remained so for all the days of that vacation at Mama’s place. Yes, it was a 45 RPM record, and the permission to put the record on the player, and even to play it entirely on my own, was hard won after a determined and tedious effort to show all the elders that I was able to put the pin on to the record very carefully. And everyone in the house was an elder to me: my siblings, cousins, uncle, his wife, not to mention my parents (who were the last ones to be satisfied). But once the recognition arrived, I used it to the hilt; I must have ended up playing this record at least 5 times for every remaining day of the vacation back then.

As far as I am concerned, I am entirely positive that appreciation for a certain style or kind of music isn’t determined by your environment or the specific culture in which you grow up.

As far as songs like these are concerned, today I am able to discern that what I had immediately though indirectly grasped, even as a 6–7 year old child, was what I today would describe as a certain kind of an “epistemological cleanliness.” There was a clear adherence to certain definitive, delimited kind of specifics, whether in terms of tones or rhythm. Now, it sure did help that this tune was happy. But frankly, I am certain, I would’ve liked a “clean” song like this one—one with very definite “separations”/”delineations” in its phrases, in its parts—even if the song itself weren’t to be so directly evocative of such frankly happy a mood. Indian music, in contrast, tends to keep “continuity” for its own sake, even when it’s not called for, and a certain downside of that style is that it leads to a badly mixed up “curry” of indefinitely stretched out wailings, even noise, very proudly passing as “music”. (In evidence: pick up any traditional “royal palace”/”kothaa” music.) … Yes, of course, there is a symmetrical downside to the specific “separated” style carried by the Western music too; the specific style of noise it can easily slip into is a disjointed kind of a noise. (In evidence, I offer 90% of Western classical music, and 99.99% of Western popular “music”. As to which 90%, well, we have to meet in person, and listen to select pieces of music on the fly.)

Anyway, coming back to the present song, today I searched for the original soundtrack of “Come September”, and got, say, this one [^]. However, I am not too sure that the version I heard back then was this one. Chances are much brighter that the version I first listened to was Billy Vaughn’s, as in here [^].

… A wonderful tune, and, as an added bonus, it never does fail to take me back to my “salad days.” …

… Oh yes, as another fond memory: that vacation also was the very first time that I came to wear a T-shirt; my Mama had gifted it to me in that vacation. The actual choice to buy a T-shirt rather than a shirt (+ shorts, of course) was that of my cousin sister (who unfortunately is no more). But I distinctly remember her being surprised to learn that I was in no mood to have a T-shirt when I didn’t know what the word meant… I also distinctly remember her assuring me, using sweet tones, that a T-shirt would look good on me! … You see, in rural India, at least back then, T-shirts weren’t heard of; for years later on, maybe until I went to Nasik in my 10th standard, it would be the only T-shirt I had ever worn. … But, anyway, as far as T-shirts go… well, as you know, I was into software engineering, and so….

Bye [really] for now and take care…]

# Machine “Learning”—An Entertainment [Industry] Edition

Yes, “Machine ‘Learning’,” too, has been one of my “research” interests for some time by now. … Machine learning, esp. ANN (Artificial Neural Networks), esp. Deep Learning. …

Yesterday, I wrote a comment about it at iMechanica. Though it was made in a certain technical context, today I thought that the comment could, perhaps, make sense to many of my general readers, too, if I supply a bit of context to it. So, let me report it here (after a bit of editing). But before coming to my comment, let me first give you the context in which it was made:

Context for my iMechanica comment:

It all began with a fellow iMechanician, one Mingchuan Wang, writing a post with the title “Is machine learning a research priority now in mechanics?” at iMechanica [^]. Biswajit Banerjee responded by pointing out that

“Machine learning includes a large set of techniques that can be summarized as curve fitting in high dimensional spaces. [snip] The usefulness of the new techniques [in machine learning] should not be underestimated.” [Emphasis mine.]

Then Biswajit had pointed out an arXiv paper [^] in which machine learning was reported as having produced some good DFT-like simulations for quantum mechanical simulations, too.

A word about DFT for those who (still) don’t know about it:

DFT, i.e. Density Functional Theory, is a “formally exact description of a many-body quantum system through the density alone. In practice, approximations are necessary” [^]. DFT thus is a computational technique; it is used for simulating the electronic structure of quantum mechanical systems involving several hundreds of electrons (i.e. hundreds of atoms). Here is the obligatory link to the Wiki [^], though a better introduction perhaps appears here [(.PDF) ^]. Here is a StackExchange on its limitations [^].
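In the standard textbook notation (my own summary, not quoted from the links above), the Kohn–Sham form of DFT writes the ground-state energy as a functional of the electron density $n(\mathbf{r})$ alone:

$$E[n] = T_s[n] + \int v_{\text{ext}}(\mathbf{r})\, n(\mathbf{r})\, \mathrm{d}^3 r + E_H[n] + E_{xc}[n]$$

where $T_s$ is the kinetic energy of a fictitious non-interacting system, $E_H$ the classical (Hartree) electrostatic energy, and $E_{xc}$ the exchange–correlation functional into which all the remaining many-body complexity gets pushed; it is $E_{xc}$ that must be approximated in practice.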

Trivia: Kohn received a Nobel—in Chemistry, not Physics—for the development of DFT. (The scheme itself bears both Kohn’s and Sham’s names, though the prize went to Kohn, shared with Pople.) It was a very, very rare instance of a Nobel being awarded for an invention—not a discovery. But the Nobel committee, once again, turned out to have put old Nobel’s money in the right place. Even if the work itself was only an invention, it directly led to a lot of discoveries in condensed matter physics! That was because DFT was fast—fast enough that it could bring the physics of the larger quantum systems within the scope of (any) study at all!

And now, it seems, Machine Learning has advanced enough to be able to produce results that are similar to DFT’s, but without using any QM theory at all! The computer does have to “learn” its “art” (i.e. “skill”), but it does so from the results of previous DFT-based simulations, not from the theory at the base of DFT. But once the computer does that “learning”—and the paper shows that it is possible for a computer to do so—it is able to compute very similar-looking simulations much, much faster than even the rather fast technique of DFT itself.
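Just to make the “learning from previous simulations” idea concrete with a toy sketch of my own (it has nothing to do with the actual paper; a plain least-squares polynomial stands in here for the machine-learned model): fit a cheap surrogate to samples of an “expensive” model, then evaluate the surrogate instead.

```python
import numpy as np

# Pretend this is the "expensive" simulation (in the paper's setting, a DFT run).
def expensive_model(x):
    return np.sin(2.0 * x) + 0.5 * x

# "Training data": results of previous expensive runs.
x_train = np.linspace(0.0, 3.0, 30)
y_train = expensive_model(x_train)

# The "machine-learned" surrogate: here, just a least-squares polynomial fit.
coeffs = np.polyfit(x_train, y_train, deg=9)
surrogate = np.poly1d(coeffs)

# From now on, the surrogate is evaluated instead of the expensive model.
x_new = np.linspace(0.2, 2.8, 50)
err = np.max(np.abs(surrogate(x_new) - expensive_model(x_new)))
print(err < 1e-2)  # True
```

The point carried over from the paper’s setting: once trained, evaluating the surrogate costs next to nothing compared with re-running the expensive model.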

OK. Context over. Now here in the next section is my yesterday’s comment at iMechanica. (Also note that the previous exchange on this thread at iMechanica had occurred almost a year ago.) Since it has been edited quite a bit, I will not format it using a quotation block.

[An edited version of my comment begins]

A very late comment, but still, just because something struck me only this late… May as well share it….

I think that, as Biswajit points out, it’s a question of matching a technique to an application area where it is likely to be of “good enough” a fit.

I mean to say, consider fluid dynamics, and contrast it to QM.

In (C)FD, the nonlinearity present in the advective term is a major headache. As far as I can gather, this nonlinearity has all but been “proved” as the basic cause behind the phenomenon of turbulence. If so, using machine learning in CFD would be, by the simple-minded “analysis”, a basically hopeless endeavour. The very idea of using a potential presupposes differential linearity. Therefore, machine learning may be thought as viable in computational Quantum Mechanics (viz. DFT), but not in the more mundane, classical mechanical, CFD.

But then, consider the role of the BCs and the ICs in any simulation. It is true that if you don’t handle nonlinearities right, then as the simulation time progresses, errors are soon enough going to multiply (sort of), and lead to a blowup—or at least a dramatic departure from a realistic simulation.

But then, also notice that there still is some small but nonzero interval of time which has to pass before a really bad amplification of the errors actually begins to occur. Now what if a new “BC-IC” gets imposed right within that time-interval—the one which does show “good enough” an accuracy? In this case, you can expect the simulation to remain “sufficiently” realistic-looking for a long, very long time!

Something like that seems to have been the line of thought implicit in the results reported by this paper: [(.PDF) ^].

Machine learning seems to work even in CFD because, in an interactive session, a new “modified BC-IC” is every now and then manually introduced by none other than the end-user himself! And the location of the modification is precisely the region from where the flow in the rest of the domain would get most dominantly affected during the subsequent, small, time evolution.

It’s somewhat like an electron rushing through a cloud chamber. By the uncertainty principle, the electron “path” sure begins to get hazy immediately after it is “measured” (i.e. absorbed and re-emitted) by a vapor molecule at a definite point in space. The uncertainty in the position grows quite rapidly. However, what actually happens in a cloud chamber is that, before this cone of haziness becomes too big, comes along another vapor molecule, and “zaps” i.e. “measures” the electron back on to a classical position. … After a rapid succession of such going-hazy-getting-zapped process, the end result turns out to be a very, very classical-looking (line-like) path—as if the electron always were only a particle, never a wave.

Conclusion? Be realistic about how smart the “dumb” “curve-fitting” involved in machine learning can at all get. Yet, at the same time, also remain open to all the application areas where it can be made to work—even including those areas where, “intuitively”, you wouldn’t expect it to have any chance of working!

[An edited version of my comment is over. Original here at iMechanica [^]]

“Boy, we seem to have covered a lot of STEM territory here… Mechanics, DFT, QM, CFD, nonlinearity. … But where is either the entertainment or the industry you had promised us in the title?”

You might be saying that….

Well, the CFD paper I cited above was about the entertainment industry. It was, in particular, about the computer games industry. Go check out SoHyeon Jeong’s Web site for more cool videos and graphics [^], all using machine learning.

And, here is another instance connected with entertainment, even though now I am going to make it (mostly) explanation-free.

Check out the following piece of art—a watercolor landscape of a monsoon-time but placid sea-side, in fact. Let me just say that a certain famous artist produced it; in any case, the style is plain unmistakable. … Can you name the artist simply by looking at it? See the picture below:

A sea beach in the monsoons. Watercolor.

If you are unable to name the artist, then check out this story here [^], and a previous story here [^].

A Song I Like:

And finally, to those who have always loved Beatles’ songs…

Here is one song which, I am sure, most of you had never heard before. In any case, it came to be distributed only recently. When and where was it recorded? For both the song and its recording details, check out this site: [^]. Here is another story about it: [^]. And, if you liked what you read (and heard), here is some more stuff of the same kind [^].

Endgame:

I am of the Opinion that 99% of the “modern” “artists” and “music composers” ought to be replaced by computers/robots/machines. Whaddya think?

[Credits: “Endgame” used to be the way Mukul Sharma would end his weekly Mindsport column in the yesteryears’ Sunday Times of India. (The column perhaps also used to appear in The Illustrated Weekly of India before ToI began running it; at least I have a vague recollection of something of that sort, though can’t be quite sure. … I would be a school-boy back then, when the Weekly perhaps ran it.)]

# Haptic, tactile, virtual, surgery, etc.

Three updates made on 24th March 2016 appear near the end of this post.

Once in a while I check out the map of the visitors’ locations (see the right bar).

Since hardly any one ever leaves any comment at my blog, I can only guess who these visitors possibly could be. Over a period of time, guessing the particular visitors in this way has become an idle sort of a pastime for me. (No, I don’t obsess over it, and I don’t in fact spend much time on it—at the most half-a-minute or so, once in a while. But, yes, check, I certainly do!)

Among the recent visitors, there was one hit on 6th March 2016 coming from Troy, NY, USA (at 11:48:02 AM IST, to be precise). … Must be someone from RPI, I thought. (My blog is such that mostly only academics could possibly be interested in it; never the people with lucrative industrial jobs such as those in the SF Bay Area. Most of the hits from the Bay Area are from Mountain View, and that’s the place where the bots of Google’s search engines live.)

But I couldn’t remember engaging in any discussion with any one from RPI on any blog.

Who could this visitor possibly be? I could not figure it out immediately, so I let the matter go.

Yesterday, I noticed for the first time an ad for “several” post-doc positions at RPI, posted on iMechanica by Prof. Suvranu De [^]. It had been posted right on the same day: 6th March 2016. However, since recently I was checking out only my thread on the compactness of support [^], I had missed out on the main front page. Thus, I noticed the ad only today.

Curious, I dropped an informal email to Prof. De immediately, more or less out of cognitive habit.

I am not too keen on going to the USA, and in fact, I am not even inclined to leave India. Reasons are manifold.

You, like everyone else on the planet, of course come to know all that ever happens to or in the USA. Americans make sure that you do—whether you like it or not. (Remember 9/11? They have of course forgotten it by now, but don’t you remember the early noughties when, imagining you to be just as dumb and thick-skinned as they are, the kind of decibels they had pierced into your metaphorical ears (and in fact also in your soul)? Justifiable, you say? How about other big “controversies” which actually were nothing but scandals? Can’t you pick up one or two?)

Naturally, who would want to go to that uncivilized a place?

And even if you want to argue otherwise, let me suggest that you see whether you can easily gather (or recollect) all that happened to me when I was in the USA.

So, the idea of trying to impress Dr. De for this post-doc position was, any which way, completely out of the question. Even if he is HoD at RPI.

And then, frankly, at my age, I don’t even feel like impressing any one for a mere post-doc; not these days anyway (I mean, some 6 years after the PhD defense, and after having to experience so many years of joblessness (including those reported via this blog)). … As far as I am concerned, either they know what and who I am, and act accordingly (including acting swiftly enough), or they don’t. (In the last case, mostly, they end up blaming me, as usual, in some or the other way.)

OK, so, coming back to what I wrote to Dr. De. It was more in the nature of thinking aloud about the question of whether I should at all apply to them in the first place or not. … Given my experience of the other American post-docs advertised at iMechanica, e.g. those by Prof. Sukumar (UC Davis), and certain others in the USA, and also my experience of the Americans of the Indian origin (and even among them, those who are JPBTIs and those who are younger to me by age), I can’t keep any realistic expectation that I would ever receive any reply to that email of mine from Prof. De. The odds are far too against; check out the “follow-up” tag. (I could, of course, be psychically attacked, the way I was, right this week, a few days ago.)

Anyway, once thus grown curious about the subject matter, I then did a bit of a Web search, and found the following videos:

The very first listing after a Google search (using the search string: “Suvranu De”; then clicking on the tab: “videos”) was the one on “virtual lap band – surgical simulation”: [^].

Watching that video somehow made me sort of uneasy immediately. Uneasy, in a minor but a definitely irritating way. In a distinctly palpable way, almost as if it was a physical discomfort. No, not because the video carries the scene of tissue-cutting and all. … I have never been one of those who feel nervous or squeamish at the sight of blood, cuts, etc. (Most men, in fact, don’t!) So, my uneasiness was not on that count. …

Soon enough (i.e., much before the time I was in the middle of that video), I figured out the reason why.

I then checked out a few more videos, e.g., those here [^] and here [^]. … Exactly the same sense of discomfort or uneasiness, arising out of the same basic reason.

What kind of an uneasiness could there possibly be? Can you guess?

I don’t want to tell you, right away. I want you to guess. (Assume that an evil smile had transitorily appeared on my face.)

To close this post: If you so want, go ahead, check out those videos, see if it makes you uncomfortable watching some part of an implementation of this new idea. Then, the sin of the sins (“paapam, paapam, mahaapaapam” in Sanskrit): drop me a line (via a comment or an email) stating what that reason possibly could be. (Hint: It has nothing to do with the feely-feely-actually-silly/wily sort of psychological reasons.)

After a while, I will come back, and via an update to this post let you know the reason.

Update 1:

Yahoo! wants you to make a note of the “12 common mistakes to avoid in job interview”: [^]. They published this article today.

Update 2 (on 24th March 2016):

Surprise! Prof. De soon enough (on 18th March IST) dropped me an email which was brief, professional, and direct to the point. A consequence, and therefore not much of a surprise: I am more or less inclined to at least apply for the position. I have not done so as of today; see Update 3 below.

Update 3 (on 24th March 2016):

Right the same day (on 18th March 2016, about 10:00 PM IST), my laptop developed serious hardware issues, including (but not limited to) yet another HDD crash! The previous crash was less than a year ago, last June [^].

Once again, there was loss of (some) data: the initial and less-than-25%-completed drafts of 4+ research papers, some parts (from sometime in February onwards) of my notes on the current course on CFD, SPH, etc., as well as some preliminary Python code on SPH. The Update 2 in fact got delayed because of this development. I just got the machine back from the Dell Service last evening, and last night got it going on a rapid test installation of Windows 7. I plan to do a more serious re-installation over the next few days.

Update 4 (on 24th March 2016):

The thing in the three videos (about haptics, virtual surgery) that made me uncomfortable or uneasy was the fact that in each case, the surgeon was standing in a way that would have been just a shade uncomfortable to me. The surgeon’s hands were too “free” i.e. unsupported (say near elbow), his torso was stooping down in a wrong way (you couldn’t maintain that posture with accuracy in hands manipulation for long, I thought), and at the same time, he had to keep his eyes fixed on a monitor that was a bit too high-up for the eyes-to-hands coordination to work right. In short, there was this seeming absence of a consideration of ergonomics or the human factors engineering here. Of course, it’s only a prototype, and it’s only a casual onlooker’s take of the “geometry,” but that’s what made me distinctly uncomfortable.

(People often have rationalistic ideas about the proper (i.e. the least stress inducing and most efficient) posture.  In a later post, I will point out a few of these.)

A Song I Like:
(filled-in on 24th March 2016)

(Marathi) “thembaanche painjaN waaje…” [“rutu premaachaa aalaa”]
Music: Shashank Powar

[An incidental note: The crash occurred—the computer suddenly froze—while I was listening to—actually, watching the YouTube video of—this song. … Somehow, even today, I still continue liking the song! … But yes, as is usual for this section, only the audio track is being referred to. (I had run into this song while searching for some other song, which I will use in a later post.)]

[Some editorial touches, other than the planned update, are possible, as always.]

[E&OE]

# The Infosys Prizes, 2015

I realized that it was the end of November the other day, and it somehow struck me that I should check out if there has been any news on the Infosys prizes for this year. I vaguely recalled that they make the yearly announcements sometime in the last quarter of a year.

Turns out that, although academic bloggers whose blogs I usually check out had not highlighted this news, the prizes had already been announced right in mid-November [^].

It also turns out that, yes, I “know”—i.e., have in-person chatted (exactly once) with—one of the recipients. I mean Professor Dr. Umesh Waghmare, who received this year’s award for Engineering Sciences [^]. I had run into him at an informal conference once, and have written about it in a recent post, here [^].

Dr. Waghmare is a very good choice, if you ask me. His work is very neat—I mean both the ideas which he picks out to work on, and his execution of them.

I still remember his presentation at that informal conference (where I chatted with him). He had talked about a (seemingly) very simple idea, related to graphene [^]—its buckling.

Here is my highly dumbed down version of that work by Waghmare and co-authors. (It’s dumbed down a lot—Waghmare et al.’s work was on buckling, not bending. But it’s OK; this is just a blog, and I guess I have a pretty general sort of a “general readership” here.)

Bending, in general, sets up a combination of tensile and compressive stresses, which results in the setting up of a bending moment within a beam or a plate. All engineers (except possibly for the “soft” branches like CS and IT) study bending quite early in their undergraduate program, typically in the second year. So, I need not explain its analysis in detail. In fact, in this post, I will write only a common-sense level description of the issue. For technical details, look up the Wiki articles on bending [^] and buckling [^] or Prof. Bower’s book [^].

Assuming you are not an engineer, you can always take a longish rubber eraser, hold it so that its longest edge is horizontal, and then bend it with a twist of your fingers. If the bent shape is like an inverted ‘U’, then the inner (bottom) surface has got compressed, and the outer (top) surface has got stretched. Since compression and tension are opposite in nature, and since the eraser is a continuous body of a finite height, it is easy to see that there has to be a continuous surface within the volume of the eraser, somewhere about half-way up its height, where there can be no stresses. That’s because the stresses change sign in going from the compressive stress at the bottom surface to the tensile stresses at the top surface. For simplicity of mathematics, this problem is modeled as a 1D (line) element, and therefore, in elasticity theory, this actual 2D surface is referred to as the neutral axis (i.e. a line).

The deformation of the eraser is elastic, which means that it remains in the bent state only so long as you are applying a bending “force” to it (actually, it’s a moment of a force).

The classical theory of bending allows you to relate the curvature of the beam and the bending moment applied to it. Thus, knowing the bending moment (or the applied forces), you can tell how much the eraser should bend. Or, knowing how much the eraser has curved, you can tell how big a pair of forces would have to be applied to its ends. The theory works pretty well; it forms the basis of how most buildings are designed anyway.
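That moment–curvature relation is simple enough to put into a few lines of code. Here is a minimal sketch; every numeric value below is an assumed, purely illustrative figure, not a measured one:

```python
# Euler-Bernoulli moment-curvature relation: kappa = M / (E * I).
# All numeric values are assumed, purely illustrative.
E = 0.05e9          # Young's modulus of a rubbery eraser, Pa (rough assumption)
b, h = 0.01, 0.005  # cross-section width and height, m
I = b * h**3 / 12   # second moment of area of the rectangular section
M = 0.01            # applied bending moment, N*m (assumed)

kappa = M / (E * I)  # curvature, 1/m
R = 1.0 / kappa      # radius of curvature, m
print(f"curvature = {kappa:.3f} 1/m, radius of curvature = {R:.3f} m")
```

Knowing M gives the bend (kappa); inverting the very same relation, a measured curvature gives back the applied moment. That is all there is to the two directions of the classical calculation.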

So far, so good. What happens if you bend, not an eraser, but a graphene sheet?

The peculiarity of graphene is that it is a single atom-thick sheet of carbon atoms. Your usual eraser contains billions and billions of layers of atoms through its thickness. In contrast, the thickness of a graphene sheet is entirely accounted for by the finite size of the single layer of atoms. And, it is found that unlike thin paper, the graphene sheet, even though it is the most extreme case of a thin sheet, actually does offer a good resistance to bending. How do you explain that?

The naive expectation is that something related to the interatomic bonding within this single layer must, somehow, produce both the compressive and tensile stresses—and the systematic variation from the locally tensile to the locally compressive state as we go through this thickness.

Now, at the scale of single atoms, quantum mechanical effects obviously are dominant. Thus, you have to consider those electronic orbitals setting up the bond. A shift in the density of the single layer of orbitals should correspond to the stresses and strains in the classical mechanics of beams and plates.

What Waghmare related at that conference was a very interesting bit.

He calculated the stresses as predicted by (in my words) the changed local density of the orbitals, and found that the forces predicted this way are way smaller than the experimentally reported values for graphene sheets. In other words, the actual graphene is much stiffer than what the naive quantum mechanics-based model shows—even if the model considers those electronic orbitals. What is the source of this additional stiffness?

He then showed a more detailed calculation (i.e. a simulation), and found that the additional stiffness comes from a quantum-mechanical interaction between the portions of the atomic orbitals that go off transverse to the plane of the graphene sheet.

Thus, suppose a graphene sheet is initially held horizontally, and then bent to form an inverted U-like curvature. According to Waghmare and co-authors, you now have to consider not just the orbital cloud between the atoms (i.e. the cloud lying in the same plane as the graphene sheet) but also the orbital “petals” that shoot vertically off the plane of the graphene. Such petals are attached to the nucleus of each C atom; they are a part of the electronic (or orbital) structure of the carbon atoms in the graphene sheet.

In other words, the simplest engineering sketch for the graphene sheet, as drawn in the front view, wouldn’t look like a thin horizontal line; it would also have these small vertical “pins” at the site of each carbon atom, overall giving it an appearance rather like a fish-bone.

What happens when you bend the graphene sheet is that on the compression side, the orbital clouds for these vertical petals run into each other. Now, you know that an orbital cloud can be loosely taken as the electronic charge density, and that the like charges (e.g. the negatively charged electrons) repel each other. This inter-electronic repulsive force tends to oppose the bending action. Thus, it is the petals’ contribution which accounts for the additional stiffness of the graphene sheet.

I don’t know whether this result was already known to the scientific community back then in 2010 or not, but in any case, it was a very early analysis of bending of graphene. Further, as far as I could tell, the quality of Waghmare’s calculations and simulations was very definitely superlative. … You work in a field (say computational modeling) for some time, and you just develop a “nose” of sorts that allows you to “smell” a superlative calculation from an average one. Particularly so, if your own skills on the calculations side are rather average, as happens to be the case with me. (My strengths are on the conceptual and computational sides, but not on the mathematical side.) …

So, all in all, it’s a very well deserved prize. Congratulations, Dr. Waghmare!

A Song I Like:

(The so-called “fusion” music) “Jaisalmer”
Artists: Rahul Sharma (Santoor) and Richard Clayderman (Piano)
Album: Confluence

[As usual, may be one more editing pass…]

[E&OE]


# Blogging some crap…

I had taken a vow not to blog very frequently any more—certainly not any more at least right this month, in April.

But then, I am known to break my own rules.

Still, guess I really am coming to a point where quite a few threads on which I wanted to blog are, somehow, sort of coming to an end, and fresh topics are still too fresh to write anything about.

So, the only things to blog about would be crap. Thus the title of this post.

Anyway, here is an update of my interests, and the reason why it actually is, and would continue to be, difficult for me to blog very regularly over the coming months, may be even for a year or so. [I am being serious.]

1. About micro-level water resources engineering:

Recently, I blogged a lot about it. Now, I think I have more or less completed my preliminary studies, and pursuing anything further would take a definitely targeted and detailed research—something that only can be pursued once I have a master’s or PhD student to guide. Which will only happen once I have a job. Which will only happen in July (when the next academic term of the University of Mumbai begins).

There is only one idea that I might mention for now.

I have installed QGIS, and worked through the relevant exercises to familiarize myself with it. Ujaval Gandhi’s tutorials are absolutely great in this respect.

The idea I can blog about right away is this. As I mentioned earlier, DEM maps with 5 m resolution are impossible to find. I asked my father to see if he had any detailed map at sub-talukaa level. He gave me an old official map from GSI; it is on a 1:50000 scale, with contours at 20 m. Pretty detailed, but still, since we are looking for check-dams of heights up to 10 m, not so helpful. So, I thought of interpolating contours, and the best way to do it would be through some automatic algorithms. The map anyway has to be digitized first.

That means, scan it at a high enough resolution, and then perform a raster to vector conversion so that DEM heightfields could be viewed in QGIS.

The trouble is, the contour lines are too faint. That means, automatic image processing to extract the existing contours would be of limited help. So, I thought of an idea: why not lay a tracing paper on top, and trace out only the contours using black pen, and then, separately scan it? It was this idea that was already mentioned in an official Marathi document by the irrigation department.

Of course, they didn’t mean to go further and do the raster-to-vector conversion and all.  I would want to adapt/create algorithms that could simulate rainfall run-offs after high intensity sporadic rains, possibly leading also to flooding. I also wanted to build algorithms that would allow estimates of volumes of water in a check dam before and after evaporation and seepage. (Seepage calculations would be done, as a first step, after homogenizing the local geology; the local geology could enter the computations at a more advanced stage of the research.) A PhD student at IIT Bombay has done some work in this direction, and I wanted to independently probe these issues. I could always use raster algorithms, but since the size of the map would be huge, I thought that the vector format would be more efficient for some of these algorithms. Thus, I had to pursue the raster-to-vector conversion.
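As for the before-and-after volume estimates mentioned above, the very first cut can be a trivially simple daily water balance. A sketch follows; every rate below is a made-up placeholder rather than field data, and the real computation would, of course, tie the free-surface area to the volume via the DEM:

```python
# Daily water balance for a check-dam: loss = (evaporation + seepage) * area.
# All numbers are assumed placeholders, not field data.
V = 50_000.0       # stored volume, m^3 (assumed)
A = 20_000.0       # free-surface area, m^2 (assumed constant here, for simplicity)
evap_rate = 0.005  # evaporation, m of depth lost per day (assumed)
seep_rate = 0.002  # seepage, m of depth lost per day (assumed, homogenized geology)

for day in range(30):
    V = max(V - (evap_rate + seep_rate) * A, 0.0)

print(f"volume left after 30 days: {V:.0f} m^3")
```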

So I did some search in this respect, and found some papers and even open source software. For instance, Peter Selinger’s POTrace, and the further off-shoots from it.

I then realized that since the contour lines in the scanned image (whether original or traced) wouldn’t be just one-pixel wide, I would have to run some kind of a line thinning algorithm.

Suitable ready-made solutions are absent, and building one from scratch would be too time consuming—it can possibly be a good topic for a master’s project in the CS/Mech departments, in the computer graphics field. Here is one idea I saw implemented somewhere. To fix our imagination, launch MS Paint (or GIMP on Ubuntu), and manually draw a curve in a thick brush, or type a letter in a huge font like 48 points or so, and save the BMP file. Our objective is to make a single pixel-thick line drawing out of this thick diagram. The CS folks apparently call it the centerlining algorithm. The idea I saw implemented was something like this: (i) Do edge detection to get single pixel wide boundaries. The “filled” letter in the BMP file would now become “hollow;” it would have only the outlines that are single pixel wide. (ii) Do raster-to-vector conversion, say using POTrace, on this hollow letter. You would thus have a polygon representation for the letter. (iii) Run a meshing software (e.g. Jonathan Shewchuk’s Triangle, or something in the CGAL library) to fill the interior parts of this hollow polygon with a single layer of triangles. (iv) Find the centroids of all these triangles, and connect them together. This will get us the line running through the central portions of each arm of the letter diagram. Keep this line and delete the triangles. What you have now got is a single pixel-wide vector representation of what once was a thick letter—or a contour line in the scanned image.

Since this algorithm seemed too complicated, I wondered whether it might not be possible to simply apply a suitable diffusion algorithm to erode away the thickness of the line. For instance, think of the thick-walled letter as initially made uniformly cold, and then placed in uniformly heated surroundings. Since the heat enters from the boundaries, the outer portions become hotter than the interior. As the temperature goes on increasing, imagine the thick line beginning to melt. As soon as a pixel melts, check whether there is any solid pixel still left in its neighbourhood or not. If yes, remove the molten pixel from the thick line. In the end, you would get a raster representation one pixel thick. You can easily convert it to the vector representation. This is a simplified version of the algorithm I had implemented for my paper on the melting snowman, with that check for neighbouring solid pixels now being thrown in.
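For what it’s worth, the same one-pixel-wide skeleton that the melting analogy aims at can also be had from a standard thinning pass. Here is a sketch of the classic Zhang–Suen thinning algorithm (the textbook alternative, not my melting scheme) in plain Python/numpy:

```python
import numpy as np

def neighbours(img, y, x):
    """P2..P9: the 8 neighbours, clockwise, starting from the pixel above."""
    return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
            img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

def zhang_suen_thin(image):
    """Thin a binary image (1 = line pixel) down to single-pixel width."""
    img = image.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    p = neighbours(img, y, x)
                    b = sum(p)  # number of solid neighbours
                    # a = number of 0 -> 1 transitions going once around the ring
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue  # deleting here would break connectivity
                    if step == 0:
                        ok = p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0
                    else:
                        ok = p[0] * p[2] * p[6] == 0 and p[0] * p[4] * p[6] == 0
                    if ok:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
                changed = True
    return img

# Demo: a 3-pixel-thick horizontal stroke thins down to a 1-pixel line.
bar = np.zeros((7, 12), dtype=np.uint8)
bar[2:5, 2:10] = 1
thinned = zhang_suen_thin(bar)
print(f"{int(bar.sum())} solid pixels thinned down to {int(thinned.sum())}")
```

The connectivity test (the `a == 1` condition) is what keeps the final one-pixel line from being eaten away entirely, which is exactly the guard the naive melting version would also need.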

Pursuing either would be too much work for the time being; I could either offload it to a student for his project, or work on it at a later date.

Thus ended my present thinking line on the micro-level water-resources engineering.

2. Quantum mechanics:

You knew that I was fooling you when I had noted in my post dated the first of April this year, that:

“in the course of attempting to build a computer simulation, I have now come to notice a certain set of factors which indicate that there is a scope to formulate a rigorous theorem to the effect that it will always be logically impossible to remove all the mysteries of quantum mechanics.”

Guess people know me too well—none fell for it.

Well, though I haven’t quite built a simulation, I have been toying with certain ideas about simulating quantum phenomena using what seems to be a new fluid dynamical model. (I think I had mentioned using CFD to do QM on my blog here a little while ago.)

I pursued this idea, and found that it indeed should reproduce all the supposed weirdities of QM. But then I also found that this model looks a bit too contrived for my own liking. It’s just not simple enough. So, I have to think more about it, before committing any specific or concrete research activities to it.

That is another dead-end, as far as blogging is concerned.

However, in the meanwhile, if you must have something interesting related to QM, check out David Hestenes’ work. Pretty good, if you ask me.

OK. Physicists, go away.

3. Homeopathy:

I had ideas about computational modelling for the homeopathic effect. By homeopathy, I mean: the hypothesis that water is capable of storing an “imprint” or “memory” of a foreign substance via structuring of its dipole molecules.

I have blogged about this topic before. I had ideas of doing some molecular dynamics kind of modelling. However, I now realize that given the current computational power, any MD modelling would be for far too short time periods. I am not sure how useful that would be, if some good scheme (say a variational scheme) for coarse-graining or coupling coarse-grained simulation with the fine-grained MD simulation isn’t available.

Anyway, I didn’t have much time available to look into these aspects. And so, there goes another line of research; I don’t have much to do blogging about it.

4. CFD:

This is one more line of research/work for me. Indeed, as far as my professional (academic research) activities go, this one is probably the most important line.

Here, too, there isn’t much left to blog about, even if I have been pursuing some definite work about it.

I would like to model some rheological flows as they occur in ceramics processing, starting with ceramic injection moulding. A friend of mine at IIT Bombay has been working in this area, and I should have easy access to the available experimental data. The phenomenon, of course, is much too complex; I doubt whether an institute with relatively modest means like an IIT could possibly conduct experimentation at the required level of accuracy or sophistication. Accurate instrumentation means money. In India, money is always much more limited, as compared to, say, in the USA—the place where neither money nor dumbness is ever in short supply.

But the problem is very interesting to a computational engineer like me. Here goes a brief description, suitably simplified (but hopefully not too dumbed down (even if I do have American readers on this blog)).

Take a little bit of wax in a small pot, melt it, and mix some fine sand into it. The paste should have the consistency of a toothpaste (the limestone version, not the gel version). Just the way you pinch the toothpaste tube and the paste pops out—technically this is called an extrusion process—similarly, you have a cylinder and ram arrangement that holds this (molten wax+sand) paste and injects it into a mould cavity. The mould is metallic; aluminium alloys are often used in research because making a precision die in aluminium is less expensive. The hot molten wax+ceramic paste is pushed into the mould cavity under pressure, and fills it. Since the mould is cold, it takes out the heat from the paste, and so the paste solidifies. You then open the mould, take out the part, and sinter it. During sintering, the wax melts and evaporates, and then the sand (ceramic) gets bound together by various sintering mechanisms. Materials engineers focus on the entire process from a processing viewpoint. As a computational engineer, my focus is only up to the point that the paste solidifies. So many interesting things happen up to that point that it already makes my plate too full. Here is an indication.

The paste is a rheological material. Its flow is non-Newtonian. (There sinks in his chair your friendly computational fluid dynamicist—his typical software cannot handle non-Newtonian fluids.) If you want to know, this wax+sand paste shows a shear-thinning behaviour (which is in contrast to the shear-thickening behaviour shown by, say, a cornstarch-and-water paste).
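For the record, the standard minimal description of such behaviour is the power-law (Ostwald–de Waele) model, in which the effective viscosity goes as the shear rate raised to (n − 1), with n < 1 for shear-thinning. A sketch, with assumed (not measured) constants:

```python
# Power-law (Ostwald-de Waele) fluid: mu_eff = K * gamma_dot**(n - 1).
# K (consistency index) and n (flow index) below are assumed, not measured.
K = 100.0  # Pa*s^n (assumed)
n = 0.4    # n < 1 means shear-thinning; n > 1 would mean shear-thickening

def mu_eff(gamma_dot):
    """Effective viscosity (Pa*s) at shear rate gamma_dot (1/s)."""
    return K * gamma_dot ** (n - 1.0)

for g in (0.1, 1.0, 10.0, 100.0):
    print(f"shear rate {g:6.1f} 1/s -> mu_eff = {mu_eff(g):8.2f} Pa*s")
```

With n < 1 the viscosity falls off as the shear rate rises, which is what makes the paste flow more readily exactly where the die forces it to shear fast.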

Further, the flow of the paste involves moving boundaries, with pronounced surface effects, as well as coalescence or merging of boundaries when streams progressing on different arms of the cavity eventually come together during the filling process. (Imagine the simplest mould cavity in the shape of an O-ring. The paste is introduced from one side, say from the dash placed on the left hand side of the cavity, as shown here: “-O”. First, after entering the cavity, the paste has to diverge into the upper and lower arms, and as the cavity filling progresses, the two arms then come together on the rightmost parts of the “O” cavity.)

Modelling moving boundaries is a challenge. No textbook on CFD would even hint at how to handle it right, because all of them are based on rocket science (i.e. the aerodynamics research that NASA and others did from the fifties onwards). It’s a curious fact that aeroplanes always fly in air. They never fly at the boundary of air and vacuum. So, an aeronautical engineer never has to worry about a moving fluid boundary problem. Naval engineers have a completely different approach; they have to model a fluid flow that is only near a surface—they can afford to ignore what happens to the fluid that lies any deeper than a few characteristic lengths of their ships. Handling both moving boundaries and interiors of fluids at the same time with sufficient accuracy, therefore, is a pretty good challenge. Ask any people doing CFD research in casting simulation.

But simulation of the flow of molten iron in gravity sand-casting is, relatively, a less complex problem. Do dimensional analysis and verify that molten iron has the same fluid dynamical characteristics as plain water. In other words, you can always look at how water flows inside a cavity, and the flow pattern would remain essentially the same also for molten iron, even though the metal is so heavy. The implication: surface tension effects are OK to handle for the flow of molten iron. Also, pressures are negligibly small in gravity casting.
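The similarity claim is easy to check with kinematic viscosities. The property values below are rough handbook figures quoted from memory, so treat them as approximate: water at room temperature has a kinematic viscosity of about 1e-6 m^2/s, while molten iron, with a dynamic viscosity of a few mPa·s and a density around 7000 kg/m^3, lands in the same ballpark:

```python
# Reynolds number Re = V * L / nu for the same cavity (same V and L).
# Property values are approximate handbook figures, quoted from memory.
V, L = 0.5, 0.05                    # characteristic speed (m/s) and length (m), assumed

nu_water = 1.0e-6                   # kinematic viscosity of water at ~20 deg C, m^2/s
mu_iron, rho_iron = 6.0e-3, 7000.0  # molten iron: dynamic viscosity (Pa*s), density (kg/m^3)
nu_iron = mu_iron / rho_iron        # kinematic viscosity, roughly 0.9e-6 m^2/s

Re_water = V * L / nu_water
Re_iron = V * L / nu_iron
print(f"Re(water) = {Re_water:.0f}, Re(molten iron) = {Re_iron:.0f}")
```

Since the Reynolds numbers come out within a factor of roughly two of each other for the same cavity and speed, the water-table analogy for molten iron holds up.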

But with the rheological paste being so thick, and flowing under pressure, handling the surface tension effects right should be an even bigger challenge. Especially at those points where multiple streams join together, under pressure.

Then, there is also heat transfer. You can’t get away solving only the momentum equations; you have to couple in the energy equations too. And, the heat transfer obviously isn’t steady-state; it’s necessarily transient—the whole process of cavity filling and paste solidification gets over within a few seconds, sometimes within even a fraction of a second.

And then, there is this phase change from the liquid state to the solid state too. Yet another complication for the computational engineer.

Why should he address the problem in the first place?

If the die design isn’t right, the two arms of the fluid paste lose heat and become sluggish, even part-solidify at the boundary, before joining together. The whole idea behind doing computational modelling is to help the die designer improve his design, by allowing him to try out many different die designs and their variations on a computer, before throwing money into making an actual die. Trying out die designs on a computer takes time and money too, but the expense would be relatively much, much smaller as compared to actually making a die and trying it. Precision machining is too expensive, and taking a manufacturing trial takes too much time—it ties up an entire engineering team and a production machine in just trials.

So, the idea is that the computational engineer could help by telling in advance whether, given a die design and process parameters, defects like cold-joins are likely to occur.

The trouble is, the computational modelling techniques happen to be at their weakest exactly at those spots where important defects like cold-joins are most likely. These are the places where all the armies of the devil come together: non-Newtonian fluid with temperature dependent properties, moving and coalescing boundaries, transient heat transfer, phase change, variable surface tension and wall friction, pressure and rapidity (transience would be too mild a word) of the overall process.

So, that’s what the problem to model itself looks like.

Obviously, ready-made software isn’t yet sophisticated enough. The best available packages are those that make some ad-hoc tweaks to existing software for plastic injection moulding. But the material and process parameters differ, and it shows in the results. And so, validation of these tweaks is still an on-going activity in the research community.

Obviously, more research is needed! [I told you the reason: Economics!]

Given the granular nature of the material, and the rapidity of the process, some people thought that SPH (smoothed particle hydrodynamics) should be suitable. They have tried, but I don’t know the extent of the sophistication thus far.

Some people have also tried finite-difference-based approaches, with some success. But FDM has its limitations—fluxes aren’t conserved, and in a complex process like this, it would be next to impossible to tell whether a predicted result is a feature of the physical process or an artefact of the numerical modelling.

FVM should do better because it conserves fluxes better. But the existing FVM software is too complex to try out the required material and process specific variations. Try introducing just one change to a material model in OpenFOAM, and simulating the entire filling process with it. Forget it. First, try just mould filling with coupled heat transfer. Forget it. First, try just mould filling with OpenFOAM. Forget it. First, try just debug-stepping through a steady-state simulation. Forget it. First, try just compiling it from the sources, successfully.

I did!

Hence, the natural thing to do is to first write some simple FVM code, initially only in 2D, and then go on adding the process-specific complications to it.
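To make the flux-conservation point concrete, here is a minimal sketch (entirely my own illustrative code, not the code under development) of an explicit 2D finite-volume step for transient diffusion. Each face flux is computed once and applied with opposite signs to the two cells sharing that face, so whatever leaves one cell enters the other:

```python
import numpy as np

def fvm_diffusion_step(T, alpha, dx, dy, dt):
    """Advance the cell-centred field T by one explicit finite-volume step.

    Each interior face flux is applied with opposite signs to the two
    cells sharing that face, so fluxes are conserved by construction.
    """
    # diffusive fluxes across the interior faces (two-point differences)
    flux_x = -alpha * (T[1:, :] - T[:-1, :]) / dx   # faces normal to x
    flux_y = -alpha * (T[:, 1:] - T[:, :-1]) / dy   # faces normal to y

    dT = np.zeros_like(T)
    dT[:-1, :] -= flux_x / dx   # flux leaves the "left" cell...
    dT[1:, :]  += flux_x / dx   # ...and enters the "right" cell
    dT[:, :-1] -= flux_y / dy
    dT[:, 1:]  += flux_y / dy
    return T + dt * dT          # boundary faces untouched: insulated walls

# tiny demo: a hot spot diffusing in an insulated 21 x 21 plate
T = np.zeros((21, 21))
T[10, 10] = 100.0
total0 = T.sum()
for _ in range(200):
    T = fvm_diffusion_step(T, alpha=1.0, dx=1.0, dy=1.0, dt=0.2)
print("conservation error:", abs(T.sum() - total0))
```

The same cell-and-face bookkeeping is what would later carry over to the momentum and energy equations; the conservation check at the end is exactly the property that, as noted above, FDM does not give you for free.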

Now this is something I have got going, but by its nature, it also is something you can’t blog a lot about. It will be at least a few months before even a preliminary version 0.1 of the code becomes available, at which point some blogging could be done about it—and, hopefully, also some bragging.

Thus, in the meanwhile, that line of thought, too, comes to an end, as far as blogging is concerned.

Thus, I don’t (and won’t) have much to blog about, even if I remain (and plan to remain) busy (to very busy).

So allow me to blog only sparsely in the coming weeks and months. Guess I could bring in the comments I made at other blogs once in a while to keep this blog somehow going, but that’s about it.

In short, nothing new. And so, it all is (and is going to be) crap.

More of it, later—much later, maybe a few weeks later or so. I will blog, but much more infrequently; that’s the takeaway point.

* * * * *   * * * * *   * * * * *

(Marathi) “madhu maagashee maajhyaa sakhyaa pari…”
Lyrics: B. R. Tambe
Singer: Lata Mangeshkar
Music: Vasant Prabhu

[I just finished writing the first cut; an editing pass or two is still due.]

[E&OE]

# Hail Python well MATE

Hail Python! (Yes, I have converted):

As you know, I’ve been using Ubuntu LTS (currently 14.04.01). These days it has even become the OS of my primary usage. I like the current Ubuntu as much as I had liked NT4 and Win2K when they were in vogue.

I began moving away from Windows mainly because the one IDE that I really loved, viz. VC++ 6, got superseded by a series of software that I found increasingly unusable: increasingly too complex, and increasingly too sluggish.

There then followed a period of a kind of quantum mechanical-ness: I found neither Windows + Visual Studio suitable, nor any of the alternatives (e.g. RedHat + vi/emacs/whatever + configure + make + GDB + …) attractive enough.

And, for some reasons unknown to me, I’ve never found Java to be “my” platform.

As to Java, no, neither the personality of Scott McNealy nor the institution of the DoJ in USA had anything to do with it. In many ways, Java is a very good language, but somehow I never found the non-C++ features of it attractive even an iota: be it the lack of genericity, or the idea of downloading the bytecodes and the flakiness of the VMs supporting it (owing mainly to the complexity of the language), or the excessive reliance on the ‘net and the big SUN servers that it implied, or, as may be shocking to you, even the famous Java reflection! (Given the other design features of Java, and the way it was handled, I thought that it was a feature for the clueless programmer—which Java programmers anyway tend to be (even if IMO the Visual Basic programmers very strongly compete with them in that department).)

And thus, I was in a state of QM-like superposition, for a long, long, time.

Partial measurements began to occur when, sometime as late as in the ‘teens, Eclipse IMO improved to the point that it wouldn’t leave a previous instance of the debug image sneakily lurking in the OS even after you thought you had terminated it. Sometime in between, Eclipse also had begun honouring the classic keyboard short-cuts of VC++: F11 would really step into a C++ function; F10 would really execute a function call without stepping into it; etc. Good!

Still, by this time, C++ “libraries” had already become huge, very huge. A code-base of a half-million LOC was routine; some exceeded millions of LOC. Further, each library assumed or required a different make system, a different version of the GCC etc., and perhaps much more importantly, none was equally well-supported on both Linux and Windows; for examples on one or more of these counts, cf. OpenFOAM, PetSc, Boost, CGAL, ParaView, QT, TAUCS, SuperLU, et al., just to name a few. The C++ template classes had become so huge that for writing even very simple applications, say a simple dialog box based GUI doing something meaningful on the numerical side, you began drowning in countless template parameters if not also namespaces. And, of course, when it came to computational engineering, there was this permanent headache of having to write wrappers even for as simple a task as translating a vector or a matrix from one library to another, just to give you an example.

I thus had actually become a low-productivity C++ programmer, despite the over-abundance of the in-principle “reusable” components and packages. … I pined for the good old days of VC++ 6 on Win2K. (Last year or so, I even tried installing VC++ 6 on Win7, and found that, forget having “support,” the IDE wouldn’t even install, i.e. work, on Win7.)

In the meanwhile, I had noticed Python, but as they say in India (for instance, the folks painting the backsides of auto-rickshaws and trucks), (Hindi) “samay se pahele aur bhaagya se jyaadaa/adhik…” …

… Even if you allow an innovative new idea to take root in your mind, it still has to ripen a bit before it begins to make sense.

Then, this week (as you might have guessed), I happened to have an epiphany of sorts.

And, yes, as a result, I have converted!

I have become Pythonic. Well, at least Pythetic! … Pythonewbie! (Fine. This should work.)

As you know, I was going to convert my check-dams C++ code into Python. As I said in the last update of my last post, I initially began learning the language the hard way [^]. However, being a fairly accomplished C++ programmer—apart from a professional working experience of about a decade, I had also written, e.g., a yacc-like front-end taking EBNF grammar for LALR1 languages and spitting out parser-tables, just as hobby pursued on evenings—I soon got tired of learning Python the hard way, and instead opted for a softer way. I did a lot of search, downloaded `n’ number of tutorials, codes, etc., went through the Google education [^], and quickly became, as I said, a Pythonewbie.

Let me first jot down what all the components of the Python ecosystem I have downloaded and have begun already trying: wxPython, PyQT, PyOpenGL, Pyglet, VPython, matplotlib, and of course, NumPy and SciPy. I also drool at the possibilities of using the Python bindings for OpenFOAM, VTK/ParaView, CGAL, FENICS, and many, many, many others.

Why did I convert only now? Why not earlier?

Apart from (Hindi) “samay se pahele aur bhaagya se jyaadaa/adhik,” here are some more mundane reasons, listed in no particular order:

1. I was led to believe (or at least thought) that Python was a scripting language, that it is a good alternative to, say, the shell scripts.

False.

Python is—at least at this stage of development of the language and of the entire eco-system—not a language per se, but rather an ingenious tool to glue massive but fast C/C++/FORTRAN libraries together.

2. I was also led away from Python because it lacked the cozy, secure, protective, nurturing, etc. environment of the C/C++ “{}” blocks.

I had come to like and rely on this K&R innovation so much, that a lack of the braces was one very definite (but eventually very minor) reason that Visual BASIC (and OO FORTRAN) had turned me away. As to Python, I felt plain insecure. I had a very definite fear of program crashes any time I made a simple mistake concerning a mere indentation. … Well, that way, the C++ code I myself wrote never had the highly irritating sloppiness of unevenly spaced indentations. But I anyway did carry this fear. (For the same reason, I found the design of the OpenFOAM input files far more comfortable than it actually is: you see, they use a lot of braces!)

But now, I came to realize that while the fear of going block-less isn’t without reason, practically speaking, it also is largely unfounded. … Even if in Python you don’t have the protection of the C/C++ blocks, in practice, you still can’t possibly make too many mistakes about scopes and object lifetimes for the simple reason that in Python, functions themselves become so very short! (I said scope and lifetime, not visibility. Visibility does remain a different game, with its own subtleties, in Python.)

Another virtue of Python:

Another thing. Python is small. Translation: Its interpreters are sturdy. This is one count on which Java, IMO, truly floundered, and as far as I gather from others, it still does.

(No, don’t try to convince me otherwise. I also see Java’s bugs on my own. …. Just fix, for instance, that bug in the Java VM which leads to this Eclipse CDT bug. Suppose you are working on a C++ project, and so, there are a bunch of C++ files open in various tabs of the Eclipse editor. Suppose that you then open a text file, e.g. the OutputData.txt file, in an additional Eclipse tab. Then, you go to some C++ file, edit it, build it, and debug/run the program so that the OutputData.txt file on the disk changes. You switch the tab to the text file. Naturally, the tab contents need to be refreshed. Eclipse, being written in Java, is stupid enough not to refresh it automatically; it instead tells you to go to the menu File/Refresh this page. That’s what happens if the text file tab isn’t the active one. [Eclipse is in version 3.8+. [Yes, it is written in Java.]] Now, if you repeatedly do this thing (even just a couple of times or so), the menu item File/Refresh comes to be painted as if it were disabled. As far as I can make it out, it seems to be a Java VM bug, not an Eclipse bug; it also occurs in some other Java programs (though I can’t tell you off-hand which ones). In any case, go ahead San Francisco Bay-Area i.e. Java-Loving Programmer(s), fix Your Platform first, and then think of trying to convert me.)

Python becomes reliably sturdy precisely because it recognizes that it can’t and hence shouldn’t try to be good at too many things.

Python doesn’t pretend to give you any GUI or graphics capability—not even to the extent to which TurboC/C++ did. On the other hand, Java/C# tried to be masters of everything: GUI, network, graphics…. You know where they end(ed) up. And where C and even FORTRAN continue chugging along even today. (In case you didn’t know: both remain within top 5 languages, after some 50+ and 40+ years, respectively.)

The question to pose (but usually not posed) to a recent convert:

Since I am a recent convert, you may now be tempted to probe me: “Any downside to Python?”

My answer: You are right, there are downsides. It’s just that they don’t matter to me, because they aren’t relevant to me—either my personal background or my present purposes.

Python might make learning “programming” incredibly easy and fast. But if you are going to be a serious, professional programmer, there are so many principles and rules that you must additionally learn, and far too many of these are simply not accessible while working within the Python environment. Just to give you some examples: Type safety. Object life-time management, including programmer-controlled dynamic (de)allocation. Static vs. dynamic binding. Pointer arithmetic. (As to pointers: Both God and the actual hardware use them. (That’s another point against Java, and in favour of C/C++)). Stack frames. Finite memory. Etc. Yet, at the same time, Python also merrily exposes system calls. Think of a script-kiddie unlinking a file or a folder!

Yes, Python is better (much, much better) than BASIC for teaching some of the ideas of computers and programming to kids (e.g. to CBSE Std. XI kids in India). But taken as an independent language, and for the routine programming use, it would have to rank much below average.

… But then, Python shouldn’t at all be seen that way in the first place.

As I said, Python should be seen not as a language but simply as a powerful and sturdy tool for the integration of an incredible variety of super-fast C++ (even FORTRAN) libraries, in the simplest possible manner, and with the cleanest—i.e., the most readable kind of—source code.
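That glue role can be seen in a couple of lines. Nothing project-specific here; this is just standard NumPy, which hands the actual number-crunching to compiled LAPACK code underneath:

```python
import numpy as np

# Solving a small linear system from Python. The Python here is glue;
# the numerical heavy lifting is done by compiled LAPACK routines.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)   # x is [2., 3.]: 3*2 + 3 = 9, 2 + 2*3 = 8
print(x)
```

A page of hand-rolled Gaussian elimination in C would buy you nothing here except bugs; the readable two-line Python call is also the fast one.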

Ok. Enough about Python. … I don’t say you should become a Python programmer. I merely told you the reasons why I converted.

* * * * *   * * * * *   * * * * *

Now, a word about the MATE part of the title.

I have installed the UbuntuMATE 1.8 shell on my Ubuntu 14.04.01 LTS, replacing the Gnome Unity shell.

Good riddance!

And, I have also taken this opportunity to disable the overlaying of the scroll-bars [^] (even though doing so would have been possible also on Unity, I learned only now (“samay se pahele…”)).

I feel light. Yes. Light. … Lighter. … Lightest.

The theme I run is Redmond. This way, I get the same Windows experience which I liked and still do. [No, thanks, but I won’t like to try the tiles environment even for free, let alone out of piracy.]

MATE is neat. It should be included in the official Ubuntu distro as early as possible. (I gather that they do have a definite plan to do so.)

And, in the interests of conserving mankind’s shared resources including disk-spaces, energy usage, and most importantly, mental sanity, first, all the distributions of all versions of the Unity shell should be permanently deleted from all the servers of the world, except for an archival copy or two for reference in the studies of pathological cases of the practice of computer science. Then, this archival copy should be given a permanent resting place next to that of the Microsoft Bob.

[Oh, yes, BTW, I know, the title is slightly stupid… But what else can strike you if you write something—anything—about Unity and Microsoft Bob also in the same post? Also, Java?]

* * * * *   * * * * *   * * * * *

OK. Back to trying out the different C++ libraries, now far more easily, from within the Python environment. … I am going to have to try many libraries first (and also explore Python itself further) before I come true on the promise about the next version of that check-dams toy program.

* * * * *   * * * * *   * * * * *

A Song I Like:
(Hindi) “ik raastaa hai zindagi…”
Singers: Kishore Kumar, Lata Mangeshkar
Music: Rajesh Roshan
Lyrics: Sahir Ludhianvi

[PS: This song is from 1979—the year I was in my XII. It is, thus, some 36 years old.

Going further back by 36 years, to 1943, guess we are somewhere firmly in the “unknown” territory of Hindi film songs. Sehgal’s “baabool moraa” is from even slightly earlier times, from 1938, and it sounds so quaint. Talking of 1943 proper, even the more modern sounding Anil Biswas had this “dheere dheere aa re baadal” in the movie Kismet—do check it out, how quaint it sounds to the ears of even my generation. And I am not even talking of the other “gems” of that era!

… When I was growing up, the elders would always talk about how beautiful the old songs were—and how trashy, the new ones. But come to think of it, there was a much smaller period of just 23 years between say Chori Chori (1956) and 1979, than there is between 1979 and 2015.

Yet, this song of 1979 sounds so completely modern, at least as far as its audio goes. BTW, in case you don’t know, in this section of my blog, I primarily refer only to the audio tracks of songs. Making an exception for this song, and thus referring also to its video, except for a little oddity about the kind of clothes Shashi Kapoor is seen wearing, there is almost nothing that is 35 years old to that song. … May be because of the rural/jungle settings in which it was shot. Even Shashi Kapoor’s bike looks almost every bit modern. I don’t know which one it actually was, but going by the shape of the tank, of the headlamp, and of the front side indicators, not to mention the overall way in which it takes to the road and handles the curves, I would bet that it was a Yamaha bike. (It can’t possibly be a Rajdoot 350—which also was about the same size—because the Rajdoot 350 was launched in India, I just checked, only 4 years later, in 1983—the year I graduated from COEP.)

Conclusion? There is some sort of an underlying cultural reason why people in their 50s (like me) no longer sound or even look as old as people in their 50s used to, when we were young. … Even if we now are that old. Well, at least, “mature.” [No Hollywood actresses were harmed in the writing of this post. [Hollywood is in California, USA.]]]

[E&OE]


# Getting dusty…

I have been getting dusty for some time now.

… No, by “dusty” I don’t mean that dust of the “Heat and Dust” kind, even though it’s been quite the regular kind of an “unusually hot” summer this year, too.

[In case you don’t know, “Heat and Dust” was a neat movie that I vaguely recall having liked when it came on the scene some 2–3 decades ago. Guess I was an undergrad student at COEP, back then. (Google-devataa now informs me that the movie was released in 1983, the same year that I graduated from COEP.)]

Anyway, about the title of this post: By getting dusty, I mean that I have been trying to get some definite and very concrete beginning, on the software development side, on modelling things using “dust.” That is, particles. I mean to say, methods like molecular dynamics (MD), smoothed particle hydrodynamics (SPH), the lattice Boltzmann method (LBM), etc. … I kept on postponing writing a blog post here in the anticipation that I would succeed in tossing out a neat toy code for a post.

… However, I soon came face-to-face with the sobering truth that since becoming a professor, my programming skills have taken a (real) sharp dip.

Can you believe that I had trouble simply getting wxWidgets to work on Ubuntu/Win64? Or to get OpenGL to work on Ubuntu? It took almost two weeks for me to do that! (And, I haven’t yet got OpenGL to work with wxWidgets on Ubuntu!) … So, finally, I (once again) gave up the idea of doing some neat platform-neutral C++ work, and, instead (once again) went back to VC++. Then there was a problem awaiting me regarding VC++ too.

Actually, I had written a blog post against the modern VC++ at iMechanica and all (link to be inserted), but that was quite some time back (maybe a couple of years or so). In the meanwhile, I had forgotten how bad VC++ has really become over the years, and I had to rediscover that discouraging fact once again!

So, I then tried installing VC++ 6 on Win7 (64-bit)—and, yet another ugly surprise. VC++ 6 won’t run on Win7. It’s possible to do it using some roundabout ways, but it all is now a deprecated technology.

Finally, I resigned myself to using VC++ 10 on Win7. Three weeks of precious vacation time already down!

That‘s what I meant when I said how bad a programmer I have turned out to be, these days.

Anyway, that’s when I finally could begin writing some real though decidedly toy code for some simple MD, just so that I could play around with it a bit.

Though writing MD code seems such a simple, straightforward game (what’s the Verlet algorithm if not plain and simple FDM?), I soon realized that there are some surprises in it, too.
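For reference, the Verlet idea mentioned above, in its velocity-Verlet form, really is just a few lines. This is a generic sketch, not the toy code discussed in this post:

```python
import numpy as np

def velocity_verlet_step(pos, vel, acc_of, dt):
    """One velocity-Verlet step: a second-order, time-reversible update
    needing only one new force evaluation per step."""
    acc = acc_of(pos)
    pos_new = pos + vel * dt + 0.5 * acc * dt * dt
    acc_new = acc_of(pos_new)
    vel_new = vel + 0.5 * (acc + acc_new) * dt
    return pos_new, vel_new

# sanity check on a 1D harmonic oscillator (a(x) = -x), where the exact
# trajectory is x(t) = cos(t) for x(0) = 1, v(0) = 0
x, v = np.array([1.0]), np.array([0.0])
dt = 0.01
for _ in range(628):                # integrate to t ~ 2*pi, one period
    x, v = velocity_verlet_step(x, v, lambda p: -p, dt)
# the energy 0.5*v^2 + 0.5*x^2 stays close to its initial value of 0.5
```

In effect it is central differencing in time, which is why the “plain and simple FDM” quip above fits; the pleasant surprise is that, unlike naive FDM, it also keeps the energy from drifting over long runs.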

All of the toy MD code to be found on the Internet (and some of the serious code, too) assumes only a neat rectangular or brick-like domain. If you try to accommodate an arbitrarily shaped domain (even if only with straight lines for boundaries), you immediately run into run-time efficiency issues. The matter gets worse if you try to accommodate holes in the domain—e.g., a square hole in a square domain like what my Gravatar icon shows. (It was my initial intention to quickly do an MD simulation of this flow through a square cavity having a square hole.)

Next, once you are able to handle arbitrarily shaped domains with arbitrarily shaped holes, small bugs begin to show up. Sometimes, despite the no-normal-flux boundary condition, my particles were able to slip past the domain boundaries, esp. near the points where two bounding edges meet. However, since the occurrence was rare, and hard to debug for (what do you do if it happens only in the 67,238th iteration? hard-code a break after 67,237, recompile, run, and go off for a coffee?), I decided to downscale the complexity of the basic algorithm.

So, instead of using the Lennard-Jones potential (or some other potential), I decided to switch off the potential field completely, and have some simple, perfectly hard, elastically colliding disks. (I did implement separate shear and normal frictions at the time of collisions, though. (Of course, the fact that frictions don’t let the particles be attracted, ever, is a different story; but, remember, we are trying to be simple here.)) The particles were still managing to escape the domain at rare times. But at least, modelling with the hard elastic disks allowed me to locate the bugs better. It turned out to be a very, very stupid sort of a matter: my algorithm had to account for a finite-sized particle interacting with the boundary edges.
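For the frictionless core of the hard-disk model, the collision rule is short enough to show in full. This is a generic, equal-mass sketch (the separate shear and normal frictions mentioned above would modify the tangential part, and are omitted here):

```python
import numpy as np

def collide_disks(p1, p2, v1, v2):
    """Resolve a perfectly elastic collision between two equal-mass disks.

    Only the velocity components along the line of centres (the normal
    direction) are exchanged; the tangential components are untouched,
    i.e. this is the frictionless case.
    """
    n = p2 - p1
    n = n / np.linalg.norm(n)            # unit normal, centre-to-centre
    v1n, v2n = np.dot(v1, n), np.dot(v2, n)
    # for equal masses, the normal components simply swap
    v1_new = v1 + (v2n - v1n) * n
    v2_new = v2 + (v1n - v2n) * n
    return v1_new, v2_new

# head-on test: the moving disk stops dead, the resting disk moves off
v1, v2 = collide_disks(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                       np.array([1.0, 0.0]), np.array([0.0, 0.0]))
```

Momentum and kinetic energy are both conserved exactly by this exchange, which makes for a handy runtime assertion while hunting down the escaping-particle kind of bug.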

But, yes, I could then get something like more than 1000 particles happily go colliding with each other for more than 10^8 collisions. (No, I haven’t parallelized the code. I merely let it run on my laptop while I was attending a longish academics-related meeting.)

Another thing: Some of the code on the ‘net, I found, simply won’t work for even a modestly longer simulation run.  For instance, Amit Kumar’s code here [^]. (Sorry Amit, I should have written you first. Will drop you a line as soon as this over-over-overdue post is out the door.) The trouble with such codes is with the time-step, I guess. … I don’t know for sure, yet; I am just loud-thinking about the reason. And, I know for a fact that if you use Amit’s parameter values, a gas explosion is going to occur rather soon, may be right after something like 10^5 collisions or so. Java runs too slowly and so Amit couldn’t have noticed it, but that’s what happens with those parameter values in my C++ code.

I haven’t yet fixed all my bugs, and in fact, haven’t yet implemented the Lennard-Jones model for the arbitrarily shaped domains (with (multiple) holes). I thought I should first factor out the common code well, and then proceed. … And that’s when other matters of research took over.

Well, why did I get into MD, in the first place? Why didn’t I do something useful starting off with the LBM?

Well, the thing is like this. I know from my own experience that this idea of a stationary control volume and a moving control volume is difficult for students to get. I thought it would be easy to implement an MD fluid, and then, once I build in the feature of “selecting” (or highlighting) a small group of molecules close to each other, with these molecules together forming a “continuous” fluid parcel, I could then directly show my students how this parcel evolves with the fluid motion—how the location, shape, momentum and density of the parcel undergoes changes. They could visually see the difference between the Eulerian and Lagrangian descriptions. That, really speaking, was my motivation.
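The parcel-highlighting idea can be sketched independently of MD proper: tag the particles inside a small disk, advect everything through a velocity field, and watch the tagged set move and deform while the Eulerian probe point stays put. (All the numbers below are my own illustrative choices; a prescribed uniform-shear field stands in for the MD fluid.)

```python
import numpy as np

rng = np.random.default_rng(0)

# a cloud of "molecules" filling the unit square
pos = rng.uniform(0.0, 1.0, size=(500, 2))

def velocity(p):
    """Prescribed (Eulerian) velocity field: simple shear, u = (y, 0)."""
    return np.column_stack([p[:, 1], np.zeros(len(p))])

# "select" a parcel: all molecules within 0.1 of the point (0.5, 0.5)
parcel = np.linalg.norm(pos - [0.5, 0.5], axis=1) < 0.1

dt = 0.01
for _ in range(100):              # advect everything for t = 1
    pos = pos + velocity(pos) * dt

# the Eulerian probe point (0.5, 0.5) stays put; the Lagrangian parcel's
# centroid has been carried downstream (and the parcel has sheared)
centroid = pos[parcel].mean(axis=0)
print("parcel centroid after t = 1:", centroid)
```

Substitute the actual MD fluid for the prescribed field, and the same tagging gives the Lagrangian parcel whose location, shape, momentum and density the students can watch evolve, against the fixed Eulerian control volume.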

But then, as I told you, I discovered that I have regressed low enough to have become a very bad programmer by now.

Anyway, playing around this way also gave me some new ideas. If you have been following this blog for some time, you would know that I have been writing in favor of homeopathy. While implementing my code, I thought that it might be a good idea to implement not just LJ, but also the dipole nature of water molecules, and see how the virtual water behaves: does it show the hypothesized character of persistence of structure or not. (Yes, you read it here first.) But, what the hell, I have far too many other things lined up for me to pursue this thread right away. But, sure, it’s very interesting to me, and so, I will do something in that direction some time in future.

Once I get my toy MD code together (for both the hard-disk and the LJ/other-potential models, with some refactorability/extensibility thrown in), then I guess I would move towards a toy SPH code. … Or at least that’s what I guess would be the natural progression in all this toys-building activity. This way, I could reuse many of my existing MD classes. And so, LBM should logically follow after SPH—what do you think? (And, yes, I will have to take the whole thing from the current 2D to 3D, sometime in future, too.)

And, all of that is, of course, assuming that I manage to become a better programmer.

Before closing, just one more note: This vacation, I tried Python too, but found that (i) to accomplish the same given functional specs, my speed of development using C++ is almost the same as (if not better than) my speed with Python, and (ii) I keep missing the nice old C/C++ braces for the local scope. Call it the quirky way in which an old programmer’s dusty mind works, but at least for the time being, I am back to C++ from that brief detour to Python.

Ok, more, later.

* * * * *   * * * * *   * * * * *

A Song I Like:
(Marathi) “waaT sampataa sampenaa, kuNi waaTet bhetenaa…”
Music: Datta Dawajekar
Lyrics: Devakinandan Saaraswat
Singer: Jayawant Kulkarni

[PS: A revision is perhaps due for streamlining the writing. Maybe I will come back and do that.]

[E&OE]


# What would you choose as the Top 5 Equations? Top 10?

I was going to write my thoughts on the issue of “Universe: Finite or Infinite?” this weekend. Though the weekend has technically already begun, I have been busy at my day-job. Further, though I have been hurriedly jotting down my points in a small pocket notebook (a habit that has come to serve me well), I still have not found the time to put them in a computer file (usually, just a .txt file) and order them. I hope to be doing that in the upcoming week. However, in the meanwhile, the title question struck me, and so, I decided to “ship out” this blog post. Read, and do let me know your thoughts…

* * * * *   * * * * *   * * * * *

Equations are of central importance in all of science and engineering, but especially so in physics and mechanics.

Even leaving aside algebraic equations, handbooks on PDEs alone list hundreds of equations. However, a few of these do stand out, either because they encapsulate some fundamental aspect of physics/science/engg., or because they serve as simpler prototypes for more complex situations, or simply because they are so complex as to be fascinating by themselves. There might be other considerations too… But the fact is, some equations really do stand out as compared to others.

If so, what equations would you single out as being the most important or interesting? To make matters more interesting, first, please think of making a short list of only 5 equations. Then, if necessary, make it one of 10 equations—but no more, please! 🙂

As to me, here is my list, put together in a completely off-hand manner:

Top five:
(1-3) The linear wave-, diffusion- and potential-equations.
(4) The Schrodinger equation
(5) The Navier-Stokes equation

(6) The Maxwell Equations
(7) The equation defining the Fourier transform
(8) Newton’s second law (dp/dt = F)
(9) The Lame equation (of elasticity)

Am I already nearing the limit or what… Hmm… But, nope, I am not sure whether I want to include E = mc^2. … I will give this entire matter a second thought some time later on.
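For the record, here are the first five on the list, written out in their commonest textbook forms (conventions for signs and constants vary from book to book):

```latex
% (1-3) the linear wave, diffusion, and potential (Laplace) equations
\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u, \qquad
\frac{\partial u}{\partial t} = \alpha \nabla^2 u, \qquad
\nabla^2 u = 0

% (4) the Schrodinger equation
i\hbar \frac{\partial \Psi}{\partial t}
  = -\frac{\hbar^2}{2m} \nabla^2 \Psi + V \Psi

% (5) the incompressible Navier-Stokes (momentum) equation
\rho \left( \frac{\partial \mathbf{v}}{\partial t}
  + \mathbf{v} \cdot \nabla \mathbf{v} \right)
  = -\nabla p + \mu \nabla^2 \mathbf{v} + \mathbf{f}
```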

But, how about you? What would be your choices for the top 5/10 equations? Why?

(Also posted today in the Computational Scientists & Engineers group at LinkedIn, and at iMechanica [^]).

* * * * *   * * * * *   * * * * *

A Song I Like:
(Hindi) “wahaan kaun he teraa…”
Singer and Music: S. D. Burman
Lyrics: Shailendra

[PS: Song was “TBD” while publishing this post yesterday. Added the song on May 1, 2011. … Still, I am not sure if I haven’t mentioned this song before. I will check later on, and if so, will replace it with some other song.]

[E&OE]


# How to supply a visualization for the displacement gradient tensor

[This post was initially posted at iMechanica. After posting it, I realized there was some inconsistency with it, and I noted so at the thread at iMechanica. What I am posting below is a slightly modified version.]

It all began with a paper that I proposed for an upcoming conference in India. The extended abstract got accepted, of course, but my work is still in progress, and today I am quite sure that I cannot meet the deadline. So, I am going to withdraw it, and then submit a longer version of it to a journal, later.

Anyway, here is a gist of the idea behind the paper. I am building a very small piece of pedagogical software called “toyDNS.” DNS stands for Displacement, straiN, and streSs, and the order of the letters in the acronym emphasizes what I (now) believe is the correct hierarchical order for the three concepts. Anyway, let’s keep the hierarchical order aside and look into what the software does—which I guess could be more interesting.

The software is very, very small and simple. It begins by showing the user a regular 2D grid (i.e. squares). The user distorts the grid using the mouse, which is somewhat similar to the action of an image-warping software. The software, then, immediately (i.e. in real time, without using menus etc.) computes and shows the following fields in the adjacent windows: (i) the displacement vector field, (ii) the displacement gradient tensor field, (iii) the rotation field, (iv) the strain field, (v) and the stress field. The software assumes plane-stress, linear elasticity, and uses static configuration data for material properties like nu and E. The software also shows the boundary tractions (forces) that would be required to maintain the displacement field that the user has specified.

Basically, the idea is that the beginning undergraduate student encountering the mechanics of materials for the first time, gets to see the importance of the rotation field (which is usually not emphasized in textbooks or courses), and thereby is able to directly appreciate the reason why an arbitrary displacement field uniquely determines the corresponding strain and stress fields but why the converse is not true—why an arbitrary stress/strain field cannot uniquely determine a corresponding displacement field. To illustrate this point (call it the compatibility issue if you wish) is the whole rationale behind this toy software.

Now, when it comes to visualizing the fields, I can always use arrows for showing the vector fields of displacements and forces. For strains and stresses, I can use Lame’s ellipse (this being a 2D space). In fact, since the strain and stress fields are symmetric, in 2D, they each have only 3 components, which means that the symmetric tensor object taken as a whole can directly map onto an RGB (or HLS) color-space, and so, I can also show a single, full-color field plot for the strain (or stress) field.
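The RGB idea from the previous paragraph, in sketch form (my own illustrative code; the per-field min/max normalisation is just one possible scaling choice):

```python
import numpy as np

def tensor_field_to_rgb(sxx, syy, sxy):
    """Map a symmetric 2D tensor field (3 independent components) to RGB.

    Each component is normalised to [0, 1] over the whole field and used
    as one colour channel, so the tensor object taken as a whole maps to
    a single full-colour image.
    """
    rgb = np.stack([sxx, syy, sxy], axis=-1).astype(float)
    lo = rgb.min(axis=(0, 1), keepdims=True)
    hi = rgb.max(axis=(0, 1), keepdims=True)
    # guard against a constant channel (zero range)
    return (rgb - lo) / np.where(hi > lo, hi - lo, 1.0)

# demo on a tiny 2 x 2 "field" of symmetric tensors
img = tensor_field_to_rgb(np.array([[0.0, 1.0], [2.0, 3.0]]),
                          np.array([[3.0, 2.0], [1.0, 0.0]]),
                          np.array([[0.0, 0.0], [1.0, 1.0]]))
```

An HLS mapping would work the same way; only the choice of which component drives which channel changes.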

Ok. So far, so good.

The problem is with the displacement gradient tensor (DG for short here). Since the displacement field is arbitrary, the DG tensor has no symmetry to it. Hence, even in 2D, it has 4 independent components—i.e. one component more than what can be accommodated in the three-component color-space. So, a direct depiction of the tensor object taken as a whole is not possible, and something else has to be done. So, I thought of the following idea.

First, the notation. Assume that the DG tensor is written thus:

[ DG11  DG12 ]   [ du/dx  du/dy ]
[ DG21  DG22 ] = [ dv/dx  dv/dy ]

where DGij are the components of the DG tensor, u and v are the x- and y-components of the displacement field, and the d’s represent partial differentiation.

Consider that DGij can be taken to represent a component of a vector that refers to the i-th face and the j-th direction. This scheme is easier to understand for the stress tensor: there, Sij is the component of the traction vector acting across the i-th face and pointing in the j-th direction. For instance, in fig. 2.3 here: http://en.wikipedia.org/wiki/Stress_(mechanics) , T^{e_1} is the traction vector acting across the face normal to the 1-axis.

Even if the DG tensor is not symmetric, the basic idea would still apply, wouldn’t it?

Thus, each row in the DG tensor represents a vector: the first row is a vector acting on the face normal to the x-axis, and the second is another vector (which, for DG, is completely independent of the first) acting on the face normal to the y-axis. For 2D, substitute “line” in place of “face.”

If I now show these two vectors, they would completely describe the DG tensor. This representation would be somewhat similar to the “cross-bars” visualization commonly used in engineering software for the stress tensor, wherein the tensor field is shown using periodically arranged cross-bars—very convenient if the grid is regular and uniform and has square elements.
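
A minimal sketch of the row-vectors scheme, with purely illustrative numbers. Note how the two arrows fail to be orthogonal for an asymmetric DG tensor, while a symmetric tensor referred to its principal axes gives orthogonal rows:

```python
import numpy as np

# Illustrative sketch of the row-vectors idea: each row of the 2x2 DG
# tensor is the vector attached to the face (line, in 2D) normal to the
# corresponding axis; the two arrows together carry all 4 components.

def dg_row_vectors(T):
    """Split a 2x2 tensor into its two row vectors."""
    return T[0, :], T[1, :]

# an asymmetric DG tensor (illustrative values)
DG = np.array([[0.10, 0.20],
               [0.05, -0.10]])
rx, ry = dg_row_vectors(DG)
# rx and ry are not orthogonal here: np.dot(rx, ry) != 0

# a symmetric tensor referred to its principal axes, for contrast
S = np.diag([0.30, -0.10])
sx, sy = dg_row_vectors(S)
# sx and sy are orthogonal: np.dot(sx, sy) == 0
```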

Notice a salient difference, however. Since the DG tensor is asymmetric, the two vectors will not, in general, lie at right angles to each other; that happens only with symmetric tensors such as the strain and stress tensors. [Correction added while posting this entry at this blog here: For the strain and stress vectors, I have already assumed that the two vectors are aligned along the principal axes. Notice that for the DG tensor, my description assumes that the reference faces or lines are aligned with the global xy reference frame that is attached to the domain. This introduces the inconsistency that I later noted at the iMechanica blog.]

My question is this: Do you see any issues with this kind of visualization for the DG tensor? Is there any loss of generality by following this scheme of visualization? I mean, I read some literature on visualization of asymmetric tensors, and noticed that they sometimes worry about the eigenvalues being complex, not real. I think that complex eigenvalues would not be a consideration for the above kind of depiction of the DG tensor—the rotation part will be separately shown in a separate window anyway. But, still, I wanted to have the generality aspect cross-checked. Hence this post. Am I missing something? assuming too much? What are the other things, if any, that I need to consider? Also: Would you be “intuitively” comfortable with this scheme? Can you think of or suggest any alternatives?
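
Regarding the complex-eigenvalue worry, a quick numerical check (with values I picked only for illustration) shows that an asymmetric 2x2 tensor can indeed have complex eigenvalues, while its symmetric (strain) part always has real ones; the antisymmetric remainder is the rotation that would be shown in its own window anyway:

```python
import numpy as np

# Quick check, with illustrative values: an asymmetric 2x2 tensor can
# have complex eigenvalues, but its symmetric (strain) part always has
# real ones; the antisymmetric remainder is the rotation.

DG = np.array([[0.1, 0.3],
               [-0.2, 0.1]])

eigvals = np.linalg.eigvals(DG)          # complex-valued for this DG
sym = 0.5 * (DG + DG.T)                  # strain part (symmetric)
asym = 0.5 * (DG - DG.T)                 # rotation part (antisymmetric)
strain_eigs = np.linalg.eigvals(sym)     # real, since sym is symmetric
```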

Addenda (while now posting this entry at this blog here, on Oct. 28, 2010):

At iMechanica, Biswajit Banerjee provided a very helpful link to notes on Mohr’s circle by Prof. Rebecca Brannon (Univ. of Utah). Then I also found a few papers by her. Dr. Phillips Wallstedt also provided a helpful pointer. I still have to go through all this material, esp. the paper suggested by Wallstedt. Anyhow, let me reiterate my point: Lamé’s ellipse is a superior visualization as compared to Mohr’s circle. Brannon’s notes are truly helpful in that they directly show how to use Mohr’s circle for asymmetric tensors as well. Much of the material regarding asymmetric tensors in her notes was not at all known to me, and I am really grateful to her for having posted it online (and to Biswajit for pointing it out to me). Yet, I am going to try and see whatever can be done along the cross-bars and/or Lamé’s ellipse side. One solution now obvious to me (after Brannon’s notes—which I have merely browsed, not read through) is what she has already shown in her notes: show the eigenvectors in reference to the non-rectangular stress element—even if she adds an “arghh” to the idea :). I think it is not at all a bad idea. Indeed, the more I think about it, the better I like it. At least, it will be consistent with the cross-bars visualization for the symmetric tensors. … And, oh, BTW, it reminds me of a point that I have wanted to make for a long time, viz., the possibly misleading nature of the stress element usually used in showing the definition of the stress tensor. … Next post!

* * * * *    * * * * *    * * * * *

A Song I Like:

[Initially I thought of skipping this section just for this post, but then changed my mind. … This song is dedicated to the fond memory of a certain relative of ours who passed away yesterday, at a ripe age of 85+. It was my sister’s mother-in-law—but she never let us feel that “in-law” part of it. She was an almost completely unlettered lady who lived all her life in a village, herself working in farms. She was a lifelong “warkari,” and more: a soul of a very rare kind of simplicity and beauty.]

(Marathi) “saavaLyaa viThhalaa, tujhyaa daari aale”
Singer: Suman Kalyanpur
Music: Dashrath Pujari
Lyrics: R. N. Pawar

[E&OE]