How time flies…

I plan to conduct a smallish FDP (Faculty Development Program) for junior faculty, covering the basics of CFD, sometime soon (maybe starting in the second half of February or early March or so).

During my course, I plan to give out some simple, pedagogical code that even non-programmers could easily run, and hopefully find easy to comprehend.


Don’t raise difficult questions right away!

Don’t ask me why I am doing it at all—especially given the fact that I myself never learnt my CFD in a classroom/university course setting. And especially given the fact that excellent course materials and codes already exist on the ‘net (e.g. Prof. Lorena Barba’s course, or Prof. Atul Sharma’s book and Web site, to pick just two of the so many resources already available).

But, yes, come to think of it, your question, by itself, is quite valid. It’s just that I am not going to entertain it.

Instead, I am going to ask you to recall that I am both a programmer and a professor.

As a programmer, you write code. You want to write code, and you do it. Whether better code already exists or not is not a consideration. You just write code.

As a professor, you teach. You want to teach, and you just do it. Whether better teachers or course-ware already exist or not is not a consideration. You just teach.

Admittedly, however, teaching is more difficult than coding. The difference here is that coding requires only a computer (plus software-writing software, of course!). But teaching requires other people! People who are willing to sit in front of you, at least faking a rapt sort of attention while listening to you.

But just as, being a programmer, you don’t worry whether you know the algorithm or not when you fire up your favorite IDE, similarly, as a professor, you don’t worry whether you will get students or not.

And then, one big advantage of being a senior professor is that you can always “c” your more junior colleagues, where “c” stands for {convince, confuse, cajole, coax, compel, …} to attend. That’s why, I am not worried—not at least for the time being—about whether I will get students for my course or not. Students will come, if you just begin teaching. That’s my working mantra for now…


But of course, right now, we are busy with our accreditation-related work. However, by February/March, I will become free—or at least free enough—to be able to begin conducting this FDP.


As my material for the course progressively gets ready, I will post some parts of it here. Eventually, by the time the FDP gets over, I will have uploaded all the material together at some place or the other. (Maybe I will create another blog just for that course material.)

This blog post was meant to note something on the coding side. But then, as usual, I ended up having this huge preface at the beginning.


When I was doing my PhD in the mid-noughties, I wanted a good public-domain (preferably open-source) mesh generator. There were several of them, but mostly on the Unix/Linux platform.

I had nothing basically against Unix/Linux as such. My problem was that I found it tough to remember the line commands. My working memory is relatively poor, very poor. And that’s a fact; I don’t say it out of any (false or true) modesty. So, I found it difficult to remember all those shell and system commands and their options. Especially painful for me was to climb up and down a directory hierarchy, just to locate a damn file and open it already! Given my poor working memory, I had to have the entire structure laid out in front of me, instead of remembering commands or file names from memory. Only then could I work fast enough to be effective enough a programmer. And so, I found it difficult to use Unix/Linux. Ergo, it had to be Windows.

But most of this Computational Science/Engineering code was not available (or even compilable) on Windows, back then. Often, it was buggy. In the end, I ended up using Bojan Niceno’s code, simply because it was in C (which I converted into C++), and because it was compilable on Windows.

Then, a few years later, when I was doing my industrial job in an FEM-software company, once again there was this requirement of an integrable mesh generator. It had to be: on Windows; open source; small enough, with not too many external dependencies (such as the Boost library or others); compilable using “the not really real” C++ compiler (viz. VC++ 6); not very buggy, or at least still under active maintenance; and, one more important point: the choice had to be respectable enough to be acceptable to the team and the management. I ended up using Jonathan Shewchuk’s Triangle.

Of course, all along, I already knew about Gmsh, CGAL, and others (purely through my ‘net searches; no one had told me about any of them). But for some reason or the other, they were not “usable” by me.

Then, during the mid-teens (2010s), I went into teaching, and software development naturally took a back-seat.

A lot of things changed in the meanwhile. We all moved to 64-bit. I moved to Ubuntu for several years, and as the Idea NetSetter stopped working on the latest Ubuntu, I had no choice but to migrate back to Windows.

I then found that a lot of the platform wars had already disappeared. Windows (and Microsoft in general) had become not only better but also more accommodating of the open source movement; the Linux movement had become mature enough not to look down upon GUI users as mere script-kiddies; etc. In general, inter-operability had improved by leaps and bounds. Open Source projects were now being not only released but also developed on Windows, not just on Unix/Linux. One possible reason why both camps might suddenly have begun showing so much love to each other was that the mobile platform had come to replace the PC platform as the avant garde choice for software development. I don’t know, because I was away from the s/w world, but I am simply guessing that that could also be an important reason. In any case, code could now easily flow back and forth between the two platforms.

Another thing to happen during my absence was: the wonderful development of the Python eco-system. It was always available on Ubuntu, and had made my life easier over there. After all, Python had a less whimsical syntax than many other alternatives (esp. the shell scripts); it carried all the marks of a real language. There were areas of discomfort. The one thing about Python which I found whimsical (and still do) is the lack of the braces for defining scopes. But such areas were relatively easy to overlook.

At least in the area of Computational Science and Engineering, Python had made it enormously easier to write ambitious codes. Just check out a C++ MPI code for cluster computing, vs. the same code written in Python. Or, think of not having to write ridiculously fast vector classes (or having to compile disparate C++ libraries using their own make systems and compiler options, and then making them all work together). Or, think of using libraries like LAPACK. No more clumsy wrappers, no more repeating scope-resolution operators and namespaces bundled into ridiculously complex template classes. Just import NumPy or SciPy, and proceed to your work.
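Just to make that last point concrete, here is a minimal sketch (mine, not taken from any library documentation) of what “just import NumPy and proceed” looks like for a small linear system of the kind LAPACK is typically brought in for:

```python
import numpy as np

# The standard tridiagonal system arising from a 1-D FDM/FVM
# discretization of -u'' = f with Dirichlet boundary conditions.
n = 5
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# One call; LAPACK runs under the hood. No hand-rolled wrappers,
# no makefiles, no template gymnastics.
u = np.linalg.solve(A, b)
```

Contrast this with setting up the equivalent solver call through a hand-written C++ wrapper, and the point about the eco-system makes itself.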

So, yes, I had come to register in my mind the great success story being forged by Python, in the meanwhile. (BTW, in case you don’t know, the name of the language comes from a British comedy TV serial, not from the whole-animal swallowing creep.) But as I said, I was now into academia, into core engineering, and there simply wasn’t much occasion to use any language, C++, Python or any other.

One more hindrance went away when I “discovered” that the PyCharm IDE existed! It not only was free, but also had VC++ key-bindings already bundled in. W o n d e r f u l ! (I would have no working memory to relearn yet another set of key-bindings, you see!)

In the meanwhile, VC++ anyway had become very big, very slow and lethargic, with IntelliSense taking forever to produce something, anything. The older, lightweight, lightning-fast, and overall so charming IDE, i.e. VC++ 6, had given way, because of the .NET platform, to this new IDE which behaved as if it were designed to kill the C++ language. My forays into using Eclipse CDT (with VC++ key-bindings) were only partially successful. Eclipse was no longer buggy; it had begun working really well. The major trouble here was: there was no integrated help at the press of the “F1” key. Remember my poor working memory? I had to have that F1 key opening up the .chm help file at just the right place. But that was not happening. And, debug-stepping through the code still was not as seamless as what I had gotten used to in VC++ 6.

But with PyCharm + Visual Studio key-bindings, most of my concerns evaporated. Being an interpreted language, Python would always have an advantage as far as debug-stepping through the code is concerned. That’s the straight-forward part. But the real game-changer for me was: the maturation of the entire Python eco-system.

Every library you could possibly wish for was there, already available, like Aladdin’s genie standing with folded hands.

OK. Let me give you an example. You think of doing some good visualization. You have MatPlotLib. And a very helpful help file, complete with neat examples. No, you want more impressive graphics, like, say, volume rendering (voxel visualization). You have the entire VTK wrapped in; what more could you possibly want? (Windows vs. Linux didn’t matter.) But you instead want to write some custom code, say for animation? You have not just one, not just two, but literally tens of libraries covering everything: from OpenGL, to scene-graphs, to computational geometry, to physics engines, to animation, to games-writing, and what not. Windowing? You have the MFC-style wxWidgets, already put into a Python avatar as wxPython. (OK, OpenGL still gives trouble with wxPython for anything ambitious. But such things are rather isolated instances when it comes to the overall Python eco-system.)

And, closer to my immediate concerns, I was delighted to find that, by now, both OpenFOAM and Gmsh had become neatly available on Windows. That is, not just “available,” i.e., not just as sources that can be read, but also working as if the libraries were some shrink-wrapped software!

Availability on Windows was important to me, because, at least in India, it’s the only platform of familiarity (and hence of choice) for almost all of the faculty members from any of the e-school departments other than CS/IT.

Hints: For OpenFOAM, check out blueCFD instead of running it through Docker. It’s clean, and indeed works as advertised. As to Gmsh, ditto. And, it also comes with Python wrappers.

While the availability of OpenFOAM on Windows was only too welcome, the fact is, its code is guaranteed to be completely inaccessible to a typical junior faculty member from, say, a mechanical or a civil or a chemical engineering department. First, OpenFOAM is written in real (“templated”) C++. Second, it is very bulky (millions of lines of code, maybe?). Clearly beyond the comprehension of a guy who has never seen more than 50 lines of C code at a time in his life before. Third, it requires the GNU compiler, a special make environment, and a host of dependencies. You simply cannot open OpenFOAM and show how those FVM algorithms from Patankar’s or Versteeg & Malalasekera’s books do the work under its hood. Neither can you ask your students to change a line here or there, maybe add a line to produce an additional file output, just to bring out the actual working of an FVM algorithm.

In short, OpenFOAM is out.

So, I have decided to use OpenFOAM only as a “backup.” My primary teaching material will only be Python snippets. The students will also get to learn how to install OpenFOAM and run the simplest tutorials. But the actual illustrations of the CFD ideas will be done using Python. I plan to cover only FVM and only simpler aspects of that. For instance, I plan to use only structured rectangular grids, not non-orthogonal ones.

I will write code that (i) generates a mesh, (ii) reads a mesh generated by the blockMesh utility of OpenFOAM, (iii) implements one or two simple BCs, (iv) implements the SIMPLE algorithm, and (v) uses MatPlotLib or ParaView to visualize the output (including any intermediate outputs of the algorithms).
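As a taste of item (i), a structured rectangular grid takes only a few NumPy lines. Here is a minimal sketch (the function name and return conventions are mine, not from any actual course hand-out):

```python
import numpy as np

def make_structured_grid(Lx, Ly, nx, ny):
    """Cell-centred structured rectangular grid for FVM (unit depth).

    Returns cell-centre coordinates and cell volumes."""
    dx, dy = Lx / nx, Ly / ny
    xc = (np.arange(nx) + 0.5) * dx           # cell-centre x coordinates
    yc = (np.arange(ny) + 0.5) * dy           # cell-centre y coordinates
    X, Y = np.meshgrid(xc, yc, indexing="ij")
    volumes = np.full((nx, ny), dx * dy)      # all cells identical on this grid
    return X, Y, volumes

# a 2 x 1 domain split into 40 x 20 cells
X, Y, vol = make_structured_grid(2.0, 1.0, 40, 20)
```

On a grid this regular, face areas are just `dy` (east/west) and `dx` (north/south), which is exactly why structured rectangular grids keep the FVM bookkeeping simple enough for a first course.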

I may then compare the outputs of these Python snippets with a similar output produced by OpenFOAM, for one or two of the simplest cases, like a simple laminar flow over a step. (I don’t think I will be covering VOF or any other multi-phase technique. My course is meant to be covering only the basics.)

But not having checked Gmsh recently, and thus still carrying my old impressions, I was almost sure I would have to write something quick in Python to convert BMP files (showing geometry) into mesh files (with each pixel turning into a finite volume cell). The trouble with this approach was, the ability to impose boundary conditions would be seriously limited. So, I was a bit worried about it.
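The pixel-to-cell idea itself is simple enough to sketch. Assuming the BMP has already been read into a boolean NumPy array (the file-reading part is left out here), each `True` pixel becomes one finite-volume cell, and any face not shared with another cell becomes a boundary face. The catch is that this is essentially all the boundary information you get, which is exactly why imposing BCs becomes awkward:

```python
import numpy as np

def pixels_to_cells(mask):
    """Turn a boolean pixel mask into finite-volume cells.

    Each True pixel becomes one cell; a face whose neighbour pixel is
    False (or lies outside the image) is counted as a boundary face."""
    cell_index = {}
    for (i, j), inside in np.ndenumerate(mask):
        if inside:
            cell_index[(i, j)] = len(cell_index)
    n_boundary_faces = 0
    for (i, j) in cell_index:
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (i + di, j + dj) not in cell_index:
                n_boundary_faces += 1
    return cell_index, n_boundary_faces

# a 3x3 solid block of "geometry" pixels
cells, nbf = pixels_to_cells(np.ones((3, 3), dtype=bool))
```

Note that the boundary faces come out as one undifferentiated set; telling an inlet apart from a wall would need extra conventions (say, pixel colors), which is where this approach starts to creak.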

But then, last week, I just happened to check Gmsh, just to be sure, you know! And, WOW! I now “discovered” that the Gmsh is already all Python-ed in. Great! I just tried it, and found that it works, as bundled. Even on Windows. (Yes, even on Win7 (64-bit), SP1).

I was delighted, excited, even thrilled.

And then, I began “reflecting.” (Remember I am a professor?)

I remembered the times when I used to sit in a cyber-cafe, painfully downloading source code libraries over a single 64 kbps connection which would be shared in that cyber-cafe over 6–8 PCs, without any UPS or backup in case the power went out. I would download the sources that way at the cyber-cafe, take them home to a Pentium machine running Win2K, and try to open and read the source, only to find that I had forgotten to do the CRLF conversion first! And then, the sources wouldn’t compile because the make environment wasn’t available on Windows. Or something or the other of that sort. But still, I fought on. I remember having downloaded not only the OpenFOAM sources (with the hope of finding some way to compile them on Windows), but also MPICH2, PETSc 2.x, CGAL (some early version), and what not. Ultimately, after my valiant tries at the machine for a week or two, “nothing is going to work here,” I would eventually admit to myself.

And here is the contrast. I have a 4G connection, so I can comfortably sit at home and use pip (or PyCharm’s Project Interpreter) to download or automatically update all the required libraries, even heavy-weights like what they bundle inside SciPy and NumPy, or the VTK. I no longer have to manually sort out version incompatibilities or platform incompatibilities. I know I could develop on Ubuntu if I wanted to, and the students would be able to run the same thing on Windows.

Gone are those days. And how swiftly, it seems now.

How time flies…


I will be able to come back only next month because our accreditation-related documentation work has now gone into its final, culminating phase, which occupies the rest of this month. So, excuse me until sometime in February, say until 11th or so. I will sure try to post a snippet or two on using Gmsh in the meanwhile, but it doesn’t really look at all feasible. So, there.

Bye for now, and take care…


A Song I Like:

[Tomorrow is (Sanskrit, Marathi) “Ganesh Jayanti,” the birth-day of Lord Ganesha, which also happens to be the auspicious (Sanskrit, Marathi) “tithee” (i.e. lunar day) on which my mother passed away, five years ago. In her fond remembrance, I here run one of those songs which both of us liked. … Music is strange. I mean, a song as mature as this one, but I remember, I still had come to like it even as a school-boy. May be it was her absent-minded humming of this song which had helped? … may be. … Anyway, here’s the song.]

(Hindi) “chhup gayaa koi re, door se pukaarake”
Singer: Lata Mangeshkar
Music: Hemant Kumar
Lyrics: Rajinder Kishan

 


Blog-Filling—Part 3

Note: A long Update was added on 23 November 2017, at the end of the post.


Today I got just a little bit of respite from what has been a very tight schedule, which has been running into my weekends, too.

But at least for today, I do have a bit of a respite. So, I could at least think of posting something.

But for precisely the same reason, I don’t have any blogging material ready in the mind. So, I will just note something interesting that passed by me recently:

  1. Catastrophe Theory: Check out Prof. Zhigang Suo’s recent blog post at iMechanica on catastrophe theory, here [^]; it’s marked by Suo’s trademark simplicity. He also helpfully provides a copy of Zeeman’s 1976 SciAm article, too. Regular readers of this blog will know that I am a big fan of the catastrophe theory; see, for instance, my last post mentioning the topic, here [^].
  2. Computational Science and Engineering, and Python: If you are into computational science and engineering (which is The Proper And The Only Proper long-form of “CSE”), and wish to have fun with Python, then check out Prof. Hans Petter Langtangen’s excellent books, all under Open Source. Especially recommended is his “Finite Difference Computing with PDEs—A Modern Software Approach” [^]. What impressed me immediately was the way the author begins this book with the wave equation, and not with the diffusion or potential equation as is the routine practice in the FDM (or CSE) books. He also provides the detailed mathematical reason for his unusual choice of ordering the material, but apart from his reason(s), let me add in a comment here: wave ⇒ diffusion ⇒ potential (Poisson-Laplace) precisely was the historical order in which the maths of PDEs (by which I mean both the formulations of the equations and the techniques for their solutions) got developed—even though the modern trend is to reverse this order in the name of “simplicity.” The book comes with Python scripts; you don’t have to copy-paste code from the PDF (and then keep correcting the errors of characters or indentations). And, the book covers nonlinearity too.
  3. Good Notes/Teachings/Explanations of UG Quantum Physics: I ran across Dan Schroeder’s “Entanglement isn’t just for spin.” Very true. And it needed to be said [^]. BTW, if you want a gentler introduction to UG-level QM than the one presented in Allan Adams (et al.)’s MIT OCW 8.04–8.06 [^], then make sure to check out Schroeder’s course at Weber [^] too. … Personally, though, I keep fantasizing about going through all the videos of Adams’s course, taking out notes, and posting them at my Web site. [… sigh]
  4. The Supposed Spirituality of the “Quantum Information” Stored in the “Protein-Based Micro-Tubules”: OTOH, if you are more into philosophy of quantum mechanics, then do check out Roger Schlafly’s latest post, not to mention my comment on it, here [^].
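Langtangen’s pedagogical choice in point 2 above (starting with the wave equation) is also the easiest one to demo in a few lines. Here is a minimal toy script of my own (not the book’s code) for the standard explicit scheme for u_tt = c²u_xx with fixed ends, run at Courant number C = 1, where the scheme happens to reproduce the exact solution at the grid points:

```python
import numpy as np

# 1-D wave equation u_tt = c^2 u_xx, u(0)=u(L)=0, zero initial velocity.
L_dom, c, nx = 1.0, 1.0, 50
dx = L_dom / nx
dt = dx / c                      # Courant number C = c*dt/dx = 1
x = np.linspace(0.0, L_dom, nx + 1)

u0 = np.sin(np.pi * x)           # initial displacement (a plucked mode)

# special first step (uses the zero initial velocity)
u_prev = u0.copy()
u = u0.copy()
u[1:-1] = u0[1:-1] + 0.5 * (u0[2:] - 2.0 * u0[1:-1] + u0[:-2])

# leapfrog in time, central in space, with C = 1
n_steps = int(round(2.0 * L_dom / (c * dt)))   # one full period, t = 2L/c
for _ in range(n_steps - 1):
    u_next = np.empty_like(u)
    u_next[0] = u_next[-1] = 0.0
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
```

After one full period (t = 2L/c) the computed profile coincides with the initial one, which makes for a nice classroom check of the C = 1 exactness property.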

The point no. 4 above was added in lieu of the usual “A Song I Like” section. The reason is, though I could squeeze in the time to write this post, I still remain far too rushed to think of a song—and to think/check whether I have already run it here or not. But I will try to add one later on, either to this post or, if there is a big delay, then as the next “blog filler” post, the next time round.

[Update on 23 Nov. 2017 09:25 AM IST: Added the Song I Like section; see below]

OK, that’s it! … Will catch you at some indefinite time in future here, bye for now and take care…


A Song I Like:

(Western, Instrumental) “Theme from ‘Come September'”
Credits: Bobby Darin (?) [+ Billy Vaughn (?)]

[I grew up in what were absolutely rural areas in Maharashtra, India. All my initial years till my 9th standard were limited, at their upper end in the continuum of urbanity, to Shirpur, which still is only a taluka place. And, back then, it was a decidedly far more backward + adivasi region. The population of the main town itself hadn’t reached more than 15,000 or so by the time I left it in my X standard; the town didn’t have a single traffic light; most of the houses (including the one we lived in) were load-bearing structures, not RCC; all the roads in the town were of single lanes; etc.

Even that being the case, I happened to listen to this song—a Western song—right when I was in Shirpur, in my 2nd/3rd standard. I first heard the song at my Mama’s place (an engineer, he was back then posted in the “big city” of the nearby Jalgaon, a district place).

As to this song, as soon as I listened to it, I was “into it.” I remained so for all the days of that vacation at Mama’s place. Yes, it was a 45 RPM record, and the permission to put the record on the player, and even to play it entirely on my own, was hard won after a determined and tedious effort to show all the elders that I was able to put the pin onto the record very carefully. And everyone in the house was an elder to me: my siblings, cousins, uncle, his wife, not to mention my parents (who were the last ones to be satisfied). But once the recognition arrived, I used it to the hilt; I must have ended up playing this record at least 5 times every remaining day of the vacation back then.

As far as I am concerned, I am entirely positive that appreciation for a certain style or kind of music isn’t determined by your environment or the specific culture in which you grow up.

As far as songs like these are concerned, today I am able to discern that what I had immediately though indirectly grasped, even as a 6–7 year old child, was what I today would describe as a certain kind of an “epistemological cleanliness.” There was a clear adherence to certain definitive, delimited kinds of specifics, whether in terms of tones or rhythm. Now, it sure did help that this tune was happy. But frankly, I am certain, I would’ve liked a “clean” song like this one—one with very definite “separations”/”delineations” in its phrases, in its parts—even if the song itself weren’t so directly evocative of such a frankly happy mood. Indian music, in contrast, tends to keep “continuity” for its own sake, even when it’s not called for, and the certain downside of that style is that it leads to a badly mixed up “curry” of indefinitely stretched out wailings, even noise, very proudly passing as “music”. (In evidence: pick up any traditional “royal palace”/”kothaa” music.) … Yes, of course, there is a symmetrical downside to the specific “separated” style carried by the Western music too; the specific style of noise it can easily slip into is a disjointed kind of a noise. (In evidence, I offer 90% of Western classical music, and 99.99% of Western popular “music”. As to which 90%, well, we have to meet in person, and listen to select pieces of music on the fly.)

Anyway, coming back to the present song, today I searched for the original soundtrack of “Come September”, and got, say, this one [^]. However, I am not too sure that the version I heard back then was this one. Chances are much brighter that the version I first listened to was Billy Vaughn’s, as in here [^].

… A wonderful tune, and, as an added bonus, it never does fail to take me back to my “salad days.” …

… Oh yes, as another fond memory: that vacation also was the very first time that I came to wear a T-shirt; my Mama had gifted it to me in that vacation. The actual choice to buy a T-shirt rather than a shirt (+shorts, of course) was that of my cousin sister (who unfortunately is no more). But I distinctly remember her being surprised to learn that I was in no mood to have a T-shirt when I didn’t know what the word meant… I also distinctly remember her assuring me, in sweet tones, that a T-shirt would look good on me! … You see, in rural India, at least back then, T-shirts weren’t heard of; for years later on, maybe until I went to Nasik in my 10th standard, it would be the only T-shirt I had ever worn. … But, anyway, as far as T-shirts go… well, as you know, I was into software engineering, and so….

Bye [really] for now and take care…]

 

Machine “Learning”—An Entertainment [Industry] Edition

Yes, “Machine ‘Learning’,” too, has been one of my “research” interests for some time by now. … Machine learning, esp. ANN (Artificial Neural Networks), esp. Deep Learning. …

Yesterday, I wrote a comment about it at iMechanica. Though it was made in a certain technical context, today I thought that the comment could, perhaps, make sense to many of my general readers, too, if I supply a bit of context to it. So, let me report it here (after a bit of editing). But before coming to my comment, let me first give you the context in which it was made:


Context for my iMechanica comment:

It all began with a fellow iMechanician, one Mingchuan Wang, writing a post titled “Is machine learning a research priority now in mechanics?” at iMechanica [^]. Biswajit Banerjee responded by pointing out that

“Machine learning includes a large set of techniques that can be summarized as curve fitting in high dimensional spaces. [snip] The usefulness of the new techniques [in machine learning] should not be underestimated.” [Emphasis mine.]

Then Biswajit had pointed out an arXiv paper [^] in which machine learning was reported as having produced some good DFT-like simulations for quantum mechanical simulations, too.

A word about DFT for those who (still) don’t know about it:

DFT, i.e. Density Functional Theory, is a “formally exact description of a many-body quantum system through the density alone. In practice, approximations are necessary” [^]. DFT thus is a computational technique; it is used for simulating the electronic structure of quantum mechanical systems involving several hundreds of electrons (i.e. hundreds of atoms). Here is the obligatory link to the Wiki [^], though a better introduction perhaps appears here [(.PDF) ^]. Here is a StackExchange on its limitations [^].

Trivia: Walter Kohn received a Nobel (the 1998 Chemistry Nobel, shared with John Pople) for inventing DFT. It was a very, very rare instance of a Nobel being awarded for an invention—not a discovery. But the Nobel committee, once again, turned out to have put old Nobel’s money in the right place. Even if the work itself was only an invention, it directly led to a lot of discoveries in condensed matter physics! That was because DFT was fast—fast enough that it could bring the physics of larger quantum systems within the scope of (any) study at all!

And now, it seems, Machine Learning has advanced enough to be able to produce results that are similar to DFT, but without using any QM theory at all! The computer does have to “learn” its “art” (i.e. “skill”), but it does so from the results of previous DFT-based simulations, not from the theory at the base of DFT. But once the computer does that—“learning”—and the paper shows that it is possible for a computer to do that—it is able to compute very similar-looking simulations much, much faster than even the rather fast technique of DFT itself.

OK. Context over. Now here in the next section is my yesterday’s comment at iMechanica. (Also note that the previous exchange on this thread at iMechanica had occurred almost a year ago.) Since it has been edited quite a bit, I will not format it using a quotation block.


[An edited version of my comment begins]

A very late comment, but still, just because something struck me only this late… May as well share it….

I think that, as Biswajit points out, it’s a question of matching a technique to an application area where it is likely to be of “good enough” a fit.

I mean to say, consider fluid dynamics, and contrast it to QM.

In (C)FD, the nonlinearity present in the advective term is a major headache. As far as I can gather, this nonlinearity has all but been “proved” to be the basic cause behind the phenomenon of turbulence. If so, using machine learning in CFD would be, by this simple-minded “analysis”, a basically hopeless endeavour. The very idea of using a potential presupposes differential linearity. Therefore, machine learning may be thought of as viable in computational Quantum Mechanics (viz. DFT), but not in the more mundane, classical-mechanical CFD.

But then, consider the role of the BCs and the ICs in any simulation. It is true that if you don’t handle nonlinearities right, then as the simulation time progresses, errors are soon enough going to multiply (sort of), and lead to a blowup—or at least a dramatic departure from a realistic simulation.
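That amplification of errors by the nonlinearity is easy to see in a toy experiment (my own construction, not anything from the papers under discussion): advance two slightly different initial profiles under linear advection and under the nonlinear inviscid Burgers equation, using the same first-order upwind scheme, and track how far apart the two runs drift:

```python
import numpy as np

nx = 200
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.01
n_steps = 200                        # integrate to t = 2.0, past shock formation

def upwind_linear(u, c=1.0):
    # u_t + c u_x = 0 with c > 0: first-order upwind step (periodic domain)
    return u - c * dt / dx * (u - np.roll(u, 1))

def upwind_burgers(u):
    # u_t + (u^2/2)_x = 0 with u > 0 everywhere: conservative upwind step
    F = 0.5 * u * u
    return u - dt / dx * (F - np.roll(F, 1))

u_a = 1.5 + np.sin(x)                # strictly positive, so upwinding is one-sided
u_b = 1.5 + np.sin(x + 0.01)         # a tiny phase perturbation of the same profile
v_a, v_b = u_a.copy(), u_b.copy()

d0 = np.max(np.abs(u_a - u_b))       # initial separation of the two runs
for _ in range(n_steps):
    u_a, u_b = upwind_linear(u_a), upwind_linear(u_b)
    v_a, v_b = upwind_burgers(v_a), upwind_burgers(v_b)

d_lin = np.max(np.abs(u_a - u_b))    # linear case: separation cannot exceed d0
d_non = np.max(np.abs(v_a - v_b))    # nonlinear case: separation grows at the shock
```

Under the linear scheme each update is a convex combination of neighbouring values, so the separation between the two runs can only shrink; under the nonlinear equation it grows sharply once the shock forms.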

But then, also notice that there still is some small but nonzero interval of time which has to pass before a really bad amplification of the errors actually begins to occur. Now what if a new “BC-IC” gets imposed right within that time-interval—the one which does show “good enough” an accuracy? In this case, you can expect the simulation to remain “sufficiently” realistic-looking for a long, very long time!

Something like that seems to have been the line of thought implicit in the results reported by this paper: [(.PDF) ^].

Machine learning seems to work even in CFD because, in an interactive session, a new “modified BC-IC” is every now and then manually introduced by none other than the end-user himself! And, the location of the modification is precisely the region from where the flow in the rest of the domain gets most dominantly affected during the subsequent, small, time evolution.

It’s somewhat like an electron rushing through a cloud chamber. By the uncertainty principle, the electron “path” sure begins to get hazy immediately after it is “measured” (i.e. absorbed and re-emitted) by a vapor molecule at a definite point in space. The uncertainty in the position grows quite rapidly. However, what actually happens in a cloud chamber is that, before this cone of haziness becomes too big, comes along another vapor molecule, and “zaps” i.e. “measures” the electron back on to a classical position. … After a rapid succession of such going-hazy-getting-zapped events, the end result turns out to be a very, very classical-looking (line-like) path—as if the electron always were only a particle, never a wave.

Conclusion? Be realistic about how smart the “dumb” “curve-fitting” involved in machine learning can at all get. Yet, at the same time, also remain open to all the application areas where it can be made to work—even including those areas where, “intuitively”, you wouldn’t expect it to have any chance of working!

[An edited version of my comment is over. Original here at iMechanica [^]]


 

“Boy, we seem to have covered a lot of STEM territory here… Mechanics, DFT, QM, CFD, nonlinearity. … But where is either the entertainment or the industry you had promised us in the title?”

You might be saying that….

Well, the CFD paper I cited above was about the entertainment industry. It was, in particular, about the computer games industry. Go check out SoHyeon Jeong’s Web site for more cool videos and graphics [^], all using machine learning.


And, here is another instance connected with entertainment, even though now I am going to make it (mostly) explanation-free.

Check out the following piece of art—a watercolor landscape of a monsoon-time but placid sea-side, in fact. Let me just say that a certain famous artist produced it; in any case, the style is plain unmistakable. … Can you name the artist simply by looking at it? See the picture below:

A sea beach in the monsoons. Watercolor.

If you are unable to name the artist, then check out this story here [^], and a previous story here [^].


A Song I Like:

And finally, to those who have always loved Beatles’ songs…

Here is one song which, I am sure, most of you had never heard before. In any case, it came to be distributed only recently. When and where was it recorded? For both the song and its recording details, check out this site: [^]. Here is another story about it: [^]. And, if you liked what you read (and heard), here is some more stuff of the same kind [^].


Endgame:

I am of the Opinion that 99% of the “modern” “artists” and “music composers” ought to be replaced by computers/robots/machines. Whaddya think?

[Credits: “Endgame” used to be the way Mukul Sharma would end his weekly Mindsport column in the yesteryears’ Sunday Times of India. (The column perhaps also used to appear in The Illustrated Weekly of India before ToI began running it; at least I have a vague recollection of something of that sort, though can’t be quite sure. … I would be a school-boy back then, when the Weekly perhaps ran it.)]

 

Haptic, tactile, virtual, surgery, etc.

Three updates made on 24th March 2016 appear near the end of this post.


Once in a while I check out the map of the visitors’ locations (see the right bar).

Since hardly anyone ever leaves any comment at my blog, I can only guess who these visitors possibly could be. Over a period of time, guessing the particular visitors in this way has become an idle sort of a pastime for me. (No, I don’t obsess over it, and I don’t in fact spend much time on it—at the most half a minute or so, once in a while. But, yes, check, I certainly do!)

Among the recent visitors, there was one hit on 6th March 2016 coming from Troy, NY, USA (at 11:48:02 AM IST, to be precise). … Must be someone from RPI, I thought. (My blog is such that mostly only academics could possibly be interested in it; never the people with lucrative industrial jobs such as those in the SF Bay Area. Most of the hits from the Bay Area are from Mountain View, and that’s the place where the bots of Google’s search engines live.)

But I couldn’t remember engaging in any discussion with any one from RPI on any blog.

Who could this visitor possibly be? I could not figure it out immediately, so I let the matter go.


Yesterday, I noticed for the first time an ad for “several” post-doc positions at RPI, posted on iMechanica by Prof. Suvranu De [^]. It had been posted right on the same day: 6th March 2016. However, since recently I was checking out only my thread on the compactness of support [^], I had missed out on the main front page. Thus, I noticed the ad only today.

Curious, I dropped an informal email to Prof. De immediately, more or less out of cognitive habit.


I am not too keen on going to the USA, and in fact, I am not even inclined to leave India. Reasons are manifold.

You, like everyone else on the planet, of course come to know all that ever happens to or in the USA. Americans make sure that you do—whether you like it or not. (Remember 9/11? They have of course forgotten it by now, but don’t you remember the early noughties when, imagining you to be just as dumb and thick-skinned as they are, they pierced those decibels into your metaphorical ears (and in fact also into your soul)? Justifiable, you say? How about the other big “controversies” which actually were nothing but scandals? Can’t you pick up one or two?)

Naturally, who would want to go to that uncivilized a place?

And even if you want to argue otherwise, let me suggest that you see whether or not you can easily gather (or recollect) what all happened to me when I was in the USA.

So, the idea of trying to impress Dr. De for this post-doc position was, any which way, completely out of the question. Even if he is HoD at RPI.

And then, frankly, at my age, I don’t even feel like impressing anyone for a mere post-doc; not these days anyway (I mean, some 6 years after the PhD defense, and after having had to experience so many years of joblessness (including those reported via this blog)). … As far as I am concerned, either they know what and who I am, and act accordingly (including acting swiftly enough), or they don’t. (In the latter case, mostly, they end up blaming me, as usual, in some way or the other.)

OK, so, coming back to what I wrote to Dr. De. It was more in the nature of thinking aloud about the question of whether I should at all apply to them in the first place or not. … Given my experience of the other American post-docs advertised at iMechanica, e.g. those by Prof. Sukumar (UC Davis), and certain others in the USA, and also my experience of the Americans of the Indian origin (and even among them, those who are JPBTIs and those who are younger to me by age), I can’t keep any realistic expectation that I would ever receive any reply to that email of mine from Prof. De. The odds are far too against; check out the “follow-up” tag. (I could, of course, be psychically attacked, the way I was, right this week, a few days ago.)

Anyway, once thus grown curious about the subject matter, I then did a bit of a Web search, and found the following videos:

The very first listing after a Google search (using the search string: “Suvranu De”; then clicking on the tab: “videos”) was the one on “virtual lap band – surgical simulation”: [^].

Watching that video somehow made me sort of uneasy immediately. Uneasy, in a minor but a definitely irritating way. In a distinctly palpable way, almost as if it was a physical discomfort. No, not because the video carries the scene of tissue-cutting and all. … I have never been one of those who feel nervous or squeamish at the sight of blood, cuts, etc. (Most men, in fact, don’t!) So, my uneasiness was not on that count. …

Soon enough (i.e., much before the time I was in the middle of that video), I figured out the reason why.

I then checked out a few more videos, e.g., those here [^] and here [^]. … Exactly the same sense of discomfort or uneasiness, arising out of the same basic reason.

What kind of an uneasiness could there possibly be? Can you guess?

I don’t want to tell you, right away. I want you to guess. (Assume that an evil smile had transitorily appeared on my face.)

To close this post: If you so want, go ahead, check out those videos, see if it makes you uncomfortable watching some part of an implementation of this new idea. Then, the sin of the sins (“paapam, paapam, mahaapaapam” in Sanskrit): drop me a line (via a comment or an email) stating what that reason possibly could be. (Hint: It has nothing to do with the feely-feely-actually-silly/wily sort of psychological reasons. )

After a while, I will come back, and via an update to this post let you know the reason.


Update 1:

Yahoo! wants you to make a note of the “12 common mistakes to avoid in job interview”: [^]. They published this article today.


Update 2 (on 24th March 2016):

Surprise! Prof. De soon enough (on 18th March IST) dropped me an email which was brief, professional, but direct to the point. A consequence, and therefore not much of a surprise: I am now more or less inclined to at least apply for the position. I have not done so as of today; see Update 3 below.


Update 3 (on 24th March 2016):

Right the same day (on 18th March 2016, at about 10:00 PM IST), my laptop developed serious hardware issues, including (but not limited to) yet another HDD crash! The previous crash was less than a year ago, last June [^].

Once again, there was loss of (some) data: the initial and less-than-25%-completed drafts of 4+ research papers; some parts (from sometime in February onwards) of my notes on the current course on CFD, SPH, etc.; as well as some preliminary Python code on SPH. The Update 2 in fact got delayed because of this development. I just got the machine back from the Dell Service last evening, and last night got it going with a rapid test installation of Windows 7. I plan to do a more serious re-installation over the next few days.


Update 4 (on 24th March 2016):

The thing in the three videos (about haptics, virtual surgery) that made me uncomfortable or uneasy was the fact that, in each case, the surgeon was standing in a way that would have been just a shade uncomfortable to me. The surgeon’s hands were too “free,” i.e. unsupported (say near the elbow); his torso was stooping down in a wrong way (you couldn’t maintain that posture and still keep accuracy in hand manipulation for long, I thought); and, at the same time, he had to keep his eyes fixed on a monitor that was a bit too high up for the eyes-to-hands coordination to work right. In short, there was this seeming absence of a consideration of ergonomics, or human-factors engineering, here. Of course, it’s only a prototype, and this is only a casual onlooker’s take on the “geometry,” but that’s what made me distinctly uncomfortable.

(People often have rationalistic ideas about the proper (i.e. the least stress-inducing and most efficient) posture. In a later post, I will point out a few of these.)


A Song I Like:
(filled-in on 24th March 2016)

(Marathi) “thembaanche painjaN waaje…” [“rutu premaachaa aalaa”]
Singers: Avadhoot Gupte, Vaishali Samant
Music: Shashank Powar
Lyrics: Ravi Wadkar (?)

[An incidental note: The crash occurred—the computer suddenly froze—while I was listening to—actually, watching the YouTube video of—this song. … Somehow, even today, I still continue liking the song! … But yes, as is usual for this section, only the audio track is being referred to. (I had run into this song while searching for some other song, which I will use in a later post.)]


[Some editorial touches, other than the planned update, are possible, as always.]

[E&OE]