Introducing the world at large to a new concept, viz., “Blog-Filling”—Part 1

I hereby introduce to the world at large, which has been waiting for it with a withheld breath, a new concept, viz. (which is read as “namely” and not “that is,” though the difference was apparently lost, long ago, on the English Newspaper Editors of my current town; apparently, not only out of a very poor sense of English, but also out of an equally poor sense of supervision descending here from the likes of Delhi and Mumbai—the two highly despicable towns of India).


The concept itself pertains to the idea of having to fill some column-centimeters (or, column-inches, in that deprecated country, viz., the USA) with whatever it is that you have at hand to fill them with.


The world (including the said USA) has been waiting precisely for such a new concept, and I am particularly glad at not only having announced it, but also at having developed the requisite skills.


The concept in question may most aptly be named: “Blog-Filling.” Translated into a noun, it reads: a “blog filler.”

This post now [in case you didn’t already guess] is The Blog Filler. [Guess I might have already announced its arrival, given my psycho-epistemological habits, i.e., second natures.]


Ummm… In case you are still found wondering, may I repeat: this post really is a blog-filler.


OK. Honest. I will deliver on the promised count. So, here we go: I mean, on the RD+Gulzar+Lata song I had had [and maybe also have had/had had/had have/etc.] promised…


A Song I Like:

(Hindi) “silli hawaa chhoo gayee, sillaa badan chhill_ gayaa”
Credits: Are you so dumb as not to be able to guess even these?
OK. I’ll tell you what: I will note these down, right here:
Lyrics: Gulzar
Music: R. D. Burman
Singer: Lata Mangeshkar


A “Philanthropic” Assertion:

Even if you are so dumb, and, as usual, richer-than-me, or an Approved SPPU Mechanical Engineering Faculty (or of Any Other Indian University/AICTE/UGC), as not to have been able to even guess it—or, in summary, if you are an American Citizen:

Don’t worry, even if you have not been able to guess it. … It was just a small simple game…

…Continuing on the same lines [which lines, people like me don’t need]: now, take care, and best, and good-bye; I mean it; etc.


Bye for now. Don’t bother me too much.

 


Machine “Learning”—An Entertainment [Industry] Edition

Yes, “Machine ‘Learning’,” too, has been one of my “research” interests for some time now. … Machine learning, esp. ANN (Artificial Neural Networks), esp. Deep Learning. …

Yesterday, I wrote a comment about it at iMechanica. Though it was made in a certain technical context, today I thought that the comment could, perhaps, make sense to many of my general readers, too, if I supply a bit of context to it. So, let me report it here (after a bit of editing). But before coming to my comment, let me first give you the context in which it was made:


Context for my iMechanica comment:

It all began with a fellow iMechanician, one Mingchuan Wang, writing a post titled “Is machine learning a research priority now in mechanics?” at iMechanica [^]. Biswajit Banerjee responded by pointing out that

“Machine learning includes a large set of techniques that can be summarized as curve fitting in high dimensional spaces. [snip] The usefulness of the new techniques [in machine learning] should not be underestimated.” [Emphasis mine.]

Biswajit then pointed out an arXiv paper [^] in which machine learning was reported as having produced some good DFT-like results for quantum mechanical simulations, too.

A word about DFT for those who (still) don’t know about it:

DFT, i.e. Density Functional Theory, is a “formally exact description of a many-body quantum system through the density alone. In practice, approximations are necessary” [^]. DFT thus is a computational technique; it is used for simulating the electronic structure of quantum mechanical systems involving several hundreds of electrons (i.e. hundreds of atoms). Here is the obligatory link to the Wiki [^], though a better introduction perhaps appears here [(.PDF) ^]. Here is a StackExchange thread on its limitations [^].
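For those who like to see at least one equation, the standard Kohn–Sham statement of the scheme (a textbook form, quoted here only to fix ideas; all the approximations hide in the exchange–correlation functional $E_{xc}[n]$) reads:

$$E[n] = T_s[n] + \int v_{\text{ext}}(\mathbf{r})\, n(\mathbf{r})\, d^3r + E_H[n] + E_{xc}[n], \qquad \left[-\tfrac{\hbar^2}{2m}\nabla^2 + v_{\text{eff}}(\mathbf{r})\right]\psi_i(\mathbf{r}) = \varepsilon_i\,\psi_i(\mathbf{r}), \qquad n(\mathbf{r}) = \sum_i |\psi_i(\mathbf{r})|^2 .$$

One solves the effective single-particle equations self-consistently, and everything gets expressed through the density $n(\mathbf{r})$—hence the name.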

Trivia: Walter Kohn received a Nobel—the 1998 Chemistry Nobel, in fact—for developing DFT (the Kohn–Sham scheme being joint work with Lu Jeu Sham). It was a very, very rare instance of a Nobel being awarded for an invention—not a discovery. But the Nobel committee, once again, turned out to have put old Nobel’s money in the right place. Even if the work itself was only an invention, it did directly lead to a lot of discoveries in condensed matter physics! That was because DFT was fast—fast enough that it could bring the physics of the larger quantum systems within the scope of (any) study at all!

And now, it seems, Machine Learning has advanced enough to be able to produce results similar to DFT’s, but without using any QM theory at all! The computer does have to “learn” its “art” (i.e. “skill”), but it does so from the results of previous DFT-based simulations, not from the theory at the base of DFT. But once the computer does that “learning”—and the paper shows that it is possible for the computer to do so—it is able to compute very similar-looking simulations much, much faster than even the rather fast technique of DFT itself.
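To make the general idea concrete, here is a minimal, purely illustrative sketch of such a surrogate model: fit a regressor to (structure-descriptor, DFT-energy) pairs, then predict energies for new structures without solving any quantum mechanics. The descriptors, the kernel-ridge model, and the random stand-in data are my own assumptions for illustration—they are not the method of the arXiv paper mentioned above.

# Toy "learn from DFT outputs" surrogate: kernel ridge regression on made-up
# descriptors. Replace X, y with real (descriptor, DFT-energy) pairs.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))            # stand-in structure descriptors
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2  # stand-in "DFT energies"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1)
model.fit(X_tr, y_tr)                     # the "learning": from prior DFT results only

y_hat = model.predict(X_te)               # fast predictions; no QM solved here
print("RMS error on held-out data:", float(np.sqrt(np.mean((y_hat - y_te) ** 2))))

The whole point is in the last two lines: once trained, the prediction step costs almost nothing compared to running a fresh DFT calculation.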

OK. Context over. Now, here in the next section, is my comment from yesterday at iMechanica. (Also note that the previous exchange on this thread at iMechanica had occurred almost a year ago.) Since it has been edited quite a bit, I will not format it using a quotation block.


[An edited version of my comment begins]

A very late comment, but still, just because something struck me only this late… May as well share it….

I think that, as Biswajit points out, it’s a question of matching a technique to an application area where it is likely to be of “good enough” a fit.

I mean to say, consider fluid dynamics, and contrast it to QM.

In (C)FD, the nonlinearity present in the advective term is a major headache. As far as I can gather, this nonlinearity has all but been “proved” to be the basic cause behind the phenomenon of turbulence. If so, then using machine learning in CFD would be, by this simple-minded “analysis,” a basically hopeless endeavour. The very idea of using a potential presupposes linearity of the governing differential equation. Therefore, machine learning may be thought of as viable in computational Quantum Mechanics (viz. DFT), but not in the more mundane, classical-mechanical CFD.
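(To spell out that contrast in the standard textbook notation: the advective term in the incompressible Navier–Stokes momentum equation is quadratic in the velocity, whereas potential flow rests on a linear equation:

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu\,\nabla^2 \mathbf{u}, \qquad \text{vs.} \qquad \mathbf{u} = \nabla\phi,\quad \nabla^2\phi = 0 .$$

The $(\mathbf{u}\cdot\nabla)\mathbf{u}$ term is the nonlinear culprit; the Laplace equation for the potential $\phi$ is linear.)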

But then, consider the role of the BCs and the ICs in any simulation. It is true that if you don’t handle nonlinearities right, then as the simulation time progresses, errors are soon enough going to multiply (sort of), and lead to a blowup—or at least a dramatic departure from a realistic simulation.

But then, also notice that there still is some small but nonzero interval of time which has to pass before a really bad amplification of the errors actually begins to occur. Now, what if a new “BC-IC” gets imposed right within that time-interval—i.e., while the solution still shows “good enough” an accuracy? In this case, you can expect the simulation to remain “sufficiently” realistic-looking for a long, very long time!
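Here is a deliberately over-simplified numerical toy of that line of thought (the growth factor, the reset interval, and the reset level below are all made-up numbers, chosen only to illustrate the point): if the error grows, say, exponentially, but something clamps it back down before it gets large, the running error stays bounded.

# Toy model: error grows exponentially each step, but is "re-anchored"
# (as if by a freshly imposed BC-IC) every reset_every steps.
growth_rate = 1.2     # made-up per-step amplification factor
reset_every = 10      # made-up interval at which a new "BC-IC" arrives
reset_level = 1e-6    # made-up error level right after each re-anchoring

error = reset_level
worst = 0.0
for step in range(1, 201):
    error *= growth_rate
    worst = max(worst, error)
    if step % reset_every == 0:
        error = reset_level   # the new BC-IC "zaps" the error back down

print("worst error ever seen:", worst)
# Bounded near reset_level * growth_rate**reset_every; without the resets it
# would have reached reset_level * growth_rate**200 by the end of the run.

That, in caricature, is why frequent re-imposition of a realistic “BC-IC” can keep a nonlinear simulation looking right far longer than a naive error estimate would suggest.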

Something like that seems to have been the line of thought implicit in the results reported by this paper: [(.PDF) ^].

Machine learning seems to work even in CFD because, in an interactive session, a new “modified BC-IC” is, every now and then, manually introduced by none other than the end-user himself! And the location of the modification is precisely the region from which the flow in the rest of the domain gets most dominantly affected during the subsequent, small, time evolution.

It’s somewhat like an electron rushing through a cloud chamber. By the uncertainty principle, the electron’s “path” sure begins to get hazy immediately after it is “measured” (i.e. absorbed and re-emitted) by a vapor molecule at a definite point in space. The uncertainty in the position grows quite rapidly. However, what actually happens in a cloud chamber is that, before this cone of haziness becomes too big, along comes another vapor molecule and “zaps,” i.e. “measures,” the electron back on to a classical position. … After a rapid succession of such going-hazy-getting-zapped events, the end result turns out to be a very, very classical-looking (line-like) path—as if the electron always were only a particle, never a wave.
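(The rate of that “going hazy” is quantified by the standard textbook result for the spreading of a free Gaussian wave packet, quoted here only to make the qualitative picture concrete:

$$\Delta x(t) = \Delta x_0 \sqrt{1 + \left(\frac{\hbar\, t}{2 m\, \Delta x_0^{\,2}}\right)^{2}},$$

so the sharper the localization $\Delta x_0$ produced by one “measurement,” the faster the subsequent spread—until the next vapor molecule re-localizes the electron.)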

Conclusion? Be realistic about how smart the “dumb” “curve-fitting” involved in machine learning can at all get. Yet, at the same time, also remain open to all the application areas where it can be made to work—even including those areas where, “intuitively,” you wouldn’t expect it to have any chance of working!

[An edited version of my comment is over. Original here at iMechanica [^]]


 

“Boy, we seem to have covered a lot of STEM territory here… Mechanics, DFT, QM, CFD, nonlinearity. … But where is either the entertainment or the industry you had promised us in the title?”

You might be saying that….

Well, the CFD paper I cited above was about the entertainment industry. It was, in particular, about the computer games industry. Go check out SoHyeon Jeong’s Web site for more cool videos and graphics [^], all using machine learning.


And, here is another instance connected with entertainment, even though now I am going to make it (mostly) explanation-free.

Check out the following piece of art—a watercolor landscape of a monsoon-time but placid sea-side, in fact. Let me just say that a certain famous artist produced it; in any case, the style is plain unmistakable. … Can you name the artist simply by looking at it? See the picture below:

A sea beach in the monsoons. Watercolor.

If you are unable to name the artist, then check out this story here [^], and a previous story here [^].


A Song I Like:

And finally, to those who have always loved Beatles’ songs…

Here is one song which, I am sure, most of you have never heard before. In any case, it came to be distributed only recently. When and where was it recorded? For both the song and its recording details, check out this site: [^]. Here is another story about it: [^]. And, if you liked what you read (and heard), here is some more stuff of the same kind [^].


Endgame:

I am of the Opinion that 99% of the “modern” “artists” and “music composers” ought to be replaced by computers/robots/machines. Whaddya think?

[Credits: “Endgame” used to be the way Mukul Sharma would end his weekly Mindsport column in the yesteryears’ Sunday Times of India. (The column perhaps also used to appear in The Illustrated Weekly of India before ToI began running it; at least I have a vague recollection of something of that sort, though can’t be quite sure. … I would be a school-boy back then, when the Weekly perhaps ran it.)]

 

A flip, but not a flop…

“Why is it that when you look in the mirror, the left and right directions appear flipped, but not the up and down?”


Stop reading!

Do not read further until you have honestly tried answering that question!


The question was asked at the Physics StackExchange.

As often is the case, using only text is not at all good when it comes to explaining physics [^]; adding figures does help [^]. And then, animations are even better at it than mere “dead” (static) figures. Going further, interactive graphics, which let the user participate in manipulating the presentation of information, of course beat mere animations. Better than that, if possible, is an actual demonstration in real life, accompanied by an explanation using simple words.

…As far as the above question is concerned, the Physics Girl [^] does a fairly good job [^].
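(For the record, the resolution can be put in one line of vector algebra. A mirror lying in the $x$–$y$ plane maps

$$(x,\, y,\, z) \;\mapsto\; (x,\, y,\, -z),$$

i.e. it reverses only the front–back direction normal to its surface; left–right ($x$) and up–down ($y$) are left untouched. The apparent left–right swap is something we ourselves introduce when we mentally rotate about the vertical axis so as to “stand in” our image’s place.)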

The best mode of teaching-learning, of course, is an actual and immediate interaction with a person, who in turn might use (and allow you to use) any and all of the above options!

And that’s the reason why, regardless of how much technology progresses, the actual person-to-person type of teaching will never go out of business.


A Video I Liked:

A `Thought Leader’ gives a talk that will inspire your thoughts: [^]

 

 

WEF, Institutions, Media and Credibility

Some time ago, I had run into some Internet coverage of a WEF (World Economic Forum) report about institutions and their credibility rankings. I no longer remember where I had seen it mentioned, but the fact that such an article had appeared had somehow stayed in my mind.

Today, in order to locate the source, I googled using the strings “WEF”, “Credibility” and “Media”. The following are a few links I got as a result of these searches. In each case, I first give the source organization, then the title of the article they published, and finally, the URL. Please note, all cover essentially the same story.

  • Edelman, “2017 Edelman TRUST BAROMETER Reveals Global Implosion of Trust,” [^]
  • Quartz, “The results are in: Nobody trusts anyone anymore,” [^]
  • PostCard, “Must read! World Economic Forum releases survey on Indian media, the results are shameful!,” [^]
  • TrollIndianPolitics, “`INDIAN MEDIA 2ND MOST UNTRUSTED INSTITUTION’ Reports WORLD ECONOMIC FORUM,” [^]
  • Financial Express, “WEF Report: ‘India most trusted nation in terms of institutions’,” [^]
  • Financial Times, “Public trust in media at all time low, research shows,” [^]
  • WEF, “Why credibility is the future of journalism,” [^]

“Same hotel, two different prices…” … [Sorry, just couldn’t resist it!]

Oh, BTW, I gather that the report says that institutions in India are more credible than those in Singapore.

Do click the links if you haven’t already done so. [No, I don’t get paid for the clicks on the outgoing links.]


Still getting settled in the new job and the city. Some stuff still remains to be moved. But I guess it was time to slip in at least a short post. So there. Take care, and bye for now.