The quantum mechanical features of my laptop…

My laptop has developed certain quantum mechanical features after its recent repairs [^]. In particular, if I press the “power on” button, it does not always get “measured” into the “power-on” state.

That’s right. In starting the machine, it is not possible to predict whether the power-on button will work—whether pressing it will lead to an actual boot-up. Sometimes it does, sometimes it doesn’t.

For instance, the last time I shut it down was last night, just before dinner. Then, after dinner, when I tried to restart it, the quantum mechanical features kicked in, and the associated randomness was such that it simply refused the request. Ditto, this morning. Ditto, early afternoon today. But now (at around 18:00 hrs on 09 October), it somehow got up and running!


Fortunately, I have taken a backup of the crucial data (though not of all of it). So, I can afford to look at the situation with a sense of humour.

But still, if I don’t come back for a somewhat longer period of time than is usual (about 8–10 days), then know that, in all probability, I was just waiting helplessly to get this thing repaired, once again. (I plan to take it to the repairman tomorrow morning.) …

…The really bad part isn’t this forced break in browsing or blogging. The really bad part is my inability to continue with my ANN studies. It’s not possible to maintain any tempo in studies in this now-on-now-off sort of a manner—i.e., when the off periods are not of your own choosing.

Yes, I do like browsing, but once I get into the mood of studying a new topic (and, BTW, just reading through pop-sci articles does not count as studies) and especially if the studies also involve programming, then having these forced breaks is really bad. …

Anyway, bye for now, and take care.


PS: I added that note on browsing, and then it struck me. Check out a few resources while I am gone, following up on the laptop repairs (and no links, because right while I was writing this postscript, the machine crashed, and so I am somehow completing it using my smartphone—I hate this stuff, I mean typing with at most two fingers, mostly just one):

  1. As to Frauchiger and Renner’s controversial, much-discussed result, Chris Lee’s account at Ars Technica is the simplest to follow. Go through it before any other sources/commentaries, whether on the version published recently in Nature Communications or on the earlier versions in circulation since 2016.
  2. Carver Mead’s interview in the American Spectator makes for an interesting read even after almost two decades.
  3. Vinod Khosla’s prediction in 2017 that AI will make radiologists obsolete in 5 years’ time. One year is down already. For that matter, the first time he made remarks to that effect was some 6+ years ago, in 2012!
  4. As to AI’s actual status today, see the Quanta Magazine article: “Machine learning confronts the elephant in the room” by Kevin Hartnett. Both funny and illuminating (esp. if you have some idea about how ML works).
  5. And, finally, a pretty interesting coverage of something about which I didn’t have any idea beforehand whatsoever: “New AI strategy mimics how brains learn to smell” by Jordana Cepelewicz in Quanta Mag.

Ok. Bye, really, for now. See you after the laptop begins working.


A Song I Like:
Indian, instrumental: Theme song of “Malgudi Days”
Music: L. Vaidyanathan

 

 


Some running thoughts on ANNs and AI—1

Go, see if you want to have fun with the attached write-up on ANNs [^] (but please also note the version time carefully—the write-up could change without any separate announcement).

The write-up is more in the nature of very informal blabber, of the kind that goes on when people work out something on a research blackboard (or while mentioning something about their research to friends, or during a brainstorming session, or while jotting things on the back of an envelope, or something similar).

 


A “song” I don’t like:

(Marathi) “aawaaj waaDaw DJ…”
“Credits”: Go, figure [^]. E.g., here [^]. Yes, the video too is (very strongly) recommended.


Update on 05 October 2018 10:31 IST:

Psychic attack on 05 October 2018 at around 00:40 IST (i.e. the night between 4th and 5th October, IST).

 

Caste Brahmins, classification, and ANN

1. Caste Brahmins:

First, a clarification: No, I was not born in any one of the Brahmin castes, particularly, not at all in the Konkanastha Brahmins’ caste.

Second, a suggestion: Check out how many caste-Brahmins have made it to the top in the Indian and American IT industry, and what sort of money they have made—already.

No, really.

If you at all bother visiting this blog, then I do want you to take a very serious note of both these matters.

No. You don’t have to visit this blog. But, yes, if you are going to visit this blog, to repeat, I do want you to take matters like these seriously.

Some time ago, perhaps a year ago or so, a certain caste-Brahmin in Pune from some place (but he didn’t reveal his shakha, sub-caste, gotra, pravar, etc.) had insulted me, while maintaining a perfectly cool demeanor for himself, saying how he had made so much more money than me. Point taken.

But my other caste-Brahmin “friends” kept quiet at that time; not a single soul among them interjected.

In my off-the-cuff replies, I didn’t raise this point (viz., why these other caste-Brahmins were keeping quiet), but I am sure that if I were to do that, then, their typical refrain would have been (Marathi) “tu kaa chiDatos evhaDa, to tar majene bolat hotaa.” … English translation: Why do you get so angry? He was just joking.

Note the usual caste-Brahmin trick: they skillfully insert an unjustified premise; here, that you are angry!

To be blind to the actual emotional states or reactions of the next person, if he comes from some other caste, is a caste-habit with the caste-Brahmins. The whole IT industry is full of them—whether here in India, or there in USA/UK/elsewhere.

And then, today, another Brahmin—a Konkanastha—insulted me. Knowing that I am single, he asked me whether, for today, I had taken charge of the kitchen, and then proceeded to invite my father to a Ganesh Pooja—with all the outward signs of respect being duly shown to my father.


Well, coming back to the point which was really taken:

Why have caste-Brahmins made so much money—to the point that they in one generation have begun very casually insulting the “other” people, including people of my achievements?

Or has it been the case that the people of the Brahmin castes always were this third-class, in terms of their culturally induced convictions, but that we did not come to know of it in our childhood, because the elderly people around us kept such matters, such motivations, hidden from us? Maybe in the naive hope that we would thereby not get influenced in a bad manner? Possible.

And, of course, how come these caste-Brahmins have managed to attract as much money as they did (salaries in excess of Rs. 50 lakhs being the average norm in Pune) even as I was consigned only to “attract” psychic attacks (mainly from abroad) and insults (mainly from those from this land) during the same time period?

Despite all my achievements?

Do take matters like these seriously, but, of course, as you must have gathered by now, that is not the only thing I have to talk about. And the title of this post anyway makes this part amply clear.


2. The classification problem and the ANNs:

I have begun my studies of the artificial neural networks (ANNs for short). I have rapidly browsed through a lot of introductory articles (as also the beginning chapters of books) on the topic. (Yes, including those written by Indians who were born in the Brahmin castes.) I might have gone through 10+ such introductions. Many of these, I had browsed through a few years ago (I mean only the introductory parts). But this time round, of course, I picked them up for a more careful consideration.

And soon enough (i.e. over just the last 2–3 days), I realized that no one in the field (AI/ML) was offering a good explanation for this question:

Why is it that the ANN really succeeds as well as it does, when it comes to the classification tasks, but not others?

If you are not familiar with Data Science, then let me note that it is known that the ANN does not do well on all AI tasks. It does well only on one kind of them, viz., the classification tasks. … Any time you mention the more general term Artificial Intelligence, the layman is likely to think of the ANN diagram. However, ANNs are just one type of tool that the Data Scientist may use.

But the question here is this: why does the ANN do so well on these tasks?

I formulated this question, and then found an answer too, and I would sure like to share it with you (whether the answer I found is correct or not). However, before sharing my answer, I want you to give it a try.

It would be OK by me if you answer this question in reference to just one or two concrete classification tasks—whichever you find convenient. For instance, if you pick up OCR (optical character recognition, e.g., as explained in Michael Nielsen’s free online book [^]), then you have to explain why an ANN-based OCR algorithm works in classifying those MNIST digits.
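
(Just to fix ideas, and not as my answer to the question: here is a minimal Python sketch of the kind of ANN-based classifier referred to above. It assumes scikit-learn, and uses that library’s small built-in 8×8 digits dataset as a stand-in for the full MNIST set; with a single hidden layer of sigmoid units, it is essentially a scaled-down version of the network discussed in Nielsen’s book.)

```python
# A minimal sketch (not the answer to the question posed above): an ANN-based
# digit classifier, using scikit-learn's built-in 8x8 digits dataset as a
# stand-in for MNIST. Assumes scikit-learn is installed.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)      # 8x8 grayscale digit images, flattened
X = X / 16.0                             # scale pixel intensities into [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 64 sigmoid units -- structurally the same kind of
# network as in Nielsen's book, only smaller.
clf = MLPClassifier(hidden_layer_sizes=(64,), activation="logistic",
                    max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

The question, then, is not how to get such a network working (the above works out of the box), but why it works as well as it does on this kind of task.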


Hint: Studies of Vedic literature won’t help. [I should know!] OTOH, studies of good books on epistemology, or even just good accounts covering methods of science, should certainly come in handy.

I will give you all some time before I come back on that question.

In the meanwhile, have fun—if you wish to, and of course, if you are able to. With questions of this kind. (Translating the emphasis in the italics into chaste Marathi: “laayaki asali tar.” Got it?)


A song I like:
(Marathi) “ooncha nicha kaahi neNe bhagawant”
Lyrics: Sant Tukaram
Music and Singer: Snehal Bhatkar

 

Machine “Learning”—An Entertainment [Industry] Edition

Yes, “Machine ‘Learning’,” too, has been one of my “research” interests for some time now. … Machine learning, esp. ANN (Artificial Neural Networks), esp. Deep Learning. …

Yesterday, I wrote a comment about it at iMechanica. Though it was made in a certain technical context, today I thought that the comment could, perhaps, make sense to many of my general readers, too, if I supply a bit of context to it. So, let me report it here (after a bit of editing). But before coming to my comment, let me first give you the context in which it was made:


Context for my iMechanica comment:

It all began with a fellow iMechanician, one Mingchuan Wang, writing a post with the title “Is machine learning a research priority now in mechanics?” at iMechanica [^]. Biswajit Banerjee responded by pointing out that

“Machine learning includes a large set of techniques that can be summarized as curve fitting in high dimensional spaces. [snip] The usefulness of the new techniques [in machine learning] should not be underestimated.” [Emphasis mine.]

Then Biswajit had pointed out an arXiv paper [^] in which machine learning was reported to have produced some good DFT-like results for quantum mechanical simulations, too.

A word about DFT for those who (still) don’t know about it:

DFT, i.e. Density Functional Theory, is “formally exact description of a many-body quantum system through the density alone. In practice, approximations are necessary” [^]. DFT thus is a computational technique; it is used for simulating the electronic structure in quantum mechanical systems involving several hundreds of electrons (i.e. hundreds of atoms). Here is the obligatory link to the Wiki [^], though a better introduction perhaps appears here [(.PDF) ^]. Here is a StackExchange on its limitations [^].

Trivia: Walter Kohn received a Nobel (the 1998 Chemistry prize, shared with John Pople) for developing DFT. It was a very, very rare instance of a Nobel being awarded for an invention—not a discovery. But the Nobel committee, once again, turned out to have put old Nobel’s money in the right place. Even if the work itself was only an invention, it directly led to a lot of discoveries in condensed matter physics! That was because DFT was fast—fast enough to bring the physics of larger quantum systems within the scope of (any) study at all!

And now, it seems, Machine Learning has advanced enough to be able to produce results that are similar to DFT’s, but without using any QM theory at all! The computer does have to “learn” its “art” (i.e. “skill”), but it does so from the results of previous DFT-based simulations, not from the theory at the base of DFT. But once the computer does that—“learning”—and the paper shows that it is possible for the computer to do that—it is able to compute very similar-looking simulations much, much faster than even the rather fast technique of DFT itself.
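
(A hedged aside, just to make that idea concrete. The following is not the arXiv paper’s actual method; it is only a generic Python sketch of the “learn from the results of previous DFT runs” idea. The “DFT-like oracle” below is a made-up stand-in function; the point is simply that a network trained on its outputs can later be queried far more cheaply than the oracle itself.)

```python
# A generic sketch of training a cheap surrogate on "previous DFT results".
# The oracle below is a placeholder invented purely for illustration; in the
# real setting it would be an expensive DFT calculation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def dft_like_oracle(descriptors):
    # Stand-in for a slow quantum-mechanical computation: some smooth,
    # nonlinear function mapping structural descriptors to an "energy".
    return np.sin(descriptors).sum(axis=1) + 0.1 * (descriptors ** 2).sum(axis=1)

rng = np.random.default_rng(0)
X_train = rng.uniform(-2.0, 2.0, size=(2000, 5))   # hypothetical descriptors
y_train = dft_like_oracle(X_train)                 # the "previous DFT runs"

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0)
surrogate.fit(X_train, y_train)                    # the "learning" step

X_new = rng.uniform(-2.0, 2.0, size=(5, 5))
print("surrogate:", surrogate.predict(X_new))      # cheap to evaluate
print("oracle   :", dft_like_oracle(X_new))        # expensive in real life
```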

OK. Context over. Now, here in the next section, is yesterday’s comment of mine at iMechanica. (Also note that the previous exchange on this thread at iMechanica had occurred almost a year ago.) Since it has been edited quite a bit, I will not format it using a quotation block.


[An edited version of my comment begins]

A very late comment, but still, just because something struck me only this late… May as well share it….

I think that, as Biswajit points out, it’s a question of matching a technique to an application area where it is likely to be a “good enough” fit.

I mean to say, consider fluid dynamics, and contrast it to QM.

In (C)FD, the nonlinearity present in the advective term is a major headache. As far as I can gather, this nonlinearity has all but been “proved” to be the basic cause behind the phenomenon of turbulence. If so, using machine learning in CFD would be, by this simple-minded “analysis”, a basically hopeless endeavour. The very idea of using a potential presupposes differential linearity. Therefore, machine learning may be thought of as viable in computational Quantum Mechanics (viz. DFT), but not in the more mundane, classical mechanical, CFD.

But then, consider the role of the BCs and the ICs in any simulation. It is true that if you don’t handle nonlinearities right, then as the simulation time progresses, errors are soon enough going to multiply (sort of), and lead to a blowup—or at least a dramatic departure from a realistic simulation.

But then, also notice that there still is some small but nonzero interval of time which has to pass before a really bad amplification of the errors actually begins to occur. Now, what if a new “BC-IC” gets imposed right within that time interval—i.e., within the stretch that still shows “good enough” accuracy? In this case, you can expect the simulation to remain “sufficiently” realistic-looking for a long, very long time!

Something like that seems to have been the line of thought implicit in the results reported by this paper: [(.PDF) ^].

Machine learning seems to work even in CFD because, in an interactive session, a new “modified BC-IC” is, every now and then, manually introduced by none other than the end-user himself! And the location of the modification is precisely the region from where the flow in the rest of the domain would get most dominantly affected during the subsequent, small, time evolution.
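
(A toy illustration of this line of thought, mine and not the cited paper’s: let the error grow by some fixed factor per time step, and let a fresh “BC-IC” reset it every few steps. The numbers are made up purely for illustration.)

```python
# Toy model: per-step error amplification, with and without periodic "BC-IC"
# resets. All numbers here are illustrative, not taken from any simulation.
amplification = 1.5          # error growth factor per time step
reset_every = 10             # the user imposes a fresh "BC-IC" every 10 steps
err_no_reset = 1e-6
err_with_reset = 1e-6
worst_with_reset = 1e-6      # worst error ever seen in the reset case

for step in range(1, 101):
    err_no_reset *= amplification
    err_with_reset *= amplification
    worst_with_reset = max(worst_with_reset, err_with_reset)
    if step % reset_every == 0:
        err_with_reset = 1e-6       # a new "BC-IC" zaps the error back down

print(f"no resets      : error ~ {err_no_reset:.1e}")           # blows up
print(f"periodic resets: worst error ~ {worst_with_reset:.1e}")  # stays small
```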

It’s somewhat like an electron rushing through a cloud chamber. By the uncertainty principle, the electron “path” sure begins to get hazy immediately after it is “measured” (i.e. absorbed and re-emitted) by a vapor molecule at a definite point in space. The uncertainty in the position grows quite rapidly. However, what actually happens in a cloud chamber is that, before this cone of haziness becomes too big, along comes another vapor molecule and “zaps”, i.e. “measures”, the electron back on to a classical position. … After a rapid succession of such going-hazy-getting-zapped events, the end result turns out to be a very, very classical-looking (line-like) path—as if the electron always were only a particle, never a wave.

Conclusion? Be realistic about how smart the “dumb” “curve-fitting” involved in machine learning can at all get. Yet, at the same time, also remain open to all the application areas where it can be made to work—even including those areas where, “intuitively”, you wouldn’t expect it to have any chance of working!

[An edited version of my comment is over. Original here at iMechanica [^]]


 

“Boy, we seem to have covered a lot of STEM territory here… Mechanics, DFT, QM, CFD, nonlinearity. … But where is either the entertainment or the industry you had promised us in the title?”

You might be saying that….

Well, the CFD paper I cited above was about the entertainment industry. It was, in particular, about the computer games industry. Go check out SoHyeon Jeong’s Web site for more cool videos and graphics [^], all using machine learning.


And, here is another instance connected with entertainment, even though now I am going to make it (mostly) explanation-free.

Check out the following piece of art—a watercolor landscape of a monsoon-time but placid sea-side, in fact. Let me just say that a certain famous artist produced it; in any case, the style is plain unmistakable. … Can you name the artist simply by looking at it? See the picture below:

A sea beach in the monsoons. Watercolor.

If you are unable to name the artist, then check out this story here [^], and a previous story here [^].


A Song I Like:

And finally, to those who have always loved Beatles’ songs…

Here is one song which, I am sure, most of you have never heard before. In any case, it came to be distributed only recently. When and where was it recorded? For both the song and its recording details, check out this site: [^]. Here is another story about it: [^]. And, if you liked what you read (and heard), here is some more stuff of the same kind [^].


Endgame:

I am of the Opinion that 99% of the “modern” “artists” and “music composers” ought to be replaced by computers/robots/machines. Whaddya think?

[Credits: “Endgame” used to be the way Mukul Sharma would end his weekly Mindsport column in the yesteryears’ Sunday Times of India. (The column perhaps also used to appear in The Illustrated Weekly of India before ToI began running it; at least I have a vague recollection of something of that sort, though I can’t be quite sure. … I would have been a school-boy back then, when the Weekly perhaps ran it.)]