# LOL!

LOL!

Yeah! Just that!

LOL!!

Update on 2020.02.17 16:02 IST:

The above is a snap I took yesterday at the Bhau Institute [^]’s event: “Pune Startup Fest” [^].

The reason I found myself laughing out loud was this: Yesterday, some of the distinguished panelists made one thing very clear: The valuation for the same product is greater in the S.F. Bay Area than in Pune, because the eco-system there is much more mature, with the investors there having seen many more exits—whether successful or otherwise.

Hmmm…

When I was in the USA (which was in the 1990s), they would always say that not everyone had to rush to the USA, especially to the S.F. Bay Area, because technology works the same way everywhere, and hence, people should rather be going back to India. The “they” of course included the Indians already established there.

In short, their never-stated argument was this much: You can make as much money working from India as from the SF Bay Area. (Examples of the “big three” of the Indian IT industry would often be cited, especially Narayana Murthy’s.) So, “why flock here”?

Looks like, even if it took them some 2–3 decades, something better seems finally to have dawned on them. They seem to have gotten to the truth, which is: Market valuations for the same product are much greater in the SF Bay Area than elsewhere!

So, this all was in the background, in the context.

Then, I was musing about their rate of learning last night, and that’s when I wrote this post! Hence the title.

But of course, not everything about, or in, the event was laughable.

I particularly liked Vatsal Kanakiya’s enthusiasm (the second guy from the right in the above photo; his LinkedIn profile is here [^]). I appreciated his ability to keep on highlighting what they (their firm) are doing, despite a somewhat cocky (if not outright dismissive) way in which his points were being received, at least initially. Students attending the event might have found his enthusiasm more in line with theirs, especially after he not only mentioned Guy Kawasaki’s 10-20-30 rule [^], but also cited a statistic from their own office to support it: 1892 proposals last month (if I got that figure right). … Even though he was very young, it was this point which finally made it impossible, for many in that hall, to be too dismissive of him. (BTW, he is from Mumbai, not Pune. (Yes, COEP is in Pune.))

A song I like:

(Hindi) ये मेरे अंधेरे उजाले ना होते (“ye mere andhere ujaale naa hote”)
Music: Salil Chowdhury
Singers: Talat Mahmood, Lata Mangeshkar
Lyrics: Rajinder Kishen

[Buildings made from the granite stone [I studied geology in my SE i.e. second year of engineering] have a way of reminding you of a few songs. Drama! Contrast!! Life!!! Money!!!! Success!!!!! Competition Success Review!!!!!!  Governments!!!!!!! *Business*men!!!!!!!!]

# Equations in the matrix form for implementing simple artificial neural networks

(Marathi) हुश्श्… [translit.: “hushsh…”, equivalent word prevalent among the English-speaking peoples: “phewww…”]

I’ve completed the first cut of a document with the same title as this post. I wrote it in LaTeX. (Too many equations!)

I’ve just uploaded the PDF file at my GitHub account, here [^]. Remember, it’s still only in the alpha stage. (A beta release will follow after a few days. The final release may take place after a couple of weeks or so.)

Below the fold, I copy-paste the abstract and the preface of this document.

“Equations in the matrix form for implementing simple artificial neural networks”

Abstract:

This document presents the basic equations in reference to which artificial neural networks are designed and implemented. The scope is restricted to
the simpler feed-forward networks, including those having hidden layers. Convolutional and recurrent networks are out of the scope.

Equations are often initially noted using an index-based notation for the typical element. However, all the equations are eventually cast in the direct
matrix form, using a consistent set of notation. Some of the minor aspects of notation were invented to make the presentation as simple and direct as
possible.

The presentation here regards a layer as the basic unit. The term “layer” is understood in the same sense in which APIs of modern libraries like
TensorFlow-Keras 2.x take it. The presentation here is detailed enough that neural networks with hidden layers could be implemented, starting from
scratch.
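[An aside from me, interjected here and not part of the pasted document: just to indicate the kind of “from scratch” implementation the abstract has in mind, here is a minimal numpy sketch of a forward pass through one hidden layer. The layer sizes, variable names, and the choice of the sigmoid activation are all mine, made up purely for illustration.]

```python
import numpy as np

def sigmoid(z):
    # Element-wise logistic activation.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # Each layer: an affine map (weights and bias), then an activation.
    a1 = sigmoid(W1 @ x + b1)    # hidden layer output
    a2 = sigmoid(W2 @ a1 + b2)   # network output
    return a2

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                          # 4 input features
W1, b1 = rng.standard_normal((3, 4)), np.zeros(3)   # hidden layer: 3 units
W2, b2 = rng.standard_normal((2, 3)), np.zeros(2)   # output layer: 2 units
y = forward(x, W1, b1, W2, b2)
print(y.shape)  # (2,)
```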

Preface:

Raison d’être:

I wrote this document mainly for myself, to straighten out the different notations and formulae used in different sources and contexts.

In particular, I wanted to have a document that better matches the design themes used in today’s libraries (like TensorFlow-Keras 2.x) than the description in the text-books.

For instance, in many sources, the input layer is presented as consisting of both a fully connected layer and its corresponding activation layer. However, for flexibility, libraries like TF-Keras 2.x treat them as separate layers.

Also, some sources uniformly treat the input of any layer as $\vec{X}$, and the output of any layer as the activation, $\vec{a}$, but such usage overloads the term “activation”. Confusions also creep in because different conventions exist: treating the bias by expanding the input vector with $1$ and the weights matrix with $w_0$; the “to–from” vs “from–to” convention for the weights matrix; etc.

I wanted to have a consistent notation that dealt with all such issues with a uniform, matrix-based notation that came as close to the numpy ndarray interface as possible.
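[Another aside from me, not part of the pasted preface: the bias conventions mentioned above are numerically equivalent, which a few lines of numpy can confirm. The array names here are mine, chosen only for this illustration.]

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(3)        # input vector
W = rng.standard_normal((2, 3))   # "to-from": rows index the receiving units
b = rng.standard_normal(2)        # bias vector

# Convention 1: keep the bias as a separate vector.
z1 = W @ x + b

# Convention 2: absorb the bias by prepending 1 to the input vector,
# and the bias as an extra (w_0) column in the weights matrix.
x_aug = np.concatenate(([1.0], x))
W_aug = np.hstack((b[:, None], W))
z2 = W_aug @ x_aug

# The "from-to" convention stores the transpose; the product order flips.
z3 = x @ W.T + b

print(np.allclose(z1, z2) and np.allclose(z1, z3))  # True
```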

Level of coverage:

The scope here is restricted to the simplest ANNs, including the simplest DL networks. Convolutional neural networks and recurrent neural networks are out of the scope.

Yet, this document wouldn’t make for a good tutorial for a complete beginner; it is likely to confuse him more than explain anything to him. So, if you are completely new to ANNs, it is advisable to go through sources like Nielsen’s online book [^] to learn the theory of ANNs. Mazur’s fully worked out example of the back-propagation algorithm [^] should also prove very helpful, before returning to this document.

If you already know ANNs, and don’t want to see equations in the fully expanded forms—or, plain dislike the notation used here—then a good reference, roughly at the same level as this document, is the set of write-ups/notes by Mallya [^].

Feedback:

Any feedback, especially regarding errors, typos, inconsistencies in notation, suggestions for improvements, etc., will be thankfully received.

How to cite this document:

TBD at the time of the final release version.

Further personal notings:

I began writing this document on 24 January 2020. By 30 January 2020, I had some 11 pages done up, which I released via the last post.

Unfortunately, it was too tentative, with a lot of errors, misleading or inconsistent notation, etc. So, I deleted it within a day. No point in having premature documents floating around in cyberspace.

I had mentioned, right in the last post here on this blog (on 30 January 2020), that the post itself would also be gone. I will keep it for a while, and then, maybe after a week or two, delete it.

Anyway, by the time I finished the alpha version today, the document had grown from the initial 11 pages to some 38 pages!

Typing out all the braces, square brackets, parentheses, subscripts for indices, subscripts for sizes of vectors and matrices… It all was tedious. … Somehow, I managed to finish it. (Will think twice before undertaking a similar project, but am already tempted to write a document each on CNNs and RNNs, too!)

Anyway, let me take a break for a while.

If interested in ANNs, please go through the document and let me have your feedback. Thanks in advance, take care, and bye for now.

A song I like:

[Just listen to Lata here! … Not that others don’t get up to the best possible levels, but still, Lata here is, to put it simply, heavenly! [BTW, the song is from 1953.]]

(Hindi) जाने न नजर पहचाने जिगर (“jaane naa najar pahechane jigar”)
Singers: Lata and Mukesh
Music: Shankar-Jaikishen
Lyrics: Hasrat Jaipuri

# Some thoughts concerning my New Year’s Resolutions

Here is some loud-thinking regarding what NYRs I should make, and a tentative list for the same.

[That’s right. IMO, you don’t make NYRs on the 31st—you only finalize them on that day. You should have thought a bit for your list over at least a few days before The Evening comes. That’s how it should be done.]

Anyway, here’s what I think of it, as of today.

1. Quantum Mechanics:

1.1 What I did this year:

As to QM, I could not keep the time-table I had thought of when I made my resolutions last year [^].

Sure enough, by way of keeping the resolution, I did post the Outline document at iMechanica [^], and I did it right within the very optimistic time-frame too [10 February instead of 28 February]. However, I didn’t come to write the paper proper. The reason is, after posting the Outline document, I had a bit of interaction with a couple of physicists, and thereby realized that directly writing the paper would be premature.

So, I changed the plan on the fly. I then noted many clarifications over the year, both here and on twitter. In fact, I also completed the ontologies series—a big effort, consisting of 10 posts, many of them with more than 5k words (and containing a lot of equations too).

However, I have not had the time to write down a post on what my solution to the measurement problem is like. The reason is, Data Science came to occupy much of my time.

Yet, in the year 2020, I think I am going to pull my thoughts on the Measurement Problem together, and write a piece on this remaining topic too—either a blog post or an informal LaTeX document. I think the latter. (But am not sure about that. If you post PDFs, people unnecessarily think the material is less tentative than it really is.) I think this task should definitely be doable within the year. More on it, a bit later, below.

1.2 The ontologies series should be converted into a standalone document:

But before publishing something on the measurement problem (even if only on my blog), I also think that I should first convert my ontologies series of posts into a standalone LaTeX document. Since this series was written purely on the fly, without much planning, some unnecessary repetitions crept in. [Actually, it all began with some five minutes of idle weighing of this idea while going to sleep one night… I then got out of the bed, switched on the light, and hurriedly noted down the idea in a pocket diary. The hurried noting said three posts, one each on NM, EM and QM.] For the same reason, there also were some minor digressions or detours, and also some minor changes of notation (esp. in the ontology of EM, as I revised my positions regarding E fields and all). So, I could now take the opportunity to straighten out all such matters.

Ideally, I should also add some diagrams to this planned document (on the ontologies). But I would have neither the time nor the enthusiasm to make them.

So, if there is any enthusiastic guy/girl who wants to help me out in this respect, get in touch, or suggest a suitable illustrator/animator who could work on a pro bono basis.

I won’t be able to pay any money. But it could make for a good project for students of commercial art, animation, etc. So, if interested, get in touch. (It goes without saying that if I begin to make money next year, I will make sure to pay something, at least by way of an honorarium. If I make even more money, I will pay even more, up to good market rates.) If no artist is available, I will go ahead with cell-phone shots of my own rough, hand-drawn sketches.

So, is this goal of converting the ontologies-related blog-posts into a document—a mini-book of sorts—doable? Right in 2020? I think definitely yes. I also think that I am going to pick this one up for a resolution.

1.3 Measurement problem: How to go about writing a paper on it:

Even as this activity begins, it should be possible to write something on the Measurement Problem. However, there is another issue to consider. Ideally, the writing should go with some simulations too. … Now, I am confident that I will be able to find the time to write the document, but I am not equally sure about having the time to conduct the simulations too. (Also, I won’t be seeking help from physicists or the like. They are third-class people.)

As of today, I tend to think that I should first complete both (i) the standalone Ontologies document, and (ii) the Measurement Problem documents. Only then should I revise the Outline document (posted at iMechanica).

It’s only then that I should download the article template files from Nature / Science / PRL. … No, don’t get shocked—there is nothing shocking here.

I do believe that I have a good paper here in the pipeline. If someone solves the measurement problem in such a way that (i) it’s easily understandable even to engineers, (ii) there is a new but simple proposal for the necessary nonlinearity—one that does not introduce any extra variables into the Schrödinger equation, and yet one that can be shown to reduce to the linear formalism in a limit, and (iii) the approach can be directly translated into 3D simulations, then such a development would very easily qualify for publication even in Nature—provided the writing is brief enough. So there. … All the preparatory documents would then come in handy as “supplemental information”.

… Come to think of it, this would be my first journal paper. (At least as a first/sole author. In any case, it will be my first journal paper on a theory I myself formulated.)

But the question is, would it be possible to complete the paper right in 2020? I doubt it. The reason is, I would also be busy with a very fast-moving field, viz., Data Science. But still, I think it would be worth giving a good shot to converting the revised Outline document (itself TBD in 2020) into the form of a paper.

2. Data Science:

2.1 What was planned:

A job in Data Science didn’t come through during 2019, as anticipated. So, some of my planned activities related to the same didn’t occur. However, other productive activities came to replace them. So, it’s OK.

Employers have been less productive than I have been.

2.2 What I could be doing:

As of now, projecting into 2020:

The problem of how to make DL more accurate (even robust) seems interesting. I perhaps might have some new ideas to try out here… However, I don’t have enough computing resources to be able to actually try these ideas out, empirically. So, this one probably will not make it to my list of resolutions.

The approach seems relevant (at least with my current knowledge of ANNs and DL), but I am not sure how good it is. Theoretically, it’s not a big deal—“just a variation” on the same old, known, themes. But worth trying. And, it does seem that people haven’t pursued such ideas—even if the ideas seem to have good potential.

If a VC wants to give me an informal scholarship, I could pursue the idea further on a priority and turn over the results to him. Feel free to get in touch. (These rich dumbards won’t, I predict.)

3. Health:

I have always failed in keeping this one resolution of going for walks for at least 25–30 minutes (preferably 45) a day. I could not, despite making a resolution about it—and working on it.

I think it would be a good idea to keep at least a “compromised” version of this resolution for this year too. Failures don’t matter. You have to try again. Also, the one related to सूर्य नमस्कार (“soorya namaskaars”). (I did better, much better, on this count in the last year.)

4. Mental health:

4.1 Blogging—what to do?

I think it’s high time to make a decision: Either close down this blog, or stop writing a lot on it. Maybe one post every 20 days or so. Or something like it.

4.2 My blogging, overall:

My current rate at this blog, over 12 years, is close to one post every 11.01 days—not counting my posts/replies at iMechanica.

Last year, I also wrote unusually big posts (often longer than 2000 words, and in the ontologies series, many times going into the range of 5k to 10k words, just because I wanted to finish this series off).

At iMechanica, I find that I have made some 250 blog-posts/replies, out of which there could be some 30–40 blog posts proper (maybe about 50–100 too; I haven’t counted them), and the rest are replies.

I had my personal Web site set up when I was doing my PhD. I think I set it up in 2007. I used to post some blog-like updates on that Web site back then. Then I began blogging here on 3 January 2008. I began blogging at iMechanica in March 2008.

Most Indians who used to blog regularly in those times have more or less discontinued doing any significant blogging. Professors persisted for a longer time, but they too have mostly stopped. Some got promoted or assumed greater administrative responsibilities, which must have affected their being regulars (people like, say, Dheeraj Sanghi or Abinandanan). Others might have simply lost interest. Very few still go on, and their pace has reduced a lot.

Another point: I also don’t get (m)any good quality replies. Most of my posts in fact are just monologues. There is a definite feeling that people from more powerful countries / positions (esp. Americans, but also others) want to read what I write, but they don’t want to acknowledge—lest this action on their part lead to an elevation of my position / prestige. It’s as if they want to benefit from me, but still want to feel superior at all times, anyway.

Not at all unexpected from Americans—I have spent 7 years of my life in that country, and I know them as a people pretty well. Retards eternally looking for compliments for being great, re-assurances that they are not fools, and obsessed with money and power. Without any thought of being reciprocal. Also, unnecessarily assuming a grumpiness (even “intellectual goon-some-ness”) while talking to foreigners. (“Hey, there was a guy here who did it first!”) That’s what they are like, when all facades are dropped. Not all of them, but most of them. (Yes, I am a facts-driven guy.) You couldn’t count on them to acknowledge that I post neat things, or to respond positively to my ontologies series, or to the fact that I have solved the measurement problem. No scope for saying: “Hey, hey, hey, an American did it first!” That’s (especially) why.

But with about 44% share, the largest group of my readership is constituted of Americans. (Yes, I am a data-driven guy.)

Second come the Indians. They constitute the second biggest group, at about 37%. You already know what they are typically like. “Unless I pull down this Ajit Jadhav guy, I cannot rise higher up.” Here, I’ve quoted a past colleague from the IT field—a junior colleague of mine. No further comments necessary.

Anyway, what I wanted to highlight here is that, my experience of blogging has been remarkably different from what, say, Scott Aaronson, Atanu Dey, or Abinandanan might have had.

4.3 What could make for a good New Year’s resolution in this direction?

So, the question is: Should I keep engaging people who don’t know how to reciprocate values (or know too well how to deliberately pull down others so as to rise up in career, calling names and ascribing psychological weaknesses (“you are imposing” types) to accomplish such goals)? And for what reason or purpose? And should I be doing it all for free?

But then, with blogging, there also are advantages like a certain professional visibility. Now that I have got into Data Science, it’s important to have some visibility here too. So, may be closing down the blog wouldn’t be the best thing to do.

So, that’s another thing that I am thinking about.

Guess I will wrap up my thoughts on this matter and reach some decision by the time the new year’s eve arrives. … One option here is to start a new blog, mainly for Data Science, and with it, maybe, shut this one down permanently…. Let me think about it….

5. Other things from the last year’s list:

I think I did pretty OK on the counts of diet and also meditation, though not much on exercises (though I did do them for some spans of time, as noted above).

6. Not resolutions, just a wish-list of sorts:

This is just a wish-list. I don’t see them as potential resolutions to make on the new year’s eve. But I might as well note them.

• Go on a “long” tour by car, mainly for sightseeing but also visiting temples as they come by—say to “Somanath/Dwaraka” in Gujarat (a long drive through Saurashtra is what I have somehow wanted to do for quite some time—I honestly don’t know why it caught my fancy, but it’s been almost a decade or so that it has). Or, maybe, go to some places in Rajasthan and MP and all. … The trouble here is, my car has now become old. 15 years old, in fact. (It was 6 years old when I bought it second-hand.) I just got it re-registered. Hmmm… 15 years completed and into the 16th year… Whether you call it “old” or a “teenager”, one thing is for certain: it wouldn’t be reliable for going on such a long journey. And, I don’t have any money anyway. Not even for the petrol, let alone for buying a new car.
• Write an ML program to automatically recognize the “raaga”s of popular Indian songs. This idea has been with me for a long time, decades in fact. The first time I wondered aloud about it was in 1985, when I was teaching in an engineering college in Pune. A COEP guy who had just finished his MTech at IITKGP had joined the college. He was into Indian classical music, I vaguely recall. In any case, this was the idea I had tossed across to him. … The first couple of AI books I bought and read were in the early 1990s; the titles, I think, were something like “Expert System” for medical applications. (I had bought them from the Modern book-stall in the Camp area.)
• Every week, translate at least 1–3 verses from an उपनिषद (“upanishad”) into English (possibly also liberally using Marathi in the process), explaining the roots of the Sanskrit words, their context and sense, and hence the actual meaning of the verse (in literal and more figurative/speculative terms), after filtering out the externally slapped-on mysticism, “interpretations,” etc. … Any actual mysticism already present in the verses will be kept intact. But mysticism bothers me the least. The real issue is this: Whatever I write will be seen only in the context of the scholarly commentaries by others—many of the authors being of unnecessarily high reputations. So, my writings wouldn’t be seen for what they are: an honest kind of an exercise, arising out of a hobby-like interest, purely for personal growth and satisfaction. People tend to think that the उपनिषद (“upanishad”) are only for scholars and the like—not for a personal, enjoyable process of discovery for oneself… Maybe I should try a bit next year, but without making any resolution about it. Or, make a resolution for just a few verses per month, maybe on a separate blog… Something to think about…

So, there.

I will think further, and post my final resolutions on the 31st or so.

A song I like:

(Hindi) “puruvaa suhaani aayee re…”
Singers: Lata Mangeshkar, Mahendra Kapoor, Manhar Udas
Music: Kalyanji–Anandji
Lyrics: Santosh Anand

[When this movie came out, I was in school, maybe in 6th/7th standard or so. (The movie date is 1971, which means I must have been in 5th standard. But back then, we were in Shirpur, and it would take more than a year before any new movies came to Shirpur. People would go to Dhule or Jalgaon, the nearby district places, if they wanted to see the latest movies.) So, when it eventually came to the town, I must have been in 6th or 7th—and my vague sense of memory seems to suggest that it must have been the 7th standard, 1st sem. Anyway, this movie was censored for us by our parents, obviously in reference to the mini-skirt of Saira Banu, I guess. (I vaguely remember that this movie was declared tax-free, but such a bit wouldn’t have any effect on parents.)

But the audio of the song would often get played on the radio or on loud-speakers at public functions. It had a catchy rhythm and energetic singing by all. So, it created a niche somewhere at the back of the mind….

… Sometime this year, when I googled for this song, an HD YouTube video came up as the first link. … Well, Saira’s skirt would be seen as quite a normal dress today in India. No parent would censor the movie, I guess—not in the cities anyway… In any case, what really caught my eye while watching this video the first time wasn’t Saira Banu’s enthusiastic dancing (she seems to be actually enjoying the act), but a small sequence of steps of gliding backwards which they gave to Bharathi (check out at 01:25 here [^]).

I don’t know why, but while watching this song for the first time, this stepping-back sequence came a bit unexpectedly, and maybe that’s why I somehow noticed just how smoothly, subtly, Bharathi performed it. Perfectly in touch with the rhythm, with a perfect smoothness, with just the right kind of light footwork. OK. It may not impress everyone. Yet, somehow, it captured me… I don’t know if your reaction would be the same or not. But, personally, I found this small sequence to be most expertly delivered: it was smooth and delightful. “That’s how dancing should be,” I involuntarily thought—right on the fly when I first watched it.

I didn’t know the actress, so I checked out her name and background. Turns out she’s a Kannada actress. …Well, obviously. It couldn’t have been anyone but some South Indian lady—only they can get that smooth… I know for a fact (from IIT Madras as also later on through many colleagues) that in South India, at least in our times, almost all girls would get taught at least some rudiments of dancing—at home, or at some nearby school, or in a temple, or so. It would be considered an essential part of a girl’s upbringing. … If you practice this skill right from childhood, then being in step with the rhythm comes very naturally to you. You don’t do it “consciously”. The steps come out lightly, and it all looks natural…

… Anyway, we would often hear this song on the radio or loud-speakers, and I have enjoyed its rhythm and texture, the fresh tune, Lata’s alluring opening and also the Western-like “laa laa laa” thrown in, the ingeniously arranged orchestration with traditional Indian instruments, and the very, very apt and grown-in-the-soil (almost “sweet”) Indian words. It’s पुरुवा [^] here—neither पूर्वा nor, obviously, पुरवा [^]. If you want, a fairly good translation is here [^] (though it could be improved a bit—just an addition of a comma here and there, that’s all (don’t disturb the literal-translation aspect of it, which is done very well!)). All in all, the song has an unusual, innovative composition—and an overall very happy sense to it… Hope you like it too. …]