Status update on my trials for the MNIST dataset

This post is going to be brief, relatively speaking.


1. My further trials for the MNIST dataset:

You know by now, from my last post about a month ago [^], that I had achieved a World Rank # 5 on the MNIST dataset (with 99.78 % accuracy), and that too, using a relatively slow machine (single CPU-only laptop).

At that time, as mentioned in that post, I also had some high hopes of bettering the result (with definite pointers as to why the results should get better).

Since then, I’ve conducted a lot of trials. Both the machine and I learnt a lot. However, during this second bout of trials, I came to learn much, much more than the machine did!

But naturally! With a great result already behind me, my focus during the last month shifted to better understanding the why's and the how's of it, rather than to chasing a further improvement in accuracy by hook or by crook.

So, I deliberately reduced my computational budget from 30+ hours per trial to 12 hours at the most. [Note again, my CPU-only hardware runs about 4–8 times slower, perhaps even 10+ times slower, than the GPU-equipped machines.]

Within this reduced computing budget, I pursued a good many different ideas and architectures, with some elements being well known to people already, and some elements being newly invented by me.

The ideas I combined include: batch normalization, learnable pooling layers, models built with the functional API (permitting complex entwinements of data streams, not just sequential stacks), custom-written layers (not many, but I did try a few), a custom-written learning-rate scheduler, custom-written callback functions to monitor progress at batch- and epoch-ends, and a custom-written class for faster loading of augmented data (TF-Keras itself spins out a special thread for this purpose), apart from custom logging, custom-written early stopping, custom-written ensembles, etc. … (If you are a programmer, you would respect me!)
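To make a couple of these items a little more concrete, here is a minimal sketch, not the actual trial code, of a custom learning-rate scheduler and a custom callback that monitors progress at batch- and epoch-ends. It assumes only the standard tf.keras APIs; the names, the schedule, and the logging format are made up purely for illustration.

```python
# A minimal sketch (illustrative only, not the actual trial code) of a custom
# learning-rate scheduler and a custom progress-monitoring callback, assuming
# the standard tf.keras APIs.
import tensorflow as tf

def step_decay(epoch, lr):
    """Hypothetical schedule: halve the learning rate every 10 epochs."""
    return lr * 0.5 if epoch > 0 and epoch % 10 == 0 else lr

class ProgressLogger(tf.keras.callbacks.Callback):
    """Prints a short status line at batch- and epoch-ends."""
    def on_train_batch_end(self, batch, logs=None):
        if logs and batch % 100 == 0:
            print(f"  batch {batch}: loss={logs.get('loss', float('nan')):.4f}")

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        print(f"epoch {epoch}: val_accuracy={logs.get('val_accuracy')}")

# Usage (model, x_train, y_train, x_val, y_val are assumed to exist):
# model.fit(x_train, y_train,
#           validation_data=(x_val, y_val),
#           epochs=50,
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(step_decay),
#                      ProgressLogger()])
```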

But what was going to be the result? Sometimes there was a faint hope for some improvement in accuracy; most times, a relatively low accuracy was anyway expected, and that's what I saw. (No surprises there; the computing budget, to begin with, had to be kept small.)

Sometime during these investigations into the architectures and algorithms, I did initiate a few long trials (in the range of some 30–40 hours of training time). I took only one of these trials to completion. (I interrupted and ended all the other long trials more or less arbitrarily, after running them for maybe 2 to 8 hours. A few of them seemed to be veering towards eventual over-fitting; for the others, I simply lost patience!)

In the one long trial which I did run to completion, I achieved a slight improvement in the accuracy: I went up to 99.79 % accuracy.

However, I cannot be highly confident about the consistency of this result. The algorithms are statistical in nature, and a slight degradation (or even a slight improvement) from 99.79 % is only to be expected across runs.

I do have all the data related to this 99.79 % accuracy result saved with me. (I have saved not just the models, the code, and the outputs, but also all the intermediate data, including the output produced on the terminal by the TensorFlow-Keras library during the training phase. The outputs of the custom-written log also have been saved diligently. And of course, I was careful enough to seed at least the three most important random generators: one each in TF, NumPy, and plain Python.)
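For what it is worth, the seeding part amounts to something like the following minimal sketch; the seed value shown here is arbitrary, not the one actually used in the trials.

```python
# Minimal sketch of seeding the three random generators mentioned above:
# plain Python, NumPy, and TensorFlow. The seed value is arbitrary.
import random

import numpy as np
import tensorflow as tf

SEED = 42                  # illustrative value only
random.seed(SEED)          # plain Python's generator
np.random.seed(SEED)       # NumPy's global generator
tf.random.set_seed(SEED)   # TensorFlow's global seed (TF 2.x)
```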

Coming back to the statistical nature of training, please note that my new approach does tend to yield statistically much more robust results (with much smaller statistical fluctuations, as is evident from the logs, and as is also only to be expected from the "theory" behind my new approach(es)).

But still, since I was able to conduct only one full trial with this highest-accuracy architecture, I hesitate to make any statement that might be misinterpreted. That's why I have decided not to claim the 99.79 % result. I will mention the achievement in informal communications, even in blog posts (the way I am doing right now). But I am not going to put it on my resume. (One should promise less and then deliver more. That's what I believe in.)

If I were to claim this result, it would also improve my World Rank by one. Thus, I would then get to World Rank # 4.

Still, I am leaving the actual making of this claim for some other day. Checking the repeatability would take too much time with my too-slow-for-the-purpose machine, and I need to focus on other areas of data science too; I find that I have begun falling behind on them. (With a powerful GPU-based machine, both would have been possible: MNIST and the rest of data science. But for now, I have to prioritize.)

Summary:

Since my last post, I learnt a lot about image recognition, classification, deep learning, and all. I also coded some of the most advanced ideas in deep learning for image processing that can be implemented with today's best technology, and then a few more of my own. Informally, I can now say that I am at World Rank # 4. However, for the reasons given above, I am not going to make the claim for the improvement as of today. So, my official rank on the MNIST dataset remains at # 5.

 


2. Miscellaneous:

I have closed this entire enterprise of the MNIST trials for now. With my machine, I am happy to settle at World Rank # 5 (as claimed; # 4 informally and actually).

I might now explore deep learning for radiology (e.g. detection of abnormalities in chest X-rays or cancers), for just a bit.

However, it seems that I have been stuck in this local minimum of image recognition for too long. (In the absence of gainful employment in this area, despite my world-class result, it still is just a local minimum.) So, to correct my overall tilt in the pursuit of topics, I am, for the time being, going to keep image processing relatively on the back-burner.

I have already started exploring time-series analysis for stock markets. I would also be looking into deep learning from text data, esp. NLP. I have not thought a lot about these areas so far, and now I need to effect the correction.

… Should be back after a few weeks.

In the meanwhile, if you are willing to pay for my stock-market tips, I would surely hasten the designing and perfecting of my algorithms for stock-market "prediction"s. … It just so happens that I had predicted yesterday (Sunday, 10th May) that the Bombay Stock Exchange's Sensex indicator would definitely not rise today (on Monday), and that while the market should trade in roughly the same range as it did on Friday, the Sensex was likely to close a bit lower. This has come true. (It in fact closed just a bit higher than what I had predicted.)

… Of course, "one swallow does not a summer make."… [Just checked it. Turns out that this one comes from Aristotle [^]. I don't know why, but I had always carried the impression that the source was Shakespeare, or maybe some Renaissance English author. Apparently not so.]

Still, don’t forget: If you have the money, I do have the inclination. And, the time. And, my data science skills. And, my algorithms. And…

…Anyway, take care and bye for now.

 


A song I like:

(Hindi) तुम से ओ हसीना कभी मुहब्बत मुझे ना करनी थी… (“tum se o haseenaa kabhee muhabbat mujhe naa karnee thee”)
Music: Laxmikant Pyarelal
Singers: Mohammad Rafi, Suman Kalyanpur
Lyrics: Anand Bakshi

[I know, I know… This one almost never makes it to anyone’s lists. Had I not heard it (and also loved it) in my childhood, it wouldn’t make it to my lists either. But given this chronological prior, the logical prior too has changed forever for me. …

This song is big on rhythm (though they overdo it a bit), and all kids always like songs that emphasize rhythm. … I have seen third-class songs from aspiring/actual pot-boilers, songs like तु चीझ बडी है मस्त मस्त (“too cheez baDi hai mast, mast”), being huge hits with kids. Not just big hits but huge ones. And that third-rate song was a huge hit even with one of my own nephews, when he was 3–4 years old. … Yes, eventually, he did grow up…

But then, there are some songs that you somehow never grow out of. For me, this song is one of them. (It might be a good idea to start running “second-class,” why, even “third-class” songs about which I am a bit nostalgic. I listened to a lot of them during this boring lock-down, and even more so during those still more boring, long-running trials for the MNIST dataset. The boredom, especially on the second count, had to be killed. I did kill it. … So, all in all, from my side, I am ready!)

Anyway, once I grew up, there were a couple of surprises regarding the credits of this song. I used to think, by default as it were, that it was Lata. No, it turned out to be Suman Kalyanpur. (Another thing: that there was no mandatory “kar” after the “pur” also came as a surprise to me, but then, I digress here.)

Also, for no particular reason, I didn’t know who the music director was. Listening to it now, after quite a while, I tried to take a guess. After weighing between Shankar-Jaikishan and R.D. Burman, also with a faint consideration of Kalyanji-Anandji, I started suspecting RD. … Think about it. This song could easily go well with those from the likes of तीसरी मंझील (“Teesri Manzil”) or काँरवा (“Caravan”), right?

But as a self-declared RD expert, I also couldn’t search my memory and recall some incident in which I could be found boasting to someone in the COEP/IITM hostels that this one indeed was an RD song. … So, it shouldn’t be RD either. … Could it be Shankar-Jaikishan then? The song seemed to fit in with the late-60s SJ mould too. (Recall Shammi.) … Finally, I gave up, and checked out the music director.

Well, Laxmikant Pyarelal sure was a surprise to me! And it should be, to anyone. Remember, this song is from 1967. This was the time when LP were coming out with songs like those in दोस्ती (“Dosti”), मिलन (“Milan”), उपकार (“Upakar”), and similar. Compositions grounded in the Indian musical sense, through and through.

Well yes, LP have given a lot of songs/tunes that fall more in the Western-like fold. (Recall रोज शाम आती थी (“roz shaam aatee thee”), for instance.) Still, this song is a bit too “out of the box” for LP when you consider their typical “box.” The orchestration, in particular, at times feels as if the SD-RD “gang” of Manohari Singh, Sapan Chakravorty, et al. wasn’t behind it, and so does the rendering by Rafi (especially with that sharp हा! (“hah!”) coming at the end of the refrain)… Oh well.

Anyway, give it a listen and see how you find it.]