A question regarding the coronavirus

Update: Do read the big “Caution” section too!

Regarding the coronavirus:

It’s well known by now that soap kills the coronavirus. The science of it has been explained in many places, e.g., in a tweet here [^], and in a news article here [^].

The operative words here are: the lipid bi-layer, and the absence of covalent bonds.

If I get the mechanism right (and please do correct me if I am going wrong in any detail), the virus carries a protective membrane of a lipid bi-layer on the outside. This membrane is not monolithic; it consists of many smaller fragments loosely held together (maybe by something like van der Waals forces). When a coronavirus is surrounded by soapy water, the lipid particles from the soap compete with the lipid fragments of the virus’ membrane. So, essentially, the loosely held pieces of the virus’ membrane preferentially stick to the lipid particles of the soap, and they remain more strongly stuck to the soap particles. Therefore, when the soap’s lipid particles move away (at the nano-level, maybe as in Brownian motion), the virus’ membrane falls apart. In short, the lipid particles in the soap literally tear apart the virus’ outer cover.

Now, lipids are basically fats. There are fats in many substances other than soap, e.g., in vegetable oils like coconut oil.

There seem to be a lot of naive articles on the Internet about the health benefits of coconut oil (“alternative medicine” etc.), but no hard or well-established facts. The best I could get to was this news article: [^].

No, I am certainly not jumping to any conclusions. More importantly, I am not recommending anything to anyone. Not right away, anyway.

All I am doing is asking this specific question: Are there any peer-reviewed and replicated results regarding the destruction of viruses belonging to the coronavirus family by coconut oil?

Realize, I am not asking for reports on the health benefits/hazards of eating coconut oil.

What I am asking for are some in-vitro trials in which a certain viral load is mixed with a small quantity of coconut oil, and the growth (or otherwise) of the virus is observed over a period of time.

If such trials are encouraging enough, then the implication is obvious: after washing your hands periodically, apply coconut oil. (This way, the skin wouldn’t turn too dry either.) Other possibilities exist, e.g., applying a mild (non-irritating) oil, like somewhat diluted eucalyptus oil, to the nose. Even a mild but well-established improvement would be most welcome, given the potential magnitude of this global pandemic.

But, of course, first there have to be well-done trials following some strict protocol. I wanted to know if there have been any such trials in the past, for the other coronaviruses, and if some are being planned for the current one (COVID-19).

I wrote this post just for this one purpose.

Addendum after publication:

I had noticed just one link to a Filipino study in my initial Google search (before writing this post). However, I didn’t put the link here when I wrote the post. The import of that particular link was not very clear to me, and frankly, in pursuing many different links, I came to gloss over this study. I just kept the general sense that coconut oil could be a fit.

Anyway, it looks like they are seriously considering such trials in the Philippines. See a university research announcement here [^], and a news article about it here [^].

…Initially, when I read Ash Joglekar’s tweet [^] some 8–10 days ago, I had begun wondering about the lipids in oils and spices. But I thought that people would surely be pursuing such ideas anyway, and so didn’t write anything about it. I was also busy with my data science trials, and so was mostly away from the ‘net (see the section below).

After waiting for a while, when I didn’t notice anyone talking about oils (like eucalyptus oil), I decided to do a search on this topic today. There were quite a few blog posts and write-ups about using essential oils and the like; various Google searches throw up links to them. But I think that those ideas are not on target (and may in fact introduce skin irritation!).

But the Filipino study seems to be different. I would like to see how their trials progress. The benefits of ingesting (eating) coconut oil were also a new thing to me.

I also think that others (say Indian scientists) could easily take up the necessary in-vitro observations, regardless of how the clinical trials in The Philippines progress. In fact, they should. The study won’t be very expensive, I think.

Further addition:

Come to think of it, my suggestion might look ludicrous. “Coconut oil? Stopping viruses? Hah! Coconut oil has been in use for thousands of years. Why didn’t it help in all those plagues of the past?”

Ummm… My point is simple. Today, we understand the roles that different measures play in public health systems. We do advise washing hands frequently with soap and water. We do acknowledge that lowly soap and water works better than the much more costly hand sanitizers. The mechanism for the effectiveness of soap is known. This knowledge naturally leads to searching for other materials that might offer a similar mechanism, and are otherwise safe and suitable for adoption.

Yes, using coconut oil does look like a nutty idea on the face of it. But then, come to think of it, soap was available in 1918 too, when the Spanish flu struck. Should we therefore have refused to test soap and water against the coronavirus? …

… All I am saying is that what holds for simple soap can perhaps also hold for other suitable materials which are otherwise quite safe for us. Coconut is known to be one of the safest “materials” there is; so are other fatty substances.

Here, Indians might perhaps think of घी, i.e., “ghee”, the clarified butter. Clarified butter obtained from cow’s milk, when seasoned for several years (simply by letting it sit on the shelf), is known to be an effective remedy. It can be applied directly even to open wounds, and it does help wounds, scars, scabs, etc. heal. When I was young, people in the villages would use it. I was told by the elders that in the olden days, soldiers relied on precisely such things. And, even after seasoning, it still remains safe enough to be ingested too.

If such materials can help, not as anti-viral agents on par with, say, Remdesivir, but at least as mild but provably effective preventive measures, well and good. If not, keep them aside. (I have no hidden agenda whatsoever about any of it, one way or the other.)

But we should at least be willing to test their efficacy, if any, at least in in-vitro trials. I am not even suggesting in-vivo trials; the medical community will have to take that call, provided the in-vitro trials do show something promising. But I do suggest the in-vitro trials. These could be done very fast, wouldn’t cost a lot, and might perhaps throw up something really valuable. That’s my point.

So, there.



Caution:

This post is not at all meant as advice to apply coconut oil! Understand this part well!

Oils are sticky. This fact can make for a huge risk if coconut oil or similar substances do not show any anti-viral activity. In fact, in such a case, being sticky, they may perhaps make a bad situation worse by accumulating bacteria/viruses!

So, don’t make an already bad situation worse. Don’t interpret this post as advising any practice. This post is meant only to suggest undertaking the kind of research that has been outlined. Only if (and when) the suggested research shows any promising results will further steps, such as developing the right kind of protocol for usage, have to be taken. That’s the job of the medical research community.

Research is not the same as a well-established and safe technique.

So, for now, in practice, stick with only the tried and true: wash your hands with soap frequently, for at least 20 seconds at a time, as has been advised. Do not start applying coconut oil etc. on your hands.

An aside:

These days, I am busy conducting trials for testing some of my data science-related ideas.

I have only one machine, and it’s effectively CPU-only. (There is a GPU in it, but it’s a Radeon; I couldn’t get TF to run on top of it.) So, even for relatively smaller tasks, the trials go on for very long times (2–5 hours is common, sometimes even more).

When a trial is in progress, all 8 virtual cores are pushed to 100% load. So, it’s not possible to do anything else, such as blogging or checking emails. (Firefox in fact sometimes crashes the machine, and I don’t want to lose data or compromise the integrity of the trials.) That’s why, these days, I am mostly not available on the Internet.

These trials should go on for quite some time (at least a couple of weeks, if not more). Therefore, my blogging and Internet presence are going to be minimal. Just to let you know.

I deliberately took a break from them today, just to write this post, because of its obvious topical importance.

A song I like:

(Western, pop) “Looking alive”
Singer: Madison Cunningham

[I happened to listen to the “Live from here” version first [^]. I like it better than the “Warehouse” version [^].

Anyway, see if you enjoy the song, take care, and bye for now… maybe for days, or a couple of weeks or so; I have my data science trials to run.]

— First published: 2020.03.15 16:36 IST
— Added the “addendum” section: 2020.03.15 18:41 IST
— Added the songs section: 2020.03.16 14:17 IST
— Added the “Further addition” section: 2020.03.16 19:28 IST
— Added the “Caution” section: 2020.03.16 21:31 IST




Yeah! Just that!


Update on 2020.02.17 16:02 IST:

The above is a snap I took yesterday at the Bhau Institute [^]’s event: “Pune Startup Fest” [^].

The reason I found myself laughing out loud was this: Yesterday, some of the distinguished panelists made one thing very clear: The valuation for the same product is greater in the S.F. Bay Area than in Pune, because the eco-system there is much more mature, with the investors there having seen many more exits—whether successful or otherwise.


When I was in the USA (which was in the 1990s), they would always say that not everyone has to rush to the USA, especially to the S.F. Bay Area, because technology works the same way everywhere, and hence, people should rather be going back to India. The “they” of course included the Indians already established there.

In short, their never-stated argument was this much: you can make as much money working from India as from the SF Bay Area. (Examples of the “big three” of the Indian IT industry would often be cited, esp. Narayana Moorthy’s.) So, “why flock in here?”

Looks like, even if it took them some 2–3 decades, finally, something better seems to have dawned on them. They seem to have gotten to the truth, which is: market valuations for the same product are much greater in the SF Bay Area than elsewhere!

So, this all was in the background, in the context.

Then, I was musing about their rate of learning last night, and that’s when I wrote this post! Hence the title.

But of course, not everything was laughable about, or in, the event.

I particularly liked Vatsal Kanakiya’s enthusiasm (he is the second guy from the right in the above photo; his LinkedIn profile is here [^]). I appreciated his ability to keep on highlighting what they (their firm) are doing, despite the somewhat cocky (if not outright dismissive) way in which his points were initially received. Students attending the event might have found his enthusiasm more in line with theirs, especially after he not only mentioned Guy Kawasaki’s 10-20-30 rule [^], but also cited a statistic from their own office to support it: 1892 proposals last month (if I got that figure right). … Even though he was very young, it was this point which finally made it impossible, for many in that hall, to be too dismissive of him. (BTW, he is from Mumbai, not Pune. (Yes, COEP is in Pune.))


A song I like:

(Hindi) ये मेरे अंधेरे उजाले ना होते (“ye mere andhere ujaale naa hote”)
Music: Salil Chowdhury
Singers: Talat Mahmood, Lata Mangeshkar
Lyrics: Rajinder Kishen

[Buildings made from the granite stone [I studied geology in my SE i.e. second year of engineering] have a way of reminding you of a few songs. Drama! Contrast!! Life!!! Money!!!! Success!!!!! Competition Success Review!!!!!!  Governments!!!!!!! *Business*men!!!!!!!!]



Equations in the matrix form for implementing simple artificial neural networks

(Marathi) हुश्श्… [translit.: “hushsh…”, equivalent word prevalent among the English-speaking peoples: “phewww…”]

I’ve completed the first cut of a document with the same title as this post. I wrote it in LaTeX. (Too many equations!)

I’ve just uploaded the PDF file at my GitHub account, here [^]. Remember, it’s still only in the alpha stage. (A beta release will follow after a few days. The final release may take place after a couple of weeks or so.)

Below the fold, I copy-paste the abstract and the preface of this document.

“Equations in the matrix form for implementing simple artificial neural networks”


This document presents the basic equations in reference to which artificial neural networks are designed and implemented. The scope is restricted to
the simpler feed-forward networks, including those having hidden layers. Convolutional and recurrent networks are out of the scope.

Equations are often initially noted using an index-based notation for the typical element. However, all the equations are eventually cast in the direct matrix form, using a consistent set of notation. Some minor aspects of the notation were invented to make the presentation as simple and direct as possible.
The presentation here regards a layer as the basic unit. The term “layer” is understood in the same sense in which APIs of modern libraries like TensorFlow-Keras 2.x take it. The presentation here is detailed enough that neural networks with hidden layers could be implemented from scratch.
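To make the layer-as-basic-unit idea concrete, here is a minimal numpy sketch of my own (not taken from the document): a feed-forward network with one hidden layer, with the fully connected step and the activation treated as separate units, roughly in the spirit of the Keras-style APIs mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(X, W, b):
    # Fully connected layer: X has shape (batch, n_in), W has (n_in, n_out).
    return X @ W + b

def relu(Z):
    # Activation kept as its own "layer", as TF-Keras 2.x allows.
    return np.maximum(Z, 0.0)

# A toy network: 4 inputs -> 3 hidden units -> 2 outputs.
X = rng.standard_normal((5, 4))            # batch of 5 samples
W1, b1 = rng.standard_normal((4, 3)), np.zeros(3)
W2, b2 = rng.standard_normal((3, 2)), np.zeros(2)

A1 = relu(dense(X, W1, b1))                # hidden-layer output
Y = dense(A1, W2, b2)                      # final (linear) output
print(Y.shape)                             # (5, 2)
```

The shapes follow the “from–to” convention, i.e., each weights matrix is (n_in, n_out), so the whole forward pass is just a chain of matrix products.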


Raison d’être:

I wrote this document mainly for myself, to straighten out the different notations and formulae used in different sources and contexts.

In particular, I wanted to have a document that better matches the design themes used in today’s libraries (like TensorFlow-Keras 2.x) than the description in the text-books.

For instance, in many sources, the input layer is presented as consisting of both a fully connected layer and its corresponding activation layer. However, for flexibility, libraries like TF-Keras 2.x treat them as separate layers.

Also, some sources uniformly treat the input of any layer as \vec{X} and the output of any layer as the activation \vec{a}, but such usage overloads the term “activation”. Confusions also creep in because different conventions exist: treating the bias by expanding the input vector with 1 and the weights matrix with w_0; the “to–from” vs. “from–to” convention for the weights matrix; etc.
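These conventions are all numerically equivalent, which a small numpy check (my own illustration, not from the document) makes plain: folding the bias into an augmented weights matrix, or transposing to the “to–from” layout, gives the same pre-activation vector.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(4)            # input vector, n_in = 4
W = rng.standard_normal((4, 3))       # "from-to" convention: shape (n_in, n_out)
b = rng.standard_normal(3)            # bias, one per output unit

# Convention 1: separate bias vector.
z1 = x @ W + b

# Convention 2: augment the input with 1 and the weights with a w_0 row.
x_aug = np.concatenate(([1.0], x))    # shape (5,)
W_aug = np.vstack([b, W])             # shape (5, 3); row 0 plays the w_0 role
z2 = x_aug @ W_aug

# Convention 3: "to-from" layout stores the transpose, shape (n_out, n_in).
z3 = W.T @ x + b

assert np.allclose(z1, z2) and np.allclose(z1, z3)
```

The point is that the choice is purely notational; the document just has to pick one and stick to it.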

I wanted a consistent notation that dealt with all such issues uniformly, in a matrix-based form that came as close to the numpy ndarray interface as possible.

Level of coverage:

The scope here is restricted to the simplest ANNs, including the simplest DL networks. Convolutional neural networks and recurrent neural networks are out of the scope.

Yet, this document wouldn’t make for a good tutorial for a complete beginner; it is likely to confuse him more than explain anything to him. So, if you are completely new to ANNs, it is advisable to first go through sources like Nielsen’s online book [^] to learn the theory of ANNs. Mazur’s fully worked-out example of the back-propagation algorithm [^] should also prove very helpful, before returning to this document.

If you already know ANNs, and don’t want to see equations in the fully expanded forms—or, plain dislike the notation used here—then a good reference, roughly at the same level as this document, is the set of write-ups/notes by Mallya [^].


Any feedback, especially regarding errors, typos, inconsistencies in notation, suggestions for improvements, etc., will be gratefully received.

How to cite this document:

TBD at the time of the final release version.

Further personal notings:

I began writing this document on 24 January 2020. By 30 January 2020, I had some 11 pages done up, which I released via the last post.

Unfortunately, it was too tentative, with a lot of errors and misleading or inconsistent notation. So, I deleted it within a day. No point in having premature documents floating around in cyberspace.

I had mentioned, right in the last post here on this blog (on 30 January 2020), that the post itself would also be gone. I will keep it for a while, and then, maybe after a week or two, delete it.

Anyway, by the time I finished the alpha version today, the document had grown from the initial 11 pages to some 38 pages!

Typing out all the braces, square brackets, parentheses, subscripts for indices, subscripts for sizes of vectors and matrices… it all was tedious. … Somehow, I managed to finish it. (I will think twice before undertaking a similar project, but am already tempted to write a document each on CNNs and RNNs, too!)

Anyway, let me take a break for a while.

If interested in ANNs, please go through the document and let me have your feedback. Thanks in advance, take care, and bye for now.

A song I like:

[Just listen to Lata here! … Not that others don’t get up to the best possible levels, but still, Lata here is, to put it simply, heavenly! [BTW, the song is from 1953.]]

(Hindi) जाने न नजर पहचाने जिगर (“jaane naa najar pahechane jigar”)
Singers: Lata and Mukesh
Music: Shankar-Jaikishen
Lyrics: Hasrat Jaipuri