# My new approach to QM—an update, and a request (May 2019)

This post refers back to my earlier post of 30th March 2019, here [^]. Being busy mainly with learning Data Science, I subsequently didn't find the time to systematically study the papers and the book suggested by the IIT Bombay professors at the end of March.

However, in the meanwhile, thinking about the whole issue independently (and on a part-time basis), I have worked out a detailed scheme for calculating the wavefunctions for the case of a 1D helium atom.

In particular, the abstract case I have worked through is the following:

A single helium atom is placed in a 1D domain of finite length, with either reflecting boundary conditions (i.e., infinite potential walls) at the two ends (i.e., a 1D box), or possibly with periodic boundary conditions imposed at the two ends (i.e., an infinite 1D lattice of 1D helium atoms). The problem is to find the energy eigenstates of the system wavefunction, assuming that the electrons do interact with each other.

The electrons are spinless. Note, however, that I have now addressed the case of the interacting electrons too.

I have not performed the actual simulations, though they can be done “any time.”
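Since the scheme itself is not reproduced in this post, here is only a minimal sketch of the setup being described: the potential-energy landscape for two spinless, interacting electrons of a 1D helium atom in a finite box. The softened Coulomb form $1/\sqrt{x^2 + a^2}$ (a common 1D stand-in for $1/r$) and all the numerical values are my own assumptions for illustration, not the scheme's particulars.

```python
import numpy as np

# Sketch of the 1D helium setup: two spinless electrons at x1, x2 in a
# finite box, with a nucleus of charge Z = 2 fixed at the origin.
# The softening parameter `a` below is an assumption, not part of the
# original scheme; it merely regularizes the 1D Coulomb singularity.

a = 1.0          # softening parameter (assumed)
Z = 2.0          # nuclear charge of helium

def v_nucleus(x):
    """Attraction of one electron (at x) to the nucleus fixed at 0."""
    return -Z / np.sqrt(x**2 + a**2)

def v_ee(x1, x2):
    """Repulsion between the two (spinless) electrons."""
    return 1.0 / np.sqrt((x1 - x2)**2 + a**2)

# Total potential on a 2D grid of configurations (x1, x2) over the box:
L, N = 10.0, 64
x = np.linspace(-L/2, L/2, N)
X1, X2 = np.meshgrid(x, x, indexing="ij")
V = v_nucleus(X1) + v_nucleus(X2) + v_ee(X1, X2)
```

With either boundary condition, the interacting-electron eigenproblem then amounts to diagonalizing a Hamiltonian built over this $(x_1, x_2)$ grid.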

Yet, before proceeding to write the code, I would like to show the scheme itself to some computational quantum chemist/physicist, and have a bit of a back-and-forth regarding how they usually handle it in mainstream QM/QChem, and about the commonalities and differences (even the very basic reasonableness, or otherwise) of my proposed scheme.

I can even go further and say that I am now stuck at this point.

I will also continue to remain stuck at this same point unless one of the following two things happens: (i) a quantum chemist with a good knowledge of computer simulation methods volunteers to review my scheme and offer suggestions, or (ii) I myself study and digest a couple of textbooks (of 500+ pages) and a few relevant papers (including those suggested by the IIT Bombay professors).

The second alternative is not feasible right now, simply because I don't have enough time at hand. I am now busy with learning data science, and must continue to do so, so that I can land a job ASAP. (I have been out of a job for more than a year now.)

So, if you are knowledgeable about this topic (the abstract case I am dealing with above, viz., that of 1D helium atom with spinless but interacting electrons), and also want to help me, then I request you to please see if you can volunteer just a bit of your time.

If no one comes forward to help, it could take me a much longer period of time to work through it all purely on my own: anywhere from 6–8 months to a year, or, as is easily possible, even more, maybe a couple of years. … Remember, during all this time I will also be working in the very highly competitive area of data science.

On the other hand, for someone who has enough knowledge of this matter, it wouldn't be very strenuous at all. He only has to review the scheme, offer comments, and generally remain available for help, that's all.

(It would be quite like someone approaching me for some informal guidance on FEM simulation of some engineering case. Even if I might not have modeled some particular case myself in the past, say a case of some fluid-structure interaction, I still know that I could always act as a sounding board and offer some general help to such a person. I also know that doing so isn't going to be very taxing on me, that it's not going to take too much of my own time. The situation here is quite similar. The quantum chemist/physicist doesn't have to exert himself too much. I am confident of this part.)

So, there. See if you can help me out yourself, or suggest someone suitable to me. Thanks in advance.

A song I like:
(Marathi) “vaaTa sampataa sampenaa…”
Lyrics: Devakinandan Saaraswat
Music: Dattaa Daavajekar
Singer: Jayawant Kulkarni

/

# Further on QM, and on changing tracks over to Data Science

OK. As decided, I took a short trip to IIT Bombay and saw a couple of physics professors for very brief face-to-face interactions on the evening of the 28th.

No chalk-work at the blackboard had to be done, because both of them were very busy—but also quick, really very quick, in getting to the meat of the matter.

As to the first professor I saw, I knew beforehand that he wouldn’t be very enthusiastic with any alternatives to anything in the mainstream QM.

He was already engrossed in a discussion with someone (who looked like a PhD student) when I knocked at the door of his cabin. The prof immediately mentioned that he had to finish (what looked like a few tons of) pending work items before going away on a month-long trip just a couple of days later! But, hey, as I said (in my last post), directly barging into a professor's cabin has always done wonders for me! So, despite his heavy, heavy schedule, he still motioned me to sit down for a quick and short interaction.

The three of us (the prof, his student, and me) then immediately had a very highly compressed discussion for some 15-odd minutes. As expected, the discussion turned out to be not only very rapid but also quite uneven, because there were so many abrupt changes to the sub-topics and sub-issues as they were brought up and dispatched in quick succession. …

It was not an ideal time to introduce my new approach, and so, I didn't. I did mention, however, that I was trying to develop something of the sort. The professor was of the opinion that if you come up with a way to do faster simulations, it would always be welcome, but if you are going to argue against the well-established laws, then… [he just shook his head].

I told him that I was clear, very clear, on one point. Suppose, I said, that I have a complex-valued field that is defined only over the physical 3D space, and suppose further that my new approach (which involves such a 3D field) does work out. Then, suppose further that I get essentially the same results as the mainstream QM does.

In such a case, I said, I am going to say that here is a possibility of looking at it as a real physical mechanism underlying the QM theory.

And if people even then say that because it is in some way different from the established laws, therefore it is not to be taken seriously, then I am very clear that I am going to say: “You go your way and I will go mine.”

Of course, I further added, I still don't know how the calculations are done in the mainstream QM for the interacting electrons, that is, without invoking simplifying approximations (such as a fixed nucleus). I wanted to see how these calculations are done using the computational modeling approach (not perturbation theory).

It was at this point that the professor really got the sense of what I was trying to get at. He then remarked that variational formulations are capable enough, and proceeded to outline some of their features. To my query as to what kind of an ansatz they use, and what kind of parameters are involved in inducing the variations, he mentioned Chebyshev polynomials and a few other things. The student mentioned the Slater determinants. Then the professor remarked that the particulars of the ansatz and the particulars of the variational techniques were not so crucial, because all these techniques ultimately boil down to just diagonalizing a matrix. Somehow, I instinctively got the idea that he hadn't been very much into numerical simulations himself, which turned out to be the case. In fact he immediately said so himself: “I don't do wavefunctions. [Someone else from the same department] does it.” I decided to see this other professor the next day, because it was already evening (approaching 6 PM or so).

A few wonderful clarifications later, it was time for me to leave, and so I thanked the professor profusely for accommodating me. The poor fellow didn’t even have the time to notice my gratitude; he had already switched back to his interrupted discussion with the student.

But yes, the meeting was fruitful to me because the prof did get the “nerve” of the issue right, and in fact also gave me two very helpful papers to study, both of them being review articles. After coming home, I have now realized that while one of them is quite relevant to me, the other one is absolutely god-damn relevant!

Anyway, after coming out of the department that evening, I was thinking of calling my friend to let him know that the purpose of the visit to the campus was over, and that I was thus totally free. While thinking about calling him and walking through the parking lot, I abruptly noticed a face that flashed something recognizable to me. It was the same second professor, the one who “does wavefunctions”!

I had planned on seeing him the next day, but here he was, right in front of me, walking towards his car in a leisurely mood. Translated, it meant: he was very much free of all his students, and so was available for a chat with me! Right now!! Of course, I had never made any acquaintance with him in the past. I had only browsed through his home page once in recent times, and so could immediately make out the face, that's all. He was just about to open the door of his car when I approached him and introduced myself. There followed another intense bout of discussions, for another 10-odd minutes.

This second prof has done numerical simulations himself, and so, he was even faster in getting a sense of what kind of ideas I was toying with. Once again, I told him that I was trying for some new ideas but didn’t get any deeper into my approach, because I myself still don’t know whether my approach will produce the same results as the mainstream QM does or not. In any case, knowing the mainstream method of handling these things was crucial, I said.

I told him how, despite my extensive Internet searches, I had not found suitable material for doing the calculations. He then said that he would give me the details of a book. I should study this book first, and if there were still some difficulties or some discussions to be had, then he would be available, but the discussion would then have to proceed in reference to what is already given in that book. A neat idea, this one was; perfect by me. And it turns out that the book he suggested was indeed neat: absolutely, perfectly relevant to my needs, background, as well as preparation.

And with that ends this small story of this short visit to IIT Bombay. I went there with a purpose, and returned with one 50-page-long and very tightly written review paper, a second paper of some 20+ tightly written pages, and a reference to an entire PG-level book (about 500 pages). All of this material was absolutely unknown to me despite my searches, and, as it seems as of today, all of it is of utmost relevance to me and my new ideas.

But I have to get into Data Science first. Else I cannot survive. (I have been borrowing money to fend off the credit card minimum due amounts every month.)

So, I have decided to take a rest for today, and from tomorrow onwards, or maybe a day later, i.e., starting from the “shubh muhurat” (auspicious time) of the April Fool's day, I will begin my full-time pursuit of Data Science, with all that new material on QM to be studied only on a part-time basis. For today, however, I am just going to be doing a bit of a time-pass here and there. That's how this post got written.

Take care, and I wish you the same kind of luck as I had in spotting that second prof just like that in the parking lot. … If my approach works, then I know whom to contact first with my results, for informal comments on them.

Work hard, and bye for now.

A song I like:
(Marathi) “dhunda_ madhumati raat re, naath re…”
Music: Master Krishnarao
Singer: Lata Mangeshkar

[A Marathi classic. Credits are listed in a purely random order. A version that seems official (released by Rajshri Marathi) is here: [^]. However, somehow, the first stanza is not complete in it.

As to the sets shown in this (and all such) movies, right up to, say, the movie “Bajirao-Mastani,” I have, and always have had, an issue. The wide open spaces of the palaces shown in the movies are completely unrealistic, given the technology of those days (and the actual remains of the palaces, easily recalled by anyone). The ancients (whether here in India or at any other place) simply didn't have the kind of technology needed to build such hugely wide internal (covered) spaces. Neither the so-called “Roman arch” (invented millennia earlier in India, I gather), nor the use of monolithic stones for girders, could possibly be enough to generate such huge spans. Idiots. If they can't get even simple calculations right, that's only to be expected, from them. But if they can't even recall the visual details of the spans actually seen in the old palaces, that is simply inexcusable. Absolutely thorough morons, these movie-makers must be.]

/

# The self-field, and the objectivity of the classical electrostatic potentials: my analysis

This blog post continues from my last post, and has become overdue by now. I had promised to give my answers to the questions raised last time. Without attempting to explain too much, let me jot down the answers.

1. The rule of omitting the self-field:

This rule arises in electrostatic interactions basically because the Coulombic field has a spherical symmetry. The same rule would also work out in any field that has a spherical symmetry—not just the inverse-separation fields, and not necessarily only the singular potentials, though Coulombic potentials do show both these latter properties too.

It is helpful here to think in terms of not potentials but of forces.

Draw any arbitrary curve giving a potential as a function of the radial distance. Then, hold one end of the curve fixed at the origin, and sweep the curve through all possible angles around it, to get a 3D field. This 3D field has a spherical symmetry too. Hence, the gradients at the same radial distance on opposite sides of the origin are always equal and opposite.

Now, you know that the negative gradient of a potential gives you a force. Since for any spherically symmetric potential the gradients at opposite points are equal and opposite, they cancel out. So, the forces cancel out too.

Realize here that in calculating the force exerted by a potential field on a point-particle (say an electron), the force cannot be calculated in reference to just one point. The very definition of the gradient refers to two different points in space, even if they be only infinitesimally separated. So, the proper procedure is to start with a small sphere centered on the given electron, calculate the gradients of the potential field at all points on the surface of this sphere, sum the forces these gradients exert on the domain enclosed by the spherical surface, and then take the sphere to the limit of vanishing size. The sum of the forces thus exerted is the net force acting on that point-particle.

In the case of the Coulombic potentials, the forces thus calculated on the surface of any sphere (centered on that particle) turn out to sum to zero. This fact holds true for spheres of all radii. It is true that the gradients (and forces) progressively increase as the size of the sphere decreases—in fact they increase without bound for singular potentials. However, the aforementioned cancellation holds true at every stage in the limiting process. Hence, it holds true for the entirety of the self-field.
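The cancellation argument above can be checked numerically. The sketch below evaluates the force $-\nabla\phi$ at antipodal pairs of points on a sphere around the origin and confirms that the vector sum is zero. The choice $\phi(r) = 1/r$ is just the Coulombic example; any function of $r$ alone would do, and the sampling scheme here is my own illustration.

```python
import numpy as np

# Check: for a spherically symmetric potential phi(r), the forces
# -grad(phi) at antipodal points on a sphere are equal and opposite,
# so their vector sum over the whole sphere vanishes.

rng = np.random.default_rng(0)

def grad_phi(p, eps=1e-6):
    """Central-difference gradient of phi(r) = 1/|p| at point p."""
    phi = lambda q: 1.0 / np.linalg.norm(q)
    g = np.zeros(3)
    for k in range(3):
        dp = np.zeros(3); dp[k] = eps
        g[k] = (phi(p + dp) - phi(p - dp)) / (2 * eps)
    return g

# Sample the unit sphere in antipodal pairs, so the symmetry is exact.
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # project to radius 1
pts = np.vstack([pts, -pts])                        # add the antipodes

net_force = -sum(grad_phi(p) for p in pts)
print(np.linalg.norm(net_force))   # ~ 0, up to round-off
```

The same cancellation persists for spheres of any radius, which is the numerical face of the limiting argument in the text.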

In calculating motions of a given electron, what matters is not whether its self-field exists or not, but whether it exerts a net force on the same electron or not. The self-field does exist (at least in the sense explained later below) and in that sense, yes, it does keep exerting forces at all times, also on the same electron. However, due to the spherical symmetry, the net force that the field exerts on the same electron turns out to be zero.

In short:

Even if you were to include the self-field in the calculations, if the field is spherically symmetric, then the final net force experienced by the same electron would still have no part coming from its own self-field. Hence, to economize calculations without sacrificing exactitude in any way, we discard it from consideration.

The rule of omitting the self-field is just a matter of economizing calculations; it is not a fundamental law characterizing what field may objectively be said to exist. If the potential field due to other charges exists, then, in the same sense, the self-field too exists. It's just that, for the motions of the self-field-generating electron, it is as good as non-existent.

However, the question of whether a potential field physically exists or not turns out to be more subtle than might be thought.

2. Conditions for the objective existence of electrostatic potentials:

It once again helps to think of forces first, and only then of potentials.

Consider two electrons in an otherwise empty spatial region of an isolated system. Suppose the first electron ($e_1$) is at a position $x_1$, and the second electron ($e_2$) is at a position $x_2$. What Coulomb's law says is that the two electrons mutually exert equal and opposite forces on each other. The magnitudes of these forces are proportional to the inverse square of the distance separating the two. For like charges, the force is repulsive; for unlike charges, it is attractive. The amounts of the electrostatic forces thus exerted do not depend on mass; they depend only on the amounts of the respective charges.
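In symbols, with $k$ denoting the Coulomb constant, the law just stated reads:

$$\vec{F}_{2 \leftarrow 1} = k \, \frac{q_1 q_2}{|\vec{x}_2 - \vec{x}_1|^2} \, \hat{u}_{12}, \qquad \vec{F}_{1 \leftarrow 2} = -\vec{F}_{2 \leftarrow 1},$$

where $\hat{u}_{12}$ is the unit vector pointing from $x_1$ to $x_2$. A positive product $q_1 q_2$ (like charges) gives a force on $e_2$ directed away from $e_1$, i.e., repulsion; a negative product gives attraction.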

The potential energy of the system for this particular configuration is given by (i) arbitrarily assigning a zero potential to infinite separation between the two charges, and (ii) imagining as if both the charges have been brought from infinity to their respective current positions.

It is important to realize that the potential energy for a particular configuration of two electrons does not form a field. It is merely a single number.

However, it is possible to imagine that one of the charges (say $e_1$) is held fixed at a point, say at $\vec{r}_1$, and the other charge is successively taken, in any order, at every other point $\vec{r}_2$ in the infinite domain. A single number is thus generated for each pair of $(\vec{r}_1, \vec{r}_2)$. Thus, we can obtain a mapping from the set of positions for the two charges, to a set of the potential energy numbers. This second set can be regarded as forming a field—in the $3D$ space.

However, notice that thus defined, the potential energy field is only a device of calculations. It necessarily refers to a second charge—the one which is imagined to be at one point in the domain at a time, with the procedure covering the entire domain. The energy field cannot be regarded as a property of the first charge alone.

Now, if the potential energy field $U$ thus obtained is normalized by dividing it with the electric charge of the second charge, then we get the potential energy for a unit test-charge. Another name for the potential energy obtained when a unit test-charge is used for the second charge is: the electrostatic potential (denoted as $V$).
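The procedure just described can be made concrete in a few lines: hold $e_1$ fixed, sweep a test charge over a set of positions, record the pair potential energy $U$ at each, and normalize by the test charge to get $V$. Gaussian-style units with $k = 1$ and the specific grid are my assumptions for brevity.

```python
import numpy as np

# Sweep a unit test charge over positions r2, recording the pair
# potential energy U(r1, r2) with e1 held fixed; then V = U / q_test.
# Units with k = 1 are assumed here purely for brevity.

q1, q_test = -1.0, 1.0        # fixed charge e1, and a unit test charge
r1 = np.zeros(3)              # e1 held fixed at the origin

# Grid of positions for the test charge (kept away from the singularity):
xs = np.linspace(0.5, 5.0, 10)
r2_points = [np.array([x, 0.0, 0.0]) for x in xs]

U = np.array([q1 * q_test / np.linalg.norm(r2 - r1) for r2 in r2_points])
V = U / q_test   # the electrostatic potential "of" e1, by convention
```

Note that `V` here is literally a map from test-charge positions to numbers: it becomes a "field" only through this sweeping procedure, which is exactly the point made in the text.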

But still, in classical mechanics, the potential field also is only a device of calculations; it does not exist as a property of the first charge, because the potential energy itself does not exist as a property of that fixed charge alone. What does exist is the physical effect that there are those potential energy numbers for those specific configurations of the fixed charge and the test charge.

This is the reason why the potential energy field, and therefore the electrostatic potential of a single charge in an otherwise empty space does not exist. Mathematically, it is regarded as zero (though it could have been assigned any other arbitrary, constant value.)

Potentials arise only out of interaction of two charges. In classical mechanics, the charges are point-particles. Point-particles exist only at definite locations and nowhere else. Therefore, their interaction also must be seen as happening only at the locations where they do exist, and nowhere else.

If that is so, then in what sense can we at all say that the potential energy (or electrostatic potential) field does physically exist?

Consider a single electron in an isolated system, again. Assume that its position remains fixed.

Suppose there were something else in the isolated system: some object, every part of which undergoes an electrostatic interaction with the fixed (first) electron. If this second object were spread all over the domain, and if every part of it were able to interact with the fixed charge, then we could say that the potential energy field exists objectively, as an attribute of this second object. Ditto for the electric potential field.

Note three crucially important points, now.

2.1. The second object is not the usual classical object.

You cannot regard the second (spread-out) object as a mere classical charge distribution. The reason is this.

If the second object actually were a classical object, then any given part of it would have to electrostatically interact with every other part of itself too. You couldn't possibly say that a volume element in this second object interacts only with the “external” electron. But if the second object were also self-interacting, then what would come to exist would not be the simple inverse-distance potential energy field in reference to that single “external” electron. The space would be filled with a very weird field. Admitting motion to the local charge in the second object, every locally present charge would soon redistribute itself back to infinity (if it is negative), or it would all collapse into the origin (if the charge on the second object were positive, because the fixed electron's field is singular). But if we allow no charge redistributions, and the second field were still classical (i.e., capable of self-interacting), then the field of the second object would have to have singularities everywhere. Very weird. That's why:

If you want to regard the potential field as objectively existing, you have to also posit (i.e. postulate) that the second object itself is not classical in nature.

Classical electrostatics, if it has to regard a potential field as objectively (i.e. physically) existing, must therefore come to postulate a non-classical background object!

2.2. Assuming you do posit such a (non-classical) second object (one which becomes “just” a background object), then what happens when you introduce a second electron into the system?

You would run into another seeming contradiction. You would find that this second electron has no job left to do, as far as interacting with the first (fixed) electron is concerned.

If the potential field exists objectively, then the second electron would have to just passively register the pre-existing potential in its vicinity (because it is the second object which is doing all the electrostatic interactions—all the mutual forcings—with the first electron). So, the second electron would do nothing of consequence with respect to the first electron. It would just become a receptacle for registering the force being exchanged by the background object in its local neighborhood.

But the seeming contradiction here is that as far as the first electron is concerned, it does feel the potential set up by the second electron! It may be seen to do so once again via the mediation of the background object.

Therefore, both electrons have to be simultaneously regarded as being active and passive with respect to each other. They are active as agents that establish their own potential fields, together with an interaction with the background object. But they also become passive in the sense that they are mere point-masses that only feel the potential field in the background object and experience forces (accelerations) accordingly.

The paradox is thus resolved by having each electron set up a field as a result of an interaction with the background object, while having no interaction with the other electron at all.

2.3. Note carefully what agency is assigned to what object.

The potential field has a singularity at the position of that charge which produces it. But the potential field itself is created either by the second charge (by imagining it to be present at various places), or by a non-classical background object (which, in a way, is nothing but an objectification of the potential field-calculation procedure).

Thus, there arises a duality of a kind—a double-agent nature, so to speak. The potential energy is calculated for the second charge (the one that is passive), in the sense that the potential energy is relevant for calculating the motion of the second charge. That’s because the self-field cancels out for all motions of the first charge. However,

The potential energy is calculated for the second charge. But the field so calculated has been set up by the first (fixed) charge. Charges do not interact with each other; they interact only with the background object.

2.4. If the charges do not interact with each other, and if they interact only with the background object, then it is worth considering this question:

Can’t the charges be seen as mere conditions—points of singularities—in the background object?

Indeed, this seems to be the most reasonable approach to take. In other words,

All effects due to point charges can be regarded as field conditions within the background object. Thus, paradoxically enough, a non-classical distributed field comes to represent the classical, massive and charged point-particles themselves. (The mass becomes just a parameter of the interactions of singularities within a $3D$ field.) The charges (like electrons) do not exist as classical massive particles, not even in the classical electrostatics.

3. A partly analogous situation: The stress-strain fields:

If the above situation seems too paradoxical, it might be helpful to think of the stress-strain fields in solids.

Consider a horizontally lying thin plate of steel with two rigid rods welded to it at two different points. Suppose horizontal forces of mutually opposite directions are applied through the rods (either compressive or tensile). As you know, as a consequence, stress-strain fields get set up in the plate.

From an external viewpoint, the two rods are regarded as interacting with each other (exchanging forces with each other) via the medium of the plate. However, in reality, each is interacting only with the object that is the plate. The direct interaction, thus, is only between a rod and the plate. A rod is forced, it interacts with the plate, the plate sets up a stress-strain field everywhere, the local stress-field near the second rod interacts with it, and the second rod registers a force, one which balances out the force applied at its end. Conversely, the force applied at the second rod also can be seen as getting transmitted to the first rod via the stress-strain field in the plate material.

There is no contradiction in this description, because we attribute the stress-strain field to the plate itself, and always treat this stress-strain field as if it came into existence due to both the rods acting simultaneously.

In particular, we do not try to isolate a single-rod attribute out of the stress-strain field, the way we try to ascribe a potential to the first charge alone.

Come to think of it, if we have only one rod and if we apply force to it, no stress-strain field would result (i.e. neglecting inertia effects of the steel plate). Instead, the plate would simply move in the rigid body mode. Now, in solid mechanics, we never try to visualize a stress-strain field associated with a single rod alone.

It is a fallacy of our thinking that when it comes to electrostatics, we try to ascribe the potential to the first charge, and altogether neglect the abstract procedure of placing the test charge at various locations, or the postulate of positing a non-classical background object which carries that potential.

In the interest of completeness, it must be noted that the stress-strain fields are tensor fields (they are based on the gradients of vector fields), whereas the electrostatic force-field is a vector field (it is based on the gradient of a scalar potential field). A more relevant analogy for the electrostatic field, therefore, might be the forces exchanged by two point-vortices existing in an ideal fluid.

4. But why bother with it all?

The reason I went into all this discussion is because all these issues become important in the context of quantum mechanics. Even in quantum mechanics, when you have two charges that are interacting with each other, you do run into these same issues, because the Schrodinger equation does have a potential energy term in it. Consider the following situation.

If an electrostatic potential is regarded as being set up by a single charge (as is done for the proton in the nucleus of the hydrogen atom), but if it is also to be regarded as an actually existing and spread-out entity (as a $3D$ field, the way Schrodinger's equation assumes it to be), then a question arises: What is the role of the second charge (e.g., that of the electron in a hydrogen atom)? What happens when the second charge (the electron) is represented quantum mechanically? In particular:

What happens to the potential field if it represents the potential energy of the second charge, but the second charge itself is now being represented only via the complex-valued wavefunction?

And worse: What happens when there are two electrons, both interacting with each other via electrostatic repulsion, and both required to be represented quantum mechanically—as in the case of the electrons in a helium atom?

Can a charge be regarded as having a potential field as well as a wavefunction field? If so, what happens to the point-specific repulsions as are mandated by the Coulomb law? How precisely is the $V(\vec{r}_1, \vec{r}_2)$ term to be interpreted?
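For reference, the equation in which this $V$ term appears is the standard fixed-nucleus form of the two-electron (helium) eigenvalue problem; in atomic units (and with the fixed-nucleus simplification, which is the textbook convention, not my own scheme):

$$\left[ -\tfrac{1}{2}\nabla_1^2 - \tfrac{1}{2}\nabla_2^2 - \frac{2}{r_1} - \frac{2}{r_2} + \frac{1}{|\vec{r}_1 - \vec{r}_2|} \right] \Psi(\vec{r}_1, \vec{r}_2) = E\,\Psi(\vec{r}_1, \vec{r}_2).$$

Note that $\Psi$ is defined over the $6D$ configuration space, while each term of $V$ refers to point positions of the particles, which is precisely what makes the question of the physical existence of $V$ so pressing.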

I was thinking about these things when these issues occurred to me: the issue of the self-field, and the question of the physical vs. merely mathematical existence of the potential fields of two or more quantum-mechanically interacting charges.

I guess I am inching towards my full answers. In fact, I guess I have already reached them, but I need to have them verified by some physicists.

5. The help I want:

As a part of my answer-finding exercises (to be finished by this month-end), I might be contacting a second set of physicists soon enough. The issue I want to learn from them is the following:

How exactly do they do computational modeling of the helium atom using the finite difference method (FDM), within the context of the standard (mainstream) quantum mechanics?

That is the question. Once I understand this part, I would be done with the development of my new approach to understanding QM.

I do have some ideas regarding the question above. It's just that I want to have these ideas confirmed by some physicists before (or alongside) implementing the FDM code. So, I might be approaching someone—possibly you!

Please note my question once again. I don’t want to do perturbation theory. I would also like to avoid the variational method.

Yes, I am very comfortable with the finite element method, which is based on variational calculus. So, given a good (detailed enough) account of the variational method for the He atom, it should be possible to translate it into FEM terms.

However, ideally, what I would like to do is to implement it as an FDM code.
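To make the question concrete, here is a hedged sketch of the kind of FDM calculation being asked about, not a claim about how practitioners actually set it up: two spinless, interacting electrons in a 1D box (zero wavefunction at the walls), a nucleus of charge 2 fixed at the center, and a softened Coulomb interaction. The softening parameter, grid size, and box length are all my assumptions.

```python
import numpy as np

# FDM sketch: 1D "helium" with two spinless, interacting electrons.
# Atomic units; nucleus fixed at the box center; psi = 0 at the walls
# (infinite-potential boundary). All numerical parameters are assumed.

N, L, a, Z = 40, 10.0, 1.0, 2.0
x = np.linspace(-L/2, L/2, N)
h = x[1] - x[0]

# One-electron kinetic matrix: -(1/2) d^2/dx^2 by central differences.
T1 = (np.diag(np.full(N, 1.0 / h**2))
      - np.diag(np.full(N - 1, 0.5 / h**2), 1)
      - np.diag(np.full(N - 1, 0.5 / h**2), -1))

v_nuc = -Z / np.sqrt(x**2 + a**2)   # softened electron-nucleus attraction

# Two-electron Hamiltonian on the (x1, x2) product grid, via Kronecker
# products; the state vector is psi(x1, x2) flattened in C order.
I = np.eye(N)
H = (np.kron(T1, I) + np.kron(I, T1)      # kinetic energy, both electrons
     + np.kron(np.diag(v_nuc), I)         # e1-nucleus attraction
     + np.kron(I, np.diag(v_nuc)))        # e2-nucleus attraction

# Electron-electron repulsion: diagonal in the (x1, x2) grid basis.
X1, X2 = np.meshgrid(x, x, indexing="ij")
H += np.diag((1.0 / np.sqrt((X1 - X2)**2 + a**2)).ravel())

E = np.linalg.eigvalsh(H)                 # all eigenvalues, ascending
print("ground-state energy estimate:", E[0])
```

For realistic grids the dense `eigvalsh` call would be replaced by a sparse eigensolver, and the grid refined; the point of the sketch is only the structure: kinetic terms per electron, one-body nuclear terms, and a diagonal two-body $V(x_1, x_2)$, followed by diagonalization.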

So there.

Please suggest good references and / or people working on this topic, if you know any. Thanks in advance.

A song I like:

[… Here I thought that there was no song that Salil Chowdhury had composed and I had not listened to. (Well, at least when it comes to his Hindi songs). That’s what I had come to believe, and here trots along this one—and that too, as a part of a collection by someone! … The time-delay between my first listening to this song, and my liking it, was zero. (Or, it was a negative time-delay, if you refer to the instant that the first listening got over). … Also, one of those rare occasions when one is able to say that any linear ordering of the credits could only be random.]

Music: Salil Chowdhury
Lyrics: Gulzaar
Singer: Lata Mangeshkar

/

# A preliminary document on my fresh new approach to QM

I have uploaded the outline document I had promised as an attachment to a blog post at iMechanica; see here [^]. I will copy-paste the text of that post below:

Hello, World!

Here is a document that jots down, in a brief, point-wise manner, the elements of my new approach to understanding quantum mechanics.

Please note that the writing is very much at a preliminary stage. It is very much a work in progress. However, it does jot down many essential ideas.

I am uploading the document at iMechanica just to have an externally verifiable time-stamp to it. Further versions will also be posted at this thread.

Comments are welcome. However, I may not be able to respond to all of them immediately, because (i) I wish to switch over immediately to my studies of Data Science, and (ii) discussions on QM, especially on its foundations, tend to start meandering very fast.

Best,

–Ajit

It was only yesterday that I said that preparing this document would take longer, maybe a week or so. But soon afterwards, I discarded the idea of revising the existing document (18 pages), and instead tried re-writing a separate summary for it completely afresh. It turns out that starting afresh all over again was a very good idea. Yesterday, I was at about 2 pages; today, I finished jotting down all the ideas, at least in essence, within just 8 pages (+ 1 page for the reference section).

A song I like:

[It happens to be one of the songs which I first heard roughly around the same time that I got my own bicycle, and the times when I first came across the quantum mechanical riddles—which was during my X–XII standard days. I had liked it immediately—I mean the song. I don’t know why it doesn’t appear frequently enough on people’s lists of their favorites, but it has a very, very fresh feel to it. It anyway is a song of the teenage, of all those naive expectations and unspoiled optimism. Usha Mangeshkar, with her deft, light touch, somehow manages to bring that sense of life, that sense of freshness and confidence, fully alive in this song. The music and the lyrics are neat too… All in all, an absolutely wonderful song. … Perhaps also equally important, it is of great nostalgic value to me, too.

This song used to be not so easily available. So, let me give you the link for listening to it (and buying it) at Gaana.com; it’s song # 9 on this page [^] ]

Singer: Usha Mangeshkar
Music: Dasharath Pujari
Lyrics: Ram More

/

# A general update

Hmmm… Slightly more than 3 weeks since I posted anything here. A couple of things happened in the meanwhile.

1. Wrapping up of writing QM scripts:

First, I wrapped up my simulations of QM. I had reached a stage (just in my mind, neither on paper nor on the laptop) whereby the next thing to implement would have been the simplest simulations using my new approach. … Ummm… I am jumping ahead of myself.

OK, to go back a bit. The way things happened, I had just about begun pursuing Data Science when this QM thingie (the conference) suddenly came up. So, I had to abandon Data Science as is, and turn my attention full-time to QM. I wrote the abstract, sent it to the conference, and started jotting down some of the early points for the eventual paper. Frequent consultations with text-books were a part of it, and so was searching for any relevant research papers. Then, I also began doing simulations of the simplest textbook cases, just to see if I could find any simpler route from the standard/mainstream QM to my re-telling of the facts covered by it.

Then, as things turned out, my abstract for the conference paper got rejected. However, now that I had gotten into a tempo of writing and running the simulations, I decided to complete at least those standard UG textbook cases before wrapping up this entire activity and going back to Data Science. My last post was written when I was in the middle of this activity.

While thus pursuing the standard cases of textbook QM (see my last post), I also browsed a lot, thought a lot, and eventually found that simulations involving my approach shouldn’t take as long as a year, not even several months (as I had mentioned in my last post). What happened here was that during the aforementioned activity, I ended up figuring out a far simpler way that should still illustrate certain key ideas from my new approach.

So, the situation, say in the first week of December, was the following: (i) Because the proposed paper had been rejected, there was no urgency for me to continue working on the QM front. (ii) I had anyway found a simpler way to simulate my new approach, and the revised estimates were that even while working part-time, I should be able to finish the whole thing (the simulations and the paper) over just a few months’ period, say next year. (iii) At the same time, studies of Data Science had anyway been kept on the back-burner.

That’s how (and why) I came to wrap up all my activity on the QM front, first thing.

I then took a little break. I then turned back to Data Science.

2. Back to Data Science:

As far as learning Data Science goes, I knew from my past experience that books bearing titles such as: “Learn Artificial Intelligence in 3 Days,” or “Mastering Machine Learning in 24 Hours,” if available, would have been very deeply satisfying, even gratifying.

However, to my dismay, I found that no such titles exist. … Or, maybe, such books are there, but someone at Google is deliberately suppressing the links to them. Whatever the case, forget becoming a Guru in 24 hours (or even in 3 days); I found that no one was promising me that I could master even just one ML library (say TensorFlow, or at least scikit-learn) over even a longer period, say a week’s time or so.

Sure, there were certain other books—you know, books which had blurbs and reader-reviews remarkably similar to what goes with those mastering-within-24-hours sort of books. However, these books had less appealing titles. I browsed through a few of these, and found that there simply was no way out; I would have to begin with Michael Nielsen’s book [^].

Which I did.

Come to think of it, the first time I had begun with Nielsen’s book was way back, in 2016. At that time, I had not gone beyond the first couple of sections of the first chapter or so. I certainly had not even gone through the first code snippet that Nielsen gives, let alone run it or tried any variations on it.

This time around, though, I decided to stick it out with this book. I had to. … What was the end result?

Well, unlike my usual self, I didn’t take any jumps while going through this particular book. I began reading it in the given sequence, and then found that I could even continue with the same (i.e., reading in sequence)! I also made some furious underlines, margin-notes, end-notes, and all that. (That’s right. I was not reading this book online; I had first taken a printout.) I also sketched a few data structures in the margins, notably for the code around the “w” matrices. (I tend to suspect everyone else’s data structures except for mine!) I pursued this activity through just about everything in the book, except for the last chapter. It was at this point that my patience finally broke down. I went back to my usual self and began jumping back and forth over the topics.
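For whatever it is worth, the shape convention I ended up sketching for the “w” matrices goes roughly like the following (my paraphrase of Nielsen’s setup; the layer sizes and random seed are just an example):

```python
import numpy as np

# Shapes in a Nielsen-style fully connected network (my paraphrase).
# For layer sizes [2, 3, 1], w has shape (neurons out, neurons in),
# so that a' = sigmoid(w @ a + b) with activations as column vectors.
sizes = [2, 3, 1]
rng = np.random.default_rng(0)
weights = [rng.standard_normal((y, x)) for x, y in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((y, 1)) for y in sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(a):
    """Propagate one input (a column vector) through every layer."""
    for w, b in zip(weights, biases):
        a = sigmoid(w @ a + b)
    return a

a0 = np.array([[0.5], [-0.5]])  # one input, as a (2, 1) column vector
print(feedforward(a0).shape)    # a (1, 1) output
```

Sketching exactly this (which matrix is (3, 2) and which is (1, 3)) in the margin was what finally made the backpropagation code unambiguous to me.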

As a result, I can’t say that I have finished the book. But yes, I think I’ve got a fairly good idea of what’s there in it.

So there.

3. What books to read after Nielsen’s?

Of course, Nielsen’s book wasn’t the only thing that I pursued over the past couple of weeks. I also very rapidly browsed through some other books, and checked out the tutorial sites of libraries like scikit-learn, TensorFlow, etc. I came to figure out two things:

As the first thing, I found that I had been getting unnecessarily tense when I saw young people casually toss around fearsome words like “recurrent learning,” “convolutional networks,” “sentiment analysis,” etc., all with such ease and confidence. Not just on the ‘net but also in real life. … I came to see them do that when I attended a function for the final-round presentations at Intel’s national-level competition (which was held at IISER Pune, a couple of months ago or so). Since I had seen those quoted words (like “recurrent learning”) only while browsing through text-books or Wiki articles, I had actually come to feel a bit nervous at that event. Ditto, when I went through the Quora answers. Young people everywhere in the world seemed to have put in a lot of hard work in studying Data Science. “When am I going to catch up with them, if ever?” I had thought.

It was only now, after going through the documentation and tutorials for these code libraries (like scikit-learn), that I came to realize that the most likely scenario was that most of these kids were simply talking after trying out a few ready-made tutorials or so. … Why, one of the prize-winning (or at least short-listed) presentations at that Intel competition was about particle-swarm optimization, and during their talk, the students had even shown a neat visualization of how this algorithm works when there are many local minima. That presentation had impressed me a lot. … Now I gathered that it was probably just a ready-made animated GIF lifted from KDnuggets or some other, similar, site… (Well, as it turns out, it must have been from the Wiki! [^])
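Incidentally, the particle-swarm idea itself is simple enough to jot down in a few lines. Here is a bare-bones sketch (my own toy version, with made-up coefficient values; nothing to do with the students’ actual code), run on a 1D objective with many local minima:

```python
import numpy as np

def pso_minimize(f, dim=1, n=30, iters=200, lo=-10.0, hi=10.0, seed=0):
    """Bare-bones particle-swarm optimization: each particle is pulled
    toward its own best-so-far position and the swarm's best position,
    which helps the swarm hop over local minima."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))      # positions
    v = np.zeros((n, dim))                 # velocities
    pbest = x.copy()                       # per-particle best positions
    pval = np.array([f(p) for p in x])     # per-particle best values
    g = pbest[np.argmin(pval)].copy()      # global best position
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia and pull coefficients
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)].copy()
    return g, pval.min()

# A Rastrigin-like 1D function: many local minima, global minimum at 0.
f = lambda p: p[0]**2 + 10 * (1 - np.cos(2 * np.pi * p[0]))
best_x, best_f = pso_minimize(f)
print(best_x, best_f)
```

Run on such a function, the swarm typically settles into the global basin rather than the nearest local dip, which is exactly what that animated visualization was showing.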

As the second thing, I realized that for those topics which Nielsen doesn’t cover, good introductory books are hard to find. (That was a bit of an understatement. My real feel here is that we are lucky that Nielsen’s book is at all available in the first place!)

…If you have any tips on a good book after Nielsen’s, then please drop me an email or a comment; thanks in advance.

4. A tentative plan:

Anyway, as of now, a good plan seems to be: (i) first, to complete the first pass through Nielsen’s book (which should take just about a couple of days or so), and then, to begin pursuing all of the following, more or less simultaneously: (ii) locating and going through the best introductory books/tutorials on other topics in ML (like PCA, k-means, etc.); (iii) running tutorials of ML libraries (like scikit-learn and TensorFlow); (iv) typing out LaTeX notes for Nielsen’s book (which would eventually be useful for such things as hyper-parameter tuning), and running modified (i.e., simplified) versions of his code (which means, a second pass through his book); and finally, (v) beginning to cultivate some pet project from Data Science for moonlighting over a long period of time (just the way I have maintained a long-running interest in micro-level water-resources engineering).
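Speaking of item (ii), PCA is a good example of a topic small enough to implement from scratch even before picking up a library. A numpy-only sketch (my own toy version, with made-up toy data):

```python
import numpy as np

def pca(X, k):
    """Principal component analysis via SVD: center the data, take the
    top-k right singular vectors as components, project onto them."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                  # (k, n_features), unit rows
    projected = Xc @ components.T        # (n_samples, k)
    explained = (S**2) / (len(X) - 1)    # variance along each component
    return projected, components, explained[:k]

# Toy data: 200 points stretched strongly along the first axis in 2D.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])
Z, comps, var = pca(X, k=1)
print(var)  # the first component should carry most of the variance
```

Writing it out this way, and only then checking one’s output against a library (say, scikit-learn’s PCA), seems like a good way to keep the library from remaining a black box.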

As to the topic for the pet project, here are the contenders as of today. I have not finalized anything just yet (and am likely not to do so for quite some time), but the following seem attractive: (a) predicting rainfall in India (though getting granular enough data is going to be a challenge), (b) predicting earthquakes (locations and/or intensities), (c) identifying the Indian classical “raaga” of popular songs, etc. … I also have some other ideas, but these are more in the nature of professional interests (especially for application in engineering industries). … Once again, if you feel there is some neat idea that could be adopted for the pet project, then do point it out to me. …

…Anyway, that’s about it! Time to sign off. Will come back next year—or if some code / notes get written before that, then even earlier, but no definite promises.

So, until then, happy Christmas, and happy new year!…

A song I like:

(Marathi) “mee maaze mohita…”
Lyrics: Sant Dnyaaneshwar
Music and Singer: Kishori Amonkar

[One editing pass is still due; should be effected within a day or two. Done on 2018.12.18 13:41 hrs IST.]