Still loitering around…

As noted in the last post, I’ve been browsing a lot. However, I find that the signal-to-noise ratio is, in a way, too low. There are too few things worth writing home about. Of course, OTOH, some of these things are so deep that they can keep one occupied for a long time.

Anyway, let me give many (almost all?) of the interesting links I found since my last post. These are being noted in no particular order. In most cases, the sub-title says it all, and so, I need not add comments. However, for a couple of videos related to QM, I do add a significant amount of comments. … BTW, too many hats to do the tipping to. So let me skip that part and directly give you the URLs…


“A ‘digital alchemist’ unravels the mysteries of complexity”:

“Computational physicist Sharon Glotzer is uncovering the rules by which complex collective phenomena emerge from simple building blocks.” [^]


“Up and down the ladder of abstraction. A systematic approach to interactive visualization.” [^]

The tweet that pointed to this URL had this preface: “One particular talent stands out among the world-class programmers I’ve known—namely, an ability to move effortlessly between different levels of abstraction.”—Donald Knuth.

My own thinking processes are such that I use visualization a lot. Nay, I must. That’s the reason I appreciated this link. Incidentally, it also is the reason why I did not play a lot with the interactions here! (I put it in the TBD / Some other day / Etc. category.)


“The 2021 AI index: major growth despite the pandemic.”

“This year’s report shows a maturing industry, significant private investment, and rising competition between China and the U.S.” [^]


“Science relies on constructive criticism. Here’s how to keep it useful and respectful.” [^]

The working researcher, esp. the one who blogs / interacts a lot, probably already knows most of this stuff. But for students, it might be useful to have such tips collected in one place.


“How to criticize with kindness: Philosopher Daniel Dennett on the four steps to arguing intelligently.” [^].

Ummm… Why four, Dan? Why not, say, twelve? … Also, what if one honestly thinks that retards aren’t ever going to get any part of it?… Oh well, let me turn to the next link though…


“Susan Sontag on censorship and the three steps to refuting any argument” [^]

I just asked about four steps, and now comes Sontag. She comes down to just three steps, and also generalizes the applicability of the advice to any argument… But yes, she mentions a good point about censorship. Nice.


“The needless complexity of modern calculus: How 18th century mathematicians complicated calculus to avoid the criticisms of a bishop.” [^]

Well, the article does have a point, but if you ask me, there’s no alternative to plain hard work. No alternative to taking a good text-book or two (like Thomas and Finney, as also Resnick and Halliday (yes, for maths)), paper and pen / pencil, and working your way through. No alternative to that… But if you do that once for some idea, then every idea which depends on it, does become so simple—for your entire life. A hint or a quick reference is all you need, then. [Hints for the specific topic of this piece: the Taylor series, and truncation thereof.] But yes, the article is worth a fast read (if you haven’t read / used calculus in a while). … Also, Twitterati who mentioned this article also recommended the wonderful book from the next link (which I had forgotten)…
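For the record, the hint can be unpacked in two lines (my own unpacking, not the article's wording): truncate the Taylor expansion after the linear term, and the usual difference-quotient definition of the derivative drops out.

```latex
f(x+h) = f(x) + h\,f'(x) + \frac{h^2}{2!}\,f''(x) + \cdots
\;\Rightarrow\;
f'(x) = \frac{f(x+h) - f(x)}{h} + O(h)
```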


“Calculus made easy” [^].

The above link is to the Wiki article, which in turn gives the link to the PDF of the book. Check out the preface of the book, first thing.


“The first paper published in the first overlay journal (JTCAM) in Solid Mechanics” [^]

It’s too late for me (I left mechanics as a full-time field quite a while ago), but I do welcome this development. … A few years ago, Prof. Timothy Gowers had begun an overlay journal in maths, and then, there also was an overlay journal for QM, and I had welcomed both these developments back then; see my blog post here [^].


“The only two equations that you should know: Part 1” [^].

Dr. Joglekar makes many good points, but I am not sure if my choice for the two equations is going to be the same.

[In fact, I don’t even like the restriction that there should be just two equations. …And, what’s happening? Four steps. Then, three steps. Now, two equations… How long before we summarily turn negative, any idea?]

But yes, a counter-balance like the one in this article is absolutely necessary. The author touches on E = mc^2 and Newton’s laws, but I will go ahead and add topics like the following too: Big Bang, Standard Model, (and, Quantum Computers, String Theory, Multiverses, …).


“Turing award goes to creators of computer programming building blocks” [^] “Jeffrey Ullman and Alfred Aho developed many of the fundamental concepts that researchers use when they build new software.”

Somehow, there wasn’t as much excitement this year as the Turing award usually generates.

Personally, though, I could see why the committee might have decided to recognize Aho and Ullman’s work. I had once built a “yacc”-like tool that would generate the tables for a table-driven parser, given the abstract grammar specification in the extended Backus-Naur form (EBNF). I did it as a matter of hobby, working in the evenings. The only resource I used was the “dragon book”, which was written by Profs. Aho, Sethi, and Ullman. It was a challenging but neat book. (I am not sure why they left Sethi out. However, my knowledge of the history of development of this area is minimal. So, take it as an idle wondering…)

Congratulations to Profs. Aho and Ullman.


“Stop calling everything AI, machine-learning pioneer says” [^] “Michael I. Jordan explains why today’s artificial-intelligence systems aren’t actually intelligent”

Well, “every one” knows that, but the fact is, it still needs to be said (and even explained!)


“How a gene for fair skin spread across India” [^] “A study of skin color in the Indian subcontinent shows the complex movements of populations there.”

No, the interesting thing about this article, IMO, was not that it highlighted Indians’ fascination / obsession for fairness—the article actually doesn’t even passingly mention this part. The real interesting thing, to me, was: the direct visual depiction, as it were, of Indian Indologists’ obsession with just one geographical region of India, viz., the Saraswati / Ghaggar / Mohenjo-daro / Dwaarkaa / Pakistan / Etc. And, also the European obsession with the same region! … I mean check out how big India actually is, you know…

H/W for those interested: Consult good Sanskrit dictionaries and figure out the difference between निल (“nila”) and नील (“neela”). Hint: One of the translations for one of these two words is “black” in the sense “dark”, but not “blue”, and vice-versa for the other. You only have to determine which one stands for what meaning.

Want some more H/W? OK… Find out the most ancient painting of कृष्ण (“kRSNa”) or even राम (“raama”) that is still extant. What is the colour of the skin as shown in the painting? Why? Has the painting been dated to the times before the Europeans (Portuguese, Dutch, French, Brits, …) arrived in India (say in the second millennium AD)?


“Six lessons from the biotech startup world” [^]

Dr. Joglekar again… Here, I think every one (whether connected with a start-up or not) should go through the first point: “It’s about the problem, not about the technology”.

Too many engineers commit this mistake, and I guess this point can be amplified further—the tools vs. the problem. …It’s but one variant of the “looking under the lamp” fallacy, but it’s an important one. (Let me confess: I tend to repeat the same error too, though with experience, one does also learn to catch the drift in time.)


“The principle of least action—why it works.” [^].

Neat article.

I haven’t read the related book [“The lazy universe: an introduction to the principle of least action”], but looking at the portions available at Google [^], even though I might have objections to raise (or at least comments to make) on the positions taken by the author in the book, I am definitely going to add it to the list of books I recommend [^].

Let me mention the position from which I will be raising my objections (if any), in the briefest (and completely on-the-fly) words:

The principle of the least action (PLA) is a principle that brings out what is common to calculations in a mind-bogglingly large variety of theoretical contexts in physics. These are the contexts which involve either the concept of energy, or some suitable mathematical “generalizations” of the same concept.

As such, PLA can be regarded as a principle for a possible organization of our knowledge from a certain theoretical viewpoint.

However, PLA itself has no definite ontological content; whatever ontological content you might associate with PLA would go on changing as per the theoretical context in which it is used. Consequently, PLA cannot be seen as capturing an actual physical characteristic existing in the world out there; it is not a “thing” or “property” that is shared in common by the objects, facts or phenomena in the physical world.

Let me give you an example. The differential equation for heat conduction has exactly the same form as that for diffusion of chemical species. Both are solved using exactly the same technique, viz., the Fourier theory. Both involve a physical flux which is related to the gradient vector of some physically existing scalar quantity. However, this does not mean that both phenomena are produced by the same physical characteristic or property of the physical objects. The fact that both are parabolic PDEs can be used to organize our knowledge of the physical world, but such organization proceeds by making appeal to what is common to methods of calculations, and not in reference to some ontological or physical facts that are in common to both.
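To make the analogy concrete, the two governing equations have the identical parabolic form; only the physical meanings of the symbols differ:

```latex
\frac{\partial T}{\partial t} = \alpha\,\nabla^2 T
\quad\text{(heat conduction)},
\qquad
\frac{\partial c}{\partial t} = D\,\nabla^2 c
\quad\text{(species diffusion)}
```

Here T is temperature with thermal diffusivity \alpha, and c is species concentration with mass diffusivity D.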

Further, it must also be noted, PLA does not apply to all of physics, but only to the more fundamental theories in it. In particular, try applying it to situations where the governing differential equation is not of the second-order, but is of the first- or the third-order [^]. Also, think about the applicability of PLA for dissipative / path-dependent processes.
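For reference, the standard reason behind the second-order restriction (a textbook fact, stated in my words, not the author's): extremizing the action with a Lagrangian that depends only on q and \dot{q} yields the Euler–Lagrange equation,

```latex
S[q] = \int L(q, \dot{q}, t)\,dt
\;\Rightarrow\;
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) - \frac{\partial L}{\partial q} = 0,
```

which is second-order in time; an equation of motion that is first- or third-order in time does not arise from any such standard action.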

… I don’t know whether the author (Dr. Jennifer Coopersmith) covers points like these in the book or not… But even if she doesn’t (and despite any differences I anticipate as of now, and indeed might come to keep also after reading the book), I am sure, the book is going to be extraordinarily enlightening in respect of an array of topics. … Strongly recommended.


Muon g-2.

I will give some of the links I found useful. (Not listed in any particular order.)

  • Dennis Overbye covers it for the NYT [^],
  • Natalie Wolchover for the Quanta Mag [^],
  • Dr. Luboš Motl for his blog [^],
  • Dr. Peter Woit for his blog [^],
  • Dr. Adam Falkowski (“Jester”) for his blog [^],
  • Dr. Ethan Siegel for Forbes [^], and,
  • Dr. Sabine Hossenfelder for Sci-Am [^].

If you don’t want to go through all these blog-posts, and only are looking for the right outlook to adopt, then check out the concluding parts of Hossenfelder’s and Siegel’s pieces (which conveniently happen to be the last two in the above list).

As to the discussions: The Best Comment Prize is hereby being awarded, after splitting it equally into two halves, to “Manuel Gnida” for this comment [^], and to “Unknown” for this comment [^].


The five-man quantum mechanics (aka “super-determinism”):

By which, I refer to this video on YouTube: “Warsaw Spacetime Colloquium #11 – Sabine Hossenfelder (2021/03/26)” [^].

In this video, Dr. Hossenfelder talks about… “super-determinism.”

Incidentally, this idea (of super-determinism) had generated a lot of comments at Prof. Dr. Scott Aaronson’s blog. See the reader comments following this post: [^]. In fact, Aaronson had to say in the end: “I’m closing this thread tonight, honestly because I’m tired of superdeterminism discussion.” [^].

Hossenfelder hasn’t yet posted this video at her own blog.

There are five people in the entire world who do research in super-determinism, Hossenfelder seems to indicate. [I know, I know, not all of them are men. But I still chose to say the five-man QM. It has a nice ring to it—if you know a certain bit from the history of QM.]

Given the topic, I expected to browse through the video really rapidly, like a stone that goes on skipping on the surface of water [^], and thus to be done with it within 5–10 minutes or so.

Instead, I found myself listening to it attentively, not skipping even a single frame, and finishing the video in the sequence presented. Also, going back over some portions for the second time…. And that’s because Hossenfelder’s presentation is so well thought out. [But where is the PDF of the slides?]

It’s only after going through this video that I got to understand what the idea of “super-determinism” is supposed to be like, and how it differs from the ordinary “determinism”. Spoiler: Think “hidden variables”.

My take on the video:

No, the idea (of super-determinism) isn’t at all necessary to explain QM.

However, it still was neat to get to know what (those five) people mean by it, and also, more important: why these people take it seriously.

In fact, given Hossenfelder’s sober (and intelligent!) presentation of it, I am willing to give them a bit of a rope too. …No, not so long that they can hang themselves with it, but long enough that they can perform some more detailed simulations. … I anticipate that when they conduct their simulations, they themselves are going to understand the query regarding the backward causation (raised by a philosopher during the interactive part of the video) in a much better manner. That’s what I anticipate.

Another point. Actually, “super-determinism” is supposed to be “just” a theory of physics, and hence, it should not have anything to say about topics like consciousness, free-will, etc. But I gather that at least some of them (out of the five) do seem to think that free-will would have to be denied, maybe as a consequence of super-determinism. Taken in this sense, my mind has classified “super-determinism” as being the perfect foil to (or the other side of) panpsychism. … As to panpsychism, if interested, check out my take on it, here [^].

All along, I had always thought that super-determinism is going to turn out to be a wrong idea. Now, after watching this video, I know that it is a wrong idea.

However, precisely for the same reason (i.e., coming to know what they actually have in mind, and also, how they are going about it), I am not going to attack them, their research program. … Not necessary… I am sure that they would want to give up their program on their own, once (actually, some time after) I publish my ideas. I think so. … So, there…


“Video: Quantum mechanics isn’t weird, we’re just too big” YouTube video at: [^]

The speaker is Dr. Philip Ball; the host is Dr. Zlatko Minev. Let me give some highlights of their bios: Ball has a bachelor’s in chemistry from Oxford and a PhD in physics from Bristol. He was an editor at Nature for two decades. Minev has a BS in physics from Berkeley and a PhD in applied physics from Yale. He works in the field of QC at IBM (which used to be the greatest company in the computers industry (including software)).

The abstract given at the YouTube page is somewhat misleading. Ignore it, and head towards the video itself.

The video can be divided into two parts: (i) the first part, ~47 minutes long, is a presentation by Ball; (ii) the second part is a chat between the host (Minev) and the guest (Ball). IMO, if you are in a hurry, you may ignore the second part (the chat).

The first two-thirds of the first part (the presentation) are absolutely excellent. I mean the first 37 minutes. This excellent portion gets over once Ball reaches the slide which says “Reconstructing quantum mechanics from informational rules”, at around the 37-minute mark. From that point onward, Ball’s rigour dilutes a bit, though he does recover by the 40:00 mark or so. But from ~45:00 to the end (~47:00), it’s all downhill (IMO). Maybe Ball was making a small concession to his compatriots.

However, the first 37 minutes are excellent (or super-excellent).

But even if you are absolutely super-pressed for time, then I would still say: Check out at least the first 10-odd minutes. … Yes, I agree 101 percent with Ball when it comes to the portion from ~5:00 through 06:44 through 07:40.

Now, a word about the mistakes / mis-takes:

In a sentence that begins at 08:10, Ball says that Schrodinger devised his equation in 1924. This is a mistake / slip of the tongue. Schrodinger developed his equation in late 1925, and published it in 1926, certainly not in 1924. I wonder how it slipped past Ball.

Also, the title of the video is somewhat misleading. “Bigness” isn’t really the distinguishing criterion in all situations. Large-distance QM entanglements have been demonstrated; in particular, photons are (relativistic) QM phenomena. So, size isn’t necessarily always the issue (even if the ideas of multi-scaling must be used for bridging between “classical” mechanics and QM).

And, oh yes, one last point… People five-and-a-half feet tall also are big enough, Phil! Even the new-borns, for that matter…

A personal aside: Listening to Ball, somehow, I got reminded of some old English English movies I had seen long back, maybe while in college. Somehow, my registration of the British accent seems to have improved a lot. (Or maybe the Brits these days speak with a more easily understandable accent.)


Status of my research on QM:

If I have something to note about my research, especially that related to the QM spin and all, then I will come back a while later and note something—maybe after a week or two. …

As of today, I still haven’t finished taking notes and thinking about it. In fact, the status actually is that I am kindaa “lost”, in the sense: (i) I cannot stop browsing so as to return to the study / research, and (ii) even when I do return to the study, I find that I am unable to “zoom in” and “zoom out” of the topic (by which, I mean, switching the contexts at will, in between all: the classical ideas, the mainstream QM ideas, and the ideas from my own approach). Indeed (ii) is the reason for (i). …

If the same thing continues for a while, I will have to rethink whether I want to address the QM spin right at this stage or not…

You know, there is a very good reason for omitting the QM spin. The fact of the matter is, in the non-relativistic QM, the spin can only be introduced on an ad-hoc basis. … It’s only in the relativistic QM that the spin comes out as a necessary consequence of certain more basic considerations (just the way in the non-relativistic QM, the ground-state energy comes out as a consequence of the eigenvalue nature of the problem; you don’t have to postulate a stable orbit for it as in the Bohr theory). …

So, it’s entirely possible that my current efforts to figure out a way to relate the ideas from my own approach to the mainstream QM treatment of the spin are, after all, a basically pointless exercise. Even if I do think hard and figure out some good and original ideas / path-ways, they aren’t going to be enough, because they aren’t going to be general enough anyway.

At the same time, I know that I am not going to get into the relativistic QM, because it has to be a completely distinct development—and it’s going to require a further huge effort, perhaps a humongous effort. And, it’s not necessary for solving the measurement problem anyway—which was my goal!

That’s why, I have to really give it a good thought—whether I should be spending so much time on the QM spin or not. Maybe giving some sketchy ideas (rather, making some conceptual-level statements) is really enough… No one throws so much material into just one paper, anyway! Even the founders of QM didn’t! … So, that’s another line of thought that often pops up in my mind. …

My current plan, however, is to finish taking the notes on the mainstream QM treatment of the spin anyway—at least to the level of Eisberg and Resnick—though I can’t seem to finish it, because this desire to connect my approach to the mainstream ideas also keeps on interfering…

All in all, it’s a weird state to be in! … And, that’s what the status looks like, as of now…


… Anyway, take care and bye for now…


A song I, ahem, like:

It was while browsing that I gathered, a little while ago, that there is some “research” which “explains why” some people “like” certain songs (like the one listed below) “so much”.

The research in question was this paper [^]; it was mentioned on Twitter (where else?). Someone else, soon thereafter, also pointed out a c. 2014 pop-sci level coverage [^] of a book published even earlier [c. 2007].

From the way this entire matter was now being discussed, it was plain and obvious that the song had been soul-informing for some, not just soul-satisfying. The song in question is the following:

(Hindi) सुन रुबिया तुम से प्यार हो गया (“sun rubiyaa tum se pyaar ho gayaa”)
Music: Anu Malik
Lyrics: Prayag Raj
Singers: S. Jaanaki, Shabbir Kumar

Given the nature of this song, it would be OK to list the credits in any order, I guess. … But if you ask me why I too, ahem, like this song, then recourse must be made not just to the audio of this song [^] but also to its video. Not any random video but the one that covers the initial sequence of the song to an adequate extent; e.g., as in here [^].


History:
2021.04.09 19:22 IST: Originally published.
2021.04.10 20:47 IST: Revised considerably, especially in the section related to the principle of the least action (PLA), and the section on the current status of my research on QM. Also minor corrections and streamlining. Guess now I am done with this post.

Yesss! I did it!

Last evening (on 2021.01.13 at around 17:30 IST), I completed the first set of computations for finding the bonding energy of a helium atom, using my fresh new approach to QM.

These calculations still are pretty crude, both in technique and in implementation. Reading through the details given below, any competent computational engineer/scientist would immediately see just how crude they are. However, I also hope that he would see why I can still say that these initial results may be taken as definitely validating my new approach.

It would be impossible to give all the details right away. So, what I give below are some important details and highlights of the model, the method, and the results.

For that matter, even my Python scripts are currently in a pretty disorganized state. They are held together by duct-tape, so to say. I plan to rearrange and clean up the code, write a document, and upload them both. I think it should be possible to do so within a month’s time, i.e., by mid-February. If not, say due to the RSI, then probably by February-end.

Alright, on to the details. (I am giving some indication about some discarded results/false starts too.)


1. Completion of the theory:

As far as development of my new theory goes, many tricky issues had surfaced since I began trying to simulate my new approach, starting in May–June 2020. The crucially important issues were the following:

  • A quantitatively precise statement on how the mainstream QM’s \Psi, defined as it is over the 3N-dimensional configuration space, relates to the 3-dimensional wavefunctions I had proposed earlier in the Outline document.
  • A quantitatively precise statement on how the wavefunction \Psi makes the quantum particles (i.e. their singularity-anchoring positions) move through the physical space. Think of this as the “force law”, and then note that if a wrong statement is made here, then the entire system dynamics/evolution has to go wrong. Repercussions will exist even in the simplest system having two interacting particles, like the helium atom. The bonding energy calculations of the helium atom are bound to go wrong if the “force law” is wrong. (I don’t actually calculate the forces, but that’s a different matter.)
  • Also to be dealt with was this issue: Ensuring that the anti-symmetry property for the indistinguishable fermions (electrons) holds.

I had achieved a good clarity on all these (and similar other) matters by the evening of 5th January 2021. I also tried to do a few simulations, but ran into problems. Both these developments were mentioned via an update at iMechanica on the evening of 6th January 2021, here [^].


2. Simulations in 1D boxes:

By “box” I mean a domain having infinite potential energy walls at the boundaries, and imposition of the Dirichlet condition of \Psi(x,t) = 0 at the boundaries at all times.

I did a rapid study of the problems (mentioned in the iMechanica update). The simulations for this study involved 1D boxes of lengths from 5 a.u. to 100 a.u. (1 a.u. of length = 1 Bohr radius.) The mesh sizes varied from 5 nodes to 3000 nodes. Only regular, structured meshes with uniform cell-sides (i.e., a constant inter-nodal distance, \Delta x) were used, not non-uniform meshes (such as log-based).

I found that the discretization of the potential energy (PE) term indeed was at the root of the problems. Theoretically, the PE field is singular. I have been using FDM. Since an infinite potential cannot be handled using FDM, you have to adopt some policy for assigning a finite value to the maximum depth of the PE well.

Initially, I chose the policy of setting the max. depth to that value which would exist at a distance of half the width of the cell. That is to say, V_S \approx V(\Delta x/2), where V_S denotes the PE value at the singularity (theoretically infinite).

The PE was calculated using the Coulomb formula, which is given as V(r) = 1/r when one of the charges is fixed, and as V_1(r_s) = V_2(r_s) = 1/(2r_s) for two interacting and moving charges. Here, r_s denotes the separation between the interacting charges. The rule of half cell-side was used for making the singularity finite. The field so obtained will be referred to as the “hard” PE field.
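As a sketch of what such a setup looks like (my own reconstruction, not the author's actual script), here is a hydrogen-like atom in a 1D box with the "hard" Coulomb PE capped at the half cell-side value, solved with SciPy's sparse eigensolver. The domain size, node count, and the attractive sign convention (V = -1/r for the electron-nucleus pair) are assumptions for illustration:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Hydrogen-like atom in a 1D box (Dirichlet walls), atomic units throughout.
L = 20.0                      # box length, a.u. (Bohr radii) -- assumed
n = 501                       # number of interior nodes -- assumed
dx = L / (n + 1)
x = np.linspace(-L/2 + dx, L/2 - dx, n)   # nucleus fixed at x = 0

# "Hard" Coulomb PE: cap the singularity at the value half a cell-side away.
r = np.maximum(np.abs(x), dx / 2.0)
V = -1.0 / r                  # attractive electron-nucleus interaction

# H = -(1/2) d^2/dx^2 + V, central differences on the interior nodes only;
# the psi = 0 boundary condition is implicit in dropping the wall nodes.
kin = 0.5 / dx**2
H = diags([-kin * np.ones(n - 1), 2.0 * kin + V, -kin * np.ones(n - 1)],
          [-1, 0, 1], format='csr')

E, psi = eigsh(H, k=10, which='SA')   # the ten lowest eigenpairs
E = np.sort(E)                        # ground state first
```

Note that with this regularization the 1D ground-state energy depends on the cap depth (and hence on the mesh), which is consistent with the mesh-sensitivity the post describes.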

Using the “hard” field was, if I recall it right, quite OK for the hydrogen atom. It gave the bonding energies (ground-state) ranging from -0.47 a.u. to -0.49 a.u. or lower, depending on the domain size and mesh refinement (i.e. number of nodes). Note, 1 a.u. of energy is the same as 1 hartree. For comparison, the analytical solution gives -0.5, exactly. All energy calculations given here refer to only the ground-state energies. However, I also computed and checked up to 10 eigenvalues.

Initially, I tried both dense and sparse eigenvalue solvers, but eventually settled on the sparse solvers alone. The results were indistinguishable (at least numerically). I used SciPy’s wrappers for the various libraries.

I am not quite sure whether using the hard potential was always smooth or not, even for the hydrogen atom. I think not.

However, the hard Coulomb potential always led to problems for the helium atom in a 1D box (being modelled using my new approach/theory). The lowest eigenvalue was wrong by more than a factor of 10! I verified that the corresponding eigenvector indeed was an eigenvector. So, the solver was giving a technically correct answer, but it was an answer to the as-discretized system, not to the original physical problem.

I therefore tried using the so-called “soft” Coulomb potential, which was new to me, but which looks like a well-known function. I came to know of its existence via the OctopusWiki [^], when I was searching for some prior code on the helium atom. The “soft” Coulomb potential is defined as:

V = \dfrac{1}{\sqrt{(a^2 + x^2)}}, where a is an adjustable parameter, often set to 1.
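As a quick sanity check of this definition (my own snippet, not from the author's code), the defining property is that the soft potential stays finite at the singularity while approaching 1/|x| at large separations:

```python
import numpy as np

def soft_coulomb(x, a=1.0):
    """'Soft' Coulomb potential: 1/sqrt(a^2 + x^2), finite everywhere."""
    return 1.0 / np.sqrt(a**2 + x**2)

# Finite at the origin (equals 1/a), unlike the bare 1/|x| ...
print(soft_coulomb(0.0))                              # 1.0
# ... and indistinguishable from 1/|x| at large separation.
print(abs(soft_coulomb(100.0) - 1.0/100.0) < 1e-4)    # True
```

The shallower, rounded well is exactly what produces the more spread-out wavefunction the next paragraph complains about.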

I found this potential unsatisfactory for my work, mainly because it gives rise to a more spread-out wavefunction, which in turn implies that the screening effect of one electron for the other electron is not captured well. I don’t recall exactly, but I think that there was this issue of too low ground-state eigenvalues also with this potential (for the helium modeling). It is possible that I was not using the right SciPy function-calls for eigenvalue computations.

Please take the results in this section with a pinch of salt. I am writing about them only after 8–10 days, and I have tried so many variations that I’ve lost track of what went wrong in which scenario.

All in all, I thought that 1D box wasn’t working out satisfactorily. But a more important consideration was the following:

My new approach has been formulated in the 3D space. If the bonding energy is to be numerically comparable to the experimental value (and not being computed as just a curiosity or computational artifact) then the potential-screening effect must be captured right. Now, here, my new theory says that the screening effect will be captured quantitatively correctly only in a 3D domain. So, I soon enough switched to the 3D boxes.


3. Simulations of the hydrogen atom in 3D boxes:

For both hydrogen and helium, I used only cubical boxes, not parallelepipeds (“brick”-shaped boxes). The side of the cube was usually kept at 20 a.u. (Bohr radii), which is a length slightly longer than one nanometer (1.05835 nm). However, some of my rapid experimentation also ranged over domain lengths from 5 a.u. to 100 a.u.

Now, to meshing:

The first thing to realize is that with a 3D domain, the total number of nodes M scales cubically with the number of nodes n appearing on a side of the cube. That is to say: M = n^3. Bad thing.

The second thing to note is worse: The discretized Hamiltonian operator matrix now has the dimensions of M \times M. Sparse matrices are now a must. Even then, meshes remain relatively coarse, else computation time increases a lot!

The third thing to note is even worse: My new approach requires computing “instantaneous” eigenvalues at all the nodes. So, the number of times you must call, say, the eigh() function also goes as M = n^3. … Yes, I have the distinction of having invented what ought to be, provably, the most inefficient method to compute solutions to many-particle quantum systems. (If you are a QC enthusiast, now you know that I am a completely useless fellow.) But more on this, just a bit later.

I didn’t have to write the 3D code completely afresh though. I re-used much of the backend code from my earlier attempts from May, June and July 2020. At that time, I had implemented vectorized code for building the Laplacian matrix. However, in retrospect, this was an overkill. The system spends more than 99 % of execution time only in the eigenvalue function calls. So, preparation of the discretized Hamiltonian operator is relatively insignificant. Python loops could do! But since the vectorized code was smaller and a bit more easily readable, I used it.
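For the curious, a 3D FDM Laplacian can be assembled without explicit Python loops via Kronecker sums. This is my sketch of that standard construction (an assumption about the approach; the author's actual vectorized code is not shown), with a deliberately tiny n:

```python
import numpy as np
from scipy.sparse import diags, identity, kron

n = 11                          # interior nodes per side (small, for illustration)
dx = 20.0 / (n + 1)             # 20 a.u. cube, Dirichlet walls just outside

# 1D second-difference operator on the n interior nodes.
D1 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format='csr') / dx**2
I = identity(n, format='csr')

# Kronecker sum: the 3D Laplacian acting on psi flattened to length M = n**3.
L3 = (kron(kron(D1, I), I) + kron(kron(I, D1), I) + kron(kron(I, I), D1)).tocsr()

M = n**3                        # 1331 nodes even for this tiny mesh
# The discretized Hamiltonian would then be H = -0.5 * L3 + diag(V),
# an M x M matrix: this cubic growth is why sparse storage is a must.
```

Even so, as the post says, assembling the operator is cheap; nearly all the run time goes into the repeated eigenvalue calls.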

Alright.

The configuration space for the hydrogen atom is small, there being only one particle. It’s “only” M in size. More important, the nucleus being fixed, and there being just one particle, I need to solve the eigenvalue equation only once. So, I first put the hydrogen atom inside the 3D box, and verified that the hard Coulomb potential gives cool results over a sufficiently broad range of domain sizes and mesh refinements.

However, in comparison with the results for the 1D box, the 3D box algebraically over-estimates the bonding energy. Note the word “algebraically.” What it means is that if the bonding energy for a H atom in a 1D box is -0.49 a.u., then with the same physical domain size (say 20 Bohr radii) and the same number of nodes on the side of the cube (say 51 nodes per side), the 3D model gives something like -0.48 a.u. So, when you use a 3D box, the absolute value of energy decreases, but the algebraic value (including the negative sign) increases.

As any good computational engineer/scientist could tell, such a behaviour is only to be expected.

The reason is this: The discretized PE field is always jagged, and so it only approximately represents a curvy function, especially near the singularity. This is how it behaves in 1D, where the PE field is a curvy line. But in a 3D case, the PE contour surfaces bend not just in one direction but in all the three directions, and the discretized version of the field can’t represent all of them taken at the same time. That’s the hand-waving sort of an “explanation.”

I highlighted this part because I wanted you to note that in 3D boxes, you would expect the helium atom energies to algebraically overshoot too. A bit more on this, later, below.


4. Initial simulations of the helium atom in 3D boxes:

For the helium atom too, the side of the cube was mostly kept at 20 a.u. Reason?

In the hydrogen atom, the space part of the ground state \psi has a finite peak at the center, and its spread is significant over a distance of about 5–7 a.u. (in the numerical solutions). Then, for the helium atom, there is going to be a dent in the PE field due to screening. In my approach, this dent physically moves over the entire domain as the screening electron moves. To accommodate both their spreads plus some extra room, I thought, 20 could be a good choice. (More on the screening effect, later, below.)

As to the mesh: As mentioned earlier, the number of eigenvalue computations required is M, and the time taken by each such call goes up significantly with M. So, initially, I kept the number of nodes per side (i.e. n) at just 23. With the two extreme planes sacrificed to the deity of the boundary conditions, the computations took place on a 21 \times 21 \times 21 mesh. That still means a system having 9261 nodes!

At the same time, realize how crude and coarse this mesh is: Two neighbouring nodes represent a physical distance of almost one Bohr radius! … Who said theoretical clarity must also come with faster computations? Not when it’s QM. And certainly not when it’s my theory! I love to put the silicon chip to some real hard work!
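The arithmetic behind that remark, spelled out:

```python
side = 20.0          # box side, in a.u.
n = 23               # nodes per side, including the two boundary planes
h = side / (n - 1)   # spacing between two neighbouring nodes
interior = (n - 2) ** 3

print(h)         # ~0.909 a.u., i.e., almost one Bohr radius per cell
print(interior)  # 9261 interior nodes
```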

Alright.

As I said, for reasons that will become fully clear only when you go through the theory, my approach requires M separate eigenvalue computations. (In “theory,” it requires M^2 of them, but some very simple and obvious symmetry considerations reduce the computational load to M.) I then compute the normalized 1-particle wavefunctions from the eigenvectors. All this computation forms what I call the first phase. I then post-process the 1-particle wavefunctions to get to the final bonding energy. I call this computation the second phase.

OK. So, in my first computations, the first phase involved SciPy’s eigsh() function being called 9261 times. I think it took something like 5 minutes. The second phase is very much faster; it took less than a minute.
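The actual first-phase code follows my theory and isn’t shown here. But purely to convey the computational structure (one eigsh() call per node), here is a hypothetical, deliberately tiny 1D sketch. The “dented” potential below is only an illustrative stand-in, not the physics of my approach:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Hypothetical sketch of the "first phase": one eigenvalue call per
# node, each with its own "instantaneous" potential. The dent placed
# at node k below is purely illustrative, NOT the actual theory.
n, h = 21, 0.5
D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
x = np.arange(1, n + 1) * h

ground_states = []
for k in range(n):                          # M calls in all (M = n in 1D)
    V = -1.0 / (np.abs(x - x[k]) + 0.5 * h) # dented PE, clamped at node k
    H = -0.5 * D + sp.diags(V)
    E, psi = eigsh(H, k=1, which='SA')      # smallest algebraic eigenvalue
    ground_states.append((E[0], psi[:, 0] / np.linalg.norm(psi[:, 0])))

print(len(ground_states))                   # 21 one-particle ground states
```

The second phase would then post-process these per-node wavefunctions; that part is entirely theory-specific, so I don’t sketch it.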

The bonding energy I thus got should have been around -2.1 a.u. However, I made an error while coding the second phase, and got something different (which I no longer remember, but I think I have not deleted the wrong code, so it should be possible to reproduce this wrong result). The error wasn’t numerically very significant, but it was an error all the same. This status was by the evening of 11th January 2021.

The same error continued also on 12th January 2021, but I think that if the errors in the second phase were to be corrected, the value obtained could have been close to -2.14 a.u. or so. Mind you, these are the results with a 20 a.u. box and 23 nodes per side.

In comparison, the experimental value is -2.9033 a.u.

As to computations, Hylleraas, back in 1927 a PhD student, used a hand-cranked mechanical calculator, and still got to -2.90363 a.u.! More than nine decades later, his method and work still remain near the top of the accuracy stack.

Why did my method do so badly? Even more pertinent: How could Hylleraas use just a mechanical calculator, not a computer, and still get to such a wonderfully accurate result?

It all boils down to the methods, tricks, and even dirty tricks. Good computational engineers/scientists know them, their uses and limitations, and do not hesitate to build products with them.

But the real pertinent reason is this: The technique Hylleraas used was variational.


5. A bit about the variational techniques:

All variational techniques use a trial function with some undetermined parameters. Let me explain in a jiffy what it means.

A trial function embodies a guess—a pure guess—at what the unknown solution might look like. It could be any arbitrary function.

For example, you could even use a simple polynomial like y = a_0 + a_1 x + a_2 x^2 + a_3 x^3 by way of a trial function.

Now, observe that if you change the values of the a_0, a_1 etc. coefficients, then the shape of the function changes. Just assign some random values and plot the results using MatPlotLib, and you will know what I mean.

… Yes, you do something similar also in Data Science, but there, the problem formulation is relatively much simpler: You just tweak all the a_i coefficients until the function fits the data. “Curve-fitting,” it’s called.
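In code, that one-step curve-fitting is about one line. A small sketch using NumPy (the data and the coefficient values are made up, purely for illustration):

```python
import numpy as np

# Fake "data": a known cubic plus a little noise.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
data = 1.0 + 2.0 * x - 3.0 * x**3 + 0.01 * rng.standard_normal(50)

# Tweak the a_i's until the cubic fits the data -- that's all there is
# to simple curve-fitting. polyfit returns highest-degree term first.
a3, a2, a1, a0 = np.polyfit(x, data, deg=3)
print(a0, a1, a2, a3)   # close to 1, 2, 0, -3
```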

In contrast, in variational calculus, you don’t do this one-step curve-fitting. You instead take the y function and substitute it into some theoretical equations that have something to do with the total energy of the system. You then find an expression which tells how the energy, now expressed as a function of the a_i’s (through y), varies as these unknown coefficients are varied. So, the a_i’s basically act as parameters of the model. Note carefully: the y function is the primary unknown function, but in variational calculus, you do the curve-fitting with respect to some other equation.

So, the difference between simple curve-fitting and variational methods is the following. In simple curve-fitting, you fit the curve to concrete data values. In variational calculus, you fit an expression derived by substituting the curve into some equations (not data), and then derive some further equations that show how some measure like energy changes with variations in the parameters. You then adjust the parameters so as to minimize that abstract measure.
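To make the contrast concrete, here is a classic toy variational calculation (not Hylleraas’s, and nothing to do with helium): a Gaussian trial function exp(-a x^2) for the 1D harmonic oscillator, V = x^2/2, in atomic units. You vary the single parameter a until the energy computed from the trial function is minimized; for this particular problem the minimum happens to land on the exact ground-state energy, 0.5 a.u.:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy variational calculation: Gaussian trial function exp(-a x^2)
# for the 1D harmonic oscillator, V = x^2 / 2 (atomic units).
x = np.linspace(-8.0, 8.0, 2001)
h = x[1] - x[0]

def energy(a):
    psi = np.exp(-a * x**2)
    psi /= np.sqrt(h * np.sum(psi**2))          # normalize on the grid
    dpsi = np.gradient(psi, h)
    kinetic = 0.5 * h * np.sum(dpsi**2)         # <T>, via integration by parts
    potential = h * np.sum(0.5 * x**2 * psi**2) # <V>
    return kinetic + potential

res = minimize_scalar(energy, bounds=(0.05, 5.0), method='bounded')
print(res.x, res.fun)   # a ~ 0.5, E ~ 0.5 a.u. (the exact ground state)
```

Note that the curve-fitting here happens against the energy expression, never against any data points; that is the whole difference.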

Coming back to the helium atom, there is a nucleus with two protons inside it, and two electrons that go around the nucleus. The nucleus pulls both the electrons, but the two electrons themselves repel each other. (Unlike and like charges.) When one electron strays near the nucleus, it temporarily decreases the effective pull exerted by the nucleus on the other electron. This is called the screening effect. In short, when one electron goes closer to the nucleus, the other electron feels as if the nucleus had discharged a little bit. The effect gets more and more pronounced as the first electron goes closer to the nucleus. The nucleus acts as if it had only one proton when the first electron is at the nucleus. The QM particles aren’t abstractions from the rigid bodies of Newtonian mechanics; they are just singularity conditions in the aetherial fields. So, it’s easily possible that an electron sits at the same place where the two protons of the nucleus are.

One trouble with using the variational techniques for problems like modeling the helium atom is this: the screening effect gets modeled via a numerically reasonable but physically arbitrary trial function. Using this technique can give a very accurate result for the bonding energy, provided that the person building the variational model is smart, as Hylleraas sure was. But the trial function is just guess-work. It can’t be said to capture any physics as such. Let me give an example.

Suppose that some problem from physics is such that a 5-degree polynomial happens to be the physically accurate form of solution for it. However, you don’t know the analytical solution, not even its form.

Now, the variational technique doesn’t prevent you from using a cubic polynomial as the trial function. That’s because, even if you use a cubic polynomial, you can still get to the same total system energy.

The actual calculations are far more complicated, but just as a fake example to illustrate my main point, suppose for a moment that the area under the solution curve is the target criterion (and not a more abstract measure like energy). Now, by adjusting the coefficients of a cubic polynomial, you can always alter its shape such that it happens to give the right area under the curve. Now, the funny part is this. If the trial function we choose is only cubic, then it is certain to miss, as a matter of a general principle, all the information related to the 4th- and 5th-order derivatives. So, the solution will have a lot of high-order physics deleted from itself. It will be a bland solution; something like a ghost of the real thing. But it can still give you the correct area under the curve. If so, it still fulfills the variational criterion.
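This fake example can be put directly into code: scale an arbitrary cubic until its area matches that of a “true” quintic solution. Both functions below are my made-up stand-ins, of course:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100001)
h = x[1] - x[0]

quintic = 1.0 + x**2 + x**5    # pretend this is the "true" solution
cubic = 1.0 + x + x**3         # an arbitrary cubic trial shape

# Scale the cubic so that its area matches the quintic's exactly.
area_q = h * np.sum(quintic)
cubic *= area_q / (h * np.sum(cubic))

print(h * np.sum(cubic), area_q)   # the two areas now agree ...
# ... yet every 4th- and 5th-order derivative of the cubic is
# identically zero, while the quintic's are not: that high-order
# "physics" is simply gone from the trial solution.
```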

Coming back to the use of variational techniques in QM, like Hylleraas’ method:

It can give a very good answer (even an arbitrarily accurate answer) for the energy. But the trial function can still easily miss a lot of physics. In particular, it is known that the wavefunctions (actually, “orbitals”) won’t turn out to be accurate; they won’t depict physical entities.

Another matter: These techniques work not in the physical space but in the configuration space. So, the opportunity of taking what properly belongs to Raam and giving it to Shaam is not just ever-present but even more likely.

An even simpler example is this. Suppose you are given 100 bricks and asked to build a wall over a given area on the ground. You can arrange them into one big tower in the wall, or two towers, whatever… There still would be, in all, 100 bricks sitting on the same area on the ground. The shapes may differ; the variational technique doesn’t care for the shape. Yet, realize, having accurate atomic orbitals means getting the shape of the wall right too, not just dumping 100 bricks on the same area.


6. Why waste time on yet another method, when a more accurate method has been around for some nine decades?

“OK, whatever” you might say at this point. “But if the variational technique was OK by Hylleraas, and if it’s been OK for the entire community of physicists for all these years, then why do you still want to waste your time and invent just another method that’s not as accurate anyway?”

My answer:

Firstly, my method isn’t an invention; it is a discovery. My calculation method directly follows the fundamental principles of physics through and through. Not a single postulate of the mainstream QM is violated or altered; I merely have added some further postulates, that’s all. These theoretical extensions fit perfectly with the mainstream QM, and using them directly solves the measurement problem.

Secondly, what I talked about was just an initial result, a very crude calculation. In fact, I have already improved the accuracy further; see below.

Thirdly, I must point out a possibility which your question didn’t at all cover. My point is that this actually isn’t an either-or situation. It’s not either a variational technique (like Hylleraas’s) or mine. Indeed, it would very definitely be possible to incorporate the more accurate variational calculations as just parts of my own calculations too. It’s easy to show that. That would mean combining “the best of both worlds”. At a broader level, the method would still follow my approach and thus be physically meaningful. But within a carefully delimited scope, trial-functions could still be used in the calculation procedures. … For that matter, even FDM doesn’t represent any real physics either. Another thing: FDM itself can be seen as just one kind of variational technique, arguably the simplest. So, in that sense, even I am already using a variational technique, but only the simplest and crudest one. The theory could easily make use of both meshless and mesh-requiring variational techniques.

I hope that answers the question.


7. A little more advanced simulation for the helium atom in a 3D box:

With my computational experience, I knew that I was going to get a good result, even if the actual result was only estimated to be about -2.1 a.u.—vs. -2.9033 a.u. for the experimentally determined value.

But rather than increasing accuracy for its own sake, on the 12th and 13th January, I came to focus more on improving the “basic infrastructure” of the technique.

Here, I now recalled the essential idea behind the Quantum Monte Carlo method, and proceeded to implement something similar in the context of my own approach. In particular, rather than going over the entire (discretized) configuration space, I implemented a code to sample only some points in it. This way, I could use bigger (i.e. more refined) meshes, and get better estimates.
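Schematically (and only schematically; the per-node quantity below is a stand-in, not my actual per-node eigen-computation), the sampling shortcut looks like this:

```python
import numpy as np

# Hypothetical sketch: estimate a mean over a big discretized
# configuration space by sampling only K of its M nodes (the
# QMC-flavoured shortcut), instead of visiting all M of them.
rng = np.random.default_rng(42)
n = 69           # interior nodes per side
M = n ** 3       # 328,509 nodes in all

def node_value(i):
    # Stand-in per-node quantity; in the real code, each node
    # would involve an expensive eigenvalue computation.
    return np.sin(0.001 * i)

sample = rng.choice(M, size=1000, replace=False)
estimate = node_value(sample).mean()       # 1000 evaluations only
exact = node_value(np.arange(M)).mean()    # all 328,509 of them
print(estimate, exact)                     # the sampled mean tracks the full mean
```

The pay-off is exactly as stated: with only ~1000 expensive per-node computations instead of ~330,000, you can afford a much more refined mesh.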

I also carefully went through the logic used in the second phase, and corrected the errors.

Then, using a box of 35 a.u. and 71 nodes per side of the cube (i.e., 328,509 nodes in the interior region of the domain), and using just 1000 sampled nodes out of them, I now found that the bonding energy was -2.67 a.u. Quite satisfactory (to me!)


8. Finally, a word about the dirty tricks department:

I happened to observe that with some choices of physical box size and computational mesh size, the bonding energy could go as low as -3.2 a.u. or even lower.

What explains such a behaviour? There is this range of results right from -2.1 a.u. to -2.67 a.u. to -3.2 a.u. …Note once again, the actual figure is: -2.90 a.u.

So, the computational results aren’t only on the higher side or only on the lower side. Instead, they form a band of values on both sides of the actual value. This is both good news and bad news.

The good plus bad news is that it’s all a matter of making the right numerical choices. Here, I will mention only 2 or 3 considerations.

As one consideration, to get more consistent results across various domain sizes and mesh sizes, what matters is the physical distance represented by each cell in the mesh. If you keep this in mind, then you can get results that fall in a narrow band. That’s a good sign.

As another consideration, the box size matters. In reality, there is no box, and the wavefunction extends to infinity. But a technique like FDM requires a box. (There are other numerical techniques that can work with infinite domains too.) Now, if you use a larger box, then the Coulomb well looks just like the letter `T’. No curvature is captured with any significance. With a lot of the physical region where the PE portion looks relatively flat, the role played by the nuclear attraction becomes less significant, at least in numerical work. In short, the atom in a box approaches a free-particle-in-a-box scenario! On the other hand, a very small box implies that each electron is screening the nuclear potential at almost all times. In effect, it’s as if you are modelling an H- ion rather than a He atom!

As yet another consideration: The policy for choosing the depth of the potential energy matters. A concrete example might help.

Consider a 1D domain of, say, 5 a.u. Divide it using 6 nodes. Put a proton at the origin, and compute the electron’s PE (in magnitude). At the distance of 5 a.u., the PE is 1.0/5.0 = 0.2 a.u. At the node right next to the singularity, the PE is 1 a.u. What finite value should you give to the PE at the nucleus? Suppose, following the half cell-side rule, you give it the value of 1.0/0.5 = 2 a.u. OK.

Now refine the mesh, say by having 11 nodes going over the same physical distance. The physically extreme node retains the same value, viz. 0.2 a.u. But the node next to the singularity now has a PE of 1.0/0.5 = 2 a.u., and the half cell-side rule now gives a value of 1.0/0.25 = 4.0 a.u. at the nucleus.

If you plot the two curves using the same scale, the differences are especially striking. In short, mesh refinement alone (keeping the same domain size) has resulted in keeping the same PE at the boundary but jacking up the PE at the nucleus’ position. Not only that, but the PE field now has a more pronounced curvature over the same physical distance. Eigenvalue problems are markedly sensitive to the curvature in the PE.
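The whole illustration can be checked in a few lines. Here the 11-node mesh halves the spacing of the 6-node mesh, and the half cell-side rule then doubles the clamped value at the nucleus (magnitudes only; the Coulomb signs are dropped, as in the text):

```python
import numpy as np

def clamped_pe(n_nodes, length=5.0):
    """|PE| of an electron due to a proton at the origin, on a uniform
    1D mesh, with the half cell-side rule clamping the singular node."""
    x = np.linspace(0.0, length, n_nodes)
    h = x[1] - x[0]
    r = np.where(x == 0.0, 0.5 * h, x)   # half cell-side rule at the nucleus
    return 1.0 / r

coarse = clamped_pe(6)    # spacing 1.0 a.u. -> clamp value 2.0 a.u.
fine = clamped_pe(11)     # spacing 0.5 a.u. -> clamp value 4.0 a.u.
print(coarse[0], fine[0])     # 2.0 vs 4.0: the nucleus value has doubled
print(coarse[-1], fine[-1])   # 0.2 and 0.2: the boundary value is unchanged
```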

Now, realize that tweaking this one parameter alone can make the simulation zoom on to almost any value you like (within a reasonable range). I can always choose this parameter in such a way that even a relatively crude model could come to reproduce the experimental value of -2.9 a.u. very accurately—for energy. The wavefunction may remain markedly jagged. But the energy can be accurate.

Every computational engineer/scientist understands such matters, especially those who work with singularities in fields. For instance, all computational mechanical engineers know how the stress values can change by an order of magnitude or more, depending on how you handle the stress concentrators. Singularities form a hard problem of computational science & engineering.

That’s why, what matters in computational work is not only the final number you produce. What matters perhaps even more are such things as: whether the method works well in terms of stability; trends in the accuracy values (rather than their absolute values); whether the method can theoretically accommodate some more advanced techniques easily or not; how it scales with the size of the domain and mesh refinement; etc.

If a method does fine on such counts, then the sheer accuracy number by itself does not matter so much. We can still say, with reasonable certainty, that the very theory behind the model must be correct.

And I think that’s what my yesterday’s result points to. It seems to say that my theory works.


9. To wind up…

Despite all my doubts, I always thought that my approach is going to work out, and now I know that it does—nay, it must!

The 3-dimensional \Psi fields can actually be seen to be pushing the particles, and the trends in the numerical results are such that the dynamical assumptions I introduced for calculating the motions of the particles must be correct too. (Another reason for having confidence in the numerical results is that the dynamical assumptions are very simple, and so it’s easy to think through how they move the particles!) At the same time, though I didn’t implement it, I can easily see that the anti-symmetry property of at least the 2-particle system definitely comes out directly. The physical fields are 3-dimensional, and the configuration space comes out as a mathematical abstraction from them. I didn’t specifically implement any program to show detection probabilities, but I can see that they are going to come out right, at least for 2-particle systems.

So, the theory works, and that matters.

Of course, I will still have quite some work to do. Working out the remaining aspects of spin, for one thing. A three-interacting-particles system would also be nice to work through and to simulate. However, I don’t know which system I could/should pick up. So, if you have any suggestions for simulating a 3-particle system, with some well known results, then do let me know. Yes, there still are chances that I might need to tweak the theory a little bit here and a little bit there. But the basic backbone of the theory, I am now quite confident, is going to stand as is.

OK. One last point:

The physical fields of \Psi, over the physical 3-dimensional space, have primacy. Due to the normalization constraint, in real systems, there are no Dirac’s delta-like singularities in these wavefunctions. The singularities of the Coulomb field do enter the theory, but only as devices of calculations. Ontologically, they don’t have a primacy. So, what primarily exist are the aetherial, complex-valued, wavefunctions. It’s just that they interact with each other in such a way that the result is as if the higher-level V term were to have a singularity in it. Indeed, what exists is only a single 3-dimensional wavefunction; it is us who decompose it variously for calculational purposes.

That’s the ontological picture which seems to be emerging. However, take this point with a pinch of salt; I still haven’t pursued threads like these; I’ve been too busy just implementing code, debugging it, and finding and comparing results. …


Enough. I will start writing the theory document some time in the second half of the next week, and will try to complete it by mid-February. Then, everything will become clear to you. The cleaned up and reorganized Python scripts will also be provided at that time. For now, I just need a little break. [BTW, if in my …err…“exuberance” online last night I have offended someone, my apologies…]

For obvious reasons, I think that I will not be blogging for at least two weeks…. Take care, and bye for now.


A song I like:

(Western, pop): “Lay all your love on me”
Band: ABBA

[A favourite since my COEP (UG) times. I think I looked up the words only last night! They don’t matter anyway. Not for this song, and not to me. I like its other attributes: the tune, the orchestration, the singing, and the sound processing.]


History:
— 2021.01.14 21:01 IST: Originally published
— 2021.01.15 16:17 IST: Very few, minor, changes overall. Notably, I had forgotten to type the powers of the terms in the illustrative polynomial for the trial function (in the section on variational methods), and now corrected it.

My new year’s resolutions—2021 edition

Before we come to the review of my New Year’s resolutions for the last year and the fresh ones for the new year, do take a moment to check out the poll I’ve posted on Twitter, and see if you wish to respond to it. I would be grateful to you if you do. The tweet in question is the following one:

The poll gets over within a day: On 01 January 2021 (all time-zones in the world) + a few hours.


1. A quick review of my blog posts in the year 2020:

Not counting the present post, I made 26 posts in all during the year 2020. So, on an average, there was one post every fortnight.

This statistic is somewhat misleading. My posts are always much longer than those of your average blogger (i.e. among those few left who still blog!). My posts are almost always > 1.5 k words in length, often > 3 k words, and many times >= 5 k words. Also, my last year’s NYRs had actually included making even fewer posts!

Let me pick out the more important posts I made this year:

05 February 2020: Equations in the matrix form for implementing simple artificial neural networks [^]
Don’t underestimate the amount of effort which has gone into writing the document mentioned in this post. The PDF document itself was uploaded at my GitHub account, here [^]. (Let me now make it available also from this blog, see here [^]). In this document, I have worked out each and every equation in a consistent, matrix-based format. Although the title says: “simple” neural networks, that word is somewhat misleading. Even the backward passes have been covered, in 100% detail, for all the fundamentally important layers. These same layers are used even in the most modern Deep Learning networks.

13 April 2020: Status: 99.78 % accuracy on the MNIST dataset (World Rank # 5, using a single CPU-only laptop) [^]
The Covid lockdowns had begun already. I had been on the lookout for jobs for more than a year by then. But now I knew that due to the lockdowns, no further interview calls were going to come in; not for me, anyway. So, I had a lot of time at hand. I had just written the document on the equations of ANNs/DL. So, the knowledge was fresh. So, I decided to put to use some of my research ideas, on the foremost benchmark of Deep Learning. I broke through into the World’s Top 10! (No Indian had ever been in the top 20, perhaps even in the top 50. That includes Indians from IITs and those gone abroad for higher studies/research/jobs/startups.)

11 May 2020: Status update on my trials for the MNIST dataset [^]
Continuation of the above work. I raised my performance by a tiny but significant 0.01%. I also briefly mention the kind of tricks I tried.

10 June 2020: Python scripts for simulating QM, part 2: Vectorized code for the H atom in a 1D/2D/3D box. [Also, un-lockdown and covid in India.] [^]
This post also mentioned some of the cutting-edge work I had done earlier, in software engineering, like writing a yacc-like tool given just the abstract grammar of a computer language.

18 July 2020: On the Bhagavad-Geetaa, ch. 2, v. 47—part 1: कर्मण्येवाधिकारस्ते (“karmaNyevaadhikaaraste”) [^]
This post still remains rather rough. It could do with some good editing. But the real meaning of the Sanskrit phrases aren’t going to change. Check it out if you think you know this famous verse from the Gita.

31 July 2020: “Simulating quantum `time travel’ disproves butterfly effect in quantum realm”—not! [^]
An unplanned post. I never meant to write anything on QM from this angle, but a new paper came up, and it was hyped rather too much. And that is saying something: it was hyped even more than the super-high hype that has always been the old normal in these fields of Foundations of QM and Quantum Computers.

09 August 2020: Sound separation, voice separation from a song, the cocktail party effect, etc., and AI/ML [^]
This was my personal, opinionated, take on some AI/ML-based products/services.

23 August 2020: Talking of my attitudes… [^]
My answers to a questionnaire. This post should be of interest to those who don’t want all the details of my new approach, but just want to know how I view the various issues and interpretations concerning QM.

17 October 2020: Update: Almost quitting QM… [^]
I try to be the worst possible critic of my new approach to QM. So, naturally, doubts like these can easily come up. It’s just that once I notice such things, I deal with them.

08 November 2020: Updates: RSI. QM tunnelling time. [^]
Another unplanned post. It covers a very impressive line of experimentation in QM, its coverage in other blogs, as also my own comments. In particular, I thought that this piece of work was likely to be nominated for a physics Nobel too (and I gave my reasons for the same).

09 December 2020: Some comments on QM and CM—Part 1: Coleman’s talk. Necessity of ontologies. [^]
and
19 December 2020: Some comments on QM and CM—Part 2: Without ontologies, “classical” mechanics can get very unintuitive too. (Also, a short update.) [^]
Another couple of unplanned posts. I took this opportunity to present something on my (revised) views of ontologies in physics.

The above posts pretty much capture the two issues which kept me pre-occupied during 2020—Data Science, and Foundations of QM.

Other posts were relatively more topical (like updates), or not so important (though I might have made some good points in them, e.g., this 16 June 2020 post: The singularities closest to you [^]).

Twitter:

Apart from this blog, I also made a lot of tweets. Off-hand, these included: comments on pop-sci articles and videos; comments on papers from QM; comments on developments in Data Science (e.g. the big news related to the Protein-Folding problem); etc. Also, longer Twitter threads (up to 9 or 10 tweets long, not longer) mentioning my thoughts on various topics, e.g., my ideas on the relation of maths with reality and physics, finer points related to my ongoing development of my new approach, etc.

Comments at others’ blogs:

I’ve been making many comments at others’ blogs, and some of them have included my spontaneous/ live/ latest thoughts too. However, I don’t want to go through all of that at this point of time. Most of the time, I have saved these spontaneous responses, and I use them in my R&D too, especially of QM and Data Science.


2. A quick review of my last year’s resolutions (for the year 2020):

The resolutions I made last year, are here: My new year’s resolutions—2020 edition [^]

How did I fare on those? Let me jot down in brief:

2.1. Review of last year’s NYRs: Data Science:

2.1.1: A set of brief notes

I did write the “Equations in the matrix form” document; see the comment above.

I had actually planned to write a set of “brief notes (in LaTeX) on topics from machine learning.” I didn’t. Two reasons:

  • Reason 1: Looking at the way the IT industry people treated my applications through job sites, I came to realize that publishing such notes was likely to push them into considering me only for training jobs.
  • Reason 2: I achieved 99.78% accuracy (see above). Writing notes, just to show my understanding, naturally took a back-seat. Now I was actually putting to use the knowledge in research too, not just compiling it in a neatly processed form (as in the Equations document mentioned above).

2.1.2. New ideas:

I was going to develop new ideas concerning ML/DL/AI, and perhaps publish a paper.

Well, I did develop my ideas, at least w.r.t. the image data problems, as noted above. However, I have decided to withhold publications. (I think that so far, I’ve been able to hold the network-hacking/screen-grabbing and bitmap-shipping (remember Citrix?) folks at bay too. So, I think I should be able to publish some time later.)

2.2. Review of last year’s NYRs: Quantum Mechanics:

Yes, I worked on all the points noted in the last year’s NYRs, and a lot more.

In particular, my development during the year threw up some completely new conceptual issues for me, and I dealt with them. In the process, my understanding of QM became even better and deeper. However, RSI struck right around the same time, which was yet another unexpected development. As a result, I could not implement all my new ideas in code and see them in action (and verify them!).

2.3. Review of last year’s NYRs: Health etc.:

I had resolved for going “for walks (30 minutes) on at least six days each month”, and to “do surya namaskars on at least two days a week”.

I failed.

As to walking, it soon enough got ruled out due to Covid. As to “soorya namaskaara”s, I did try to keep up with “at least 12 namaskaar’s every day”, but I couldn’t. And, once there was a break, it soon became a complete break.

On the positive side, I didn’t have any alcoholic drink (not even wine) after 16th March 2020. … No, it wasn’t a religious thing. I just avoided going out, due to Covid, and soon later, there were the lockdowns. Then, even when the wine shops reopened, I just thought of continuing until (a) the year-end, or (b) a validation-through-simulations of my new ideas in QM, whichever came first. As a result, I didn’t.

I plan to have a bit tonight, as an exception. I also plan to have a celebration with drinks once I achieve the milestones noted for the new year (2021).

2.4. Review of last year’s NYRs: Blogging:

I had resolved to reduce blogging.

However, there were enough developments that I ended up doing 26 posts instead of the planned 12 to 20 posts.

2.5. Review of last year’s NYR’s: Translations:

I was going to attempt translating उपनिषद (Upanishad). Well, I did pursue this activity but mostly in the mind. OTOH, I did publish on what used to be arguably the most often quoted verse from Gitaa.

2.6. Review of last year’s NYR’s: Meditation, side-readings (on all topics—not just spirituality), etc.

I had said: “Satisfactory pace achieved already. No need to change it; certainly no need to make NYR’s about them as such.” Yes, my reading did continue, satisfactorily enough.


3. My new year’s resolutions for 2021:

3.1. Try to “un-become” “Bohmianism”:

Remember, I had resolved to be a Bohmian via my Diwali-time resolutions? See this 23 November 2020 post: “Now I am become Bohmianism” [^]

Now my NYR is to cancel it!

Reason: I won’t tell you right away. You should be able to figure it out anyway, esp. over the course of the new year!

Aside: However, you know, NYR’s are sooo sooo hard to keep. So, don’t be surprised if I end up saying something like “We the Bohmians…,” also in the new year!

3.2. My new approach to quantum mechanics:

Spinless particles:

Conduct some basic simulations and write *some* preliminary documentation on spinless electrons (up to 3-electron systems) by Q1-end. If the RSI severity goes up, then by Q2-end.

QM with spin:

Conduct some basic simulations and write *some* preliminary documentation on spinning electrons (up to 3-electron systems) by Q2-end. If the RSI strikes, then by Q3-end or year-end.

Depending on the progress, revise my 2019-times Outline document on my new approach. Note, revising that document is optional.

Scope of this NYR:

The above resolutions do not cover the more advanced topics such as photons, detailed studies of the times taken by QM processes, detailed studies of the multi-scaling issues, etc.

However, the above resolutions do cover predicting the binding energies of electrons in the helium atom (the 1D case), and the preliminary three-particle simulations.

3.3. Data science:

Work a bit on some projects I have in mind—at least two or three of them.

Note: Nothing big here. Just some small little projects of personal interest. Details will become apparent over the course of the year.

3.4. Health etc.:

As noted above, last year’s NYRs failed. So the resolution gets adjusted further for this year. Accordingly, the NYR for this year is:

Commit to performing one soorya namaskaar every day.

If even this routine fails for whatever reason (say, a genuine one like travel, or plain irregularity, forgetfulness, boredom, etc.), then that still would be fine. But the next time the issue crosses the mind, just resume the “at least one” routine again. (Yes, such resumptions are an integral part of this very resolution itself!)

Also included in this resolution is this point: Publish a summary of my actual performance at the end of the next year. (So, keep a record!)

No resolutions concerning food, drink, etc., this year.

3.5. Sanskrit:

Start learning Sanskrit in a more formal manner, by taking an online course (or more than one).

3.6. Miscellaneous:

That’s it! Nothing in the miscellaneous department, this year. The rest of the routines are doing fine (like, e.g., meditation, studies of other topics, and all). No need to change anything about them; no need to make any resolution either.


Apart from it all, take care of yourself, and have a Happy, Productive and Prosperous New Year!

I will return only after I have progressed to a definite extent in my QM-related work, which might take a couple of weeks from now.


A song I like:

(Instrumental, “fusion”): “The River”
Composer: Ananda Shankar

[This was one of the instrumental pieces I would most often listen to, back in my IIT Madras days (1985–87).

… By now, I’ve forgotten whether I first heard this piece in IIT Madras or in Pune. I do distinctly remember buying the original cassette of this album (“Ananda Shankar: A Life in Music”) in Pune, in particular, from a certain shop in the “Wonderland” shopping complex, on the Main Street in the Pune Camp area, and listening to it often in Pune. So, chances are that I listened to it first in Pune. OTOH, my nearest retrievable memory also says two things: (1) I would listen to it on a National Panasonic “Walkman” (having a 3-band equalizer), and (2) I had bought that player in the Burma Bazaar area of Madras (now Chennai), though I no longer remember whether I bought it while a student at IIT M or some time later. … So, all in all, I am not sure when or where I listened to it the first time.

… In any case, I am sure that the song became routine transition music on TV and radio in India only some time later (maybe after a few years or so). At least, I hadn’t first heard it on TV/radio. …

This song was one of my all-time top-most favorites for a long time during my youth. Frankly though, when I listened to it once again just last year, after a gap of some two-plus decades, it sounded a slight bit different. … But yes, it still remains one of my most favorite pieces. (It’s surprising that I happened not to have run it here so far.)

This piece is short (just about 3 minutes long), but it is absolutely innovative, fresh, and creates a wonderfully unhurried, placid, but not lackadaisical mood. … Just as if you were sitting by the side of a river on an unhurried evening, while resplendent colors slowly unfolded in the sky and also on the placid waters, until it became fully dark, completely unknown to you…. Or, as a small, lone wisp of a cloud loitered around the sky, became thinner, and somehow, the next time you looked at it, it was almost gone… An outstanding piece this one is, IMO, even when compared to other pieces by Ananda Shankar himself… This piece carries that unmistakable Indian touch even as the composition unfolds with Western-sounding orchestration and trappings. (And it was always a bad idea, IMO, to use this piece for transitions in between TV/radio programs… But then, that’s an entirely different matter!)

So, anyway, give it a listen and see if you like it too. … Back then, it sounded very fresh and innovative. But with a lot of music of a similar kind around, not to mention the easy access to World music these days, if you are listening to it for the first time, this piece may not sound all that extraordinary. But back then, it was, for me at least. … A good quality audio can be found here [^].

Alright, bye for now and take care…

]


History:
— 2020.12.31 19:19 IST: First published
— 2020.12.31 20:26 IST: Very minor corrections and additions, all in the songs section.