# Loitering around…

Update:

OK. I am getting back to working on the remaining topics, in particular, taking down detailed notes on the QM spin. I plan to begin this activity this evening. Also, I can now receive queries, from “any” one, regarding my work on QM, including the bit mentioned in the post below. [The meaning of “any” one is explained below.]

[2021.03.24 13:17 IST]

Am just about completing one full day of plain loitering around, doing nothing.

No, of course, it couldn’t possibly have been literally nothing—whether of the शून्य (“shoonya”) variety, or the शुन्य (“shunya”) one. (Go consult a good Sanskrit dictionary for the subtle differences in the meaning of these two terms.)

So, what I mean to say by “doing nothing” is this:

The last entry in my research journal has the time-stamp of 2021.03.18 21:40:34 IST. So, by now, it’s almost like a full day of doing “nothing” for me.

It’s actually worse than that…

In fact, I started loitering around, including on the ‘net, even earlier, i.e., a few days ago. Maybe from 16th March, maybe earlier. However, my journal pages (still loose, still not filed into the plastic folder) do show some entries, which get shorter and shorter, right up until the above time-stamp. …The entry before the afore-mentioned one has the time-stamp: 2021.03.18 19:12:52 IST.

But not a single entry over the past whole day.

So, what did I do over the last one day? And over the few days before it?

Well, the list is long.

I browsed. (Yes, including Twitter, Instagram, and FaceBook—others’ accounts, of course!)

I also downloaded a couple of research papers, and one short history-like book. I generally tried to read through them. Unsurprisingly, I found that I could not. The reason is: I just don’t have any mental energy left to do anything meaningful.

Apparently, I have exhausted all my energy in thinking about the linear momentum operator.

I think that by now I have thought about this one topic in most every way any human being possibly could. At least in parts (i.e., one part taken at a time). I have analyzed and re-analyzed, and re-re-analyzed. I kept on jotting down my “thoughts” (also in a way that would be mostly undecipherable to any one).

I kept getting exhausted, and still, I kept pushing myself. I kept on analyzing. Going back to my earlier thoughts, refining them and/or my current thoughts. Et cetera.

In the end, finally, I reached the point where I couldn’t even push myself any longer—in spite of all my stamina to keep pursuing threads in “thinking”. I’ve some stories to share here, but some other time. …To cut all those long stories short:

Some 12 hours after I thus fully crashed out of all my mental energies, at some moment, I somehow realized that:

I had already built a way to visualize a path in between my approach and the mainstream QM, regarding the linear momentum operator.

I made the briefest possible entry (consisting of exactly one small sketch over some 2″ by 5″ space). That was at 2021.03.18 21:40:34 IST.

Then, I stopped pursuing it too.

Why bother? Especially when I can now visualize “it” any time I want?

But how good is it?

I think, it should work. But it also appears to be too shaky and too tenuous a connection to me—between the mainstream QM and my new approach.

Of course I’ve noted down a bit of maths to go with it too, and also the physical units for the quantities involved. Yet, two points remain:

As a relatively minor point: I haven’t had the energy to work out (let alone to do even the quick and dirty simulations for) all possible permutations and combinations of the kind of elements I am dealing with. So, there is a slim possibility that terms may cancel each other and so the formulation may not turn out to be general enough. (I’ve been fighting with such a thing for a long time by now.)

But as a relatively much more important point: As I said, this whole way of thinking about it seems too tenuous to me. Even if it works out OK (i.e., after considering all the permutations and combinations involved), this very way of looking at things would still appear, at best, tenuous to any one.

The only consolation I have is this idea (which had already become absolutely banal even decades ago):

Every thing about QM is different from the pre-quantum theories.

That’s the only thin thread of logic by which my ideas hang. … Not as good as I wanted it. But not as bad as hanging all loose either…

And, yes, I’ve thought through the ontological aspects as well. … The QM ontology is radically different from the ontologies of all the pre-quantum theories. Especially, that of NM (Newtonian mechanics of particles and rigid bodies). But it is not so radically different from the ontology already required for EM (the Maxwell-Lorentz electrodynamics)—though there is a lot of difference between the EM and the QM ontologies.

And that’s what the current status looks like.

“So, when do you plan to publish it?”

Ummm… Not a good question. A better question, for me, is this:

What do I propose to do with my time, now?

The answer is simple. I will go in for what I know is going to be the most productive route.

Which is: I am going to continue loitering around.

Then, I will begin with taking detailed notes on the QM spin—the next topic from the mainstream QM—as soon as my mental energy returns.

That’s right. I won’t be even considering writing down my thoughts about that goddamn linear momentum operator. Not for any time in the near future. That’s the only way to optimize productivity. My productivity, that is.

So, sorry, I won’t be writing anything on the linear momentum any time soon, even if it precisely was the topic that kept me pre-occupied for such a long time—and also formed the topic of my blogging for quite some time over the recent past. So, sorry, this entire blog-post (the present one) is going to remain quite vague to you, for quite some time. You might even feel cheated at this point.

Well, but I do have a strong defence from my side: I’ve always said, time and again, that I was always ready to share all my thoughts with “any” one. I mean, any one who (i) knows the theory of the mainstream QM (including its foundational issues), and (ii) also has looked into the experimental aspects of it (at least in the schematic form).

So, any such person can always drop a line to me.

Oh wait!

Don’t write anything to me right away. Hold on for a few days. I just want to kill time for now. That’s why.

I’ll let you know (may be via an update here), once I begin actually taking down my notes on the QM spin. That’s the time you—“you” the “any” one—may get in touch with me. That is, if “you” want to know what I’ve thought about that goddamn linear momentum operator. [OK. As the update at the top of the post indicates, now I’m ready.]

OK, bye for now, take care in the meanwhile, and don’t be surprised if I also visit your blog and all…

Many songs I like:

[I also listened to a lot of songs over the past few days. I couldn’t find a single song that went very well with any one of my overall moods over the past few days… So, don’t try to read too much into this choice. And, I’ve got bored, so I won’t offer any further comment on this song either. (And, one way or the other, I actually don’t know why I like this song or the extent to which I actually like it. Not as of now, any way!)

(Hindi) जनम जनम का साथ है निभाने को (“janam janam kaa saath hai nibhaane ko”)
Music: Shankar-Jaikishan
Lyrics: Hasrat Jaipuri

I could not find a good quality original audio track. The “revival” version is here: [^]. It was this version which I first listened to, and used to listen to, while taking leisurely evening drives (for up to, say, 50 miles almost every day) in the area around Santa Rosa, California. But it didn’t feel that way. (Santa Rosa also was the home town of the “Peanuts” comics creator.) …

…OK, I will throw in one more:

(Marathi) तूं तेव्हा तशी (“too, tevhaa tashee”)
Music and Singer: Pt. Hridayanath Mangeshkar
Lyrics: Aaratee Prabhu

Which is yet another poem by Aaratee Prabhu, converted into a song by Hridayanath. But I won’t be able to talk about it. Not as of today anyway. Listening is good. A good quality audio is here [^].

…And, since I have been listening to songs a lot over the past few days, one more, just for this time around…

(Western, Pop) “How deep is your love”
Band: Bee Gees

I don’t know what the “Official Video” means, but it is here: [^]. I also don’t know what the “Deluxe Edition” of the audio means, but it’s here [^]. … I always happened to listen to the audio, which was, you know, at many places in Pune like in the H’ club (of the student-run mess at COEP hostels); at the movie theatres running English movies in Pune (like Rahul, the old West-End, and Alka); almost all restaurants from the Pune Camp area (and also a few from the Deccan area); also in the IIT Madras hostels; etc. All of this was during the ’80s, only. I don’t know why, but it seems like I never came across this song, at any of these places, once it was the ’90s. … As usual, I didn’t even know the words, and so, couldn’t have searched for it. A few days ago, I was just going through a compilation of songs of the ’70s when I spotted this one, and then searched on its lyrics and credits and all. I had remembered—and actually known—only the music… But yes, now that I know them, the words too seem pretty good…

Anyway, enough is enough. I already wrote a lot! High time to go back to doing nothing…
]

History:
2021.03.19 22:27 IST: Originally published.
2021.03.24 13:25 IST: Update noted at the top of the post and also inline. Some minor corrections/editing.

# Still if-ish…

1. Progress has slowed down:

Yep. … Rather, progress has been coming in sputters.

I had never anticipated that translating my FDM code (for my new approach to QM) into a coherent set of theoretical statements would be so demanding, or the progress so uneven. But that’s what has actually occurred.

To be able to focus better on the task at hand, I took this blog and my Twitter account off the ‘net from 26th February through 09th March. [* See the footnote below.]

Yes, going off the ‘net did help.

Still, gone is that more or less smooth (or “linear”) flow of progress which I experienced in, say, the mid-December 2020 through mid-January 2021 period, especially in January. Indeed, looking back at the past couple of weeks or so, I can say that a new pattern seems to have emerged. This pattern goes like this:

• On day 1, I get some good idea about how to capture / encapsulate / present something, or put it in a precise mathematical form. So, I get excited. (I even feel like coming back on the ‘net and saying something.)
• But right on day 2, I begin realizing that it doesn’t capture the truth in sufficient generality, i.e., that the insight is only partial. Or, maybe, the idea even has loopholes in it, which come to light only when I do a quick and dirty simulation about it.
• By the time it’s day 2-end, day 3 or at most day 4, I have become discouraged, and even begin thinking of postponing everything to a June-July 2021-based schedule.
• However, soon enough, I get some idea, hurriedly write it down…
• …But only for the whole cycle to repeat once again!

This kind of a cycle has repeated some 3–4 times within the past 15–20 days alone.

“Tiring” isn’t the right word. “Fatiguing” is.

But there is no way out. I don’t have anyone with whom to even discuss anything (though I am ready, as always, from my side).

And, it isn’t even mid-March yet. So, I keep going back to the “drawing board.” Somehow.

[* Footnote: Curiously though, both WordPress and RevolverMaps have reported hits to this blog right in this period—even when it was not available for public viewing! … What’s going on?]

2. Current status:

In a way, persistence does seem to have yielded something on the positive side, though it has not been good enough (and, any progress that did come, has been coming haltingly).

In particular, with persistence, I kept on finding certain loopholes in my thinking (though not in the special cases which I have implemented in code). These are not major conceptual errors. But errors, they still are. Some of these can be traced back to the June–July times last year. Funnily enough, as I flip through my thoughts (and at times through my journal pages), some bits of some ideas regarding how I could possibly get out of these loopholes seem to have occurred, in some seed form (or half-baked form), right back in those times. …

Anyway, the current status is that I think I am nearing completion of a correct description, under the new approach, of the linear momentum operator.

This is the most important operator, because in QM, you use this operator, together with the position operators, in order to derive the operators for so many other dynamical quantities, e.g. the total energy, the angular momentum, etc. (See Shankar’s treatment, which was reproduced in the postulates document here [^].)
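As a standard illustration of this derivation scheme (textbook material, not anything specific to my approach), the position and momentum operators generate the rest via the classical expressions:

```latex
\hat{p} = -i\hbar \frac{\partial}{\partial x}, \qquad
\hat{H} = \frac{\hat{p}^2}{2m} + V(\hat{x}), \qquad
\hat{\mathbf{L}} = \hat{\mathbf{r}} \times \hat{\mathbf{p}}
```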

The biggest source of trouble for the linear momentum operator has been in establishing a mathematically precise pathway (and not just a conceptual one) between my approach and the mainstream QM. What I mean to say is this:

I could have simply postulated an equation (which I used in my code), and presented it as simply coming out of the blue, and be done with it. It would work; many people in QM have followed precisely this path. But I didn’t want to do that.

I also wanted to see if I can make the connections between my new approach and the MSQM as easy to grasp as possible (i.e., for an expert of MSQM). Making people understand wasn’t the only motive, however. I also wanted to anticipate as many objections as I could—apart from spotting errors, that is. Another thing: Given my convictions, I also have to make sure that whatever I propose, there has to be a consistent ontological “picture” which goes with it. I don’t theorize with ontology as an after-thought.

But troubles kept coming up right in the first consideration—in clearly spelling out the precise differences of the basic ideas between my approach and the MSQM.

And yes, MSQM does have a way of suddenly throwing up issues that are quite tricky to handle.

Just for this topic of linear momentum, check out, for instance, this thread at the Physics StackExchange [^] (especially, Dr. Luboš Motl’s answer), and this thread [^] (especially, Dr. Arnold Neumaier’s answer). The more advanced parts of both these threads are, frankly, beyond my capacity. Currently, I only aim for that level of rigour which is at, say, exactly and precisely the first three sentences from Motl’s answer!…

…We the engineers can happily ignore any unpleasant effects that might occur at the singular and boundary points. We simply try and see if we can get away with ejecting such isolated domain points from any theoretical consideration! If something workable can still be obtained even after removing such points from consideration, we go for it. So, that’s the first thing we check. Usually, it turns out that we can isolate them out, and so we proceed to do precisely that! And that is precisely the level at which I am operating…

Even then, issues are tricky. And, at least IMO, a good part of the blame must lie with the confusions wrought by the Instrumentalist’s dogma.

… What the hell, if $\Psi(x,t)$ isn’t an observable itself, then why does it find a place in their theory (even if only indirectly, as in Heisenberg’s formulation)? … Why can’t I just talk of a property that exists at each infinitesimal CV (control volume) $\text{d}x$? Why must I instead take something of interest, then throw in the middle an operator (say, a suitable Dirac’s delta), and then bury it all behind an integral sign? Why can’t those guys (I mean the mathematical terms) break the cage of the integral sign, and come out in the open, just to feel some neat fresh air?
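To spell out the construction being complained about here (this is standard MSQM, nothing new): even the probability density at a single point $x_0$ is officially recovered only as an expectation value, with a Dirac delta thrown in the middle and the whole thing buried behind an integral sign:

```latex
\left\langle \delta(\hat{x} - x_0) \right\rangle
= \int \Psi^*(x,t)\, \delta(x - x_0)\, \Psi(x,t)\, \text{d}x
= |\Psi(x_0, t)|^2
```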

… Little wonder these MSQM folks live with an in-principle oscillatory universe. It’s a weird universe they have.

In their universe, Schrodinger’s cat is initially in a superposition of being alive and dead. But that’s not actually the most surprising part. Schrodinger’s cat then momentarily (or for a long but finite time) becomes fully dead; but then, immediately, it “returns” from that state (of being actually dead) to once again be in a superposition of dead + alive; it spends some time in that superposition; it then momentarily (or for a long but finite time) becomes fully alive too; but only to return back into that surreal superposition…

And it is this whole cycle which goes on repeating ad infinitum.

… No one tells you. But that’s precisely what the framework of the MSQM actually predicts.

MSQM doesn’t predict that once a cat does somehow become dead, it remains dead forever. And that’s because, in the MSQM, the only available mathematical machinery (which has any explanation for the quantum phenomena), in principle, predicts only infinite cycles of superposition–life–superposition–death–superposition–….

The postulates of the MSQM necessarily lead to a forever oscillatory universe! Little wonder they can’t solve the measurement problem!

One consequence of such a state of the MSQM theory is that thinking through any aspect becomes that much harder. It isn’t impossible. But hard, yes, it certainly is, where “hard” means: “tricky”.

Anyway, since the day before yesterday, it has begun looking like this topic (of linear momentum operator), and to the depth I mentioned above, might get over in a few days’ time. At least, that day 1–day 2–etc. pattern seems to have broken—at least for now!

If things go right at least this time round, then I might be able to finish the linear momentum operator by, say, 15th of March. Or 18th. Or 20th.

Addendum especially for Indians reading this post: No, the oscillatory universe of the MSQM people is not your usual birth-life-death-rebirth cycle as mentioned in the ancient Indian literature. The MSQM kind of “oscillations” aren’t about reincarnations of the same soul in different bodies. In the MSQM, the cat “returns” from being dead with exactly the same physical body. So, it’s not a soul temporarily acquiring one body for a brief while, and then discarding it upon its degeneration, only to get another body eventually (due to “karma” or whatever).

So, the main point is: In the MSQM, Schrodinger’s cat not only manages to keep the same body; the physical laws mandate that it be exactly the same body (the same material) too! … And, the MSQM doesn’t talk of a soul anyway; it concerns itself purely with the physical aspects—which is a good thing if you ask me. (Just check the postulates document, and pick up a text book to see their typical implications.)

3. Other major tasks to be done (after the linear momentum operator):

• Write down a brief but sufficiently accurate description of the measurement process following my new approach. This is the easiest task among all the remaining ones, because much of such a description can only be qualitative.
• Translate my ideas for the orbital angular momentum into precise mathematical terms—something yet to be done, but here I guess that with almost all possible troubles having already shown up right in the linear momentum stage, the angular momentum should proceed relatively smoothly (though it too is going to take quite some time).
• And then, the biggest remaining task. Actually, many sub-tasks:
• Study and take notes on the QM spin.
• Think through and integrate my new approach to it.
• Write down as much using quantitative terms as possible.

At this stage, I don’t know how long it’s going to take. However, I’ve decided on the following plan for now…

4. Plan for now:

If there remain some issues with the linear momentum operator (actually, in respect of its multi-faceted usages in the MSQM, and in explaining these from the PoV of my approach including ontology), and if these still remain not satisfactorily resolved even by 15th or 18th of March (roughly, one week from now), then I will take a temporary (but long) break from QM, and instead turn my attention to Data Science.

However, if my description for $\hat{p}()$ (i.e. the linear momentum operator) does go through smoothly during the next week, then I will immediately proceed with the remaining QM-related tasks too (i.e., only those which are listed above).

5. Bottom-line:

Expect a blog post in a week’s time or so, concerning an update with respect to the linear momentum operator and all. (I will try to keep this blog open for the upcoming week, but I guess my Twitter account is best kept closed for now—I just don’t have the time to keep posting updates there.)

In the meanwhile, take care and bye for now.

A song I like:

(Marathi) ती येते आणिक जाते (“tee yete aaNik jaate…”)
Lyrics: Aaratee Prabhu
Music: Pt. Hridaynath Mangeshkar
Singer: Mahendra Kapoor

[ Mahendra Kapoor has sung this song very well (even if he wasn’t a native Marathi speaker). Hridaynath Mangeshkar’s music, as usual, pays really good attention to the words, while also managing to impart an ingenious melodic quality to the tune—something that’s very rare for pop music in any language.

But still, frankly, this song is almost as nothing if you don’t get the lyrics of it.

And, to get the lyrics here, it’s not enough to know Marathi (the language) alone. You also have to “get” what precisely the poet must have meant when he used some word; for instance, the word “ती” (“she”). [Hint: Well, the hint has already been given. …Notice, I said “what”, and not “who”, in the preceding sentence!]

But yes, once you begin to get the subtle shades of the poetry here, then you can also begin to appreciate Hridaynath’s composition even better—you begin to see the more subtle musical phrases, the twists and turns and twirls in the tune which you had missed earlier. So, there’s a kind of a virtuous feedback circle going on here, between poetry and music… And yes, you also appreciate Mahendra Kapoor’s singing better as you go through the circle.

This song originally appeared as a part of a compilation of Aaratee Prabhu’s poems. If I mistake not (speaking purely from memory, and from a distance of several decades), the book in question was जोगवा (“jogawaa”). I had bought a copy of it during my UG days at COEP, out of my pocket-money.

We in fact had used another poem from this book as a part of our dramatics for the Firodiya Karandak. It was included on my insistence; I was a co-author of the script. As to the competition, we did win the first prize, but not so much because of the script. We won mainly because our singing and music team had such a fantastic, outstanding class to them. Several of them later on went on to make a full-time career in music…. The main judge was the late music composer Anand Modak, who later on went on to win National awards too, but back then, he was at a fledgling stage of his career. But yes, talking of the script itself, in the informal chat after the prize announcement ceremony, he did mention, unprompted and on his own, that our script was good too! (Yaaaay!!) …Back then, there was no separate prize for the best script, but if there were to be one, then we would’ve probably won it. During that informal chat, the judges hadn’t bothered to even passingly mention any script by any other team!

…Coming back to the book of poetry (Aaratee Prabhu’s), I think I still have my copy lying somewhere deep in one of the boxes, though by now, due to too many moves and all (I had also taken it to USA the first time I went there), its cover already had got dislodged from the book itself. Then, a couple of weeks ago, I saw only the title page peeping out of some bunch of unrelated and loose papers, and so, looks like, the book by now has reached a more advanced stage of disrepair! … Doesn’t matter; no one else is going to read it anyway!

A good quality audio is here [^].

]

History:
2021.03.10 20:57 IST: Originally published.
2021.03.10 22:45 IST: Added links to the Physics StackExchange threads and the subsequent comments up to the mention of the measurement problem. Other minor editing. Done with this post now!
2021.03.12 18:43 IST: Some further additions, especially in section 2, including the Addendum written for Indian readers. Also, some further additions in the songs section. Some more editing. Now, am really done with this post!

# Yesss! I did it!

Last evening (on 2021.01.13 at around 17:30 IST), I completed the first set of computations for finding the bonding energy of a helium atom, using my fresh new approach to QM.

These calculations are still pretty crude, both by technique and by implementation. Reading through the details given below, any competent computational engineer/scientist would immediately see just how crude they are. However, I also hope that he would see that these initial results may still be taken as definitely validating my new approach.

It would be impossible to give all the details right away. So, what I give below are some important details and highlights of the model, the method, and the results.

For that matter, even my Python scripts are currently in a pretty disorganized state. They are held together by duct-tape, so to say. I plan to rearrange and clean up the code, write a document, and upload them both. I think it should be possible to do so within a month’s time, i.e., by mid-February. If not, say due to the RSI, then probably by February-end.

Alright, on to the details. (I am giving some indication about some discarded results/false starts too.)

1. Completion of the theory:

As far as the development of my new theory goes, there were many tricky issues that had surfaced since I began trying to simulate my new approach, starting in May–June 2020. The crucially important issues were the following:

• A quantitatively precise statement on how the mainstream QM’s $\Psi$, defined as it is over the $3N$-dimensional configuration space, relates to the $3$-dimensional wavefunctions I had proposed earlier in the Outline document.
• A quantitatively precise statement on how the wavefunction $\Psi$ makes the quantum particles (i.e. their singularity-anchoring positions) move through the physical space. Think of this as the “force law”, and then note that if a wrong statement is made here, then the entire system dynamics/evolution has to go wrong. Repercussions will exist even in the simplest system having two interacting particles, like the helium atom. The bonding energy calculations of the helium atom are bound to go wrong if the “force law” is wrong. (I don’t actually calculate the forces, but that’s a different matter.)
• Also to be dealt with was this issue: Ensuring that the anti-symmetry property for the indistinguishable fermions (electrons) holds.

I had achieved a good clarity on all these (and similar other) matters by the evening of 5th January 2021. I also tried to do a few simulations but ran into problems. Both these developments were mentioned via an update at iMechanica on the evening of 6th January 2021, here [^].

2. Simulations in $1D$ boxes:

By “box” I mean a domain having infinite potential energy walls at the boundaries, and imposition of the Dirichlet condition of $\Psi(x,t) = 0$ at the boundaries at all times.

I did a rapid study of the problems (mentioned in the iMechanica update). The simulations for this study involved $1D$ boxes of lengths from $5$ a.u. to $100$ a.u. (1 a.u. of length = 1 Bohr radius.) The mesh sizes varied from $5$ nodes to $3000$ nodes. Only regular, structured meshes with a uniform cell-side (i.e., a constant inter-nodal distance, $\Delta x$) were used, not non-uniform meshes (such as logarithmically spaced ones).

I found that the discretization of the potential energy (PE) term indeed was at the root of the problems. Theoretically, the PE field is singular. I have been using FDM. Since an infinite potential cannot be handled using FDM, you have to implement some policy in giving a finite value for the maximum depth of the PE well.

Initially, I chose the policy of setting the max. depth to that value which would exist at a distance of half the width of the cell. That is to say, $V_S \approx V(\Delta x/2)$, where $V_S$ denotes the PE value at the singularity (theoretically infinite).

The PE was calculated using the Coulomb formula, which is given as $V(r) = 1/r$ when one of the charges is fixed, and as $V_1(r_s) = V_2(r_s) = 1/(2r_s)$ for two interacting and moving charges. Here, $r_s$ denotes the separation between the interacting charges. The rule of half cell-side was used for making the singularity finite. The field so obtained will be referred to as the “hard” PE field.
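To make the setup concrete, here is a minimal sketch (my reconstruction for illustration, not the actual script) of a hydrogen-like atom in a $1D$ box: a uniform mesh, the FDM Laplacian with Dirichlet walls, and the “hard” Coulomb PE capped via the half cell-side rule, all in atomic units. The sign convention (attractive well written as negative) is an assumption.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1D box in atomic units: length 20 a.u., nucleus fixed at the center.
L, n = 20.0, 401                      # box length, total number of nodes
x = np.linspace(-L/2, L/2, n)
dx = x[1] - x[0]

# "Hard" Coulomb PE, V(r) = -1/r, with the singular node capped using the
# half cell-side rule: V_S = V(dx/2).
r = np.maximum(np.abs(x), dx/2)
V = -1.0 / r

# FDM Hamiltonian H = -(1/2) d^2/dx^2 + V. The Dirichlet box condition
# (Psi = 0 at the walls) is imposed by keeping only the interior nodes.
main = 1.0/dx**2 + V[1:-1]
off = -0.5/dx**2 * np.ones(n - 3)
H = diags([off, main, off], [-1, 0, 1], format='csc')

# A few lowest eigenpairs via a sparse solver.
E, psi = eigsh(H, k=3, which='SA')
print("lowest eigenvalues (hartree):", np.sort(E))
```

Changing the box length or mesh refinement means changing only `L` and `n`; the capping policy lives entirely in the line computing `r`.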

Using the “hard” field was, if I recall it right, quite OK for the hydrogen atom. It gave ground-state bonding energies ranging from $-0.47$ a.u. to $-0.49$ a.u. or lower, depending on the domain size and the mesh refinement (i.e., the number of nodes). Note, $1$ a.u. of energy is the same as $1$ hartree. For comparison, the analytical solution gives $-0.5$, exactly. All energy calculations given here refer to the ground-state energies only. However, I also computed and checked up to 10 eigenvalues.

Initially, I tried both dense and sparse eigenvalue solvers, but eventually settled on the sparse solvers alone. The results were indistinguishable (at least numerically). I used SciPy’s wrappers for the various libraries.

I am not quite sure whether using the hard potential was always smooth or not, even for the hydrogen atom. I think not.

However, the hard Coulomb potential always led to problems for the helium atom in a $1D$ box (being modelled using my new approach/theory). The lowest eigen-value was wrong by more than a factor of 10! I verified that the corresponding eigenvector indeed was an eigenvector. So, the solver was giving a technically correct answer, but it was an answer to the as-discretized system, not to the original physical problem.

I therefore tried using the so-called “soft” Coulomb potential, which was new to me, but which looks like a well-known function. I came to know of its existence via the OctopusWiki [^], when I was searching for some prior code on the helium atom. The “soft” Coulomb potential is defined as:

$V = \dfrac{1}{\sqrt{a^2 + x^2}}$, where $a$ is an adjustable parameter, often set to $1$.
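A small sketch contrasting the two regularizations (my illustration; writing the attractive wells with a negative sign is an assumption about the convention):

```python
import numpy as np

def soft_coulomb(x, a=1.0):
    """The "soft" Coulomb PE: the parameter a smooths the singularity, so
    the well bottoms out at -1/a instead of diverging at x = 0."""
    return -1.0 / np.sqrt(a**2 + x**2)

def hard_coulomb(x, dx):
    """The "hard" Coulomb PE with the half cell-side cap: -1/max(|x|, dx/2)."""
    return -1.0 / np.maximum(np.abs(x), dx/2)

x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
Vs, Vh = soft_coulomb(x), hard_coulomb(x, dx)

# With a = 1 the soft well is far shallower at the origin (-1 a.u.) than the
# capped hard well (-2/dx a.u. here), which is one way to see why the soft
# potential yields a more spread-out wavefunction.
print("soft well depth:", Vs.min(), " hard well depth:", Vh.min())
```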

I found this potential unsatisfactory for my work, mainly because it gives rise to a more spread-out wavefunction, which in turn implies that the screening effect of one electron for the other electron is not captured well. I don’t recall exactly, but I think that there was this issue of too low ground-state eigenvalues also with this potential (for the helium modeling). It is possible that I was not using the right SciPy function-calls for eigenvalue computations.

Please take the results in this section with a pinch of salt. I am writing about them only after 8–10 days, but I have written so many variations that I’ve lost the track of what went wrong in what scenario.

All in all, I thought that $1D$ box wasn’t working out satisfactorily. But a more important consideration was the following:

My new approach has been formulated in the $3D$ space. If the bonding energy is to be numerically comparable to the experimental value (and not being computed as just a curiosity or computational artifact) then the potential-screening effect must be captured right. Now, here, my new theory says that the screening effect will be captured quantitatively correctly only in a $3D$ domain. So, I soon enough switched to the $3D$ boxes.

3. Simulations of the hydrogen atom in $3D$ boxes:

For both hydrogen and helium, I used only cubical boxes, not parallelepipeds (“brick”-shaped boxes). The side of the cube was usually kept at $20$ a.u. (Bohr radii), which is a length slightly longer than one nanometer ($1.05835$ nm). However, some of my rapid experimentation also ranged over domain lengths from $5$ a.u. to $100$ a.u.

Now, on to meshing:

The first thing to realize is that with a $3D$ domain, the total number of nodes $M$ scales cubically with the number of nodes $n$ appearing on a side of the cube. That is to say: $M = n^3$. Bad thing.

The second thing to note is worse: The discretized Hamiltonian operator matrix now has the dimensions of $M \times M$. Sparse matrices are now a must. Even then, meshes remain relatively coarse, else computation time increases a lot!

The third thing to note is even worse: My new approach requires computing “instantaneous” eigenvalues at all the nodes. So, the number of calls you must make to, say, the eigh() function also goes as $M = n^3$. … Yes, I have the distinction of having invented what ought to be, provably, the most inefficient method to compute solutions to many-particle quantum systems. (If you are a QC enthusiast, now you know that I am a completely useless fellow.) But more on this, just a bit later.

I didn’t have to write the $3D$ code completely afresh, though. I re-used much of the back-end code from my earlier attempts of May, June and July 2020. At that time, I had implemented vectorized code for building the Laplacian matrix. In retrospect, however, this was an overkill. The system spends more than $99\%$ of the execution time in the eigenvalue function calls alone, so the preparation of the discretized Hamiltonian operator is relatively insignificant; plain Python loops would have done. But since the vectorized code was smaller and a bit more easily readable, I used it.
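
For concreteness, one standard vectorized way to build such a sparse $3D$ Laplacian is via Kronecker products; this is only a sketch of the general technique, and an assumption on my part that it resembles what the author's code did:

```python
# A sketch (not necessarily the author's code) of building the sparse
# 3D FDM Laplacian via Kronecker products of 1D operators.
import scipy.sparse as sp

def laplacian_3d(n, h):
    """Second-difference Laplacian on an n x n x n cubic mesh (zero BCs)."""
    # 1D second-difference operator with Dirichlet (box) boundaries
    D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    I = sp.identity(n)
    # The 3D operator is a sum of three Kronecker products
    return (sp.kron(sp.kron(D, I), I)
            + sp.kron(sp.kron(I, D), I)
            + sp.kron(sp.kron(I, I), D))

Lap = laplacian_3d(21, 20.0 / 22)   # 21 interior nodes per side
print(Lap.shape)                    # (9261, 9261): the M x M scaling above
```

The resulting matrix is symmetric and extremely sparse (at most seven non-zeros per row), which is what makes the $M \times M$ Hamiltonian tractable at all.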

Alright.

The configuration space for the hydrogen atom is small, there being only one particle; it’s “only” $M$ in size. More importantly, the nucleus being fixed, and there being just one particle, I need to solve the eigenvalue equation only once. So, I first put the hydrogen atom inside the $3D$ box, and verified that the hard Coulomb potential gives cool results over a sufficiently broad range of domain sizes and mesh refinements.

However, in comparison with the results for the $1D$ box, the $3D$ box algebraically over-estimates the bonding energy. Note the word “algebraically.” What it means is that if the bonding energy for a H atom in a $1D$ box is $-0.49$ a.u., then with the same physical domain size (say 20 Bohr radii) and the same number of nodes on the side of the cube (say 51 nodes per side), the $3D$ model gives something like $-0.48$ a.u. So, when you use a $3D$ box, the absolute value of energy decreases, but the algebraic value (including the negative sign) increases.
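
A minimal sketch of this kind of hydrogen-in-a-box check follows. It is my own illustration, not the author's script; the mesh size, the half-cell capping of the singularity, and the use of eigsh() are all assumptions:

```python
# Sketch: hydrogen atom in a 3D box via FDM plus a sparse eigensolver.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, L = 21, 20.0                    # interior nodes per side; box side, a.u.
h = L / (n + 1)
s = np.linspace(-L/2 + h, L/2 - h, n)
X, Y, Z = np.meshgrid(s, s, s, indexing='ij')
r = np.maximum(np.sqrt(X**2 + Y**2 + Z**2), h/2)   # cap the singularity

D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
I = sp.identity(n)
Lap = (sp.kron(sp.kron(D, I), I) + sp.kron(sp.kron(I, D), I)
       + sp.kron(sp.kron(I, I), D))
H = -0.5 * Lap + sp.diags((-1.0 / r).ravel())

E0 = eigsh(H, k=1, which='SA', return_eigenvectors=False)[0]
print(E0)    # negative; the exact hydrogen ground state is -0.5 a.u.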

As any good computational engineer/scientist could tell, such a behaviour is only to be expected.

The reason is this: The discretized PE field is always jagged, and so it only approximately represents a curvy function, especially near the singularity. This is how it behaves in $1D$, where the PE field is a curvy line. But in a $3D$ case, the PE contour surfaces bend not just in one direction but in all the three directions, and the discretized version of the field can’t represent all of them taken at the same time. That’s the hand-waving sort of an “explanation.”

I highlighted this part because I wanted you to note that in $3D$ boxes, you would expect the helium atom energies to algebraically overshoot too. A bit more on this, later, below.

4. Initial simulations of the helium atom in $3D$ boxes:

For the helium atom too, the side of the cube was mostly kept at $20$ a.u. Reason?

In the hydrogen atom, the space part of the ground state $\psi$ has a finite peak at the center, and its spread is significant over a distance of about 5–7 a.u. (in the numerical solutions). Then, for the helium atom, there is going to be a dent in the PE field due to screening. In my approach, this dent physically moves over the entire domain as the screening electron moves. To accommodate both their spreads plus some extra room, I thought, $20$ could be a good choice. (More on the screening effect, later, below.)

As to the mesh: As mentioned earlier, the number of eigenvalue computations required is $M$, and the time taken by each such call goes up significantly with $M$. So, initially, I kept the number of nodes per side (i.e. $n$) at just $23$. With the two extreme planes sacrificed to the deity of the boundary conditions, the actual computations took place on a $21 \times 21 \times 21$ mesh. That still means a system of $9261$ nodes!

At the same time, realize just how crude and coarse this mesh is: two neighbouring nodes represent a physical distance of almost one Bohr radius! … Who said theoretical clarity must come with faster computations too? Not when it’s QM. And certainly not when it’s my theory! I love to put the silicon chip to some real hard work!

Alright.

As I said, for the reasons that will become fully clear only when you go through the theory, my approach requires $M$ number of separate eigenvalue computation calls. (In “theory,” it requires $M^2$ number of them, but some very simple and obvious symmetry considerations reduce the computational load to $M$.) I then compute the normalized $1$-particle wavefunctions from the eigenvector. All this computation forms what I call the first phase. I then post-process the $1$-particle wavefunctions to get to the final bonding energy. I call this computation the second phase.

OK, so in my first computations, the first phase involved SciPy’s eigsh() function being called $9261$ times. I think it took something like 5 minutes. The second phase is very much faster; it took less than a minute.

The bonding energy I thus got should have been around $-2.1$ a.u. However, I made an error while coding the second phase, and got something different (which I no longer remember; but I think I have not deleted the wrong code, so it should be possible to reproduce this wrong result). The error wasn’t numerically very significant, but it was an error all the same. This was the status by the evening of 11th January 2021.

The same error continued also on 12th January 2021, but I think that if the errors in the second phase were to be corrected, the value obtained could have been close to $-2.14$ a.u. or so. Mind you, these are the results with a 20 a.u. box and 23 nodes per side.

In comparison, the experimental value is $-2.9033$ a.u.

As to computations, Hylleraas, back in 1927 a PhD student, used a hand-operated mechanical calculator, and still got to $-2.90363$ a.u.! Some nine decades later, his method and work still remain near the top of the accuracy stack.

Why did my method do so badly? Even more pertinent: How could Hylleraas use just a mechanical calculator, not a computer, and still get to such a wonderfully accurate result?

It all boils down to the methods, tricks, and even dirty tricks. Good computational engineers/scientists know them, know their uses and limitations, and do not hesitate to build products with them.

But the real pertinent reason is this: The technique Hylleraas used was variational.

5. A bit about the variational techniques:

All variational techniques use a trial function with some undetermined parameters. Let me explain in a jiffy what it means.

A trial function embodies a guess—a pure guess—at what the unknown solution might look like. It could be any arbitrary function.

For example, you could even use a simple polynomial like $y = a_0 + a_1 x + a_2 x^2 + a_3 x^3$ by way of a trial function.

Now, observe that if you change the values of the $a_0$, $a_1$ etc. coefficients, then the shape of the function changes. Just assign some random values and plot the results using MatPlotLib, and you will know what I mean.
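
To spare you the plotting, here is the same point in a few lines (hypothetical numbers of my own choosing): two coefficient sets that agree at the end points but give visibly different shapes in between.

```python
# Illustration: changing the a_i coefficients changes the shape of the
# trial function y = a0 + a1*x + a2*x^2 + a3*x^3 (hypothetical values).
import numpy as np

x = np.linspace(-1.0, 1.0, 5)

def trial(x, a0, a1, a2, a3):
    return a0 + a1*x + a2*x**2 + a3*x**3

y_line  = trial(x, 0.0, 1.0, 0.0, 0.0)   # a straight line
y_cubic = trial(x, 0.0, 0.0, 0.0, 1.0)   # a cubic with the same end points

# Same values at x = -1, 0, +1; different values everywhere in between.
print(y_line)
print(y_cubic)
```

Feeding these arrays to MatPlotLib, as suggested above, makes the shape difference obvious at a glance.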

… Yes, you do something similar also in Data Science, but there, the problem formulation is relatively much simpler: You just tweak all the $a_i$ coefficients until the function fits the data. “Curve-fitting,” it’s called.

In contrast, in variational calculus, you don’t do this one-step curve-fitting. You instead take the $y$ function and substitute it into some theoretical equations that have something to do with the total energy of the system. You then find an expression which tells you how the energy, now expressed as a function of $y$ (which itself is a function of the $a_i$’s), varies as these unknown coefficients are varied. So, the $a_i$’s basically act as the parameters of the model. Note carefully: the $y$ function is the primary unknown function, but in variational calculus, you do the curve-fitting with respect to some other equation.

So, the difference between simple curve-fitting and the variational methods is the following. In simple curve-fitting, you fit the curve directly to concrete data values. In variational calculus, you substitute the curve into some equations (not data), derive a further expression that shows how some scalar measure like the energy changes with variations in the parameters, and then adjust the parameters so as to minimize that abstract measure.
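
A toy concretization of that minimize-the-energy step (my illustration on the hydrogen atom, not Hylleraas's helium calculation): take a one-parameter Gaussian trial function $e^{-\alpha r^2}$, for which the standard textbook energy expectation is $E(\alpha) = \frac{3}{2}\alpha - 2\sqrt{2\alpha/\pi}$ in atomic units, and simply minimize over $\alpha$.

```python
# Toy variational calculation for hydrogen with a Gaussian trial function.
# The closed-form E(alpha) below is the standard textbook result.
import numpy as np
from scipy.optimize import minimize_scalar

def energy(alpha):
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

res = minimize_scalar(energy, bounds=(0.01, 5.0), method='bounded')
print(res.x, res.fun)   # alpha ~ 0.283, E ~ -0.4244 a.u. (exact: -0.5)
```

Note the moral: the fit is against an energy expression, never against data, and the guessed Gaussian gets the energy close ($-0.4244$ vs. $-0.5$ a.u.) while being the wrong functional form entirely.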

Coming back to the helium atom, there is a nucleus with two protons inside it, and two electrons that go around the nucleus. The nucleus pulls both the electrons, but the two electrons themselves repel each other. (Unlike and like charges.) When one electron strays near the nucleus, it temporarily decreases the effective pull exerted by the nucleus on the other electron. This is called the screening effect. In short, when one electron goes closer to the nucleus, the other electron feels as if the nucleus had discharged a little bit. The effect gets more and more pronounced as the first electron goes closer to the nucleus. The nucleus acts as if it had only one proton when the first electron is at the nucleus. The QM particles aren’t abstractions from the rigid bodies of Newtonian mechanics; they are just singularity conditions in the aetherial fields. So, it’s easily possible that an electron sits at the same place where the two protons of the nucleus are.

One trouble with using the variational techniques for problems like modeling the helium atom is this: they model the screening effect using a numerically reasonable but physically arbitrary trial function. Such a technique can give a very accurate result for the bonding energy, provided that the person building the variational model is smart, as Hylleraas sure was. But the trial function is just guess-work; it can’t be said to capture any physics as such. Let me give an example.

Suppose that some problem from physics is such that a $5$th-degree polynomial happens to be the physically accurate form of the solution for it. However, you don’t know the analytical solution, not even its form.

Now, the variational technique doesn’t prevent you from using a cubic polynomial as the trial function. That’s because, even if you use a cubic polynomial, you can still get to the same total system energy.

The actual calculations are far more complicated, but just as a fake example to illustrate my main point, suppose for a moment that the area under the solution curve is the target criterion (and not a more abstract measure like energy). Now, by adjusting the coefficients of a cubic polynomial, you can always alter its height and shape so that it happens to give the right area under the curve. Now, the funny part is this. If the trial function we choose is only cubic, then it is certain to miss, as a matter of a general principle, all the information related to the $4$th- and $5$th-order terms. So, the solution will have a lot of higher-order physics deleted from itself. It will be a bland solution; something like a ghost of the real thing. But it can still give you the correct area under the curve. If so, it still fulfills the variational criterion.
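
The fake example can be put in code (all the polynomials here are hypothetical, chosen by me for the demonstration): tune just the constant term of a cubic so that its area on $[0, 1]$ matches that of a $5$th-degree "true" solution, and observe that the curves still disagree pointwise.

```python
# A cubic tuned to reproduce the area under a quintic on [0, 1], while
# its pointwise shape remains visibly different. Hypothetical functions.
import numpy as np

xs = np.linspace(0.0, 1.0, 10001)
dx = xs[1] - xs[0]

def area(ys):                      # trapezoid rule
    return float(np.sum(0.5 * (ys[1:] + ys[:-1])) * dx)

quintic = lambda x: 1 + x - 2*x**3 + 3*x**5     # the "true" solution
target = area(quintic(xs))                      # analytically, 1.5

# Integral of (c + x - 2 x^3) over [0, 1] is c + 1/2 - 1/2 = c,
# so setting c = target matches the area exactly.
cubic = lambda x: target + x - 2*x**3

print(target, area(cubic(xs)))     # the two areas agree
print(quintic(0.9), cubic(0.9))    # but the curves differ pointwise
```

The cubic satisfies the (fake) variational criterion perfectly while carrying none of the quintic's higher-order shape, which is exactly the "ghost of the real thing" point above.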

Coming back to the use of variational techniques in QM, like Hylleraas’ method:

It can give a very good answer (even an arbitrarily accurate answer) for the energy. But the trial function can still easily miss a lot of physics. In particular, it is known that the wavefunctions (actually, “orbitals”) won’t turn out to be accurate; they won’t depict physical entities.

Another matter: These techniques work not in the physical space but in the configuration space. So, the opportunity of taking what properly belongs to Raam and giving it to Shaam is not just ever-present but even more likely.

An even simpler example is this. Suppose you are given $100$ bricks and asked to build a wall over a given area on the ground. You can arrange them into one big tower within the wall, into two towers, whatever… There would still be, in all, $100$ bricks sitting on the same area of the ground. The shapes may differ; the variational technique doesn’t care for the shape. Yet, realize, having accurate atomic orbitals means getting the shape of the wall right too, not just dumping $100$ bricks on the same area.

6. Why waste time on yet another method, when a more accurate method has been around for some nine decades?

“OK, whatever” you might say at this point. “But if the variational technique was OK by Hylleraas, and if it’s been OK for the entire community of physicists for all these years, then why do you still want to waste your time and invent just another method that’s not as accurate anyway?”

Firstly, my method isn’t an invention; it is a discovery. My calculation method directly follows the fundamental principles of physics through and through. Not a single postulate of the mainstream QM is violated or altered; I merely have added some further postulates, that’s all. These theoretical extensions fit perfectly with the mainstream QM, and using them directly solves the measurement problem.

Secondly, what I talked about was just an initial result, a very crude calculation. In fact, I have already improved the accuracy further; see below.

Thirdly, I must point out a possibility which your question didn’t cover at all. My point is that this actually isn’t an either-or situation; it’s not either a variational technique (like Hylleraas’s) or mine. Indeed, it would very definitely be possible to incorporate the more accurate variational calculations as just parts of my own calculations too; that is easy to show. That would mean combining “the best of both worlds”: at a broader level, the method would still follow my approach and thus be physically meaningful, but within a carefully delimited scope, trial-functions could still be used in the calculation procedures. …For that matter, FDM doesn’t represent any real physics either. Another thing: FDM itself can be seen as just one kind of variational technique (arguably the simplest). So, in that sense, even I am already using a variational technique, but only the simplest and crudest one. The theory could easily make use of both meshless and mesh-requiring variational techniques.

I hope that answers the question.

7. A little more advanced simulation for the helium atom in a $3D$ box:

With my computational experience, I knew that I was going to get a good result, even if the actual result was only estimated to be about $-2.1$ a.u.—vs. $-2.9033$ a.u. for the experimentally determined value.

But rather than increasing accuracy for its own sake, on the 12th and 13th January, I came to focus more on improving the “basic infrastructure” of the technique.

Here, I now recalled the essential idea behind the Quantum Monte Carlo method, and proceeded to implement something similar in the context of my own approach. In particular, rather than going over the entire (discretized) configuration space, I implemented a code to sample only some points in it. This way, I could use bigger (i.e. more refined) meshes, and get better estimates.
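
The sampling idea alone (and only that; this sketch is my assumption about the scheme, not the author's two-phase code) looks something like this:

```python
# Sketch of the sampling idea: instead of visiting every one of the
# M = n^3 nodes, estimate a nodal average from a random subset of nodes.
import numpy as np

rng = np.random.default_rng(2021)

n = 69          # interior nodes per side (cf. the 71-node mesh below)
M = n**3        # 328,509 nodes in all

def per_node_quantity(i):
    # Stand-in for one expensive "instantaneous" eigenvalue call at node i.
    return np.sin(0.001 * i)

sample = rng.choice(M, size=1000, replace=False)
estimate = float(np.mean([per_node_quantity(i) for i in sample]))
print(M, estimate)
```

The pay-off is exactly as described: the cost scales with the sample size (here $1000$) rather than with $M$, so a finer mesh no longer multiplies the number of eigensolver calls.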

I also carefully went through the logic used in the second phase, and corrected the errors.

Then, using a box of $35$ a.u. and $71$ nodes per side of the cube (i.e., $328,509$ nodes in the interior region of the domain), and using just $1000$ sampled nodes out of them, I now found that the bonding energy was $-2.67$ a.u. Quite satisfactory (to me!)

8. Finally, a word about the dirty tricks department:

I happened to observe that with some choices of physical box size and computational mesh size, the bonding energy could go as low as $-3.2$ a.u. or even lower.

What explains such a behaviour? There is this range of results right from $-2.1$ a.u. to $-2.67$ a.u. to $-3.2$ a.u. …Note once again, the actual figure is: $-2.90$ a.u.

So, the computational results aren’t only on the higher side or only on the lower side. Instead, they form a band of values on both sides of the actual value. This is both good news and bad news.

The good-and-bad news is that it’s all a matter of making the right numerical choices. Here, I will mention only two or three considerations.

As one consideration: to get consistent results across various domain sizes and mesh sizes, what matters is the physical distance represented by each cell in the mesh. If you keep this in mind, then you can get results that fall in a narrow band. That’s a good sign.

As another consideration, the box size matters. In reality, there is no box, and the wavefunction extends to infinity. But a technique like FDM requires a box. (There are other numerical techniques that can work with infinite domains too.) Now, if you use a very large box, then the Coulomb well looks just like the letter ‘T’: no curvature is captured with any significance. With a lot of the physical region in which the PE portion looks relatively flat, the role played by the nuclear attraction becomes less significant, at least in the numerical work. In short, the atom in a box approaches a free-particle-in-a-box scenario! On the other hand, a very small box implies that each electron is screening the nuclear potential at almost all times. In effect, it’s as if you were modelling an H$^{-}$ ion rather than an He atom!

As yet another consideration: The policy for choosing the depth of the potential energy matters. A concrete example might help.

Consider a $1D$ domain of, say, $5$ a.u. Divide it using $6$ nodes, i.e., with a node spacing of $1$ a.u. Put a proton at the origin, and compute the electron’s PE (in magnitude). At the distance of $5$ a.u., the PE is $1.0/5.0 = 0.2$ a.u. At the node right next to the singularity, the PE is $1$ a.u. What finite value should you give to the PE at the nucleus itself? Suppose, following the half cell-side rule, you give it the value of $1.0/0.5 = 2$ a.u. OK.

Now refine the mesh, say by having $11$ nodes go over the same physical distance, i.e., a node spacing of $0.5$ a.u. The physically extreme node retains the same value, viz. $0.2$ a.u. But the node next to the singularity now has a PE of $1.0/0.5 = 2$ a.u., and the half cell-side rule now gives a value of $1.0/0.25 = 4.0$ a.u. at the nucleus.

If you plot the two curves using the same scale, the differences are especially striking. In short, mesh refinement alone (keeping the same domain size) has resulted in keeping the same PE at the boundary but jacking up the PE at the nucleus’ position. Not only that, the PE field now has a more pronounced curvature over the same physical distance. And eigenvalue problems are markedly sensitive to the curvature in the PE.
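
The arithmetic of this worked example can be checked in a few lines (magnitudes only; the finer mesh is taken to have a node spacing of $0.5$ a.u.):

```python
# Worked example in code: under the half-cell rule, mesh refinement alone
# deepens the capped PE at the nucleus while the far-end PE stays put.
import numpy as np

def pe_profile(domain, n_nodes):
    x = np.linspace(0.0, domain, n_nodes)
    h = x[1] - x[0]
    r = np.maximum(x, h / 2.0)     # half-cell rule at the singularity
    return 1.0 / r                 # Coulomb magnitude, in a.u.

coarse = pe_profile(5.0, 6)        # node spacing 1.0 a.u.
fine   = pe_profile(5.0, 11)       # node spacing 0.5 a.u.

print(coarse[0], coarse[-1])       # 2.0 at the nucleus, 0.2 at the far end
print(fine[0], fine[-1])           # 4.0 at the nucleus, 0.2 at the far end
```

Plotting the two arrays on the same scale shows the jacked-up well depth and the sharper curvature at a glance.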

Now, realize that tweaking this one parameter alone can make the simulation zoom on to almost any value you like (within a reasonable range). I can always choose this parameter in such a way that even a relatively crude model could come to reproduce the experimental value of $-2.9$ a.u. very accurately—for energy. The wavefunction may remain markedly jagged. But the energy can be accurate.

Every computational engineer/scientist understands such matters, especially those who work with singularities in fields. For instance, all computational mechanical engineers know how the stress values can change by an order of magnitude or more, depending on how you handle the stress concentrators. Singularities form a hard problem of computational science & engineering.

That’s why, what matters in computational work is not only the final number you produce. What matters perhaps even more are such things as: whether the method works well in terms of stability; the trends in the accuracy values (rather than their absolute values); whether the method can theoretically accommodate some more advanced techniques easily or not; how it scales with the size of the domain and with mesh refinement; etc.

If a method does fine on such counts, then the sheer accuracy number by itself does not matter so much. We can still say, with reasonable certainty, that the very theory behind the model must be correct.

And I think that’s what my yesterday’s result points to. It seems to say that my theory works.

9. To wind up…

Despite all my doubts, I always thought that my approach is going to work out, and now I know that it does—nay, it must!

The $3$-dimensional $\Psi$ fields can actually be seen to be pushing the particles, and the trends in the numerical results are such that the dynamical assumptions I introduced for calculating the motions of the particles must be correct too. (Another reason for having confidence in the numerical results is that the dynamical assumptions are very simple, and so it’s easy to think through how they move the particles!) At the same time, though I didn’t implement it, I can easily see that the anti-symmetry property of at least the $2$-particle systems definitely comes out directly. The physical fields are $3$-dimensional, and the configuration space comes out as a mathematical abstraction from them. I didn’t specifically implement any program to show detection probabilities, but I can see that they are going to come out right, at least for $2$-particle systems.

So, the theory works, and that matters.

Of course, I will still have quite some work to do. Working out the remaining aspects of the spin, for one thing. A three-interacting-particles system would also be nice to work through and to simulate. However, I don’t know which system I could/should pick up. So, if you have any suggestions for simulating a $3$-particle system, with some well known results to compare against, then do let me know. Yes, there still are chances that I might need to tweak the theory a little bit here and a little bit there. But the basic backbone of the theory, I am now quite confident, is going to stand as is.

OK. One last point:

The physical fields of $\Psi$, over the physical $3$-dimensional space, have primacy. Due to the normalization constraint, in real systems, there are no Dirac’s delta-like singularities in these wavefunctions. The singularities of the Coulomb field do enter the theory, but only as devices of calculations. Ontologically, they don’t have a primacy. So, what primarily exist are the aetherial, complex-valued, wavefunctions. It’s just that they interact with each other in such a way that the result is as if the higher-level $V$ term were to have a singularity in it. Indeed, what exists is only a single $3$-dimensional wavefunction; it is us who decompose it variously for calculational purposes.

That’s the ontological picture which seems to be emerging. However, take this point with a pinch of salt; I still haven’t pursued threads like these; I have been too busy just implementing code, debugging it, and finding and comparing results. …

Enough. I will start writing the theory document some time in the second half of the next week, and will try to complete it by mid-February. Then, everything will become clear to you. The cleaned up and reorganized Python scripts will also be provided at that time. For now, I just need a little break. [BTW, if in my …err…“exuberance” online last night, if I have offended someone, my apologies…]

For obvious reasons, I think that I will not be blogging for at least two weeks…. Take care, and bye for now.

A song I like:

(Western, pop): “Lay all your love on me”
Band: ABBA

[A favourite since my COEP (UG) times. I think I looked up the words only last night! They don’t matter anyway. Not for this song, and not to me. I like its other attributes: the tune, the orchestration, the singing, and the sound processing.]

History:
— 2021.01.14 21:01 IST: Originally published
— 2021.01.15 16:17 IST: Very few, minor, changes overall. Notably, I had forgotten to type the powers of the terms in the illustrative polynomial for the trial function (in the section on variational methods), and now corrected it.