# Finding a cozy n comfy enough a spot…

Update on 2021.02.02:

I have made a couple of inline updates in sections 5 and 7 below.

I have already begun cleaning up and reorganizing the code, and writing the boring document. The work so far has come along pretty well. I have by now achieved a satisfactory level of consistency in the numerical results for the hydrogen atom in a $3D$ box.

As indicated in my last post here, I had found that it’s more useful to focus on the cell-side of the mesh rather than on the physical size of the box or the number of nodes per side of the cube.

Now, given below are some details of the further, systematic trials which I conducted in order to arrive at optimum ranges for the numerical parameters.

Since the analytical solution is available only for the hydrogenic atoms (i.e. systems with a positively charged nucleus and just one electron, e.g. the hydrogen atom and the He+ ion), these systematic studies were conducted only for them.

If you came here expecting that I might have something to say about the reproducibility for the helium atom, then well, you will have to wait for some 2–3 weeks. The nature of the issues themselves is like that. You can’t hurry things like these too much, as the studies below sure bring out.

So, anyway, here are the highlights of the systematic studies I conducted and some of the representative results.

All the results reported in this post are in the atomic units.

1. Finitization policy to replace the singularity of the Coulomb field:

In my last post here, I had mentioned that in FDM, we have to use a finite value in place of the $-\infty$ at the nucleus. As a necessary consequence, we have to adopt some policy for this finitization.

While conducting my studies, now I found that it is better to frame this policy not in terms of a chosen fraction of the cell-side, but in terms of a certain relevant datum and a multiplying factor. The better procedure is this:

Whatever the degree of mesh refinement, having constructed the mesh, calculate the always-finite PE value at the FDM node right next to the singularity. Then, multiply this PE value by a certain factor, and use the product in place of the theoretically $-\infty$ value at the singular node. Let’s give a name to this multiplying factor; let’s call it the Coulomb Field’s Singularity Finitization Factor (“CFSF” for short).

Notice that using this terminology, a CFSF value of $2.0$ turns out to be exactly the same as using the half cell-side rule. However, framing the finitization policy in terms of the CFSF factor has the advantage that it makes it easier to compare the differences in the relative sharpness of the potential well at different mesh-refinement levels.
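As a concrete sketch, here is what the CFSF policy could look like on a 1D mesh. This is my own illustrative code, not the author’s; the function and variable names are assumptions:

```python
import numpy as np

def finitize_potential(V, singular_idx, cfsf=2.0):
    """Replace the theoretically infinite PE at the singular node by
    CFSF times the (always finite) PE at the node right next to it."""
    V = V.copy()
    V[singular_idx] = cfsf * V[singular_idx + 1]
    return V

# 1D illustration: nucleus exactly at a node, uniform cell-side dx
dx = 0.2
i0 = 50
x = dx * (np.arange(101) - i0)      # x[i0] == 0.0 exactly
V = np.zeros_like(x)
mask = x != 0.0
V[mask] = -1.0 / np.abs(x[mask])    # hard Coulomb away from the nucleus
V = finitize_potential(V, i0, cfsf=2.0)
# CFSF = 2.0 reproduces the half cell-side rule: V[i0] == -1/(dx/2)
```

With CFSF $= 2.0$ this is numerically identical to the half cell-side rule; values like $1.8$ or $2.2$ change only the depth at the single singular node, which is exactly the sensitivity discussed next.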

OK.

I then found that, while using FDM, the eigenvalue solver is very sensitive to even small variations in the value of CFSF.

If you use a CFSF value of $1.8$, then it turns out that the PE well does not go down enough in the neighbourhood of the singularity, and therefore, the reported ground-state eigenvalues can easily go to about $-0.45$, $-0.2$, or even worse. Refining the mesh doesn’t help—within the domain and mesh sizes which are both practicable on my laptop and relevant for the H and He atom modelling. Note, the analytical solution is: $-0.5$, exactly.

Conclusion: Using even a slightly lower CFSF spoils the results.

OTOH, if you use a CFSF of $2.2$ to $2.5$, then the ground-state energy can go lower than the exact value of $-0.5$. Now, this is a sure-shot indication that your numerical modelling has gone wrong.

In general, with FDM, you would expect that with mesh refinement, the convergence in the energy values would be not just monotonic but also one-sided, and also that the convergence would occur from “above” (because the energy values here are negative). In other words, if my understanding of the theory of numerical analysis is correct, then a properly done (meaningful) numerical simulation cannot produce energies below the analytical solution of $-0.5$.

So, clearly, using a CFSF value even slightly greater than $2.0$ is bad for the health of the numerical simulations.

In the earlier trials reported in the last post, I had simply guessed that the value of $2.0$ might be good enough for my initial trials. Now it turns out that my computational modeller’s intuition was pretty much on target—or at least, that I was plain lucky! The CFSF value of $2.0$ indeed happens to be quite the best value to choose, given the rest of the parameters like the cell-side, the domain size, and the rest of the details of this problem (viz., the strength of the Coulomb singularity, the nature of the Schrodinger equation, the use of uniform and structured meshes, the FDM discretization, etc.).

2. Simple-minded mesh refinement doesn’t produce consistent results:

Suppose you keep the domain size fixed at, say, $20.0$, and vary the mesh refinement levels.

Now, your naive expectation might be that as you refine the mesh by increasing the number of nodes per side of the cube, you should get more and more accurate results. That’s our usual experience with problems like diffusion in continua, and even for problems like convection in fluids.

However, the category of the QM problems is different! Here, we have an eigenvalue problem that must be solved with a singular potential field. The naive expectations built on simple problems like the Poisson-Laplace equation or the diffusion equation, go for a toss. Harmonic analysis might still apply in some form (frankly I don’t even know if it does!), but the singularity sure plays tricks!

This is an ugly fact, and frankly, I had not foreseen it. But it’s there. I had to keep reminding myself of the different nature of the eigenvalue problem together with the singular fields.

As you refine the mesh more and more, the absolute value of the PE at the node right next to the point of singularity increases without bound! This fact mandates that the finite value we (more or less) arbitrarily choose to use in place of the actually infinite value at the singular point has itself to increase further too.

But, for some reasons not known to me (but which by now do feel vaguely reasonable!) the eigenvalue solver begins to experience difficulties with such increases in the absolute value of the PE value at the singularity. Roughly, the trouble begins to happen as the minimum potential energy (at the singular node) goes below $-20$ or so. In fact, I even found that a highly refined mesh might actually report a positive value for the ground-state energy—no bonding but, on the contrary, a repulsion of the electron!
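The unbounded deepening can be seen directly. Assuming the half cell-side (CFSF $= 2.0$) policy described earlier in this post, the nearest node sits at a distance $\Delta x$ from the nucleus, so the capped value is simply $-2/\Delta x$:

```python
# Depth of the capped PE well under mesh refinement, assuming the
# half cell-side (CFSF = 2.0) policy: capped depth = 2 * (-1/dx) = -2/dx.
depths = {dx: -2.0 / dx for dx in (0.4, 0.2, 0.1, 0.05, 0.025)}
print(depths)   # {0.4: -5.0, 0.2: -10.0, 0.1: -20.0, 0.05: -40.0, 0.025: -80.0}
```

Given the observation above that trouble begins once the minimum PE goes below roughly $-20$, this little table suggests that cell-sides much below $0.1$ start becoming risky for this particular setup.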

3. Wavefunction fields are important in my new approach, but they don’t always converge to the analytical solution very well!:

With a reasonable level of mesh refinement, the ground-state energy does monotonically approach the exact figure of $-0.5$. However, I’ve found that convergence in energy is not necessarily accompanied by an equally good convergence trend in the absolute values of the wavefunction!

In the H-atom, for the ground-state analytical solution, the absolute value of the wavefunction has its maximum right at the nucleus; the wavefunction field forms a cusp at the nucleus, in fact. The analytical value for $\psi(x)$-max goes like: $0.564189584\dots$. (That’s because in the atomic units, the Bohr radius $a_0$ is chosen to be exactly equal to $1$, and so, at $r = 0$, the ground-state wavefunction for the H-atom becomes $\psi(x_{\text{at nucleus}}) = 1/\sqrt{\pi}$.)
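Note that for comparing nodal values against this analytical figure, the unit-norm eigenvector returned by a solver has to be rescaled to the continuum normalization first. A minimal sketch of such a helper (my own code, assuming a uniform cubic mesh; not the author’s):

```python
import numpy as np

def psi_max(eigvec, dx):
    """Max |psi| of a solver-returned eigenvector, rescaled from unit
    vector norm to the continuum normalization
    sum(|psi|^2) * dx^3 = 1 on a uniform 3D mesh of cell-side dx."""
    psi = eigvec / np.sqrt(np.sum(np.abs(eigvec) ** 2) * dx**3)
    return np.abs(psi).max()

# Analytical psi-max for the H-atom ground state, for reference:
print(1.0 / np.sqrt(np.pi))   # 0.5641895835477563
```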

With mesh refinement, even as the energy is nicely converging to something like $-0.4938884$ (against $-0.5$), the $\psi$-max might still be lingering around a lower figure like $0.516189$. The $\psi$-max values converge more slowly, and their convergence shows the opposite trend!

For relatively coarse meshes (i.e. high $\Delta x$ of the FDM mesh), the $\psi$-max value is actually much higher than the analytical solution; it can be as bad as $3.276834$ or $1.393707$. As you refine the mesh, the values do begin to fall and approach the analytical solution.

However, with further mesh refinement, the $\psi$-max values continue to fall! They cross the analytical solution level of $0.564189584$, and still continue to fall further! And this behaviour occurs even as the energy result is still approaching the exact solution in the nice and expected monotonic manner.

So, the trouble is: choosing the right mesh size is actually a trade-off! You have to sacrifice some convergence on the energy number so as to have a good (reliable) value for the $\psi$-max measure.

The trouble doesn’t stop there; see the next section.

4. Energies for the excited-states don’t always come out very well:

With appropriately high levels of mesh-refinement, the ground-state energy might be showing good convergence trends. Even the $\psi$-max values might be good enough (like $0.52$ or so). But the energy and/or $\psi$-max for the first excited state can still easily give trouble.

The energy for the first excited state for the hydrogen atom is, by analytical solution, $-0.125$, exactly.

The numerical values, when the simulation is working right, could be like $-0.11$, or even better, say $-0.123$, or thereabout. But that happens only when the mesh is of the intermediate refinement (the cell-side is neither too small nor too large).

However, with a more refined mesh (smaller cell-sides), the PE well can remain more or less rightly shaped for the ground-state energy, but it can still become too deep for the first-excited state energy! The first excited state energy can suddenly get degraded to a value like $-0.04471003$.

Indeed, there seems to be some kind of a numerical compensation going on in between the $\psi$-max values and the energy values, especially for the first-excited state energies. The ground-state energies remain much better, in relative terms. (If the mesh refinement is very high, even the ground-state energy goes off the track to something like $-0.2692952$ or even positive values. That’s what I meant by “appropriately” high levels of mesh refinement.)

I didn’t compare the numerical results with the analytical solutions for energies or $\psi$-max values for second-excited states or higher. Computation of the bonding energy makes reference only to the ground state, and so, I stopped my exploration of this side of the FDM + eigenvalue solver behaviour at this stage.

5. Atomic sizes reported by the numerical modeling show very good trends:

Another important consideration in my new approach has to do with the atomic radius of the atoms being modelled (hydrogen and helium).

After optimizing the mesh refinement (i.e., effectively, the cell-side), I conducted a series of numerical trials using different domain sizes (from $7.5$ through $40.0$), and implemented a rough-and-ready code to estimate the following measure:

The side of the nucleus-centered sub-cube in which roughly $95 \%$ (or $99 \%$) of the probability cloud is contained.

This size can be taken as a good measure for the atomic size.

In the above working definition, I say roughly $95\%$, because I didn’t care to interpolate the wavefunction fields in between their nodal values. What this means is that the side of the sub-cube effectively changes only in integer steps, and therefore, the percentage of the probability contained in the sub-cube may not be exactly $95\%$; it could be $97.5\%$ for one domain size, and $95.3\%$ for another, just to pick some numbers.

But even while using this rough and ready measure (and implementation), I found that the results were quite meaningfully consistent.
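A rough sketch of such a sub-cube measure is easy to write down. This is my own reconstruction of the idea, not the author’s code; the function name and the Gaussian stand-in used for the quick check are assumptions:

```python
import numpy as np

def subcube_side(psi, dx, fraction=0.95):
    """Side of the smallest nucleus-centred sub-cube holding at least
    `fraction` of the probability. `psi` is an n x n x n nodal array with
    the nucleus at the central node; no internodal interpolation, so the
    side grows only in integer steps of the cell-side."""
    prob = np.abs(psi) ** 2
    prob = prob / prob.sum()            # crude nodal normalization
    c = psi.shape[0] // 2
    for k in range(c + 1):
        if prob[c-k:c+k+1, c-k:c+k+1, c-k:c+k+1].sum() >= fraction:
            return 2 * k * dx
    return (psi.shape[0] - 1) * dx

# Quick check on a Gaussian stand-in for a wavefunction:
n, L = 51, 10.0
dx = L / (n - 1)
g = np.linspace(-L/2, L/2, n)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
psi = np.exp(-(X**2 + Y**2 + Z**2) / 2.0)
side = subcube_side(psi, dx)
```

The integer-step growth of the returned side is exactly the roughness acknowledged above.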

But why conduct these trials?

Well, realize that (1) the simulation box has a finite size, and (2) the Dirichlet conditions are being imposed at all the boundary nodes. Given these two constraints, the solution is going to show boundary-/edge-effects, i.e., the solution is going to depend on the domain size.

Now, in my approach, the spread of the probability cloud enters the calculations in a crucial manner. Numerically “extracting” the size of the simulated atom was, therefore, an important part of optimizing the simulations.

The expected behaviour of the above-mentioned “size effect” was that as the domain size increases, the calculated atomic size, too, should increase. The question was: Were these differences in the numerically determined sizes important enough? Did they vary too much? If yes, by how much? The following is what I found:

First, I fixed the domain size (cube side) at $10.0$, and varied the mesh refinement (from roughly $41$ nodes per side to $121$ and $131$). I found that the calculated atomic sizes for the hydrogen atom varied, but within a relatively small range—which was a big and happy surprise to me. The calculated size went from $5.60$ while using a coarse mesh (requiring an eigenvalue computation time of about $10$ seconds), to a value like $5.25$ for an intermediate refinement of the mesh (exe. time 2 min. 32 sec., i.e. 152 seconds), to $5.23$ for as fine a mesh as my machine can handle ($131 \times 131 \times 131$, which required an exe. time of about 20 minutes, i.e. 1200 seconds, for each eigenvalue computation call). Remember, all these results were for a domain size of $10.0$.

Next, I changed the domain cube side to $15.0$, and repeated the trials, for various levels of mesh refinements. Then, ditto, for the domain side of $20.0$ and $40.0$.

Collecting the results together:

• Domain side $10.0$:
    • coarse: $5.60$
    • intermediate: $5.25$
    • fine: $5.23$
• Domain side $15.0$:
    • coarse: $6.0$
    • intermediate: $5.62$
• Domain side $20.0$:
    • coarse: $6.40$
    • intermediate: $5.50$
    • fine: $5.67$
• Domain side $40.0$:
    • coarse: $4.0$
    • intermediate: $5.5$
    • fine: $6.0$
    • very fine: $6.15$

You might be expecting very clear-cut trends, but that’s not the case here. However, remember: given the trickiness of the eigenvalue solver in the presence of a “singular” PE well, not to mention the roughness of the size-estimation procedure (only integer-sized sub-cubes considered, and no interpolation of $\psi$ to internodal values), a monotonic sort of behaviour is simply not to be expected here.

Indeed, if you ask me, these are pretty good trends, even if they are only for the hydrogen atom.

Note, for the helium atom, my new approach would require making thousands of eigenvalue computation calls. So, at least on this count of atomic radius computation, the fact that even the coarse or mid-level mesh refinement results didn’t vary much (they were in the range of $5.25$ to $5.6$) was very good. Meaning, I don’t have to sacrifice a lot of accuracy due to this one factor taken by itself.

For comparison, the atomic size (diameter) for the hydrogen atom given in the literature (Wiki), when translated into atomic units, comes out variously as: (1) $0.94486306$, using some “empirical” curve-fitting to some indirect properties of gases; (2) $4.5353427$, using the van der Waals criterion; and (3) $2.0031097$, using “calculations” (whose basis or criteria I do not know in detail).

Realize that the van der Waals measure is closest to the criterion used by me above. Also, it is only to be expected that, with the numerical approximations of FDM: just as the FDM ground-state energy comes out algebraically greater (it does: say $-0.49$ vs. the exact datum of $-0.5$), and the FDM $\psi$-max measure comes out smaller (it does: say $0.52$ vs. the analytical solution of $\approx 0.56$), so, for the same reasons, the rough-and-ready estimated atomic size should come out greater (it does: say $5.25$ to $5.67$ as the domain size increases from $10.0$ to $40.0$, the van der Waals value being $4.54$).

Inline update on 2021.02.02 19:26 IST: After the publication of this post, I compared the above-mentioned results with the analytical solution. I now find that the sizes of the sub-cubes found using FDM, and using the analytical solution for the hydrogen atom, come out as identical! This is very happy news. In other words, making comparisons with the van der Waals size and the other measures was not so relevant anyway; I should have compared the atomic sizes (found using the sub-cubes method) with the best datum, which is the analytical solution! To put this finding in some perspective, realize that the FDM-computed wavefunctions still do differ a good deal from the analytical solution, but the volume integral for an easy measure like $95\%$ does turn out to be the same. The following provisos apply to this finding: The good match between the analytical solution and the FDM solution is valid only (i) for the range of domain sizes considered here (roughly, $10$ to $40$), not for smaller box sizes (though the two solutions would match even better for bigger boxes), and (ii) when using the Simpson procedure for numerically evaluating the volume integrals. I might as well also note that the Simpson procedure is, relatively speaking, pretty crude. As the sizes of the sub-cubes go on increasing, the Simpson procedure can give volume integrals in excess of $1.0$ for both the FDM and the analytical solutions. Inline update over.

These results are important because now I can safely use even a small sized domain like a $10.0$-side cube, which implies that I can use a relatively crude mesh of just $51$ nodes per side too—which means a sufficiently small run-time for each eigenvalue function call. Even then, I would still remain within a fairly good range on all the important parameters.

Of course, it is already known with certainty that the accuracy of the bonding energy for the helium atom is thereby going to be adversely affected. The accuracy will suffer, but the numerical results will rest on a sweet zone of all the relevant numerical parameters, as validated against the hydrogen atom. So the numerical results, even for the helium atom, should have greater reliability.

Considerations like conformance to expected convergence behaviour, stability, and reliability are far more important in numerical work of this nature. As to sheer accuracy itself, see the next section too.

6. Putting the above results in perspective:

All in all, for the convergence behaviour for this problem (eigenvalue-eigenvector with singular potentials) there are no easy answers. Not even for just the hydrogen atom. There are trade-offs to be made.

However, for computation of bonding energy using my new approach, it’s OK even if a good trade-off could be reached only for the ground-state.

On this count, my recent numerical experimentation seems to suggest that using a mesh cell-side of $0.2$ or $0.25$ should give the most consistent results across a range of physical domain sizes (from $7.5$ through $30.0$). The atomic sizes extracted from the simulations also show good behaviour.

Yes, all these results are only for the hydrogen atom. But it was important that I understand the solver behaviour well enough. It’s this understanding which will come in handy while optimizing for the helium atom—which will be my next step on the simulation side.

The trends for the hydrogen atom would be used in judging the results for the bonding energy of the helium atom.

7. The discussed “optimization” of the numerical parameters is strictly for my laptop:

Notice, if I were employed in a Western university or even at an IIT (or in an Indian government/private research lab), I would have easy access to supercomputers. In that case, much of this study wouldn’t be so relevant.

The studies regarding the atomic size determination, in particular, would still be necessary, but the results are quite stable there. And it is these results which tell me that, had I had access to powerful computational resources, I could have used larger boxes (which would minimize the edge-/size-effect due to the finite size of the box), and I could have used much, much bigger meshes, while still maintaining the all-important mesh cell-side parameter near the sweet spot of about $0.20$ to $0.25$. So, yes, optimization would still be required. But I would be doing it at a different level, and much faster. And, with much better accuracy levels to report for the helium atom calculations.

Inline update on 2021.02.02 19:36 IST: Addendum: I didn’t write this part very well, and a misleading statement crept in. The point is this: If my computational resources allowed me to use very big meshes, then I would also explore cell-sides that are smaller than the sweet spot of $0.20$ to $0.25$. I’ve been having a hunch that the eigenvalue solver would still not show the kind of degradation due to a very deep PE well, provided that the physical domain size also were made much bigger. In short, if very big meshes are permissible, then there is a possibility that another sweet spot at smaller cell-sides could be reached too. There is nothing physical about the $0.20$ to $0.25$ range alone; that’s the point. Inline update over.

The specifics of the study mentioned in this post were largely chosen keeping in mind the constraint of working within the limits of my laptop.

Whatever accuracy levels I do eventually end up getting for the helium atom using my laptop, I’ll be using them not just for my planned document but also for my very first arXiv-/journal- paper. The reader of the paper would, then, have to make a mental note that my machine could support a mesh of only $131$ nodes per side at its highest end. For FDM computations, that still is a very crude mesh.

And, indeed, for the reasons given above, I would in fact be reporting the helium atom results for meshes of between $41$ and $81$ nodes per side of the cube, not even $131$ nodes. All the rest of the parameter choices were made keeping this limitation in view.

8. “When do you plan to ship the code?”

I should be uploading the code eventually. It may not be possible to upload the “client-side” scripts for all the trials reported here (simply because once you upload some code, the responsibility to maintain it comes too!). However, exactly the same “server”- or “backend”-side code will surely be distributed, in its entirety. I will also give some indication of the kind of code-snippets I used in order to implement the above-mentioned studies. So, all in all, it should be possible for you to conduct the same/similar trials and verify the trends given above.

I plan to clean up and revise the code for the hydrogen atom a bit further, finalize it, and upload it to my GitHub account within, say, a week’s time. The cleaned-up and revised version of the helium-atom code will take much longer, maybe 3–4 weeks. But notice, the helium-atom code would be giving calls to exactly the same library as that for the hydrogen atom.

All in all, you should have a fairly good amount of time to go through the code for the $3D$ boxes (something which I have never uploaded so far), run it, run the above kind of studies on the solid grounds of the hydrogen atom, and perhaps even spot bugs or suggest better alternatives to me. The code for the helium atom would arrive by the time you run through this gamut of activities.

So, hold on just a while, may be just a week or even less, for the first code to be put on the GitHub.

On another note, I’ve almost completed compiling a document on the various sets of statements of the postulates of QM. I should be uploading it soon, too.

OK, so look for an announcement here and on my Twitter thread, regarding the shipping of the basic code library and the user-script for the hydrogen atom, say, within a week’s time. (And remember, this all comes to you without any charge to you! (For that matter, I am not even in any day-job.))

A song I like:

(Hindi) दिल कहे रुक जा रे रुक जा (“dil kahe ruk jaa re ruk jaa”)
Lyrics: Sahir Ludhiyanvi
Music: Laxmikant-Pyarelal
Singer: Mohammed Rafi

[Another favourite right from my high-school days… A good quality audio is here [^]. Many would like the video too. A good quality video is here [^], but the aspect-ratio has gone awry, as usual! ]

History:
— 2021.01.30 17:09 IST: First published.
— 2021.02.02 20:04 IST: Inline updates to sections 5 and 7 completed. Also corrected a couple of typos and streamlined just a few sentences. Now leaving this post in whatever shape it is in.

# Yesss! I did it!

Last evening (on 2021.01.13 at around 17:30 IST), I completed the first set of computations for finding the bonding energy of a helium atom, using my fresh new approach to QM.

These calculations are still pretty crude, both in technique and implementation. Reading through the details given below, any competent computational engineer/scientist would immediately see just how crude they are. However, I hope he would also see why I can still say that these initial results may be taken as definitely validating my new approach.

It would be impossible to give all the details right away. So, what I give below are some important details and highlights of the model, the method, and the results.

For that matter, even my Python scripts are currently in a pretty disorganized state. They are held together by duct-tape, so to say. I plan to rearrange and clean up the code, write a document, and upload them both. I think it should be possible to do so within a month’s time, i.e., by mid-February. If not, say due to the RSI, then probably by February-end.

Alright, on to the details. (I am giving some indication about some discarded results/false starts too.)

1. Completion of the theory:

As far as the development of my new theory goes, there were many tricky issues that had surfaced ever since I began trying to simulate my new approach, starting in May–June 2020. The crucially important issues were the following:

• A quantitatively precise statement on how the mainstream QM’s $\Psi$, defined as it is over the $3N$-dimensional configuration space, relates to the $3$-dimensional wavefunctions I had proposed earlier in the Outline document.
• A quantitatively precise statement on how the wavefunction $\Psi$ makes the quantum particles (i.e. their singularity-anchoring positions) move through the physical space. Think of this as the “force law”, and then note that if a wrong statement is made here, then the entire system dynamics/evolution has to go wrong. Repercussions would exist even in the simplest system having two interacting particles, like the helium atom. The bonding energy calculations for the helium atom are bound to go wrong if the “force law” is wrong. (I don’t actually calculate the forces, but that’s a different matter.)
• Also to be dealt with was this issue: Ensuring that the anti-symmetry property for the indistinguishable fermions (electrons) holds.

I had achieved a good clarity on all these (and similar other) matters by the evening of 5th January 2021. I also tried to do a few simulations, but ran into problems. Both these developments were mentioned via an update at iMechanica on the evening of 6th January 2021, here [^].

2. Simulations in $1D$ boxes:

By “box” I mean a domain having infinite potential energy walls at the boundaries, and imposition of the Dirichlet condition of $\Psi(x,t) = 0$ at the boundaries at all times.

I did a rapid study of the problems (mentioned in the iMechanica update). The simulations for this study involved $1D$ boxes of lengths from $5$ a.u. to $100$ a.u. ($1$ a.u. of length $= 1$ Bohr radius.) The mesh sizes varied from $5$ nodes to $3000$ nodes. Only regular, structured meshes with a uniform cell-side (i.e. a constant inter-nodal distance, $\Delta x$) were used, not non-uniform meshes (such as $\log$-based ones).

I found that the discretization of the potential energy (PE) term was indeed at the root of the problems. Theoretically, the PE field is singular. I have been using FDM. Since an infinite potential cannot be handled using FDM, you have to implement some policy for assigning a finite value to the maximum depth of the PE well.

Initially, I chose the policy of setting the max. depth to the value which would exist at a distance of half the cell-width. That is to say, $V_S \approx V(\Delta x/2)$, where $V_S$ denotes the PE value at the singularity (theoretically infinite).

The PE was calculated using the Coulomb formula, given (in magnitude) as $V(r) = 1/r$ when one of the charges is fixed, and as $V_1(r_s) = V_2(r_s) = 1/(2r_s)$ for two interacting and moving charges; the attractive electron–nucleus term, of course, enters with a negative sign. Here, $r_s$ denotes the separation between the interacting charges. The rule of the half cell-side was used for making the singularity finite. The field so obtained will be referred to as the “hard” PE field.

Using the “hard” field was, if I recall it right, quite OK for the hydrogen atom. It gave the bonding energies (ground-state) ranging from $-0.47$ a.u. to $-0.49$ a.u. or lower, depending on the domain size and mesh refinement (i.e. number of nodes). Note, $1$ a.u. of energy is the same as $1$ hartree. For comparison, the analytical solution gives $-0.5$, exactly. All energy calculations given here refer to only the ground-state energies. However, I also computed and checked up to 10 eigenvalues.

Initially, I tried both dense and sparse eigenvalue solvers, but eventually settled on only the sparse solvers. The results were indistinguishable (at least numerically). I used SciPy’s wrappers for the various libraries.
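For reference, a minimal sketch of this kind of 1D setup: hard Coulomb potential with the half cell-side cap, and SciPy’s sparse eigensolver. This is my own reconstruction under those assumptions, not the author’s actual script:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 401                               # nodes on the 1D mesh
L = 20.0                              # box length, in a.u.
dx = L / (n - 1)
x = dx * (np.arange(n) - n // 2)      # nucleus exactly at the central node

V = np.empty(n)
mask = x != 0.0
V[mask] = -1.0 / np.abs(x[mask])      # "hard" Coulomb, nucleus fixed
V[~mask] = -1.0 / (dx / 2.0)          # half cell-side finitization

# Kinetic term -(1/2) d2/dx2 by central differences; psi is taken to
# vanish just outside the mesh (Dirichlet box walls).
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
H = (-0.5 * lap + sp.diags(V)).tocsc()

E, psi = eigsh(H, k=4, which="SA")    # a few smallest eigenvalues
```

The ground-state figure such a script returns depends strongly on the chosen cap, which is precisely the finitization sensitivity this post keeps running into.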

I am not quite sure whether using the hard potential was always smooth or not, even for the hydrogen atom. I think not.

However, the hard Coulomb potential always led to problems for the helium atom in a $1D$ box (being modelled using my new approach/theory). The lowest eigenvalue was wrong by more than a factor of 10! I verified that the corresponding eigenvector indeed was an eigenvector. So, the solver was giving a technically correct answer, but it was an answer to the as-discretized system, not to the original physical problem.

I therefore tried using the so-called “soft” Coulomb potential, which was new to me, but which looks like a well-known function. I came to know of its existence via the OctopusWiki [^], when I was searching for some prior code on the helium atom. The “soft” Coulomb potential is defined as:

$V = \dfrac{1}{\sqrt{a^2 + x^2}}$, where $a$ is an adjustable parameter, often set to $1$.
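In code this is a one-liner. The formula above is written as a magnitude; for an attractive electron–nucleus term it enters with a negative sign, which this sketch (my own code, not from the post) makes explicit:

```python
import numpy as np

def soft_coulomb(x, a=1.0):
    """Soft Coulomb potential: finite everywhere (-1/a at the origin),
    approaching the hard -1/|x| behaviour for |x| >> a."""
    return -1.0 / np.sqrt(a**2 + x**2)
```

Unlike the capped hard potential, its depth does not change under mesh refinement; the price is a much shallower, broader well near the nucleus.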

I found this potential unsatisfactory for my work, mainly because it gives rise to a more spread-out wavefunction, which in turn implies that the screening effect of one electron for the other electron is not captured well. I don’t recall exactly, but I think that there was this issue of too low ground-state eigenvalues also with this potential (for the helium modeling). It is possible that I was not using the right SciPy function-calls for eigenvalue computations.

Please take the results in this section with a pinch of salt. I am writing about them only after 8–10 days, and I have tried so many variations that I’ve lost track of what went wrong in which scenario.

All in all, I thought that $1D$ box wasn’t working out satisfactorily. But a more important consideration was the following:

My new approach has been formulated in the $3D$ space. If the bonding energy is to be numerically comparable to the experimental value (and not being computed as just a curiosity or computational artifact) then the potential-screening effect must be captured right. Now, here, my new theory says that the screening effect will be captured quantitatively correctly only in a $3D$ domain. So, I soon enough switched to the $3D$ boxes.

3. Simulations of the hydrogen atom in $3D$ boxes:

For both hydrogen and helium, I used only cubical boxes, not parallelepipeds (“brick”-shaped boxes). The side of the cube was usually kept at $20$ a.u. (Bohr radii), which is a length slightly longer than one nanometer ($1.05835$ nm). However, some of my rapid experimentation also ranged over domain lengths from $5$ a.u. to $100$ a.u.

Now, to meshing.

The first thing to realize is that with a $3D$ domain, the total number of nodes $M$ scales cubically with the number of nodes $n$ appearing on a side of the cube. That is to say: $M = n^3$. Bad thing.

The second thing to note is worse: the discretized Hamiltonian operator matrix now has dimensions of $M \times M$. Sparse matrices are now a must. Even then, meshes must remain relatively coarse, else the computation time increases a lot!

The third thing to note is even worse: my new approach requires computing “instantaneous” eigenvalues at all the nodes. So, the number of calls you must make to, say, the eigh() function also goes as $M = n^3$. … Yes, I have the distinction of having invented what ought to be, provably, the most inefficient method to compute solutions to many-particle quantum systems. (If you are a QC enthusiast, now you know that I am a completely useless fellow.) But more on this a bit later.

I didn’t have to write the $3D$ code completely afresh though. I re-used much of the backend code from my earlier attempts from May, June and July 2020. At that time, I had implemented vectorized code for building the Laplacian matrix. In retrospect, however, this was overkill. The system spends more than $99\%$ of its execution time in the eigenvalue function calls alone, so the preparation of the discretized Hamiltonian operator is relatively insignificant; plain Python loops would have done! But since the vectorized code was smaller and a bit more readable, I used it.
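By way of illustration, a vectorized sparse construction along these lines can be sketched with SciPy’s Kronecker products (an illustrative reconstruction, not the author’s actual code):

```python
import scipy.sparse as sp

def laplacian_3d(n, h):
    """3D FDM Laplacian on an n x n x n interior grid (Dirichlet boundaries),
    assembled as a Kronecker sum of 1D second-difference operators."""
    # 1D operator: (f[i-1] - 2 f[i] + f[i+1]) / h^2
    D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    I = sp.identity(n)
    return (sp.kron(sp.kron(D, I), I)
            + sp.kron(sp.kron(I, D), I)
            + sp.kron(sp.kron(I, I), D)).tocsr()

L = laplacian_3d(21, 1.0)
print(L.shape)  # (9261, 9261): the M = n**3 scaling noted above
```

The Kronecker-sum form builds the whole $M \times M$ operator without any Python-level loops, which is the kind of vectorization the paragraph above refers to.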

Alright.

The configuration space for the hydrogen atom is small, there being only one particle; it’s “only” $M$ in size. More importantly, the nucleus being fixed and there being just one particle, I need to solve the eigenvalue equation only once. So, I first put the hydrogen atom inside the $3D$ box, and verified that the hard Coulomb potential gives cool results over a sufficiently broad range of domain sizes and mesh refinements.
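To make this concrete, here is a self-contained sketch of such a one-shot solve (not the author’s script; the $20$ a.u. box follows the text, the half-cell-side finitization is one common policy of the kind discussed in this post, and the coarse $23$-node mesh is chosen only so the example runs quickly):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, side = 23, 20.0                  # nodes per side, box side (a.u.)
h = side / (n - 1)                  # cell side
m = n - 2                           # interior nodes per side (psi = 0 on the boundary)

# interior grid; n is odd, so a node sits exactly on the nucleus at the origin
xs = np.linspace(-side / 2, side / 2, n)[1:-1]
X, Y, Z = np.meshgrid(xs, xs, xs, indexing='ij')
r = np.sqrt(X**2 + Y**2 + Z**2)
V = -1.0 / np.maximum(r, h / 2)     # hard Coulomb PE, finitized by the half-cell rule

# discrete Laplacian as a Kronecker sum, then H = -(1/2) Lap + V
D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m)) / h**2
I = sp.identity(m)
Lap = sp.kron(sp.kron(D, I), I) + sp.kron(sp.kron(I, D), I) + sp.kron(sp.kron(I, I), D)
H = (-0.5 * Lap + sp.diags(V.ravel())).tocsc()

# smallest-algebraic eigenvalue = ground-state energy estimate
E0 = eigsh(H, k=1, which='SA', tol=1e-6, return_eigenvectors=False)[0]
print(E0)  # on this coarse mesh, somewhere in the neighbourhood of -0.5 a.u.
```

With finer meshes (e.g. the $51$ nodes per side mentioned below), the estimate tightens toward the analytical $-0.5$ a.u., which is what makes the hydrogen atom such a convenient validation case.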

However, in comparison with the results for the $1D$ box, the $3D$ box algebraically over-estimates the bonding energy. Note the word “algebraically.” What it means is that if the bonding energy for a H atom in a $1D$ box is $-0.49$ a.u., then with the same physical domain size (say 20 Bohr radii) and the same number of nodes on the side of the cube (say 51 nodes per side), the $3D$ model gives something like $-0.48$ a.u. So, when you use a $3D$ box, the absolute value of energy decreases, but the algebraic value (including the negative sign) increases.

As any good computational engineer/scientist could tell, such a behaviour is only to be expected.

The reason is this: The discretized PE field is always jagged, and so it only approximately represents a curvy function, especially near the singularity. This is how it behaves in $1D$, where the PE field is a curvy line. But in a $3D$ case, the PE contour surfaces bend not just in one direction but in all the three directions, and the discretized version of the field can’t represent all of them taken at the same time. That’s the hand-waving sort of an “explanation.”

I highlighted this part because I wanted you to note that in $3D$ boxes, you would expect the helium atom energies to algebraically overshoot too. A bit more on this, later, below.

4. Initial simulations of the helium atom in $3D$ boxes:

For the helium atom too, the side of the cube was mostly kept at $20$ a.u. Reason?

In the hydrogen atom, the space part of the ground state $\psi$ has a finite peak at the center, and its spread is significant over a distance of about 5–7 a.u. (in the numerical solutions). Then, for the helium atom, there is going to be a dent in the PE field due to screening. In my approach, this dent physically moves over the entire domain as the screening electron moves. To accommodate both their spreads plus some extra room, I thought, $20$ could be a good choice. (More on the screening effect, later, below.)

As to the mesh: As mentioned earlier, the number of eigenvalue computations required is $M$, and the time taken by each such call goes up significantly with $M$. So, initially, I kept the number of nodes per side (i.e. $n$) at just $23$. With the two extreme planes sacrificed to the deity of the boundary conditions, the computations actually took place on a $21 \times 21 \times 21$ mesh. That still means a system of $9261$ nodes!

At the same time, realize how crude and coarse this mesh is: two neighbouring nodes represent a physical distance of almost one Bohr radius! … Who said theoretical clarity must come also with faster computations? Not when it’s QM. And certainly not when it’s my theory! I love to put the silicon chip to some real hard work!

Alright.

As I said, for reasons that will become fully clear only when you go through the theory, my approach requires $M$ separate eigenvalue computations. (In “theory,” it requires $M^2$ of them, but some very simple and obvious symmetry considerations reduce the computational load to $M$.) I then compute the normalized $1$-particle wavefunctions from the eigenvectors. All this computation forms what I call the first phase. I then post-process the $1$-particle wavefunctions to get to the final bonding energy. I call this computation the second phase.

OK, so in my first computations, the first phase involved SciPy’s eigsh() function being called $9261$ times. I think it took something like 5 minutes. The second phase is much faster; it took less than a minute.

The bonding energy I thus got should have been around $-2.1$ a.u. However, I made an error while coding the second phase, and got something different (which I no longer remember, but I think I have not deleted the wrong code, so it should be possible to reproduce this wrong result). The error wasn’t numerically very significant, but it was an error all the same. This status was by the evening of 11th January 2021.

The same error continued also on 12th January 2021, but I think that if the errors in the second phase were to be corrected, the value obtained could have been close to $-2.14$ a.u. or so. Mind you, these are the results with a 20 a.u. box and 23 nodes per side.

In comparison, the experimental value is $-2.9033$ a.u.

As to computations: Hylleraas, a doctoral student back in 1927, used a hand-operated mechanical calculator, and still got to $-2.90363$ a.u.! Nearly a century later, his method and work still remain near the top of the accuracy stack.

Why did my method do so badly? Even more pertinent: how could Hylleraas use just a mechanical calculator, not a computer, and still get such a wonderfully accurate result?

It all boils down to the methods, the tricks, and even the dirty tricks. Good computational engineers/scientists know them, know their uses and limitations, and do not hesitate to build products with them.

But the real pertinent reason is this: The technique Hylleraas used was variational.

5. A bit about the variational techniques:

All variational techniques use a trial function with some undetermined parameters. Let me explain in a jiffy what it means.

A trial function embodies a guess—a pure guess—at what the unknown solution might look like. It could be any arbitrary function.

For example, you could even use a simple polynomial like $y = a_0 + a_1 x + a_2 x^2 + a_3 x^3$ by way of a trial function.

Now, observe that if you change the values of the coefficients $a_0$, $a_1$, etc., then the shape of the function changes. Just assign some random values and plot the results using Matplotlib, and you will see what I mean.

… Yes, you do something similar also in Data Science, but there, the problem formulation is relatively much simpler: You just tweak all the $a_i$ coefficients until the function fits the data. “Curve-fitting,” it’s called.

In contrast, in variational calculus, you don’t do this one-step curve-fitting. You instead substitute the $y$ function into some theoretical equations that have to do with the total energy of the system. You then find an expression which tells how the energy, now expressed as a function of $y$ (which itself is a function of the $a_i$‘s), varies as these unknown coefficients are varied. So, the $a_i$‘s basically act as parameters of the model. Note carefully: the $y$ function is the primary unknown, but in variational calculus, you do the curve-fitting with respect to some other equation.

So, the difference between simple curve-fitting and variational methods is the following. In simple curve-fitting, you fit the curve to concrete data values. In variational calculus, you substitute the curve into some equations (not data), derive further expressions that show how some measure like the energy changes as the parameters are varied, and then adjust the parameters so as to minimize that abstract measure.
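A tiny concrete illustration (a standard textbook result, not taken from the author’s code): for the hydrogen atom, the trial family $\psi(r) = e^{-\alpha r}$ gives the closed-form energy expectation $E(\alpha) = \alpha^2/2 - \alpha$ in atomic units, and the whole variational procedure then reduces to minimizing over the single parameter $\alpha$:

```python
from scipy.optimize import minimize_scalar

# E(alpha) = <psi_alpha| H |psi_alpha> for the hydrogen trial family psi = exp(-alpha r)
E = lambda alpha: 0.5 * alpha**2 - alpha

# minimize the abstract energy measure over the one adjustable parameter
res = minimize_scalar(E, bounds=(0.1, 3.0), method='bounded')
print(res.x, res.fun)  # alpha -> 1, E -> -0.5 a.u. (here, the exact ground state)
```

For hydrogen the trial family happens to contain the exact solution, so the minimum lands exactly on $-0.5$ a.u.; for helium, Hylleraas-style trial functions only approach the true value from above.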

Coming back to the helium atom, there is a nucleus with two protons inside it, and two electrons that go around the nucleus. The nucleus pulls both the electrons, but the two electrons themselves repel each other. (Unlike and like charges.) When one electron strays near the nucleus, it temporarily decreases the effective pull exerted by the nucleus on the other electron. This is called the screening effect. In short, when one electron goes closer to the nucleus, the other electron feels as if the nucleus had discharged a little bit. The effect gets more and more pronounced as the first electron goes closer to the nucleus. The nucleus acts as if it had only one proton when the first electron is at the nucleus. The QM particles aren’t abstractions from the rigid bodies of Newtonian mechanics; they are just singularity conditions in the aetherial fields. So, it’s easily possible that an electron sits at the same place where the two protons of the nucleus are.

One trouble with using the variational techniques for problems like modeling the helium atom is this: the technique models the screening effect using a numerically reasonable but physically arbitrary trial function. Using it can give a very accurate result for the bonding energy, provided that the person building the variational model is smart, as Hylleraas sure was. But the trial function is just guess-work. It can’t be said to capture any physics as such. Let me give an example.

Suppose that some problem from physics is such that a fifth-degree polynomial happens to be the physically accurate form of its solution. However, you don’t know the analytical solution, not even its form.

Now, the variational technique doesn’t prevent you from using a cubic polynomial as the trial function. That’s because, even if you use a cubic polynomial, you can still get to the same total system energy.

The actual calculations are far more complicated, but just as a fake example to illustrate my main point, suppose for a moment that the area under the solution curve is the target criterion (and not a more abstract measure like the energy). Now, by adjusting the height and shape of a cubic polynomial, you can always make it give the right area under the curve. The funny part is this: if the trial function you choose is only cubic, then, as a matter of general principle, it is certain to miss all the information related to the $4$th- and $5$th-order derivatives. So, the solution will have a lot of the higher-order physics deleted from it. It will be a bland solution; something like a ghost of the real thing. But it can still give you the correct area under the curve. If so, it still fulfills the variational criterion.
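Here is a toy numerical version of that fake example (the polynomials are made up purely for illustration): tune the one free parameter of a cubic so that it reproduces the area under a quintic “exact solution,” even though the two curves disagree badly:

```python
from numpy.polynomial import Polynomial as P

true_soln = P([1, 1, 0, -2, 0, 5])   # 1 + x - 2x^3 + 5x^5: the pretend exact solution
base = P([1, 1, 0, -1])              # 1 + x - x^3: a cubic trial shape

def area(p, lo=0.0, hi=1.0):
    """Area under polynomial p over [lo, hi], via the exact antiderivative."""
    q = p.integ()
    return q(hi) - q(lo)

c = area(true_soln) / area(base)     # the single adjustable parameter
fitted = c * base

print(area(fitted) - area(true_soln))  # essentially 0: the "area criterion" is met
print(fitted(1.0), true_soln(1.0))     # yet the curves disagree badly at x = 1
```

The cubic fulfills the integral criterion exactly while being pointwise far from the true quintic, which is the whole point of the analogy.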

Coming back to the use of variational techniques in QM, like Hylleraas’ method:

It can give a very good answer (even an arbitrarily accurate answer) for the energy. But the trial function can still easily miss a lot of physics. In particular, it is known that the wavefunctions (actually, “orbitals”) won’t turn out to be accurate; they won’t depict physical entities.

Another matter: These techniques work not in the physical space but in the configuration space. So, the opportunity of taking what properly belongs to Raam and giving it to Shaam is not just ever-present but even more likely.

An even simpler example is this. Suppose you are given $100$ bricks and asked to build a wall over a given area of the ground. You can arrange them into one big tower within the wall, or two towers, whatever… There would still be, in all, $100$ bricks sitting on the same area of the ground. The shapes may differ; the variational technique doesn’t care for the shape. Yet, realize: having accurate atomic orbitals means getting the shape of the wall right too, not just dumping $100$ bricks on the same area.

6. Why waste time on yet another method, when a more accurate method has been around for some nine decades?

“OK, whatever” you might say at this point. “But if the variational technique was OK by Hylleraas, and if it’s been OK for the entire community of physicists for all these years, then why do you still want to waste your time and invent just another method that’s not as accurate anyway?”

Firstly, my method isn’t an invention; it is a discovery. My calculation method directly follows the fundamental principles of physics through and through. Not a single postulate of the mainstream QM is violated or altered; I have merely added some further postulates, that’s all. These theoretical extensions fit perfectly with the mainstream QM, and using them directly solves the measurement problem.

Secondly, what I talked about was just an initial result, a very crude calculation. In fact, I have already improved the accuracy further; see below.

Thirdly, I must point out a possibility which your question didn’t cover at all. This actually isn’t an either-or situation. It’s not either the variational technique (like Hylleraas’s) or mine. Indeed, it would very definitely be possible to incorporate the more accurate variational calculations as just parts of my own calculations too; it’s easy to show that. That would mean combining “the best of both worlds”. At a broader level, the method would still follow my approach and thus be physically meaningful, but within a carefully delimited scope, trial functions could still be used in the calculation procedures. …For that matter, even FDM doesn’t represent any real physics either. Another thing: even FDM itself can be seen as just one—arguably the simplest—kind of variational technique. So, in that sense, I am already using a variational technique, but only the simplest and crudest one. The theory could easily make use of both meshless and mesh-requiring variational techniques.

I hope that answers the question.

7. A little more advanced simulation for the helium atom in a $3D$ box:

With my computational experience, I knew that I was going to get a good result, even if the actual result was only estimated to be about $-2.1$ a.u.—vs. $-2.9033$ a.u. for the experimentally determined value.

But rather than increasing accuracy for its own sake, on the 12th and 13th January, I came to focus more on improving the “basic infrastructure” of the technique.

Here, I now recalled the essential idea behind the Quantum Monte Carlo method, and proceeded to implement something similar in the context of my own approach. In particular, rather than going over the entire (discretized) configuration space, I implemented code to sample only some points in it. This way, I could use bigger (i.e. more refined) meshes, and get better estimates.
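A minimal sketch of what such sampling might look like (illustrative only, not the author’s code; the mesh numbers match the $71$-node run reported below):

```python
import numpy as np

rng = np.random.default_rng(0)      # seeded only for reproducibility of this sketch
n_interior = 71 - 2                 # 71 nodes per side, minus the two boundary planes
total = n_interior ** 3             # 328509 interior nodes in all
sampled = rng.choice(total, size=1000, replace=False)   # the 1000 sampled nodes

# each sampled flat index maps back to an (i, j, k) position on the interior grid
i, j, k = np.unravel_index(sampled, (n_interior,) * 3)
print(total, sampled.size)  # 328509 1000
```

The eigenvalue solve then runs only at the sampled nodes instead of all $328{,}509$ of them, trading a controlled amount of statistical noise for a roughly $300$-fold cut in the number of calls.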

I also carefully went through the logic used in the second phase, and corrected the errors.

Then, using a box of $35$ a.u. and $71$ nodes per side of the cube (i.e., $328,509$ nodes in the interior region of the domain), and using just $1000$ sampled nodes out of them, I now found that the bonding energy was $-2.67$ a.u. Quite satisfactory (to me!)

8. Finally, a word about the dirty tricks department:

I happened to observe that with some choices of physical box size and computational mesh size, the bonding energy could go as low as $-3.2$ a.u. or even lower.

What explains such a behaviour? There is this range of results right from $-2.1$ a.u. to $-2.67$ a.u. to $-3.2$ a.u. …Note once again, the actual figure is: $-2.90$ a.u.

So, the computational results aren’t only on the higher side or only on the lower side. Instead, they form a band of values on both sides of the actual value. This is both good news and bad news.

The good-plus-bad news is that it’s all a matter of making the right numerical choices. Here, I will mention only two or three considerations.

As one consideration: to get more consistent results across various domain sizes and mesh sizes, what matters is the physical distance represented by each cell in the mesh. If you keep this in mind, then you can get results that fall in a narrow band. That’s a good sign.

As another consideration, the box size matters. In reality, there is no box, and the wavefunction extends to infinity. But a technique like FDM requires a box. (There are other numerical techniques that can work with infinite domains too.) Now, if you use too large a box, then the Coulomb well looks just like the letter ‘T’: no curvature is captured with any significance. With a lot of the physical region having a relatively flat PE, the role played by the nuclear attraction becomes less significant, at least in the numerical work. In short, the atom in a box approaches the free-particle-in-a-box scenario! On the other hand, a very small box implies that each electron is screening the nuclear potential at almost all times. In effect, it’s as if you are modelling an H$^{-}$ ion rather than a He atom!

As yet another consideration: The policy for choosing the depth of the potential energy matters. A concrete example might help.

Consider a $1D$ domain of, say, $5$ a.u. Divide it using $6$ nodes (i.e. $5$ cells of side $1$ a.u. each). Put a proton at the origin, and compute the electron’s PE (in magnitude). At the far end, $5$ a.u. away, the PE is $1.0/5.0 = 0.2$ a.u. At the node right next to the singularity, the PE is $1$ a.u. What finite value should you give the PE at the nucleus? Suppose, following the half-cell-side rule, you give it the value $1.0/0.5 = 2$ a.u. OK.

Now refine the mesh, say by having $11$ nodes (i.e. $10$ cells) over the same physical distance. The extreme node retains the same value, viz. $0.2$ a.u. But the node next to the singularity now has a PE of $1.0/0.5 = 2$ a.u., and the half-cell-side rule now gives a value of $1.0/0.25 = 4.0$ a.u. at the nucleus.

If you plot the two curves to the same scale, the differences are striking. In short, mesh refinement alone (keeping the same domain size) has kept the same PE at the boundary but jacked up the PE at the nucleus’ position. Not only that, but the PE field now has a more pronounced curvature over the same physical distance. Eigenvalue problems are markedly sensitive to the curvature in the PE.
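The arithmetic above can be checked in a few lines (a sketch, assuming the attractive, negative sign convention: $6$ nodes give a cell side of $1$ a.u., and $11$ nodes, i.e. $10$ cells, give $0.5$ a.u.):

```python
def pe_profile(domain, n_nodes):
    """PE magnitudes as in the text, but with the attractive (negative) sign:
    PE = -1/r at each node, with the half-cell-side value -1/(h/2) at r = 0."""
    h = domain / (n_nodes - 1)                     # cell side
    radii = [i * h for i in range(n_nodes)]
    return [(-1.0 / (h / 2.0) if r == 0.0 else -1.0 / r) for r in radii]

coarse = pe_profile(5.0, 6)    # h = 1.0  -> nucleus PE finitized to -2.0
fine = pe_profile(5.0, 11)     # h = 0.5  -> nucleus PE finitized to -4.0
print(coarse[0], fine[0])      # -2.0 -4.0: refinement alone deepens the well
print(coarse[-1], fine[-1])    # -0.2 -0.2: the boundary value is unchanged
```

Refining the mesh while keeping the same finitization policy thus deepens the well and sharpens its curvature, which is exactly the sensitivity being described.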

Now, realize that tweaking this one parameter alone can make the simulation zoom in on almost any value you like (within a reasonable range). I can always choose this parameter in such a way that even a relatively crude model comes to reproduce the experimental value of $-2.9$ a.u. very accurately—for the energy. The wavefunction may remain markedly jagged, but the energy can be accurate.

Every computational engineer/scientist understands such matters, especially those who work with singularities in fields. For instance, all computational mechanical engineers know how the stress values can change by an order of magnitude or more, depending on how you handle the stress concentrators. Singularities form a hard problem of computational science & engineering.

That’s why, what matters in computational work is not only the final number you produce. What matters perhaps even more are such things as: whether the method works well in terms of stability; the trends in the accuracy values (rather than their absolute values); whether the method can theoretically accommodate some more advanced techniques easily or not; how it scales with the size of the domain and with mesh refinement; etc.

If a method does fine on such counts, then the sheer accuracy number by itself does not matter so much. We can still say, with reasonable certainty, that the very theory behind the model must be correct.

And I think that’s what my yesterday’s result points to. It seems to say that my theory works.

9. To wind up…

Despite all my doubts, I always thought that my approach is going to work out, and now I know that it does—nay, it must!

The $3$-dimensional $\Psi$ fields can actually be seen to be pushing the particles, and the trends in the numerical results are such that the dynamical assumptions I introduced for calculating the motions of the particles must be correct too. (Another reason for having confidence in the numerical results is that the dynamical assumptions are very simple, and so it’s easy to see how they move the particles!) At the same time, though I didn’t implement it, I can easily see that the anti-symmetry property of at least the $2$-particle system comes out directly. The physical fields are $3$-dimensional, and the configuration space comes out as a mathematical abstraction from them. I didn’t specifically implement any program to show detection probabilities, but I can see that they are going to come out right, at least for $2$-particle systems.

So, the theory works, and that matters.

Of course, I still have quite some work to do. Working out the remaining aspects of spin, for one thing. A system of three interacting particles would also be nice to work through and to simulate. However, I don’t know which system I could/should pick. So, if you have any suggestions for simulating a $3$-particle system, with some well-known results, then do let me know. Yes, there are still chances that I might need to tweak the theory a little bit here and a little bit there. But the basic backbone of the theory, I am now quite confident, is going to stand as is.

OK. One last point:

The physical fields of $\Psi$, over the physical $3$-dimensional space, have primacy. Due to the normalization constraint, in real systems, there are no Dirac’s delta-like singularities in these wavefunctions. The singularities of the Coulomb field do enter the theory, but only as devices of calculations. Ontologically, they don’t have a primacy. So, what primarily exist are the aetherial, complex-valued, wavefunctions. It’s just that they interact with each other in such a way that the result is as if the higher-level $V$ term were to have a singularity in it. Indeed, what exists is only a single $3$-dimensional wavefunction; it is us who decompose it variously for calculational purposes.

That’s the ontological picture which seems to be emerging. However, take this point with a pinch of salt; I still haven’t pursued threads like these; I have been too busy just implementing code, debugging it, and finding and comparing results. …

Enough. I will start writing the theory document some time in the second half of the next week, and will try to complete it by mid-February. Then, everything will become clear to you. The cleaned-up and reorganized Python scripts will also be provided at that time. For now, I just need a little break. [BTW, if in my …err… “exuberance” online last night I have offended someone, my apologies…]

For obvious reasons, I think that I will not be blogging for at least two weeks…. Take care, and bye for now.

A song I like:

(Western, pop): “Lay all your love on me”
Band: ABBA

[A favourite since my COEP (UG) times. I think I looked up the words only last night! They don’t matter anyway. Not for this song, and not to me. I like its other attributes: the tune, the orchestration, the singing, and the sound processing.]

History:
— 2021.01.14 21:01 IST: Originally published
— 2021.01.15 16:17 IST: Very few, minor, changes overall. Notably, I had forgotten to type the powers of the terms in the illustrative polynomial for the trial function (in the section on variational methods), and now corrected it.

# The singularities closest to you

A Special note for the Potential Employers from the Data Science field:

Recently, in April 2020, I achieved a World Rank # 5 on the MNIST problem. The initial announcement can be found here [^], and a further status update, here [^].

All my data science-related posts can always be found here [^].

0. Preamble/Preface/Prologue/Preliminaries/Whatever Pr… (but neither probability nor public relations):

Natalie Wolchover writes an article in the Quanta Magazine: “Why gravity is not like the other forces” [^].

Motl mentions this piece in his, err.. “text” [^], and asks right in the first para.:

“…the first question should be whether gravity is different, not why [it] is different”

Great point, Lubos, err… Luboš!

Having said that, I haven’t studied relativity, and so, I only cursorily went through the rest of both these pieces.

But I want to add. (Hey, what else is a blog for?)

1. Singularities in classical mechanics:

1.1 Newtonian mechanics:

A singularity is present even in Newtonian mechanics. If you consider the differential equation for gravity in Newtonian mechanics, it basically applies to point-particles, and so there is a singularity in this 300+ year-old theory too.

It’s a different matter that Newton got rid of the singularities by integrating gravity forces inside massive spheres (finite objects), using his shells-based argument. A very ingenious argument that never ceases to impress me. Anyway, this procedure, invented by Newton, is the reason why we tend to think that there were no singularities in his theory.

1.2 Electrostatics and electrodynamics:

Coulomb et al. couldn’t get rid of the point-ness of the point-charges the way Newton could for gravity. No electrical phenomenon was found that changed the behaviour at even the smallest experimentally accessible separations between two charges. In electrostatics, the inverse-square law holds through and through—on the scales on which experiments have been performed. Naturally, the mathematical way to capture this behaviour is to not be afraid of singularities, and to go ahead and incorporate them in the mathematical formulations of the physical theory. Remember, the differential laws themselves are arrived at after applying suitable limiting processes.

So, electrostatics has point singularities in the electrostatic fields.

Ditto, for classical electro-dynamics (i.e. the Maxwellian EM, as recast by Hendrik A. Lorentz, the second Nobel laureate in physics).

Singularities exist in the electric potential energy fields in all of classical EM.

Lesson: Singularities aren’t specific to general relativity. Singularities predate relativity by decades if not by centuries.

2. Singularities in quantum mechanics:

2.1 Non-relativistic quantum mechanics:

You might think that non-relativistic QM has no singularities, because the $\Psi$ field must be at least $C^0$ continuous everywhere, and also not infinite anywhere even within a finite domain—else, it wouldn’t be square-normalizable. (It’s worth reminding that even in infinite domains, Sommerfeld’s radiation condition still applies, and Dirac’s delta distribution most extremely violates this condition.)

Since wavefunctions cannot be infinite anywhere, you might think that any singularities present in the physics have been burnished off due to the use of the wavefunction formalism of quantum mechanics. But of course, you would be wrong!

What the super-smart MSQM folks never tell you is this part (and they don’t take care to highlight it to their own students either): The only way to calculate the $\Psi$ fields is by specifying a potential energy field (if you want to escape the trivial solution that all wavefunctions are zero everywhere), and crucially, in a fundamental quantum-mechanical description, the PE field to specify has to be that produced by the fundamental electric charges, first and foremost. (Any other description, even if it involves complex-valued wavefunctions, isn’t fundamental QM; it’s merely a workable approximation to the basic reality. For example, even models like the PIB and the quantum harmonic oscillator aren’t fundamental descriptions. The simplest fundamentally correct model is the hydrogen atom.)

Since the fundamental electric charges remain point-particles, the non-relativistic QM has not actually managed to get rid of the underlying electrical singularities.

It’s something like this. I sell you a piece of a land with a deep well. I have covered the entire field with a big sheet of green paper. I show you the photograph and claim that there is no well. Would you buy it—my argument?

The super-smart MSQM folks don’t actually make such a claim. They merely highlight the green paper so much that any mention of the well must get drowned out. That’s their trick.

2.2 OK, how about the relativistic QM?

No one agrees on what a theory of GR (General Relativity) + QM (Quantum Mechanics) looks like. Nothing is settled about this issue. In this piece let’s try to restrict ourselves to the settled science—things we know to be true.

So, what we can talk about is only this much: SR (Special Relativity) + QM. But before setting out to marry them off, let’s look at the character of SR. (We already saw the character of QM above.)

3. Special relativity—its origins, scope, and nature:

3.1 SR is a mathematically repackaged classical EM:

SR is a mathematical reformulation of the classical EM, full-stop. Nothing more, nothing less—actually, something less. Let me explain. But before going to how SR is a bit “less” than classical EM, let me emphasize this point:

Just because SR begins to get taught in your Modern Physics courses, it doesn’t mean that by way of its actual roots, it’s a non-classical theory. Every bit of SR is fully rooted in the classical EM.

3.2 Classical EM has been formulated at two different levels: Fundamental, and Homogenized:

The laws of classical EM, at the most fundamental level, describe reality in terms of the fundamental massive charges. These are point-particles.

Then, classical EM also says that a very similar-looking set of differential equations applies to the “everyday” charges—you know, pieces of paper crowding near a charged comb, or paper-clips sticking to your fridge-door magnets, etc. This latter version of EM is not the most fundamental. It comes equipped with a lot of fudges, most of them having to do with the material (constitutive) properties.

3.3 Enter super-smart people:

Some smart people took this latter version of the classical EM laws—let’s call it the homogenized continuum-based theory—and recast it to bring out certain mathematical properties which it exhibited. In particular, the Lorentz invariance.

Some super-smart people took the invariance-related implications of this (“homogenized continuum-based”) theory as the most distinguished character exhibited by… not the fudges-based theory, but by physical reality itself.

In short, they not only identified a certain validity (which is there) for a logical inversion which treats an implication (viz. the invariance) as the primary; they blithely also asserted that such an inverted conceptual view was to be regarded as more fundamental. Why? Because it was mathematically convenient.

These super-smart people were not concerned about the complex line of empirical and conceptual reasoning which was built patiently and integrated together into a coherent theory. They were not concerned with the physical roots. The EM theory had its roots in the early experiments on electricity, whose piece-by-piece conclusions finally came together in Maxwell’s mathematical synthesis thereof. The line culminated with Lorentz’s effecting a reduction in the entire cognitive load by reducing the number of sub-equations.

The relativists didn’t care for these roots. Indeed, sometimes it appears as if many of them were gloating over cutting off the maths from its physical grounding. It’s these super-smart people who put forth the arbitrary assertion that the relativistic viewpoint is more fundamental than the inductive base from which it was deduced.

3.4 What is implied when you assert fundamentality to the relativistic viewpoint?

To assert fundamentality to a relativistic description is to say that the following two premises hold true:

(i) The EM of homogenized continua (and not the EM of the fundamental point particles) is the simplest and hence the most fundamental theory.

(ii) One logical way of putting it—in terms of invariance—is superior to the other logical way of putting it, which was: a presentation of the same set of facts via inductive reasoning.

The first premise is clearly a blatant violation of the method of science. As people who have done work in multi-scale physics would know, you don't grant greater fundamentality to a theory of a grossed out effect. Why?

Well, a description in terms of grossed out quantities might be fine, in the sense that the theory often becomes exponentially simpler to use (without a comparable loss of accuracy). Who would advocate abandoning Hooke's law, as in the linear formulation of elasticity, and insist instead on computing the motions of $10^{23}$ atoms?

However, a good multi-scaling engineer / physicist also has the sense to keep in mind that elasticity is not the final word; that there are layers upon layers of rich phenomenology lying underneath it: at the meso-scale, micro-scale, nano-scale, and then even at the atomic (or sub-atomic) scales. Schrödinger's equation is more fundamental than Hooke's law. Hooke's law, projected back to the fine-grained scale, does not hold.

The situation is somewhat like this: a $100 \times 100$ photograph does not show all the features of your face the way they come out in the original $4096 \times 4096$ image. The finer features remain lost even if you magnify the $100 \times 100$ image back to the $4096 \times 4096$ size and save it at that size. However, this does not mean that $100 \times 100$ is useless. A $28 \times 28$ pixel image is enough for the MNIST benchmark problem.
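The coarse-graining analogy can be sketched in a few lines of code. This is purely illustrative (the tiny array below stands in for the photograph): block-average an "image" down, blow it back up, and observe that the fine-scale detail does not come back.

```python
import numpy as np

def downsample(img, factor):
    """Block-average `img` by `factor` along both axes (coarse-graining)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Magnify by pixel repetition: no new information is created."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A fine-grained checkerboard pattern: the "feature" we will lose.
fine = np.array([[1., 0., 1., 0.],
                 [0., 1., 0., 1.],
                 [1., 0., 1., 0.],
                 [0., 1., 0., 1.]])

coarse = downsample(fine, 2)    # every 2x2 block averages to 0.5
restored = upsample(coarse, 2)  # uniform gray: the checkerboard is gone
```

Projecting the coarse description back to the fine scale (`restored`) gives a uniform field, not the original pattern—just as Hooke's law, projected back down, does not recover the atomic-scale physics.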

So, what is the intermediate conclusion? A "fudged" (homogenized) theory cannot be as fundamental as—let alone more fundamental than—the finer theory from which it was homogenized.

Poincaré must have thought otherwise. The available evidence anyway says that he said, wrote, and preached to the effect that a logical inversion of a homogenized theory was not only acceptable as an intellectually satisfying exercise, but that it must be seen as being a more fundamental description of physical reality.

Einstein, initially hesitant, later on bought this view hook, line and sinker. (Later on, he also became a superposition of an Isaac Asimov of the relativity theory, a Marilyn Monroe of the popular press, and a collage of the early 20th century Western intellectuals' notions of an ancient sage. But this issue, seen in any basis—components-wise or in a new basis in which the superposition itself is a basis—takes us away from the issues at hand.)

The view promulgated by these super-smart people, however, cannot qualify to be called the most fundamental description.

3.5 Why is the usual idea of having to formulate a relativistic quantum mechanics theory a basic error?

It is an error to expect that the potential energy fields in the Schrödinger equation ought to obey the (special) relativistic limits.

The expectation rests on treating the magnetic field as being on a par with the static electric field.

However, there are no monopoles in the classical EM, and so, the electric charges enjoy a place of greater fundamentality. If you have kept your working epistemology untarnished by corrupt forms of methods and content, you should have no trouble seeing this point. It’s very simple.

It's the electrons which produce the electric fields; every electric field that can at all exist in reality can always be expressed as a linear superposition of elementary fields, each of which has a singularity in it—the point identified as the classical position of the electron.

We compress this complex line of thought by simply saying:

Point-particles of electrons produce electric fields, and this is the only way any electric field can at all be produced.
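This superposition statement is directly computable. Below is a minimal sketch (in the atomic units used throughout this series); the particular charge positions are made up for illustration, and the function name is mine, not from any library.

```python
import numpy as np

def coulomb_potential(field_point, charge_positions, charges):
    """Total electrostatic potential at `field_point`, as a plain linear
    superposition of the elementary 1/r fields of the point charges."""
    field_point = np.asarray(field_point, dtype=float)
    total = 0.0
    for pos, q in zip(charge_positions, charges):
        r = np.linalg.norm(field_point - np.asarray(pos, dtype=float))
        total += q / r  # diverges as r -> 0: the singularity at each charge
    return total

# Superposition: the two-charge potential equals the sum of the two
# one-charge potentials, evaluated at the same field point.
v1 = coulomb_potential([1.0, 0.0, 0.0], [[0.0, 0.0, 0.0]], [-1.0])
v2 = coulomb_potential([1.0, 0.0, 0.0], [[0.0, 2.0, 0.0]], [-1.0])
v12 = coulomb_potential([1.0, 0.0, 0.0],
                        [[0.0, 0.0, 0.0], [0.0, 2.0, 0.0]], [-1.0, -1.0])
```

Note that the superposed field inherits one singular point per charge; nothing in the addition smooths any of them away.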

Naturally, electric fields don’t change anywhere at all, unless the electrons themselves move.

The only way a magnetic field can be had at any point in physical space is if the electric field at that point changes in time. Why do we say “the only way”? Because, there are no magnetic monopoles to create these magnetic fields.

So, the burden of creating any and every magnetic field completely rests on the motions of the electrons.

And, the electrons, being point particles, have singularities in them.

So, you see, in the most fundamental description, EM of finite objects is a multi-scaled theory of EM of point-charges. And, EM of finite objects was, historically, first formulated before people could plain grab the achievement, recast it into an alternative form (having a different look but the same physical scope), and then run naked in the streets shouting “Relativity!”, “Relativity!!”.

Another way to look at the conceptual hierarchy is this:

If you solve the problem of an electron in a magnetic field quantum mechanically, did you use the most basic QM? Or was it a multi-scale-wise grossed out (and approximate) QM description that you used?

Hint: The only way a magnetic field can at all come into existence is when some or the other electron is accelerating somewhere or the other in the universe.

For the layman: The situation here is like this: A man has a son. The son plays with another man, say the boy's uncle. Can you now say that because there is an interaction between the nephew and the uncle, therefore they are all that matters? That the man responsible for creating this relationship in the first place, namely the boy's father, cannot ever enter any fundamental or basic description?

Of course, this viewpoint also means that the only fundamentally valid relativistic QM would be one which is completely couched in terms of the electric fields only. No magnetic fields.

3.6 How to incorporate the magnetic fields in the most fundamental QM description:

I don’t know. (Neither do I much care—it’s not my research field.) But sure, I can put forth a hypothetical way of looking at it.

Think of the magnetic field as a quantum mechanical effect. That is to say, the electrostatic fields (which implies, the positions of electrons’ respective singularities) and the wavefunctions produced in the aether in correspondence with these electrostatic fields, together form a complete description. (Here, the wavefunction includes the spin.)

You can then abstractly encapsulate certain kinds of changes in these fundamental entities, and call the abstraction by the name of magnetic field.

You can then realize that the changes in the magnetic and electric fields imply the constant $c$, and then trace the origin of $c$ back to the kind of changes in the electrostatic fields (PE) and the wavefunction fields (KE) which give rise to the higher-level phenomenon of $c$.

But in no case can you have the hodge-podge favored by Einstein (and millions of his devotees).

To the layman: This hodge-podge consists of regarding the play (“interactions”) between the boy and the uncle as primary, without bothering about the father. You would avoid this kind of a hodge-podge if what you wanted was a basic consistency.

3.7 Singularities and the kind of relativistic QM which is needed:

So, you see, what is supposed to be the relativistic QM itself has to be reformulated. Then it would be easy to see that:

There are singularities of electric point-charges even in the relativistic QM.

Since today's formulation of relativistic QM takes SR as if SR itself were the most basic ground truth (without looking into the conceptual bases of SR in the classical EM), it takes an extra special effort to realize that the most fundamental singularity in relativistic QM is that of the electrons—and not of any relativistic spacetime contortions.

4. A word about putting quantum mechanics and gravity together:

Now, a word about QM and gravity—Wolchover’s concern for her abovementioned report. (Also, arguably, one of the concerns of the physicists she interviewed.)

Before we get going, a clarification is necessary—one which concerns the mass of the electron.

4.1 Is charge a point-property in the classical EM? How about mass?

It might come as a surprise to you, but it’s a fact that in the fundamental classical EM, it does not matter whether you ascribe a specific location to the attribute of the electric charge, or not.

In particular, you may take the position (1) that the electric charge exists at the same point where the singularity in the electron's field is. Or, alternatively, you may adopt the position (2) that the charge is actually distributed all over the space, wherever the electric field exists.

Realize that whether you take the first position or the second, it makes no difference whatsoever, either to the concepts at the root of the EM laws or to the calculation procedures associated with them.

However, we may consider the fact that the singularity indeed is a very distinguished point. There is only one such point associated with the interaction of a given electron with another given electron. Each electron sees one and only one singular point in the field produced by the other electron.

Each electron also has just one charge, which remains constant at all times. An electron or a proton does not possess two charges. They do not possess complex-valued charges.

So, based on this extraneous consideration (it’s not mandated by the basic concepts or laws), we may think of simplifying the matters, and say that

the charge of an electron (or the other fundamental particle, viz., proton) exists only at the singular point, and nowhere else.

All in all, we might adopt the position that the charge is where the singularity is—even if there is no positive evidence for the position.

Then, continuing on this conceptually alluring but not empirically necessitated viewpoint, we could also say that the electron’s mass is where its electrostatic singularity is.

Now, a relatively minor consideration here also is that ascribing the mass only to the point of singularity also suggests an easy analogue to the Newtonian particle-mechanics. I am not sure how advantageous this analogue is. Even if there is some advantage, it would still be a minor advantage. The reason is, the two theories (NM and EM) are, hierarchically, at highly unequal levels—and it is this fact which is far more important.

All in all, we can perhaps adopt this position:

With all the if’s and the but’s kept in the context, the mass and the charge may be regarded as not just multipliers in the field equations; they may be regarded to have a distinguished location in space too; that the charge and mass exist at one point and no other.

We could say that. There is no experiment which mandates that we adopt this viewpoint, but there also is no experiment—or conceptual consideration—which goes against it. And, it seems to be a bit easier on the mind.

4.2 How quantum gravity becomes ridiculously simple:

If we thus adopt the viewpoint that the mass is where the electrostatic singularity is, then the issue of quantum gravity becomes ridiculously simple… assuming that you have developed a theory to multi-scale-wise gross out classical magnetism from the more basic QM formalism, in the first place.

Why would it make the quantum gravity simple?

Gravity is just a force between two point particles of electrons (or protons), and, you could directly include it in your QM if your computer’s floating point arithmetic allows you to deal with it.

As an engineer, I wouldn’t bother.
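Why an engineer wouldn't bother is a one-line estimate. The following back-of-the-envelope sketch (SI units, CODATA-rounded constants) computes the ratio of the Newtonian gravitational attraction to the Coulomb attraction between an electron and a proton; since both forces go as $1/r^2$, the separation cancels out of the ratio.

```python
G  = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k  = 8.988e9     # Coulomb constant, N m^2 C^-2
me = 9.109e-31   # electron mass, kg
mp = 1.673e-27   # proton mass, kg
e  = 1.602e-19   # elementary charge, C

# (G me mp / r^2) / (k e^2 / r^2): the r^2 cancels.
ratio = (G * me * mp) / (k * e * e)
# ratio comes out of the order of 1e-40: adding the gravitational term to
# the Coulomb term changes nothing at double precision, which carries only
# about 16 significant decimal digits.
```

So the term can be written into the formalism trivially, but it drowns far below the floating point noise of any practical computation.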

But, basically, that’s the only physics-wise relevance of quantum gravity.

4.3 What is the real relevance of quantum gravity?

The real reason behind the attempts to build a theory of quantum gravity (by following the track of the usual kind of the relativistic QM theory) is not based in physics or nature of reality. The reasons are, say “social”.

The socially important reason to pursue quantum gravity is that it keeps physicists in employment.

Naturally, once they are employed, they talk. They publish papers. Give interviews to the media.

All this can be fine, so long as you bear the real reason in mind at all times. A field such as quantum gravity was invented (i.e., not discovered) only in order to keep some physicists in employment. There is no other reason.

Neither Wolchover nor Motl would tell you this part, but it is true.

5. So, what can we finally say regarding singularities?:

Simply this much:

Next time you run into the word “singularity,” think of those small pieces of paper and a plastic comb.

Don’t think of those advanced graphics depicting some interstellar space-ship orbiting around a black-hole, with a lot of gooey stuff going round and round around a half-risen sun or something like that. Don’t think of that.

Singularities are far more commonplace than you've been led to think.

Your laptop or cell-phone has of the order of $10^{23}$ singularities, all happily running around mostly within that small volume, and acting together, effectively giving your laptop its shape, its solidity, its form. These singularities are what give your laptop the ability to brighten the pixels too, and that's what ultimately allows you to read this post.

Finally, remember the definition of singularity:

A singularity is a distinguished point in an otherwise finite field where the field-strength approaches (positive or negative) infinity.
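For the hydrogenic Coulomb field used throughout this series, this definition can be written down concretely (atomic units; $Z$ is the nuclear charge):

```latex
% The potential is finite at every r > 0; the singularity is the one
% distinguished point r = 0, approached only as a limit.
V(r) = -\frac{Z}{r}, \qquad \lim_{r \to 0^{+}} V(r) = -\infty .
```

The field-strength is perfectly finite everywhere except at the one distinguished point; the infinity enters only through the limiting process.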

This is a mathematical characterization. Given that infinities are involved, physics can in principle have no characterization of any singularity. It’s a point which “falls out of”, i.e., is in principle excluded from, the integrated body of knowledge that is physics. Singularity is defined not on the basis of its own positive merits, but by negation of what we know to be true. Physics deals only with that which is true.

It might turn out that there is nothing interesting to be eventually found at the point of some singularity in some physics theory—classical or quantum. Or, it could also turn out that the physics at some singularity is only very mildly interesting. There is no reason—not yet—to believe that there must be something fascinating going on at every point which is mathematically described by a singularity. Remember: singularities exist only in the abstract (limiting-process-based) mathematical characterizations, and these abstractions arise from the known physics of the situation around the so-distinguished point.

We do know a fantastically great deal of physics that is implied by the physics theories which do have singularities. But we don’t know the physics at the singularity. We also know that so long as the concept involves infinities, it is not a piece of valid physics. The moment the physics of some kind of singularities is figured out, the field strengths there would be found to be not infinities.

So, what’s singularity? It’s those pieces of paper and the comb.

Even better:

You—your body—itself carries singularities. Approximately $100 \times 10^{23}$ of them, at the least. You don't have to go looking elsewhere for them. This is an established fact of physics.

Remember that bit.

6. To physics experts:

Yes, there can be a valid theory of non-relativistic quantum mechanics that incorporates gravity too.

It is known that such a theory would obviously give erroneous predictions. However, that isn't the point. The point is simply this:

Gravity is not basically wedded to, let alone an effect of, electromagnetism. That's why it simply cannot be an effect of the relativistic reformulations of the multi-scale grossed out view of what actually is the fundamental theory of electromagnetism.

Gravity is basically an effect shown by massive objects.

Inasmuch as electrons have the property of mass, and inasmuch as mass can be thought of as existing at the distinguished point of electrostatic singularities, even a non-relativistic theory of quantum gravity is possible. It would be as simple as adding the Newtonian gravitational potential energy into the Hamiltonian for the non-relativistic quantum mechanics.
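The modification just described amounts to a single extra term in the usual hydrogenic Hamiltonian (atomic units; $G$ expressed in the same units, with $m_e$ and $m_p$ the electron and proton masses):

```latex
% Non-relativistic hydrogenic Hamiltonian with the Newtonian gravitational
% PE between the electron and the nucleus added alongside the Coulomb PE.
\hat{H} = -\tfrac{1}{2}\nabla^{2} \;-\; \frac{Z}{r} \;-\; \frac{G\, m_e m_p}{r}
```

Since both attractive terms share the same $1/r$ form, the gravitational piece merely rescales the effective attraction, by a factor of the order of $10^{-40}$ relative to the Coulomb term; formally present, numerically invisible.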

You are not impressed, I know. Doesn’t matter. My primary concern never was what you think; it always was (and is): what the truth is, and hence, also, what kind of valid conceptual structures there at all can be. This has not always been a concern common to both of us. Which fact does leave a bit of an impression about you in my mind, although it is negative-valued.

A song I like:

(Hindi) ओ मेरे दिल के चैन (“O mere, dil ke chain”)
Singer: Lata Mangeshkar
Music: R. D. Burman
Lyrics: Majrooh Sultanpuri

[

I think I have run the original version by Kishore Kumar here in this section before. This time, it’s time for Lata’s version.

Lata’s version came as a big surprise to me; I “discovered” it only a month ago. I had heard other young girls’ versions on the YouTube, I think. But never Lata’s—even if, I now gather, it’s been around for some two decades by now. Shame on me!

To the $n$-th order approximation, I can't tell whether I like Kishore's version better or Lata's, where $n$ can, of course, only be a finite number, though it already is the case that $n > 5$.

… BTW, any time in the past (i.e., not just in my youth) I could have very easily betted a very good amount of money that no other singer would ever be able to sing this song. A female singer, in particular, wouldn’t be able to even begin singing this song. I would have been right. When it comes to the other singers, I don’t even complete their, err, renderings. For a popular case in point, take the link provided after this sentence, but don’t bother to return if you stay with it for more than, like, 30 seconds [^].

Earlier, I would’ve expected that even Lata is going to fail at the try.

But after listening to her version, I… I don’t know what to think, any more. May be it’s the aforementioned uncertainty which makes all thought cease! And thusly, I now (shamelessly and purely) enjoy Lata’s version, too. Suggestion: If you came back from the above link within 30 seconds, you follow me, too.

]