Yesss! I did it!

Last evening (on 2021.01.13 at around 17:30 IST), I completed the first set of computations for finding the bonding energy of a helium atom, using my fresh new approach to QM.

These calculations are still pretty crude, both in technique and in implementation. Reading through the details given below, any competent computational engineer/scientist would immediately see just how crude they are. However, I hope he would also see why I can still say that these initial results may be taken as definitely validating my new approach.

It would be impossible to give all the details right away. So, what I give below are some important details and highlights of the model, the method, and the results.

For that matter, even my Python scripts are currently in a pretty disorganized state. They are held together by duct-tape, so to speak. I plan to rearrange and clean up the code, write a document, and upload them both. I think it should be possible to do so within a month’s time, i.e., by mid-February. If not, say due to the RSI, then probably by February-end.

Alright, on to the details. (I am giving some indication about some discarded results/false starts too.)


1. Completion of the theory:

As far as the development of my new theory goes, many tricky issues had surfaced since I began trying to simulate my new approach, starting in May–June 2020. The crucially important issues were the following:

  • A quantitatively precise statement on how the mainstream QM’s \Psi, defined as it is over the 3N-dimensional configuration space, relates to the 3-dimensional wavefunctions I had proposed earlier in the Outline document.
  • A quantitatively precise statement on how the wavefunction \Psi makes the quantum particles (i.e. their singularity-anchoring positions) move through the physical space. Think of this as the “force law”, and then note that if a wrong statement is made here, then the entire system dynamics/evolution has to go wrong. Repercussions will exist even in the simplest system having two interacting particles, like the helium atom. The bonding energy calculations of the helium atom are bound to go wrong if the “force law” is wrong. (I don’t actually calculate the forces, but that’s a different matter.)
  • Also to be dealt with was this issue: Ensuring that the anti-symmetry property for the indistinguishable fermions (electrons) holds.

I had achieved a good clarity on all these (and similar other) matters by the evening of 5th January 2021. I also tried to do a few simulations but ran into problems. Both these developments were mentioned via an update at iMechanica on the evening of 6th January 2021, here [^].


2. Simulations in 1D boxes:

By “box” I mean a domain having infinite potential energy walls at the boundaries, and imposition of the Dirichlet condition of \Psi(x,t) = 0 at the boundaries at all times.

I did a rapid study of the problems (mentioned in the iMechanica update). The simulations for this study involved 1D boxes from 5 a.u. to 100 a.u. lengths. (1 a.u. of length = 1 Bohr radius.) The mesh sizes varied from 5 nodes to 3000 nodes. Only regular, structured meshes of uniform cell-sides (i.e., a constant inter-nodal distance, \Delta x) were used, not non-uniform meshes (such as log-based).

I found that the discretization of the potential energy (PE) term was indeed at the root of the problems. Theoretically, the PE field is singular. I have been using FDM. Since an infinite potential cannot be handled using FDM, you have to implement some policy for assigning a finite value to the maximum depth of the PE well.

Initially, I chose the policy of setting the max. depth to that value which would exist at a distance of half the width of the cell. That is to say, V_S \approx V(\Delta x/2), where V_S denotes the PE value at the singularity (theoretically infinite).

The PE was calculated using the Coulomb formula, which is given as V(r) = 1/r when one of the charges is fixed, and as V_1(r_s) = V_2(r_s) = 1/(2r_s) for two interacting and moving charges. Here, r_s denotes the separation between the interacting charges. The rule of half cell-side was used for making the singularity finite. The field so obtained will be referred to as the “hard” PE field.
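
Just to make the half cell-side rule concrete, here is a minimal sketch (illustrative only; the function name and organization are mine for this post, not lifted from my actual scripts):

```python
import numpy as np

def hard_coulomb_1d(x, x_nuc, dx):
    """Magnitude of the 'hard' Coulomb PE, 1/r, with the singular node
    capped at V(dx/2), i.e., the half cell-side rule."""
    r = np.abs(x - x_nuc)
    r = np.maximum(r, 0.5 * dx)   # make the singularity finite
    return 1.0 / r
```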

Using the “hard” field was, if I recall it right, quite OK for the hydrogen atom. It gave ground-state bonding energies ranging from -0.47 a.u. to -0.49 a.u. or lower, depending on the domain size and the mesh refinement (i.e. number of nodes). Note, 1 a.u. of energy is the same as 1 hartree. For comparison, the analytical solution gives -0.5, exactly. All energy calculations given here refer to only the ground-state energies. However, I also computed and checked up to 10 eigenvalues.

Initially, I tried both dense and sparse eigenvalue solvers, but eventually settled only on the sparse solvers. The results were indistinguishable (at least numerically). I used SciPy’s wrappers for the various libraries.
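
For the record, the basic 1D set-up is utterly standard. A minimal sketch, in hartree atomic units (\hbar = m_e = 1), re-using the hard_coulomb_1d() sketch from above (again: illustrative, not my actual script; the helium-specific parts are not shown):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

L_box, n = 20.0, 1001                  # box length (a.u.) and number of nodes
x = np.linspace(0.0, L_box, n)
dx = x[1] - x[0]
xi = x[1:-1]                           # Dirichlet BCs: drop the boundary nodes

V = -hard_coulomb_1d(xi, 0.5 * L_box, dx)   # attractive well at the box center

# Kinetic energy: -(1/2) times the central-difference Laplacian
lap = diags([np.ones(xi.size - 1), -2.0 * np.ones(xi.size),
             np.ones(xi.size - 1)], [-1, 0, 1]) / dx**2
H = -0.5 * lap + diags(V)

vals, vecs = eigsh(H, k=10, which='SA')     # the 10 lowest eigenvalues
print(vals[0])                              # ground state; ~ -0.5 hartree expected
```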

I am not quite sure whether using the hard potential was always smooth or not, even for the hydrogen atom. I think not.

However, the hard Coulomb potential always led to problems for the helium atom in a 1D box (being modelled using my new approach/theory). The lowest eigenvalue was wrong by more than a factor of 10! I verified that the corresponding eigenvector indeed was an eigenvector. So, the solver was giving a technically correct answer, but it was an answer to the as-discretized system, not to the original physical problem.

I therefore tried using the so-called “soft” Coulomb potential, which was new to me, but which looks like a well-known function. I came to know of its existence via the OctopusWiki [^], when I was searching for some prior code on the helium atom. The “soft” Coulomb potential is defined as:

V(x) = \dfrac{1}{\sqrt{a^2 + x^2}}, where a is an adjustable parameter, often set to 1.
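
In code, next to the hard version sketched above (same caveats):

```python
import numpy as np

def soft_coulomb_1d(x, x_nuc, a=1.0):
    """Magnitude of the 'soft' Coulomb PE; finite everywhere, no capping needed."""
    return 1.0 / np.sqrt(a**2 + (x - x_nuc)**2)
```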

I found this potential unsatisfactory for my work, mainly because it gives rise to a more spread-out wavefunction, which in turn implies that the screening effect of one electron for the other electron is not captured well. I don’t recall exactly, but I think that there was this issue of too low ground-state eigenvalues also with this potential (for the helium modeling). It is possible that I was not using the right SciPy function-calls for eigenvalue computations.

Please take the results in this section with a pinch of salt. I am writing about them only 8–10 days later, and I have written so many variations that I’ve lost track of what went wrong in which scenario.

All in all, I thought that the 1D box wasn’t working out satisfactorily. But a more important consideration was the following:

My new approach has been formulated in the 3D space. If the bonding energy is to be numerically comparable to the experimental value (and not computed as just a curiosity or a computational artifact), then the potential-screening effect must be captured right. Now, here, my new theory says that the screening effect will be captured quantitatively correctly only in a 3D domain. So, I soon enough switched to the 3D boxes.


3. Simulations of the hydrogen atom in 3D boxes:

For both hydrogen and helium, I used only cubical boxes, not parallelepipeds (“brick”-shaped boxes). The side of the cube was usually kept at 20 a.u. (Bohr radii), which is a length slightly longer than one nanometer (1.05835 nm). However, some of my rapid experimentation also ranged over 5 a.u. to 100 a.u. domain lengths.

Now, to meshing:

The first thing to realize is that with a 3D domain, the total number of nodes M scales cubically with the number of nodes n appearing on a side of the cube. That is to say: M = n^3. Bad thing.

The second thing to note is worse: The discretized Hamiltonian operator matrix now has the dimensions of M \times M. Sparse matrices are now a must. Even then, meshes remain relatively coarse, else computation time increases a lot!

The third thing to note is even worse: My new approach requires computing “instantaneous” eigenvalues at all the nodes. So, the number of times you must call, say, the eigh() function also goes as M = n^3. … Yes, I have the distinction of having invented what ought to be, provably, the most inefficient method to compute solutions to many-particle quantum systems. (If you are a QC enthusiast, now you know that I am a completely useless fellow.) But more on this, just a bit later.

I didn’t have to write the 3D code completely afresh though. I re-used much of the backend code from my earlier attempts from May, June and July 2020. At that time, I had implemented vectorized code for building the Laplacian matrix. However, in retrospect, this was an overkill. The system spends more than 99% of its execution time in the eigenvalue function calls alone. So, preparation of the discretized Hamiltonian operator is relatively insignificant. Python loops could do! But since the vectorized code was smaller and a bit more easily readable, I used it.
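
For the curious: the standard vectorized construction goes via Kronecker products of the 1D stencil. A sketch (I am not claiming this is verbatim from my scripts):

```python
import numpy as np
from scipy.sparse import diags, identity, kron

def laplacian_3d(n, dx):
    """3D central-difference Laplacian on an n x n x n interior mesh,
    assembled from the 1D operator via Kronecker products."""
    lap1 = diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)],
                 [-1, 0, 1], format='csr') / dx**2
    I = identity(n, format='csr')
    return (kron(kron(lap1, I), I)
            + kron(kron(I, lap1), I)
            + kron(kron(I, I), lap1))
```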

Alright.

The configuration space for the hydrogen atom is small, there being only one particle. It’s “only” M in size. More important, the nucleus being fixed, and there being just one particle, I need to solve the eigenvalue equation only once. So, I first put the hydrogen atom inside the 3D box, and verified that the hard Coulomb potential gives cool results over a sufficiently broad range of domain sizes and mesh refinements.

However, in comparison with the results for the 1D box, the 3D box algebraically over-estimates the bonding energy. Note the word “algebraically.” What it means is that if the bonding energy for a H atom in a 1D box is -0.49 a.u., then with the same physical domain size (say 20 Bohr radii) and the same number of nodes on the side of the cube (say 51 nodes per side), the 3D model gives something like -0.48 a.u. So, when you use a 3D box, the absolute value of energy decreases, but the algebraic value (including the negative sign) increases.

As any good computational engineer/scientist could tell, such a behaviour is only to be expected.

The reason is this: The discretized PE field is always jagged, and so it only approximately represents a curvy function, especially near the singularity. This is how it behaves in 1D, where the PE field is a curvy line. But in a 3D case, the PE contour surfaces bend not just in one direction but in all three directions, and the discretized version of the field can’t represent all of them at the same time. That’s the hand-waving sort of “explanation.”

I highlighted this part because I wanted you to note that in 3D boxes, you would expect the helium atom energies to algebraically overshoot too. A bit more on this, later, below.


4. Initial simulations of the helium atom in 3D boxes:

For the helium atom too, the side of the cube was mostly kept at 20 a.u. Reason?

In the hydrogen atom, the space part of the ground state \psi has a finite peak at the center, and its spread is significant over a distance of about 5–7 a.u. (in the numerical solutions). Then, for the helium atom, there is going to be a dent in the PE field due to screening. In my approach, this dent physically moves over the entire domain as the screening electron moves. To accommodate both their spreads plus some extra room, I thought, 20 could be a good choice. (More on the screening effect, later, below.)

As to the mesh: As mentioned earlier, the number of eigenvalue computations required is M, and the time taken by each such call goes up significantly with M. So, initially, I kept the number of nodes per side (i.e. n) at just 23. With the two extreme planes sacrificed to the deity of the boundary conditions, the actual computations took place on a 21 \times 21 \times 21 mesh. That still means a system having 9261 nodes!

At the same time, realize how crude and coarse this mesh is: Two neighbouring nodes represent a physical distance of almost one Bohr radius! … Who said theoretical clarity must also come with faster computations? Not when it’s QM. And certainly not when it’s my theory! I love to put the silicon chip to some real hard work!

Alright.

As I said, for reasons that will become fully clear only when you go through the theory, my approach requires M separate eigenvalue computation calls. (In “theory,” it requires M^2 of them, but some very simple and obvious symmetry considerations reduce the computational load to M.) I then compute the normalized 1-particle wavefunctions from the eigenvectors. All this computation forms what I call the first phase. I then post-process the 1-particle wavefunctions to get to the final bonding energy. I call this computation the second phase.
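
In pseudo-Python, the first phase has this bare shape. (Purely illustrative; build_hamiltonian() is a hypothetical stand-in name, and specifying what goes inside it is precisely the job of the forthcoming theory document.)

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def first_phase(M, build_hamiltonian):
    """One eigenvalue call per node. (Illustrative skeleton only;
    build_hamiltonian(j) is assumed to return the sparse, as-discretized
    Hamiltonian corresponding to node j.)"""
    psi_1p = np.empty((M, M))
    for j in range(M):                          # M separate eigsh() calls
        H_j = build_hamiltonian(j)
        vals, vecs = eigsh(H_j, k=1, which='SA')
        psi = vecs[:, 0]
        psi_1p[j] = psi / np.linalg.norm(psi)   # normalized 1-particle wavefunction
    return psi_1p
```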

OK, so in my first computations, the first phase involved SciPy’s eigsh() function being called 9261 times. I think it took something like 5 minutes. The second phase is very much faster; it took less than a minute.

The bonding energy I thus got should have been around -2.1 a.u. However, I made an error while coding the second phase, and got something different (which I no longer remember, but I think I have not deleted the wrong code, so it should be possible to reproduce this wrong result). The error wasn’t numerically very significant, but it was an error all the same. This was the status by the evening of 11th January 2021.

The same error continued also on 12th January 2021, but I think that if the errors in the second phase were to be corrected, the value obtained could have been close to -2.14 a.u. or so. Mind you, these are the results with a 20 a.u. box and 23 nodes per side.

In comparison, the experimental value is -2.9033 a.u.

As to computations, Hylleraas, back in the late 1920s, used a hand-cranked mechanical calculator, and still got to -2.90363 a.u.! Some 90+ years later, his method and work still remain near the top of the accuracy stack.

Why did my method do so badly? Even more pertinent: How could Hylleraas use just a mechanical calculator, not a computer, and still get to such a wonderfully accurate result?

It all boils down to the methods, tricks, and even dirty tricks. Good computational engineers/scientists know them, their uses and limitations, and do not hesitate to build products with them.

But the real pertinent reason is this: The technique Hylleraas used was variational.


5. A bit about the variational techniques:

All variational techniques use a trial function with some undetermined parameters. Let me explain in a jiffy what it means.

A trial function embodies a guess—a pure guess—at what the unknown solution might look like. It could be any arbitrary function.

For example, you could even use a simple polynomial like y = a_0 + a_1 x + a_2 x^2 + a_3 x^3 by way of a trial function.

Now, observe that if you change the values of the a_0, a_1, etc. coefficients, then the shape of the function changes. Just assign some random values and plot the results using Matplotlib, and you will know what I mean.
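
Something like this, say (a throwaway illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2.0, 2.0, 200)
for _ in range(5):
    a = np.random.uniform(-1.0, 1.0, size=4)        # random a_0 ... a_3
    plt.plot(x, a[0] + a[1]*x + a[2]*x**2 + a[3]*x**3)
plt.title("One family of trial functions, five different parameter sets")
plt.show()
```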

… Yes, you do something similar also in Data Science, but there, the problem formulation is relatively much simpler: You just tweak all the a_i coefficients until the function fits the data. “Curve-fitting,” it’s called.

In contrast, in variational calculus, you don’t do this one-step curve-fitting. You instead take the y function and substitute it into some theoretical equations that have something to do with the total energy of the system. You thereby find an expression which tells you how the energy—now expressed as a function of y, which itself is a function of the a_i‘s—varies as these unknown coefficients a_i are varied. So, these a_i‘s basically act as parameters of the model. Note carefully: the y function is the primary unknown function, but in variational calculus, you do the curve-fitting with respect to some other equation.

So, the difference between simple curve-fitting and variational methods is the following. In simple curve-fitting, you fit the curve to concrete data values. In variational calculus, you fit an expression derived by substituting the curve into some equations (not data), and then derive some further equations that show how some measure like energy changes with variations in the parameters. You then adjust the parameters so as to minimize that abstract measure.
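
To see the full loop in the smallest possible case (a standard textbook example, not my method): for the hydrogen atom in atomic units, the trial function \psi(r) = e^{-\alpha r} leads, after the substitutions, to the energy expression E(\alpha) = \alpha^2/2 - \alpha. Minimizing over the single parameter \alpha recovers the exact ground state:

```python
from scipy.optimize import minimize_scalar

E = lambda alpha: 0.5 * alpha**2 - alpha    # energy as a function of the parameter
res = minimize_scalar(E, bounds=(0.1, 5.0), method='bounded')
print(res.x, res.fun)                       # alpha = 1.0, E = -0.5 hartree (exact)
```

For helium, of course, the trial functions and the integrals are far messier; that is where Hylleraas’s skill came in.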

Coming back to the helium atom, there is a nucleus with two protons inside it, and two electrons that go around the nucleus. The nucleus pulls both the electrons, but the two electrons themselves repel each other. (Unlike and like charges.) When one electron strays near the nucleus, it temporarily decreases the effective pull exerted by the nucleus on the other electron. This is called the screening effect. In short, when one electron goes closer to the nucleus, the other electron feels as if the nucleus had discharged a little bit. The effect gets more and more pronounced as the first electron goes closer to the nucleus. The nucleus acts as if it had only one proton when the first electron is at the nucleus. The QM particles aren’t abstractions from the rigid bodies of Newtonian mechanics; they are just singularity conditions in the aetherial fields. So, it’s easily possible that an electron sits at the same place where the two protons of the nucleus are.

One trouble with using the variational techniques for problems like modeling the helium atom is this: they model the screening effect using a numerically reasonable but physically arbitrary trial function. Using such a technique can give a very accurate result for the bonding energy, provided that the person building the variational model is smart, as Hylleraas sure was. But the trial function is just guess-work. It can’t be said to capture any physics as such. Let me give an example.

Suppose that some problem from physics is such that a 5th-degree polynomial happens to be the physically accurate form of the solution for it. However, you don’t know the analytical solution, not even its form.

Now, the variational technique doesn’t prevent you from using a cubic polynomial as the trial function. That’s because, even if you use a cubic polynomial, you can still get to the same total system energy.

The actual calculations are far more complicated, but just as a fake example to illustrate my main point, suppose for a moment that the area under the solution curve is the target criterion (and not a more abstract measure like energy). Now, by adjusting the height and shape of a cubic polynomial, you can always arrange for it to give the right area under the curve. Now, the funny part is this. If the trial function we choose is only cubic, then it is certain to miss, as a matter of general principle, all the information related to the 4th- and 5th-order derivatives. So, the solution will have a lot of higher-order physics deleted from itself. It will be a bland solution; something like a ghost of the real thing. But it can still give you the correct area under the curve. If so, it still fulfills the variational criterion.

Coming back to the use of variational techniques in QM, like Hylleraas’ method:

It can give a very good answer (even an arbitrarily accurate answer) for the energy. But the trial function can still easily miss a lot of physics. In particular, it is known that the wavefunctions (actually, “orbitals”) won’t turn out to be accurate; they won’t depict physical entities.

Another matter: These techniques work not in the physical space but in the configuration space. So, the opportunity of taking what properly belongs to Raam and giving it to Shaam is not just ever-present but even more likely.

An even simpler example is this. Suppose you are given 100 bricks and asked to build a structure, for a wall, on a given area of the ground. You can use them to arrange one big tower in the wall, two towers, whatever… There would still be, in all, 100 bricks sitting on the same area of the ground. The shapes may differ; the variational technique doesn’t care for the shape. Yet, realize, having accurate atomic orbitals means getting the shape of the wall right too, not just dumping 100 bricks on the same area.


6. Why waste time on yet another method, when a more accurate method has been around for some nine decades?

“OK, whatever” you might say at this point. “But if the variational technique was OK by Hylleraas, and if it’s been OK for the entire community of physicists for all these years, then why do you still want to waste your time and invent just another method that’s not as accurate anyway?”

My answer:

Firstly, my method isn’t an invention; it is a discovery. My calculation method directly follows the fundamental principles of physics through and through. Not a single postulate of the mainstream QM is violated or altered; I merely have added some further postulates, that’s all. These theoretical extensions fit perfectly with the mainstream QM, and using them directly solves the measurement problem.

Secondly, what I talked about was just an initial result, a very crude calculation. In fact, I have already improved the accuracy further; see below.

Thirdly, I must point out a possibility which your question didn’t cover at all. My point is that this actually isn’t an either-or situation. It’s not either a variational technique (like Hylleraas’s) or mine. Indeed, it would very definitely be possible to incorporate the more accurate variational calculations as just parts of my own calculations too. It’s easy to show that. That would mean combining “the best of both worlds”. At a broader level, the method would still follow my approach and thus be physically meaningful. But within a carefully delimited scope, trial-functions could still be used in the calculation procedures. … For that matter, even FDM doesn’t represent any real physics either. Another thing: FDM itself can be seen as just one—arguably the simplest—kind of variational technique. So, in that sense, even I am already using a variational technique, but only the simplest and crudest one. The theory could easily make use of both meshless and mesh-requiring variational techniques.

I hope that answers the question.


7. A little more advanced simulation for the helium atom in a 3D box:

With my computational experience, I knew that I was going to get a good result, even if the actual result was only estimated to be about -2.1 a.u.—vs. -2.9033 a.u. for the experimentally determined value.

But rather than increasing accuracy for its own sake, on the 12th and 13th January, I came to focus more on improving the “basic infrastructure” of the technique.

Here, I now recalled the essential idea behind the Quantum Monte Carlo method, and proceeded to implement something similar in the context of my own approach. In particular, rather than going over the entire (discretized) configuration space, I implemented code to sample only some points in it. This way, I could use bigger (i.e. more refined) meshes, and get better estimates.
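
In code terms, the change is tiny (illustrative; my actual sampling scheme may differ in its details):

```python
import numpy as np

# e.g., 69**3 = 328,509 interior nodes for the 71-nodes-per-side mesh
# mentioned below; only 1000 of them get the expensive eigsh() call.
M = 69**3
rng = np.random.default_rng()
sampled = rng.choice(M, size=1000, replace=False)
```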

I also carefully went through the logic used in the second phase, and corrected the errors.

Then, using a box of 35 a.u. and 71 nodes per side of the cube (i.e., 328,509 nodes in the interior region of the domain), and using just 1000 sampled nodes out of them, I now found that the bonding energy was -2.67 a.u. Quite satisfactory (to me!)


8. Finally, a word about the dirty tricks department:

I happened to observe that with some choices of physical box size and computational mesh size, the bonding energy could go as low as -3.2 a.u. or even lower.

What explains such a behaviour? There is this range of results right from -2.1 a.u. to -2.67 a.u. to -3.2 a.u. …Note once again, the actual figure is: -2.90 a.u.

So, the computational results aren’t only on the higher side or only on the lower side. Instead, they form a band of values on both sides of the actual value. This is both good news and bad news.

The good plus bad news is that it’s all a matter of making the right numerical choices. Here, I will mention only 2 or 3 considerations.

As one consideration: to get more consistent results across various domain sizes and mesh sizes, what matters is the physical distance represented by each cell in the mesh. If you keep this in mind, then you can get results that fall in a narrow band. That’s a good sign.

As another consideration, the box size matters. In reality, there is no box, and the wavefunction extends to infinity. But a technique like FDM requires you to use a box. (There are other numerical techniques that can work with infinite domains too.) Now, if you use a larger box, then the Coulomb well looks just like the letter `T’. No curvature is captured with any significance. With a lot of the physical region showing a relatively flat PE portion, the role played by the nuclear attraction becomes less significant, at least in numerical work. In short, the atom in a box approaches a free-particle-in-a-box scenario! On the other hand, a very small box implies that each electron is screening the nuclear potential at almost all times. In effect, it’s as if you are modelling an H⁻ ion rather than a He atom!

As yet another consideration: The policy for choosing the depth of the potential energy matters. A concrete example might help.

Consider a 1D domain of, say, 5 a.u. Divide it using 6 nodes. Put a proton at the origin, and compute the electron’s PE. At the distance of 5 a.u., the PE is 1.0/5.0 = 0.2 a.u. At the node right next to the singularity, the PE is 1 a.u. What finite value should you give the PE at the nucleus? Suppose, following the half cell-side rule, you give it the value of 1.0/0.5 = 2 a.u. OK.

Now refine the mesh, say by having 11 nodes going over the same physical distance. The physically extreme node retains the same value, viz. 0.2 a.u. But the node next to the singularity now has a PE of 1.0/0.5 = 2 a.u., and the half cell-side rule now gives a value of 1.0/0.25 = 4.0 a.u. at the nucleus.

If you plot the two curves using the same scale, the difference is especially striking. In short, mesh refinement alone (keeping the same domain size) has resulted in keeping the same PE at the boundary but jacking up the PE at the nucleus’ position. Not only that, but the PE field now has a more pronounced curvature over the same physical distance. Eigenvalue problems are markedly sensitive to the curvature in the PE.
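
Here is the same arithmetic as a few lines of code (illustrative):

```python
import numpy as np

def capped_pe(n_nodes, L=5.0):
    x = np.linspace(0.0, L, n_nodes)
    dx = x[1] - x[0]
    return 1.0 / np.maximum(x, 0.5 * dx)   # half cell-side rule at the origin

for n in (6, 11):
    V = capped_pe(n)
    print(n, V[0], V[-1])   # peak: 2.0 -> 4.0 on refinement; boundary stays 0.2
```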

Now, realize that tweaking this one parameter alone can make the simulation zoom in on almost any value you like (within a reasonable range). I can always choose this parameter in such a way that even a relatively crude model comes to reproduce the experimental value of -2.9 a.u. very accurately—for the energy. The wavefunction may remain markedly jagged. But the energy can be accurate.

Every computational engineer/scientist understands such matters, especially those who work with singularities in fields. For instance, all computational mechanical engineers know how the stress values can change by an order of magnitude or more, depending on how you handle the stress concentrators. Singularities form a hard problem of computational science & engineering.

That’s why, what matters in computational work is not only the final number you produce. What matters perhaps even more are such things as: whether the method works well in terms of stability; the trends in the accuracy values (rather than their absolute values); whether the method can theoretically accommodate some more advanced techniques easily or not; how it scales with the size of the domain and with mesh refinement; etc.

If a method does fine on such counts, then the sheer accuracy number by itself does not matter so much. We can still say, with reasonable certainty, that the very theory behind the model must be correct.

And I think that’s what yesterday’s result points to. It seems to say that my theory works.


9. To wind up…

Despite all my doubts, I always thought that my approach is going to work out, and now I know that it does—nay, it must!

The 3-dimensional \Psi fields can actually be seen to be pushing the particles, and the trends in the numerical results are such that the dynamical assumptions I introduced for calculating the motions of the particles must be correct too. (Another reason for having confidence in the numerical results is that the dynamical assumptions are very simple, and so it’s easy to think through how they move the particles!) At the same time, though I didn’t implement it, I can easily see that the anti-symmetry property for at least a 2-particle system definitely comes out directly. The physical fields are 3-dimensional, and the configuration space comes out as a mathematical abstraction from them. I didn’t specifically implement any program to show detection probabilities, but I can see that they are going to come out right—at least for 2-particle systems.

So, the theory works, and that matters.

Of course, I will still have quite some work to do. Working out the remaining aspects of spin, for one thing. A system of three interacting particles would also be nice to work through and to simulate. However, I don’t know which system I could/should pick up. So, if you have any suggestions for simulating a 3-particle system—some well-known results—then do let me know. Yes, there still are chances that I might need to tweak the theory a little bit here and a little bit there. But the basic backbone of the theory, I am now quite confident, is going to stand as is.

OK. One last point:

The physical fields of \Psi, over the physical 3-dimensional space, have primacy. Due to the normalization constraint, in real systems, there are no Dirac’s delta-like singularities in these wavefunctions. The singularities of the Coulomb field do enter the theory, but only as devices of calculations. Ontologically, they don’t have a primacy. So, what primarily exist are the aetherial, complex-valued, wavefunctions. It’s just that they interact with each other in such a way that the result is as if the higher-level V term were to have a singularity in it. Indeed, what exists is only a single 3-dimensional wavefunction; it is us who decompose it variously for calculational purposes.

That’s the ontological picture which seems to be emerging. However, take this point with a pinch of salt; I still haven’t pursued threads like these; I have been too busy just implementing code, debugging it, and finding and comparing results. …


Enough. I will start writing the theory document some time in the second half of the next week, and will try to complete it by mid-February. Then, everything will become clear to you. The cleaned-up and reorganized Python scripts will also be provided at that time. For now, I just need a little break. [BTW, if in my …err… “exuberance” online last night I have offended someone, my apologies…]

For obvious reasons, I think that I will not be blogging for at least two weeks…. Take care, and bye for now.


A song I like:

(Western, pop): “Lay all your love on me”
Band: ABBA

[A favourite since my COEP (UG) times. I think I looked up the words only last night! They don’t matter anyway. Not for this song, and not to me. I like its other attributes: the tune, the orchestration, the singing, and the sound processing.]


History:
— 2021.01.14 21:01 IST: Originally published
— 2021.01.15 16:17 IST: Very few, minor, changes overall. Notably, I had forgotten to type the powers of the terms in the illustrative polynomial for the trial function (in the section on variational methods), and now corrected it.

Some comments on QM and CM—Part 2: Without ontologies, “classical” mechanics can get very unintuitive too. (Also, a short update.)

We continue from the last post. If you haven’t read and understood it, it can be guaranteed that you won’t understand anything from this one! [And yes, this post is not only long but also a bit philosophical.]


The last time, I gave you a minimal list of different ontologies for physics theories. I also shared a snap of my hurriedly jotted (hand-written) note. In this post, I will explain what I meant by that note.


1. In the real world, you never get to see the objects of “classical” mechanics:

OK, let’s first take a couple of ideas from Newtonian mechanics.

1.1. Point-particles:

The Newtonian theory uses a point particle. But your perceptual field never holds the evidence for any such object. The point particle is an abstraction. It’s an idealized (conceptual-level) description of a physical object, a description that uses the preceding mathematical ideas of limits (in particular, the idea of the vanishingly small size).

The important point to understand here isn’t that the point-particle is not visible. The crucial point here is: it cannot be visible (or even made visible, using any instrument) because it does not exist as a metaphysically separate object in the first place!

1.2. Rigid bodies:

It might come as a surprise to many, esp. to mechanical engineers, but something similar can also be said for the rigid body. A rigid body is a finite-sized object that doesn’t deform (and unless otherwise specified, doesn’t change any of its internal fields like density or chemical composition). Further, it never breaks, and all its parts react instantaneously to any forces exerted on any part of it. Etc.

When you calculate the parabolic trajectory of a cricket ball (neglecting the air resistance), you are not working with any entity that can ever be seen/touched etc.—in principle. In your calculations, in your theory, you are only working with an idea, an abstraction—that of a rigid body having a center of mass.

Now, it just so happens that the concepts from the Newtonian ontologies are so close to what is evident to you in your perceptual field, that you don’t even notice that you are dealing with any abstractions of perceptions. But this fact does not mean that they cease to be abstract ideas.


2. Metaphysical locus of physics abstractions, and epistemology of how you use them:

2.1. Abstractions do exist—but only in the mind:

In general, what’s the metaphysics of abstractions? What is the metaphysical locus of its existence?

An abstraction exists as a unit of mental integration—as a concept. It exists in your mind. A concept doesn’t have an existence apart from, or independent of, the men who know and hold that concept. A mental abstraction doesn’t exist in physical reality. It has no color, length, weight, temperature, location, speed, momentum, energy, etc. It is a non-material entity. But it still exists. It’s just that it exists in your mind.

In contrast, the physical objects to which the abstractions of objects make a reference, do exist in the physical reality out there.

2.2. Two complementary procedures (or conceptual processings):

Since the metaphysical locus of the physical objects and the concepts referring to them are different, there have to be two complementary and separate procedures, before a concept of physics (like the ideal rigid body) can be made operational, say in a physics calculation:

2.2.1. Forming the abstraction:

First, you have to come to know that concept—you either learn it, or, if you are an original scientist, you discover/invent it. Next, you have to hold this knowledge, and also be able to recall and use it as a part of any mental processing related to that concept. Now, since the concept of the rigid body belongs to the science of physics, its referents must be part of the physical aspects of existents.

2.2.2. Applying the abstraction in a real-world situation:

In using a concept, then, you have to be able to consider a perceptual concrete (like a real cricket ball) as an appropriate instance of the already formed concept. Taking this step means: even if a real ball is deformable or breakable, you silently announce to yourself that in situations where such things can occur, you are not going to apply the idea of the rigid body.

The key phrases here are: “inasmuch as,” “to that extent,” and “is a.” The mental operation of regarding a concrete object as an instance of a concept necessarily involves you silently assuming this position: “inasmuch as this actual object (from the perceptual field) shows the same characteristics, in the same range of ‘sizes’, as what I already understand by the concept XYZ, therefore, to that extent, this actual object ‘is a’ XYZ.”

2.2.3. Manipulation of concepts at a purely abstract level is possible (and efficient!):

As the next step, you have to be able to directly manipulate the concept as a mere unit from some higher-level conceptual perspective. For example, as in applying the techniques of integration using Newton’s second law, etc.

At this stage, your mind isn’t explicitly going over the defining characteristics of the concept, its relation to perceptual concretes, its relation to other concepts, etc.

Without all such knowledge at the center of your direct awareness, you still are able to retain a background sense of all the essential properties of the objects subsumed by the concept you are using. Such a background sense also includes the ideas, conditions, qualifications, etc., governing its proper usage. That’s the mental faculty automatically working for you when you are born a human.

You only have to will, and the automatic aspects of your mind get running. (More accurately: Something or the other is always automatically present at the background of your mind; you are born with such a faculty. But it begins serving your purpose when you begin addressing some specific problem.)

All in all: You do have to direct the faculty which supplies you the background context, but you can do it very easily, just by willing that way. You actually begin thinking on something, and the related conceptual “material” is there in the background. So, free will is all that it takes to get the automatic sense working for you!

2.2.4. Translating the result of a calculation into physical reality:

Next, once you are done with working with ideas at the higher conceptual level, you have to be able to “translate the result back to reality”. You have to be able to see what perceptual-level concretes are denoted by the concepts related to the result of the calculation, its size, its units, etc. The key phrases here again are: “inasmuch as” and “to that extent”.

For example: “Inasmuch as the actual cricket ball is a rigid body, after being subjected to so much force, by the laws governing rigid bodies (because the laws concern themselves only with rigid bodies, not with cricket balls), a rigid body should be precisely at 100.0 meters after so much time. Inasmuch as the cricket ball can also be said to have an exact initial position (as for the rigid body used in the calculations), its final position should be exactly 100 meters away. Inasmuch as a point on the ground can be regarded as being exactly 100 meters away (in the right direction), the actual ball can also be expected, to that extent, to be at [directly pointing out] that particular spot after that much time.” Etc.

2.3: A key take-away:

So, an intermediate but big point I’ve made is:

Any theory of classical mechanics too makes use of abstractions. You have to undertake procedures involving the mappings between concretes and abstractions, in classical mechanics too.

2.4. Polemics:

You don’t see a rigid body. You see only a ball. You imagine a rigid body in the place of the given ball, and then decide to do the intermediate steps only with this instance of the imagination. Only then can you invoke the physics theory of Newtonian mechanics. Thus, the theory works purely at the mental abstractions level.

A theory of physics is not an album of photographs; an observation being integrated into a theory is not just a photograph. On the other hand, a sight of a ball is not an abstraction; it is just a concretely real object in your perceptual field. It’s your mind that makes the connection between the two. Only then can any conceptual knowledge be acquired or put to use. Acquisition of knowledge and application of knowledge are two sides of the same coin. Both involve seeing a concrete entity as an instance subsumed under a concept or a mental perspective.

2.5. These ideas have more general applicability:

What we discussed thus far is true for any physics theory: whether “classical” mechanics (CM) or quantum mechanics (QM).

It’s just that the first three ontologies from the last post (i.e. the three ontologies with “Newtonian” in their name) have such abstractions that it’s very easy to establish the concretes-to-abstractions correspondence for them.

These theories have become—with the hindsight of two or three centuries, and with the absorption of their crucial integrative elements into our very culture—so easy for us to handle, and they seem so close to “the ground”, that we have to think almost nothing in order to regard a cricket ball as a rigid body. Doesn’t matter. The requirement of you willingly having to establish the correspondence between the concretes and the abstractions (and vice versa) still exists.

Another thing: The typical applications of all the five pre-quantum ontologies also fall within the limited perceptual range of man, though this cannot be regarded as the distinguishing point of “classical” mechanics. This is an important point, so let me spend a little time on it.

Trouble begins right from Fourier’s theory.


3. “Classical” mechanics is not without tricky issues:

3.1. Phenomenological context for the Fourier theory is all “classical”:

In its original form, Fourier’s theory dealt with very macroscopic or “everyday” kinds of objects. The phenomenological context which gave rise to Fourier’s theory was: the transmission of heat from the Sun by diffusion into the subterranean layers of the earth, making it warm. That was the theoretical problem which Fourier was trying to solve when he invented the theory that goes by his name.

Actually, that problem was a bit more complicated. A simpler formulation of the same problem would be: quantitatively relating the thermal resistance offered by wood vs. metal, etc. The big point I want to note here is: All these (the earth, a piece of wood or metal) are very, very “everyday” objects. You wouldn’t hesitate to say that they are objects of “classical” physics.

3.2. But the Fourier theory makes weird predictions in all classical physics too:

But no matter how classical these objects look, an implication is this:

The Fourier theory ends up predicting infinite velocity for signal propagation for “classical” objects too.

This is a momentous implication. Make sure you understand it right. Pop-sci writers never highlight this point. But it’s crucial. The better you understand it, the less mysterious QM gets!

In concrete terms, what the Fourier theory says is this:

If you pour a cup of warm water on ground at the North pole, no doubt the place will get warmer for some time. But this is not the only effect your action would have. Precisely and exactly at the same instant, the South pole must also get warmer, albeit to a very small extent. Not only the South Pole, every object at every place on the earth, including the cell phone of your friend sitting in some remote city also must get warmer. [Stretching the logic, and according a conduction mode also to the intergalactic dust: Not just that, every part of the most distant galaxies too must get warmer—in the same instant.] Yes, the warming at remote places might be negligibly small. But in principle, it is not zero.

And that’s classical physics of ordinary heat conduction for you.
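
If you want to see this in a few lines of code: the 1D heat kernel, u(x,t) = \dfrac{1}{\sqrt{4\pi D t}}\, e^{-x^2/(4Dt)}, is strictly positive at every x for every t > 0, however small t may be. A trivial demonstration, with made-up numbers:

```python
import numpy as np

D, t = 1.0, 0.05       # diffusivity and a small elapsed time (made-up values)
x = 10.0               # a point "far away" from where the heat was deposited
u = np.exp(-x**2 / (4*D*t)) / np.sqrt(4*np.pi*D*t)
print(u)               # ~ 1e-217: unimaginably small, but already nonzero
# Push x further and float64 underflows to exactly zero, but the mathematics
# itself never does: the kernel is positive everywhere for any t > 0.
```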

3.3. Quantum entanglement and Heisenberg’s uncertainty principle are direct consequences of the same theory:

Now, tell me, how intuitive were Fourier’s predictions?

My answer: Exactly as unintuitive as is the phenomenon of quantum entanglement—and, essentially, for exactly the same ontological-physical-mathematical reasons!

Quantum entanglement is nothing but just another application of the Fourier theory. And so is Heisenberg’s uncertainty principle. It too is a direct consequence of the Fourier theory.

3.4. Another key take-away:

So, the lesson is:

Not all of “classical” mechanics is as “intuitive” as you were led to believe.

3.5. Why doesn’t any one complain?

If classical physics too is that unintuitive, then how come no one goes around complaining about it?

The reason is this:

Classical mechanics involves and integrates a conceptually smaller range of phenomena. Most of its application scenarios too are well understood—if not by you, then at least by some learned people, and they have taken care to explain all these scenarios to you.

For instance, if I ask you to work out how the Coriolis force works for two guys sitting diametrically opposite each other on a rotating disco floor and throwing balls at each other, I am willing to take a good bet that you won’t be able to work out everything on your own using vector analysis and Newton’s laws. So, this situation should actually be non-intuitive to you. It in fact is: Without searching on the ‘net, be quick and tell me whether the ball veers in the direction of rotation or opposite to it. See? It’s just that no pop-sci authors highlight issues like this, and so, no philosophers take notice. (And, as usual, engineers don’t go about mystifying anything.)

So, what happens in CM is that some expert works out the actual solution, explains to you. You then snatch some bits and pieces, may be just a few clues from his explanation, and memorize them. Slowly, as the number of such use-cases increases, you get comfortable enough with CM. Then you begin to think that CM is intuitive. And then, the next time when your grandma asks you how come that motorcyclist spinning inside the vertical well doesn’t fall off, you say that he sticks to the wall due to the centrifugal force. Very intuitive! [Hint, hint: Is it actually centrifugal or centripetal?]

OK, now let’s go over to QM.


4. The abstract-to-concretes mappings are much trickier when it comes to QM:

4.1. The two-fold trouble:

The trouble with QM is two-fold.

First of all, the range of observations (or of phenomenology) underlying it is not just a superset of CM, it’s a much bigger superset.

Second: Physicists have not been able to work out a consistent ontology for QM. (Most often, they have not even bothered to do that. But I was talking about reaching an implicit understanding to that effect.)

So, they are reduced to learning (and then teaching) QM in reference to mathematical quantities and equations as the primary touch-stones.

4.2. Mathematical objects refer to abstract mental processes alone, not to physical objects:

Now, mathematical concepts have this difference. They are not only higher-level abstractions (on top of physical concepts), but their referents too in themselves are invented and not discovered. So, it’s all in the mind!

It’s true that physics abstractions, qua mental entities, don’t exist in physical reality. However, it also is true that the objects (including their properties/characteristics/attributes/actions) subsumed under physics concepts do have a physical existence in the physical world out there.

For instance, a rigid body does not exist physically. But highly rigid things like stones and highly pliable or easily deformable things like a piece of jelly or an easily fluttering piece of cloth, do exist physically. So, observing them all, we can draw the conclusion that stones have much higher rigidity than the fluttering flag. Then, according an imaginary zero deformability to an imaginary object, we reach the abstraction of the perfectly rigid body. So, while the rigid body itself does not exist, rigidity as such definitely is part of the natural world (I mean, of its physical aspects).

But not so with the mathematical abstractions. You can say that two (or three or n number of) stones exist in a heap. But what actually exists are only stones, not the number 2, 3, or n. You can say that a wire-frame has edges. But you don’t thereby mean that its edges are geometrical lines, i.e., objects with only length and no thickness.

4.3. Consequence: How physicists hold, and work with, their knowledge of the QM phenomena:

Since physicists could not work out a satisfactory ontology for QM, and since the concepts of maths do not have direct referents in the physical reality apart from the human consciousness processing them, their understanding of QM does tend to be a lot more shaky (the comparison being with their understanding of the pre-quantum physics, esp. the first three ontologies).

As a result, physicists have had to develop their understanding of QM via a rather indirect route: by applying the maths to ever more concrete cases of application, verifying that the solutions are borne out by the experiments (and noting in what sense they are borne out), and then trying to develop some indirect kind of intuitive feel, somehow—even if the objects that do the quantum mechanical actions aren’t clear to them.

So, in a certain sense, the most talented quantum physicists (including Nobel laureates) use exactly the same method as you and I use when we are confronted with the Coriolis forces. That, more or less, is the situation they find themselves in.

The absence of a satisfactory ontology has been the first and foremost reason why QM is so extraordinarily unintuitive.

It also is the reason why it’s difficult to see CM as an abstraction from QM. Ask any BS in physics. Chances are 9 out of 10 that he will quote something like Planck’s constant going to zero or so. Not quite.

4.4. But why didn’t any one work out an ontology for QM?

But what were the reasons that physicists could not develop a consistent ontology when it came to QM?

Ah. That’s too complicated. At least 10 times more complicated than all the epistemology and physics I’ve dumped on you so far. That’s because, now we get into pure philosophy. And you know where the philosophers sit? They all sit on the Humanities side of the campus!

But to cut a long story short, very short, so short that it’s just a collage-like thingie: There are two reasons for that. One simple and one complicated.

4.4.1. The simple reason is this: If you don’t bother with ontologies, if you dismiss ideas like the aether, and if you go free-floating towards ever higher and still higher abstractions (especially with maths), then you won’t be able to get even EM right. The issue of extracting the “classical” mechanical attributes, variables, quantities, etc. from the QM theory simply cannot arise in such a case.

Indeed, physicists don’t recognize the very fact that ontologies are more basic to physics theories. Instead, they whole-heartedly accept and vigorously teach and profess the exact opposite: They say that maths is most fundamental, even more fundamental than physics.

Now, since QM maths is already available, they argue, it’s only a question of going about looking for a correct “interpretation” for this maths. But since things cannot be very clear with such an approach, they have ended up proposing some 14+ (more than fourteen) different interpretations. None works fully satisfactorily. But some then say that the whole discussion about interpretation is bogus. In effect, as Prof. David Mermin characterized it: “Shut up and calculate!”

That was the simple reason.

4.4.2. The complicated reason is this:

The nature of the measurement problem itself is like that.

Now, here, I find myself in a tricky position. I think I’ve cracked this problem. So, even if I think it was a very difficult problem to crack, please allow me to not talk a lot more about it here; else, doing so runs the risk of looking like blowing your own tiny piece of work out of all proportion.

So, to appreciate why the measurement problem is complex, refer to what others have said about this problem. Coleman’s paper gives some of the most important references too (e.g., von Neumann’s process 1 vs. process 2 description) though he doesn’t cover the older references like the 1927 Bohr-Einstein debates etc.

Then there are others who say that the measurement problem does not exist; that we have to just accept a probabilistic OS at the firmware level by postulation. How to answer them? That’s a homework left for you.


5. A word about Prof. Coleman’s lecture:

If Prof. Coleman’s lecture led you to conclude that everything was fine with QM, you got it wrong. In case this was his own position, then, IMO, he too got it wrong. But no, his lecture was not worthless. It had a very valuable point.

If Coleman were conversant with the ontological and epistemological points we touched on (or hinted at), then he would have said something to the following effect:

All physics theories presuppose a certain kind of ontology. An ontology formulates and explains the broad nature of objects that must be assumed to exist. It also puts forth the broad nature of causality (objects-identities-actions relations) that must be assumed to be operative in nature. The physics theory then makes detailed, quantitative, statements about how such objects act and interact.

In nature, physical phenomena differ very radically. Accordingly, the phenomenological contexts assumed in different physical theories also are radically different. Their radical distinctiveness also gets reflected in the respective ontologies. For instance, you can’t explain the electromagnetic phenomena using the pre-EM ontologies; you have to formulate an entirely new ontology for the EM phenomena. Then, you may also show how the Newtonian descriptions may be regarded as abstractions from the EM descriptions.

Similarly, we must assume an entirely new kind of ontological nature for the objects if the maths of QM is to make sense. Trying to explain QM phenomena in terms of pre-quantum ontological ideas is futile. On the other hand, if you have a right ontological description for QM, then with its help, pre-QM physics may be shown as being a higher-level, more abstract, description of reality, with the most basic level description being in terms of QM ontology and physics.

Of course, Coleman wasn’t conversant with philosophical and ontological issues. So, he made pretty vague statements.


6. Update on the progress of my new approach. But the RSI keeps coming back again and again!

I am by now more confident than ever that my new approach is going to work out.

Of course, I still haven’t conducted simulations, and this caveat is going to be there until I conduct them. A simulation is a great way to expose the holes in your understanding.

So take my claim with a pinch of salt, though I must also hasten to note that with each passing fortnight (if not week), the quantity of the salt which you will have to take has been, pleasantly enough (at least for me), decreasing monotonically (even if not necessarily always exponentially).

I had written a preliminary draft for this post about 10 days ago, right when I wrote my last post. RSI had seemed to have gone away at that time. I had also typed a list of topics (sections) to write to cover my new approach. It carried some 35+ sections.

However, soon after posting the last blog entry here, the RSI began growing back again. So, I have not been able to make any substantial progress since the last post. About the only thing I could add was some 10–15 more section/topic names.

The list of sections/topics includes programs too. However, let me hasten to add: Programs can’t be written in ink—not as of today, anyway. They have to be typed in. So, the progress is going to be slow. (RSI.)

All in all, I expect to have some programs and documentation ready by the time Q1 of 2021 gets over. If the RSI keeps hitting back (as it did the last week), then make it end-Q2 2021.

OK. Enough for this time round.


A song I like:

[When it comes to certain music directors, esp. from Hindi film music, I don’t like the music they composed when they were in their element. For example, Naushad. Consider the song: मोहे पनघट पे (“mohe panghat pe”). I can sometimes appreciate the typical music such composers have produced, but only at a somewhat abstract level—it never quite feels like “my kind of music” to me. Something similar holds for the songs that Madan Mohan is most famous for. Madan Mohan was a perfectionist, and unlike Naushad, IMO, he does show originality too. But, somehow, his sense of life feels too sad/wistful/even fatalistic to me. Sadness is OK, but a sense of inevitability (or at least irremovability) of suffering is what gets in the way. There are exceptions of course. Like the present song by Naushad. And in fact, all songs from this movie, viz. साथी (“saathi”). These are so unlike Naushad!

I ran another song from this movie here a while ago (viz. मेरे जीवन साथी, कली थी मै तो प्यासी (“mere jeevan saathee, kalee thee main to pyaasee”)).

That song had actually struck me after a gap of years (maybe even a decade or two), when I was driving my old car on the Mumbai–Pune Expressway. The air-conditioner of my car is almost never functional (because I almost never have the money to get it repaired). In any case, the a/c was neither working nor even necessary on that particular day late in the year. So, the car windows were down. It was pretty early in the morning; there wasn’t much traffic on the expressway; not much wind either. The sound of the new tires made a nice background rhythm of sorts. The sound was very periodic, because of the regularity of the waviness that comes to occur on cement-concrete roads after a while.

That waviness? It’s an interesting problem from mechanics. Take a photo of a long section of the railway tracks while standing in the middle, especially when the sun is rising or setting, and you see the waviness that has developed on the rail-tracks too—they go up and down. The same phenomenon is at work in both cases. Broadly, it’s due to vibrations—a nonlinear interaction between the vehicle, the road and the foundation layers underneath. (If I recall it right, in India, IIT Kanpur had done some sponsored research on this problem (and on the related NDT issues) for Indian Railways.)

So, anyway, to return to the song, it was that rhythmical sound of the new tires on the Mumbai–Pune Expressway which prompted something in my mind, and I suddenly recalled the above-mentioned song (viz. मेरे जीवन साथी, कली थी मै तो प्यासी (“mere jeevan saathee, kalee thee main to pyaasee”)). Some time later, I ran it here on this blog. (PS: My God! The whole thing was in 2012! See the songs section, and my then comments on Naushad, here [^])

OK, cutting back to the present: Recently, I recalled the songs from this movie, and began wondering about the twin questions: (1) How did I end up liking anything by Naushad? and (2) How could Naushad compose anything that was so much out of his box (actually, the box of all his traditional classical music teachers)? Then, a quick glance at the comments section of some song from the same film enlightened me. (I mean at YouTube.) I came to know a new name: “Kersi Lord,” and ran a quick search on it.

Turns out, Naushad was not alone in composing the music for this film: साथी (“saathee”). He had taken assistance from Kersi Lord, a musician who was quite well-versed with the Western classical and Western pop music. (Usual, for a Bawa from Bombay, those days!) The official credits don’t mention Kersi Lord’s name, but just a listen is enough to tell you how much he must have contributed to the songs of this collaboration (this movie). Yes, Naushad’s touch is definitely there. (Mentally isolate Lata’s voice and compare to मोहे पनघट पे (“mohe panghat pe”).) But the famous Naushad touch is so subdued here that I actually end up liking this song too!

So, here we go, without further ado (but with a heartfelt appreciation to Kersi Lord):

(Hindi) ये काैन आया, रोशन हो गयी (“yeh kaun aayaa, roshan ho gayee”)
Singer: Lata Mangeshkar
Music: [Kersi Lord +] Naushad
Lyrics: Majrooh Sultanpuri

A good quality audio is here [^].

]


PS: Maybe one little editing pass tomorrow?

History:
— 2020.12.19 23:57 IST: First published
— 2020.12.20 19:50 IST and 2020.12.23 22:15 IST: Some very minor (almost insignificant) editing / changes to formatting. Done with this post now.


A general update. Links.

I. A general update regarding my on-going research work (on my new approach to QM):

1.1 How the development is actually proceeding:

I am working through my new approach to QM. These days, I write down something and/or implement some small and simple Python code snippets (< 100 LOC each) every day. So, it’s almost on a daily basis that I am grasping something new.

The items of understanding are sometimes related to my own new approach to QM, and at other times, just about the mainstream QM itself. Yes, in the process of establishing a correspondence of my ideas with those of the mainstream QM, I am getting to learn the ideas and procedures from the mainstream QM too, to a better depth. … At other times, I learn something about the correspondence of both the mainstream QM and my approach, with the classical mechanics.

Yes, at times, I also spot some inconsistencies within my own framework! It too happens! I’ve spotted several “misconceptions” that I myself have had—regarding my own approach!

You see, when you are ab initio developing a new theory, it’s impossible to pursue the development very systematically. It’s impossible to be right about everything, right from the beginning. That’s because the very theory itself is not fully known to you while you are still developing it! The neatly worked out structure, its best possible presentations, the proper hierarchical relations… all of these emerge only some time later.

Yes, you do have some overall, “vaguish” idea(s) about the major themes that are expected to hold the new theory together. You do know many elements that must be definitely there.

In my case, such essential themes or theoretical elements go, for example, like: the energy conservation principle, the reality of some complex-valued field, the specific (natural) form of the non-linearity which I have proposed, my description of the measurement process and of Born’s postulate, the role that the Eulerian (fixed control volume-based) formulations play in my theorization, etc.

But all these are just elements. Even when tied together, they still amount to only an initial framework. Many of these elements may eventually turn out to play an over-arching role in the finished theory. But during the initial stages (including the stage I am in), you can’t even tell which element is going to play a greater role. All the elements are just loosely (or flexibly) held together in your mind. Such a loosely held set does not qualify to be called a theory. There are lots and lots (and lots) of details that you still don’t even know exist. You come to grasp these only on the fly, only as you are pursuing the “fleshing out” of the “details”.

1.2. Multiple threads of seemingly haphazard thoughts:

Once the initial stage gets over, and you are going through the fleshing out stage, the development has a way of progressing on multiple threads of thought, simultaneously.

There are insights or minor developments, or simply new validations of some earlier threads, which occur almost on a daily basis. Each is a separate piece of a small little development; it makes sense to you; and all such small little pieces keep adding up—in your mind and in your notebooks.

Still, there is not much to share with others, simply because in the absence of a knowledge of all that’s going through your mind, any pieces you share are simply going to look as if they were very haphazard, even “random”.

1.3. At this stage, others can easily misunderstand what you mean:

Another thing. There is also a danger that someone may misread you.

For example, he may misread you because he himself is not clear on many other points which you have not noted explicitly.

Or, may be, you have noted your points somewhere, but he hasn’t yet gone through them. In my case, it is the entirety of my Ontologies series [^]. … Going by the patterns of hits at this blog, I doubt whether any single soul has ever read through them all—apart from me, that is. But this entire series is very much alive in my mind when I note something here or there, including on the Twitter too.

Or, sometimes, there is a worse possibility too: The other person may read what you write quite alright, but what you wrote down itself was somewhat misleading, perhaps even wrong!

Indeed, recently, something of this sort happened when I had a tiny correspondence with someone. I had given a link to my Outline document [^]. He went through it, and then quoted from it in his reply to me. I had said, in the Outline document, that the electrons and protons are classical point-particles. His own position was that they can’t possibly be. … How could I possibly reply to him? I actually could not. So, I did not!

I distinctly remember that right when I was writing this point in the Outline document, I had very much hesitated precisely at it. I knew that the word “classical” was going to create a lot of confusions. People use it almost indiscriminately: (i) for the ontology of Newtonian particles, (ii) for the ontology of Newtonian gravity, (iii) for the ontology of the Fourier theory (though very few people think of this theory in the context of ontologies), (iv) for the ontology of EM as implied by Maxwell, (v) for the ontology of EM as Lorentz was striving to get at, and succeeded brilliantly in so many essential respects (but not all, IMO), etc.

However, if I were to spend time on getting this portion fully clarified (first to myself, and then for the Outline document), then I also ran the risk of missing out on noting many other important points which also were fairly nascent to me (in the sense that I had not yet noted them down in a LaTeX document). These points had to be noted on priority, right in the Outline document.

Some of these points were really crucial—the V(x,t) field as being completely specified in reference to the elementary charges alone (i.e. no arbitrary PE fields), the non-linearity in \Psi(x,t), the idea that it is the Instrument’s (or Detector’s) wavefunction which undergoes a catastrophic change—and not the wavefunction of the particle being measured, etc. A lot of such points. These had to be noted, without wasting my time on what precisely I meant when I used the word “classical” for the point-particle of the electron etc.

Yes, I did identify that the elementary particles were to be taken as conditions in the aether. I did choose the word “background object” merely in order to avoid any confusion with Maxwell’s idea of a mechanical aether. But I myself wasn’t fully clear on all aspects of all the ideas. For instance, I still was not familiar with the differences between Lorentz’s aether and Maxwell’s.

All in all, a document like the Outline document had to be an incomplete document; it had to come out in the nature of a hurried job. In fact, it was so. And I identified it as such.

I myself gained a fuller clarity on many of these issues only while writing the Ontologies series, which happened some 7 months after putting out the Outline document online. And, it was even as recently as the last month (i.e., about 1.5 years after the Outline document) that I was still revising my ideas regarding the correspondence between QM and CM. … Indeed, this still remains a work in progress… I am maintaining handwritten notes and LaTeX files too (sort of like “journals” or “diaries”).

All in all, sharing a random snapshot of a work-in-progress always carries such a danger. If you share your ideas too early, while they still are being worked out, you might even end up spreading some wrong notions! And when it comes to theoretical work, there is no product-recall mechanism—at all! That would be detrimental to your goals, after all!

1.4 How my blogging is going to go, in the next few weeks:

So, though I am passing through a very exciting phase of development these days, and though I do feel like sharing something or the other on an almost daily basis, when I sit down and think of writing a blog post, unfortunately, I find that there is very little that I can actually share.

For this very reason, my blogging is going to be sparse over the coming weeks.

However, in the meanwhile, I might post some brief entries, especially regarding papers/notes/etc. by others. As in this post.

OTOH, if you want something bigger to think about, see the Q&A answers from my last post here. That material is enough to keep you occupied for a couple of decades or more… I am not joking. That’s what’s happened to others; it has happened to me; and I can guarantee you that it would happen to you too, so long as you keep forgetting whatever you’ve read about my new approach. You could then very easily spend decades and decades (and decades)…

Anyway, coming back to some recent interesting pieces by others…


II. Links:


2.1. Luboš Motl on TerraPower, Inc.:

Dr. Luboš Motl wrote a blog-post of the title “Green scientific illiteracy enters small nuclear reactors, too” [^]. This piece is a comment on TerraPower’s proposal. In case you didn’t know, TerraPower is a pet project of Bill Gates’.

My little note (on the local HDD), upon reading this post, had said something like, “The critics of this idea are right, from an engineering/technological viewpoint.”

In particular, I have too many apprehensions about using liquid sodium. Further, given the risk involved in distributing the sensitive nuclear material over all those geographically dispersed plants, this idea does become, err…, stupid.

In the above post, Motl makes reference to another post of his, one from 2019, regarding renewable energies like solar and wind. The title of this earlier post read: “Bill Gates: advocates of dominant wind & solar energy are imbeciles” [^]. Make sure to go through this one too. The calculation given in it is of a back-of-the-envelope kind, but it is impeccable. You can’t find fault with the calculation itself.

Of course, this does not mean that research on renewable energies should not be pursued. IMO, it should be!

It’s just that I want to point out a few things: (i) Motl chooses the city of Tokyo for his calculation, which IMO is an extreme case. Tokyo is a very highly dense city—both population-wise and on the count of the geographical density of industries (and hence, of industrial power consumption). There can easily be other places where the density of power consumption and the availability of natural renewable resources are better matched. (ii) Even then, calculations such as the one performed by Motl must be included in all analyses—and the cost of renewable energy must be calculated without factoring in the benefit of government subsidies. … Yes, research on renewable energy would still remain justified. (iii) Personally, I find the idea of converting wind/solar electricity into hydrogen more attractive. See my 2018 post [^] which had mentioned the idea of using hydrogen gas as a “flywheel” of sorts, in a distributed system of generation (i.e. without transporting the wind-generated hydrogen itself over long distances).
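Just to make the flavour of such calculations concrete, here is a tiny sketch of my own (with round, assumed numbers of my own choosing, not Motl’s figures): it estimates the land area of solar panels needed to supply 1 GW of average electric power. The insolation, efficiency and capacity-factor values below are generic ballpark assumptions, nothing more.

    # A back-of-the-envelope sketch in Python, with assumed round numbers.
    peak_insolation = 1000.0   # W/m^2: clear-sky noon; the standard round figure
    panel_efficiency = 0.20    # assumed: typical commercial silicon panels
    capacity_factor = 0.15     # assumed: averages over night, seasons, weather

    # Year-averaged electric output per square metre of panel:
    avg_power_per_m2 = peak_insolation * panel_efficiency * capacity_factor  # 30 W/m^2

    target_power = 1.0e9       # 1 GW of *average* power
    area_km2 = (target_power / avg_power_per_m2) / 1.0e6
    print(f"~{area_km2:.0f} km^2 of panels per GW of average power")  # ~33 km^2

A dense metropolis draws average power of the order of tens of GW, so the panel area alone runs into many hundreds of km^2. That is the essential point which such calculations drive home.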


2.2. Demonstrations on coupled oscillations and resonance at Harvard:

See this page [^]; the demonstrations are neat.

As to the relevance of this topic to my new approach to QM: The usual description of resonance proceeds by first stating a homogeneous differential equation, and then replacing the zero on the right-hand side with a term that stands for an oscillating driving force [^]. That is, m\ddot{x} + kx = 0 becomes m\ddot{x} + kx = F_0 \cos(\omega t). Thus, we specify a force-term for the driver, but the System under study is still being described with the separation vector (i.e. a displacement) as the primary unknown.

Now, just take the driver part of the equation, and think of it as a multi-scaled effect of a very big assemblage of particles whose motions themselves are fundamentally described using exactly the same kind of terms as those for the particles in the System, i.e., using displacements as the primary unknown. It is the multi-scaling procedure which transforms a fundamentally displacement-based description to a basically force-primary description. Got it? Hint below.

[Hint: In the resonance equation, it is assumed that form of the driving force remains exactly the same at all times: with exactly the same F_0, m, and \omega. If you replace the driving part with particles and springs, none of the three parameters characterizing the driving force will remain constant, especially \omega. They all will become functions of time. But we want all the three parameters to stay constant in time. …Now, the real hint: Think of the exact sinusoidal driving force as an abstraction, and multi-scaling as a means of reaching that abstraction.]
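To make the hint concrete, here is a minimal numerical sketch (my own toy construction, with assumed parameter values): a heavy driver mass M on a stiff spring K is coupled, through a weak spring k, to a light System mass m. As the ratio M/m grows, the coupling force which the driver exerts on the System approaches an exact sinusoid of constant amplitude and frequency, i.e., the textbook driving term F_0 \cos(\omega t) emerges as an abstraction.

    # Toy model (assumed values): a very heavy driver approximates a
    # constant-parameter sinusoidal driving force acting on a small system.
    # Plain numpy; RK4 time-stepping.
    import numpy as np

    M, K = 1.0e6, 1.0e6    # driver mass and spring (natural frequency = 1 rad/s)
    m, k = 1.0, 1.0        # system mass, and the weak coupling spring

    def rhs(s):
        X, VX, x, vx = s                # driver (X, VX), system (x, vx)
        F = k * (X - x)                 # force of the driver on the system
        return np.array([VX, (-K * X - F) / M, vx, F / m])

    s = np.array([1.0, 0.0, 0.0, 0.0])  # driver released from X = 1; system at rest
    dt, nsteps = 1.0e-3, 20000          # integrate for 20 time units
    drift = 0.0
    for i in range(1, nsteps + 1):
        k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        # compare the actual "driving term" k*X(t) with the ideal k*cos(t):
        drift = max(drift, abs(k * s[0] - k * np.cos(i * dt)))

    print("max deviation of k*X(t) from the ideal k*cos(t):", drift)

Re-run it with, say, M = K = 10.0, and the deviation becomes gross: the amplitude and the frequency of the “driving force” visibly wander, exactly as noted in the hint above.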


2.3 Visualization of physics at the University of St. Andrews:

Again, very neat [^]. The simulations here have a very simple GUI, but the design of the applets has been done thoughtfully. The scenarios are at a level more advanced than the QM simulations at PhET, University of Colorado [^].


2.4. The three-body problem:

The nonlinearity in \Psi(x,t) which I have proposed is, in many essential ways, similar to the classical N-body problem.

The simplest classical N-body problem is the 3-body problem. Rhett Allain says that the only way to solve the 3-body problem is numerically [^]. But make sure to at least cursorily note the special solutions mentioned in the Wiki [^]. This Resonance article (.PDF) [^] seems quite comprehensive, though I haven’t gone through it completely. Related, with pictures: a recent report on simulations from a search for “choreographies” (a technical term; it refers to trajectories that repeat) [^].

Sure, there could be trajectories that repeat for some minuscule set of initial conditions. But the general rule is that the 3-body problem already shows a sensitive dependence on initial conditions. Search the ‘net for the 4-body and 5-body problems. … In QM, we have 10^{23} particles. Cool, no?
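If you want to see both the points at once, i.e., a repeating “choreography” as well as the sensitivity, here is a minimal sketch (my own toy script; the figure-eight initial data are the published Chenciner–Montgomery values, while everything else, the time-step included, is an assumption made merely for illustration):

    # Planar gravitational 3-body problem, with G = 1 and unit masses.
    # Velocity-Verlet integration of the figure-eight choreography over one
    # period: once with the exact initial data, once with a small nudge.
    import numpy as np

    masses = np.ones(3)

    def accelerations(pos):
        # pairwise inverse-square accelerations; pos has shape (3, 2)
        acc = np.zeros_like(pos)
        for i in range(3):
            for j in range(3):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += masses[j] * r / np.linalg.norm(r) ** 3
        return acc

    def integrate(pos, vel, dt, nsteps):
        acc = accelerations(pos)
        for _ in range(nsteps):
            pos = pos + vel * dt + 0.5 * acc * dt * dt
            new_acc = accelerations(pos)
            vel = vel + 0.5 * (acc + new_acc) * dt
            acc = new_acc
        return pos

    # Chenciner-Montgomery figure-eight initial conditions:
    x1 = np.array([0.97000436, -0.24308753])
    v3 = np.array([-0.93240737, -0.86473146])
    pos0 = np.array([x1, -x1, [0.0, 0.0]])
    vel0 = np.array([-v3 / 2.0, -v3 / 2.0, v3])
    T = 6.32591398                       # one period of the choreography

    dt = 1.0e-4
    n = int(T / dt)
    err_exact = np.linalg.norm(integrate(pos0.copy(), vel0.copy(), dt, n) - pos0)

    nudged0 = pos0.copy()
    nudged0[0] += 0.01                   # nudge only the first body, slightly
    err_nudged = np.linalg.norm(integrate(nudged0.copy(), vel0.copy(), dt, n) - nudged0)

    print("return error, exact figure-eight:", err_exact)    # small: the orbit closes
    print("return error, nudged data      :", err_nudged)    # typically much larger

And for generic initial data (i.e., no choreography), nearby trajectories separate roughly exponentially in time, which is precisely the sensitive dependence mentioned above.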


2.5. Academic culture in India:

2.5.1: Max Born in IISc Bangalore:

Check out a blog post/article by Karthik Ramaswamy, of the title “When Raman brought Born to Bangalore” [^]. (H/t Luboš Motl [^].)

2.5.2: Academic culture in India in recent times—a personal experience involving the University of Pune, IIT Bombay, IIT Madras, and IISc Bangalore:

After going through the above story, may I suggest that you also go through my posts on the Mechanical vs. Metallurgy “Branch Jumping” issue. This issue decidedly came up in 2002 and 2003, when I went to IIT Bombay to try for admission to the PhD program in the Mechanical department. I tried multiple times. They remained adamant throughout 2002–2003. An associate professor from the Mechanical department was willing to become my guide. (We didn’t know each other beforehand.) He fought for me in the department meeting, but unsuccessfully. (Drop me a line to know who.) One professor from their CS department, too, sympathetically listened to me. He didn’t understand the Mechanical department’s logic. (Drop me a line to know who.)

Eventually, in 2003, three departments at IISc Bangalore showed definite willingness to admit me.

One was a verbal offer that the Chairman of the SERC made to me, right in the formal interview (after I had cleared their written tests on the spot—I didn’t know they were going to hold these). He even offered me a higher-than-normal stipend (in view of my past experience), but he said that the topic of research would have to be one from some 4–5 ongoing research projects. I declined on the spot. (He did show a willingness to wait for a little while, but I respectfully declined that too, because I knew I wanted to develop my own ideas.)

At IISc, there also was a definite willingness to admit me in both their Mechanical and Metallurgy departments. That is, during my official interviews with them (which, once again, happened after I competitively cleared their separate written tests, being short-listed to within the top 15 or 20 out of some 180 fresh young MTechs in the Mechanical branch from IISc and the IITs—and remember, being in software, I had forgotten much of my core engineering). Again, it emerged during my personal interviews with the departmental committees that I could be in (yes, even in their Mechanical department), provided that I was willing to work on a topic of their choice. I protested a bit, and indicated the loss of my interest right then and there, during both these interviews.

Finally, at around the same time (2003), at IIT Madras, the Metallurgical Engg. department also made an offer to me (after yet another written test—which I knew was going to be held—and an interview with a big committee). They gave me the nod. That is, they would let me pursue my own ideas for my PhD. … I was known to many of them because I had done my MTech in the same department, some 15–17 years earlier. They recalled, on their own, the hard work which I had put in during my MTech project work. They were quite confident that I could deliver on my topic even if, at that time, they (and I!) had only a minimal idea about it.

However, soon enough, Prof. Kajale at COEP agreed to become my official guide at University of Pune. Since it would be convenient for me to remain in Pune (my mother was not keeping well, among other things), I decided to do my PhD from Pune, rather than broach the topic once again at SERC, or straight-away join the IIT Madras program.

Just thought of jotting down this note on the more recent culture at these institutes (IIT Bombay, IIT Madras, and IISc Bangalore), at COEP, and of course, at the University of Pune. I am sure it’s just a small slice of the culture, just one sample, but it still should be relevant…

Also relevant is this part: Right until I completely left academia for good a couple of years ago, COEP professors and the University of Pune (not to mention the UGC and the AICTE) continued barring me from becoming an approved professor of mechanical engineering. (It’s the same small set of professors who keep chairing the interview processes in all the colleges, even universities. So, yes, the responsibility ultimately lies with a very small group of people: from IIT Bombay’s Mechanical department—the Undisputed and Undisputable Leader—and from COEP and the University of Pune—the Faithful Followers of the former.)

2.5.3. Dirac in India:

BTW, in India, there used to be a monthly magazine called “Science Today.” I vaguely recall that my father used to have a subscription for it right since the early 1970s or so. We would eagerly wait for each new monthly issue, especially once I knew enough English (and physics) to be able to go through the contents more comfortably. (My schooling was in Marathi medium, in rural areas.) Of course, my general browsing of this magazine had begun much earlier. [“Science Today” used to be published by the Times of India group. Permanently gone are those days!]

I now vaguely remember that one of the issues of “Science Today” had Paul Dirac prominently featured in it. … I can no longer remember much of anything about it. But, by any chance, was it the case that Prof. Dirac was visiting India, maybe TIFR Bombay, around that time—say in the mid or late 1970s, or the early 1980s? … I tried searching for it on the ‘net, but could not find anything, not within the first couple of pages of a Google search. So, maybe, I have confused things. But I would sure appreciate pointers to it…

PS: Yes, I found this much:

“During 1973 and 1975 Dirac lectured on the problems of cosmology in the Physical Engineering Institute in Leningrad. Dirac also visited India.” [^]

… Hmm… Somehow, for some odd reason, I get this feeling that the writer of this piece, someone at Vigyan Prasar, New Delhi, must have for long been associated with IIT Bombay (or equivalent thereof). Whaddaya think?


2.6. Jim Baggott’s new book: “Quantum Reality”:

I don’t have the money to buy any books, but if I were to, I would certainly buy three books by Jim Baggott: the present book of the title “Quantum Reality,” as well as a couple of his earlier books: the “40 moments” book and the “Quantum Cookbook.” I have read a lot of the pages available at Google Books for all of these three books (maybe almost all of the available pages), and from what I read, I am fully confident that buying these books would be money very well spent indeed.

Dr. Sabine Hossenfelder has reviewed this latest book by Baggott, “Quantum Reality,” at Nautil.us; see “Your guide to the many meanings of quantum mechanics,” here [^]. … I am impressed by it—I mean this review. To paraphrase Hossenfelder herself: “There is nothing funny going on here, in this review. It just, well, feels funny.”

Dr. Peter Woit, too, has reviewed “Quantum Reality” at his blog [^] though in a comparatively brief manner. Make sure to go through the comments after his post, especially the very first comment, the one which concerns classical mechanics, by Matt Grayson [^]. PS: Looks like Baggott himself is answering some of the comments too.

Some time ago, I read a few blog posts by Baggott. It seemed to me that he is not very well trained in philosophy. It seems that he has read philosophy deeply, but not comprehensively. [I don’t know whether he has read the Objectivist metaphysics and epistemology or not; whether he has gone through the writings/lectures by Ayn Rand, Dr. Leonard Peikoff, Dr. Harry Binswanger and David Harriman or not. I think not. If so, I think that he would surely benefit by this material. As always, you don’t have to agree with the ideas. But yes, the material that I am pointing out is by all means neat enough that I can surely recommend it.]

Coming back to Baggott: I mean to say, he delivers handsomely when (i) he writes books, and (ii) sticks to the physics side of the topics. Or, when he is merely reporting on others’ philosophic positions. (He can condense down their positions in a very neat way.) But in his more leisurely blog posts/articles, and sometimes even in his comments, he does show a tendency to take some philosophic point in something of a wrong direction, and to belabour it unnecessarily. That is to say, he does show a certain tendency towards pedantry, as it were. But let me hasten to add: He seems to show this tendency only in some of his blog-pieces. Somehow, when it comes to writing books, he does not at all show this tendency—well, at least not in the three books I’ve mentioned above.

So, the bottom line is this:

If you have an interest in QM, and if you want a comprehensive coverage of all its interpretations, then this book (“Quantum Reality”) is for you. It is meant for the layman, and also for philosophers.

However, if what you want is a very essentialized account of almost all of the crucial moments in the development of QM (with a stress on physics, but with some philosophy also touched on, and with almost no maths), then go buy his “40 Moments” book.

Finally, if you have taken a university course in QM (or are currently taking one), then do make sure to buy his “Cookbook” (published in January this year). From what I have read, I can easily tell: You would be doing yourself a big favour by buying this book. I wish the Cookbook had been available to me at least in 2015, if not earlier. But the point is, even after developing my new approach, I am still going to buy it. It achieves a seemingly impossible combination: something that makes for easy reading (if you already know QM) but will also serve as a permanent reference, something which you can look up any time later on. So, I am going to buy it, once I have the money. Also “Quantum Reality”, the present book for the layman. Indeed, all the three books I mentioned.

(But I am not interested in relativity theory, or QFT, the standard model, etc. etc. etc., and so, I will not even look into any books on these topics, written by anyone.)


OK then, let me turn back to my work… Maybe I will come back with some further links in the next post too, maybe after 10–15 days. Until then, take care, and bye for now…


A song I like:

(Marathi) घन घन माला नभी दाटल्या (“ghan ghan maalaa nabhee daaTalyaa”)
Singer: Manna Dey
Lyrics: G. D. Madgulkar
Music: Vasant Pawar

[A classic Marathi song. Based on the (Sanskrit, Marathi) राग मल्हार (the “raaga” called “Malhaara”). The best quality audio is here [^]. Sung by Manna Dey, a Bengali guy who was famous for his Hindi film songs. … BTW, it’s been a marvellous day today. Clear skies in the morning, when I thought of doing a blog post today and was wondering if I should add this song or not. And, by the time I am finishing it, here are strong showers in all their glory! While my song selection still remains more or less fully random (on the spur of the moment), since I have run so many songs already, a bit of deliberation has started creeping in too—many songs that strike me have already been run!

Since I am going to be away from blogging for a while, and since many of the readers of this blog don’t have the background to appreciate Marathi songs, I may come back and add an additional song, a non-Marathi song, right in this post. If so, the addition would be done within the next two days or so. …Else, just wait until the next post, please! Done: see the song below.]


(Hindi) बोल रे पपीहरा (“bol re papiharaa”)
Singer: Vani Jairam
Music: Vasant Desai
Lyrics: Gulzar

[I looked up the ‘net to see if I could find some Hindi song that is based on the same “raaga”, i.e., “Malhaar” (in general). I found this one, among others. Comparing these two songs should give you some idea about what it means when two songs are said to share the same “raaga”. … As to this song, I should also add that the reason for selecting it had more to do with nostalgia, really speaking. … You can find a good quality audio here [^].

Another thing (that just struck me, on the fly): Somehow, I also thought of all those ladies and gentlemen from the AICTE New Delhi, UGC New Delhi, IIT Bombay’s Mechanical Engg. department, all the professors (like those on R&R committees) from the University of Pune (now called SPPU), and of course, the Mechanical engg. professors from COEP… Also, the Mechanical engineering professors from many other “universities” from the Pune/Mumbai region. … पपीहरा… (“papiharaa”) Aha!… How apt are words!… Excellence! Quality! Research! Innovation! …बोल रे, पपीहरा ऽऽऽ (“bol re papiharaa…”). … No jokes, I had gone jobless for 8+ years the last time I counted…

Anyway, see if you like the song… I do like this song, though, probably, it doesn’t make it to my topmost list. … It has more of a nostalgia value for me…

Anyway, let’s wrap up. Take care and bye for now… ]


History:
— First published: 2020.09.05 18:28 IST.
— Several significant additions and revisions till 2020.09.06 01:27 IST.
— Much editing. Added the second song. 2020.09.06 21:40 IST. (Now will leave this post in whatever shape it is in.)