# Ontologies in physics—10: Objects in QM. Aetherial fields in QM. Particle-in-a-box.

0. Prologue:

Last time, we saw the context for, and the scheme of the inductive derivation of, the Schrodinger equation. In this post, we will see the ontology which it demands—the kinds of ontological objects there have to be so that the physical meaning of the Schrodinger equation can be understood correctly.

I wrote down at least two or three different presentations of the topics for this post. However, either the points weren’t clear enough, or the discussion wandered too far afield, and I was losing the focus on ontology per se.

That’s why I have decided to first present the ontology of QM without any justification, and only then to explain why assuming this particular ontology, rather than any other, makes sense. In justifying this ontology, we will have to note the salient peculiarities of the mathematical nature of Schrodinger’s equation, as well as many relevant quantum mechanical features.

In this post, we will deal with only one-particle quantum systems.

So, let’s get going with the actual ontology first.

1. Our overall view of the QM ontology:

1.1. Introductory remarks:

To specify an ontology of physics is to state the basic types of objects that have to exist in physical reality, and the basic ways in which they interact, so that the given theory of physics makes sense—so that the physical phenomena the theory subsumes are identified with appropriate concepts, causal relations, and laws, and an understanding can be developed for applications, i.e., for building new systems that make use of the subsumed phenomena. The basic purpose of physics is to develop understanding that can be put to use to build better systems—structures, engines, machines, circuits, devices, gadgets, etc.

Accordingly, we will first give a list of the type of objects that must exist in the physical world so that the quantum mechanical phenomena can be completely described using them. The theory we will assume is Schrodinger’s non-relativistic quantum mechanics of multiple particles, including phenomena like entanglement, but without including the quantum mechanical spin. However, in this post, we will cover those aspects that can be understood (or at least touched upon) using only the single-particle quantum systems.

1.2. The list of objects in our QM ontology:

The list of our QM ontological objects is this:

• The EC Objects of electrons and protons.
• A special category of objects called neutrons.
• The aether filling all of the $3D$ space where other objects are not, and certain field-conditions present in it; the all-connecting aspect of the physical universe.
• The photon as a certain kind of a transient condition in the aether, i.e., a virtual object.

Let’s see all of them in detail, one by one, but beginning with the aether first.

2. The aether:

The concept of the aether, and the necessity of it, are full-fledged topics by themselves, and we have already said a lot about the ontology of this background object in the previous posts. So, we will note just a few indicative characteristics of the aether here.

Our idea of the QM aether is exactly the same as that of the EM aether of Lorentz. The only difference is that in QM, the aether is seen as supporting not only the electrostatic fields but also one more type of field: the complex-valued quantum mechanical field.

To note some salient points about the aether:

• The aether has no inertia that shows up in the electrostatic or quantum-mechanical phenomena. So, in this sense, the aether is non-inertial in nature.
• It exists in all parts of space where the other QM ontological objects (of electrons, protons and neutrons) are not.
• It exchanges electrostatic as well as additional quantum-mechanical forces with the electrons and protons, but always by direct contact alone.
• Apart from the electrostatic and quantum-mechanical forces, there are no other forces that enter into our ontological description. Thus, there is no drag-force exerted by the aether on the electrons, protons or neutrons (basically because the Lorentz aether is not a mechanical aether; it is not an NM-Ontological object). In the non-relativistic QM, we also ignore fields like magnetic, gravitational, etc.
• All parts of the aether always remain stationary, i.e., no CV of it translates in space at any time. Even if there is some actual translation going on in the aether, the quantum mechanical phenomena are unable to capture it, and so, a capacity to translate does not enter our ontology.
• However, unlike in the EM theory, when it comes to QM, we have to assume that there are other motions in the aether. In QM, the aether does come to carry a kinetic energy too, whereas in EM, the kinetic energy is a feature of only the massive EC Objects. So, the aether is stationary—but only translation-wise. Even in the absence of net displacements, it does force (and is forced by) the elementary charged objects of the electrons and protons.

We will note further details regarding the fields in the aether as we progress.

3. Electrons and protons:

The view of electrons and protons which we take in the QM ontology is exactly the same as that in the ontology of electrostatics; so see the previous posts in this series for details not again repeated here.

Electrons and protons are seen as elementary point-particles having, up to the algebraic sign, the same amount of electrostatic charge $e$. They set up certain $3D$ field conditions in the non-inertial aether, but acting in pairs. We may sometimes informally call them point-charges, but keep in mind that, strictly speaking, in our view, we do not regard the charge as an attribute of the point-particle, but only of the aether.

For two arbitrary EC Objects (electrons or protons) $q_i$ and $q_j$ forming a pair, there are two fields which simultaneously exist in the $3D$ aether. Neither can exist without the other. These fields may be characterized as force-fields or as potential energy fields.

In the interest of clarity in the multi-particle situations, we will now expand on the notation presented earlier in this series. Accordingly,

$\vec{\mathcal{F}}(q_i|q_j)$ is the $3D$ force field which exists everywhere in the aether. It gives the Coulomb force that $q_j$ experiences from the aether at its instantaneous position $\vec{r}_j$ via direct contact (between the aether and itself). Thus, in this notation, $q_j$ is the forced charge, and $q_i$ is the field-producing charge. Quantitatively, this force-field is given by Coulomb’s law:

$\vec{\mathcal{F}}(q_i|q_j) = \dfrac{1}{4\,\pi\,\epsilon_0}\dfrac{q_i q_A}{r_{iA}^2} \hat{r}_{iA}$, where $q_A = q_j$.

Similarly, $\vec{\mathcal{F}}(q_j|q_i)$ is the aetherial force-field set up by $q_j$ and felt by $q_i$ in the same pair, and is given as:

$\vec{\mathcal{F}}(q_j|q_i) = \dfrac{1}{4\,\pi\,\epsilon_0}\dfrac{q_j q_A}{r_{jA}^2} \hat{r}_{jA}$, where $q_A = q_i$.

The fields are singular at the location of the forcing charge, but not at the location of the forced charge. Due to the divergence theorem, a given charge does not experience its own field.

There is no self-interaction problem either, because the EC Object (the point-charge) is ontologically a different object from both the aether and the NM Objects. Only an NM Object could possibly explode under its self-field, primarily because an NM Object is a composite. However, an EC Object (an electron or a proton) is not an NM Object—it is elementary, not composite.

Notice that the specific forces at the positions of $q_i$ and $q_j$ are equal in magnitude and opposite in direction. However, these two vectors act on two different objects, and therefore they don’t cancel each other. The two vectors also act at two different locations. In any case, in going from these two vectors to the two vector fields, it’s misleading to keep thinking of one force-field as being the opposite of the other! Their respective anchoring locations (i.e., the two singularities) themselves are different, and the fields have the same signs too! They are the same $1/r^2$ fields, but spatially shifted so as to anchor into the two charges of the pair.

When there are $N$ number of elementary charged particles in a system, then a given charge $q_j$ will experience the force fields produced by all the other $(N-1)$ number of charges at its position. We can list them all before the pipe $|$ symbol. For instance, $\vec{\mathcal{F}}(q_1, q_3, q_4|q_2)$ is the net field that $q_2$ feels at its position $\vec{r}_2$; it equals the sum of the three force-fields produced by the other three charges because of the three pairs in which they act:
$\vec{\mathcal{F}}(q_1, q_3, q_4|q_2) = \vec{\mathcal{F}}(q_1|q_2) + \vec{\mathcal{F}}(q_3|q_2) + \vec{\mathcal{F}}(q_4|q_2)$.
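The pairwise superposition just stated can be sketched numerically. This is only an illustrative transcription (the function name, the charge values, and the positions are all hypothetical), not anything from the post’s formalism itself:

```python
import numpy as np

# Illustrative sketch only: names, charge values, and positions are hypothetical.
K = 8.9875517873681764e9   # 1/(4 pi eps_0), in N m^2 / C^2
e = 1.602176634e-19        # elementary charge, in C

def coulomb_force(q_i, r_i, q_j, r_j):
    """F(q_i | q_j): the force that q_j, located at r_j, feels due to the
    field anchored in q_i, located at r_i."""
    sep = r_j - r_i
    dist = np.linalg.norm(sep)
    return K * q_i * q_j / dist**2 * (sep / dist)

# four point-charges; q_2 is the forced charge here
q = {1: +e, 2: -e, 3: +e, 4: -e}
r = {1: np.array([0.0, 0.0, 0.0]),
     2: np.array([1.0, 0.0, 0.0]),
     3: np.array([0.0, 1.0, 0.0]),
     4: np.array([0.0, 0.0, 1.0])}

# F(q_1, q_3, q_4 | q_2): the sum of the three pairwise fields evaluated at r_2
net_on_2 = sum(coulomb_force(q[i], r[i], q[2], r[2]) for i in (1, 3, 4))
```

One can also verify numerically that $\vec{\mathcal{F}}(q_1|q_2)$ and $\vec{\mathcal{F}}(q_2|q_1)$ come out equal in magnitude and opposite in direction, as noted above.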

The charges always act pairwise; hence there always are pairs of fields; a single field cannot exist by itself. Therefore, any analysis that has only one field (e.g., as in the quantum harmonic oscillator problem or the H-atom problem) must be regarded as only a mathematical abstraction, not an existent.

The two fields of a given specific pair both are of the same algebraic sign: both $+$ or both $-$. However, a given charge $q_j$ may come to experience fields of arbitrary signs—depending on the signs of the other $q_i$‘s forming those particular pairs.

The electrons and protons thus affect each other via the intervening aether.

In electrostatics, as well as in the non-relativistic QM, the interaction between charges is via direct contact. However, the two fields of any arbitrary pair of charges shift instantaneously in space—the entirety of a field “moves” when the singular point where it is anchored moves. Thus, there is no action-at-a-distance in this ontology. However, there are instantaneous changes everywhere in space.

A relativistic theory of QM would include magnetic fields and their interactions with the electric fields. It is these interactions which together impose the relativistic speed limit of $v < c$ for all material particles. However, such speed-limiting interactions are absent in the non-relativistic QM theory.

Electrons and protons have the same magnitude of charge, but different masses.

The Coulombic force should result in accelerations of both the charges in a pair. However, since the proton is approximately $1836$ times more massive than the electron, the actual accelerations (and hence the net displacements over a finite time interval) undergone by the two are vastly different.
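To make the contrast concrete, here is a back-of-the-envelope sketch; the force value is an arbitrary stand-in chosen only for illustration:

```python
# Back-of-the-envelope check of how differently the two charges in a pair
# accelerate under the same-magnitude force. F is an illustrative value.
m_e = 9.1093837015e-31   # electron mass, kg
m_p = 1.67262192369e-27  # proton mass, kg
F   = 8.2e-8             # magnitude of a Coulomb force, N (arbitrary stand-in)

a_e = F / m_e            # the electron's acceleration
a_p = F / m_p            # the proton's acceleration

mass_ratio = m_p / m_e   # ~1836: the proton accelerates ~1836 times less
```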

There is a viewpoint (originally put forth by Lorentz, I guess) which says that since the entire interaction proceeds through the aether, there is no need to have massive particles of charge at all. This argument in essence says: We took the attribute of the electric charge away from the particle and re-attributed it to the aether. Why not do the same for the mass?

Here we observe that mass can be regarded as an attribute of the interactions of two *singular* fields in the aether. We tentatively choose to keep the instantaneous location of the attribute of mass only at the distinguished point of the singularity. In short, we have both particles and the aether. If need be, we will revisit this aspect of our ontology later on.

The electrostatic aetherial fields can also be expressed via two physically equivalent but mathematically different formulations: vector force-fields, and scalar energy-fields—also called the “potential” energy fields in the Schrodinger QM.

Notation: The potential energy field seen by $q_j$ due to $q_i$ is from now on noted, and given, as:

$V(q_i|q_j) = \dfrac{1}{4\,\pi\,\epsilon_0}\dfrac{q_i\,q_A}{r_{iA}}$,

where $q_A = q_j$; similarly for the other field of the pair, viz., $V(q_j|q_i)$.

See the previous posts from this series for a certain reservation we have for calling them the potential energy fields (and not just internal energy fields). In effect, what we seem to have here is an interesting scenario:

When we have a pair of charges in the physical $3D$ space (say an infinite domain), then we have two singular fields existing simultaneously, as noted above. Moving the two charges from their definite positions “to” infinity makes the system devoid of all energy. When they are present at definite positions, their singular fields of $V$ noted above imply an infinite amount of energy within the volume of the system. However, since the system-boundaries for a system of charged point-particles can be defined only at the point-locations where they are present, the work that can be extracted from the system is finite—even though the total energy content is infinite. In short, we have a situation in which the addition of two infinities results in a finite quantity.

Does this way of looking at things provide a clue to solving the problem of cancelling infinities in the re-normalization problem? If yes, and if no one has put forth a comparably clear view, please cite this work.

4. Neutrons:

Neutrons are massive objects that do not participate in electrostatic interactions.

From a very basic, ontological viewpoint, they could have presented very tricky situations to deal with.

For instance: When an EC Object (i.e., an electron or a proton) moves through the aether, there is no force over and above the one exerted by the Coulombic field on it. But EC Objects are massive particles. So, a tempting conclusion might be to say that the aether exerts no drag force at all on any massive object, and hence, there should be no drag force on the motion of a free neutron either.

I am not clear on such points. But I have certain reservations and apprehensions about it.

It is clear that the aforementioned tempting conclusion does not carry. It is known that the aether does not exert a drag on the EC Objects. But an EC Object is different from the chargeless object of the neutron. Even a forced EC Object still has a field singularly anchored at its own position; it is just that, in experiencing the forces from the field, the component of its own singular field plays no part (due to the divergence theorem). But the neutron, being a chargeless object, has no singular field anchored at its position at all. It doesn’t have a field that is “silent” for its own motions. Since, for a forced particle, the forces are exerted by the aether in its vicinity, I am not clear whether the neutron should behave the same way. Maybe we could associate a pair of equal and opposite (positive and negative) fields anchored at the neutron’s position (of an arbitrary strength $q_N$, not elementary), so that it again is chargeless, but can be seen to be interacting with the aether. If so, then the neutron could be seen as a special kind of an EC Object—one which has two equal and opposite aetherial fields associated with it. In that case, we could be consistent and say that the neutron will not experience a drag force from the aether for the same reason the electron or the proton does not. I am not sure whether I should be adopting such a position. I have to think further about it.

So, overall, my choice is to ignore all such issues altogether, and regard the neutrons, in the non-relativistic QM, as being present only in the atomic nucleus at all times. The nucleus itself is regarded, abstractly, as a charged point-particle in its own right.

Thus, effectively, we come to regard the nuclear neutrons as just additions of constant masses to the total mass of the protons, and consider this extra-massive, positively charged composite as the point-particle of the nucleus.

5. In QM, there is an aetherial field for the kinetic energy:

As stated previously, in addition to the electrostatic fields (mathematically expressed as force-fields or as energy-fields), in QM the aether also comes to carry a certain time-varying field. The energy associated with this field is kinetic in nature. That is to say, there should be some motion within the aether which corresponds to this part of the total energy.

We will come to characterize these motions with the complex-valued $\Psi(x,t)$ field. However, as the discussion below will clarify, the wavefunction is only a mathematically isolated attribute of the physically existing kinetic energy field.

We will see that the motion associated with the quantum mechanical kinetic energy does not result in the net displacement of a CV. (It may be regarded as the motion of time-varying strain-fields.)

In our ontology, the kinetic energy field (and hence the field that is the wavefunction) primarily “lives” in the physical $3D$ space.

However, when the same physics is seen from a higher-level, abstract, mathematical viewpoint, the same field may also be seen as “living” in an abstract $3ND$ configuration space. Adopting such an abstract view has its advantages in simplifying some of the mathematical manipulations. However, note that doing so also risks losing the richness of the concept of the physical fields, and with it, the opportunity to tackle the unusual features of the quantum mechanical theory correctly.

6. Photon:

In our view, the photon is neither a spatially discrete particle nor even a condition that is permanently present in the aether.

A photon represents a specific kind of a transient condition in the aetherial quantum mechanical fields which comes to exist only for some finite interval of time.

In particular, it refers to the difference in the two field-conditions corresponding to a change in the energy eigenstates (of the same particle).

In the last sentence, we would have liked to state “of the same particle” without the parentheses; however, doing so requires us to identify what exactly a particle is when the reference is squarely being made to field conditions. A proper discussion of photons cannot actually be undertaken until a good amount of the physics preceding it is understood. So, we will develop the understanding of this “particle” only slowly.

For the time being, however, make a note of the fact that:

In our view, all photons always are “virtual” particles.

Photons are attributes of real conditions in the aether, and in this sense, they are not virtual. But they are not spatially discrete particles. They always refer to continuous changes in the field conditions with time. Since these changes are anchored into the positions of the positively charged protons in the atomic nuclei, and since the protons are point-particles, a photon also has at least one singularity in the electrostatic fields to which its definition refers. (I am still not clear whether we need just one singularity or at least two.) In short, the photon does have point-position(s) as its reference points. Its emission/absorption events cannot be specified without making reference to definite points. In this sense, it does have a particle character.

Finally, one more point about photons:

Not all transient changes in the fields refer to photons. The separation vectors between charges are always changing, and they are always therefore causing transient changes in the system wavefunction. But not all such changes result in a change of energy eigenstates. So, not all transient field changes in the aether are photons. Any view of QM that seeks to represent every change in a quantum system via an exchange of photons is deeply suspect, to say the least. Such a view is not justified on the basis of the inductive context or nature of the Schrodinger equation.

We will now develop the context required to identify the exact ontological nature of the quantum mechanical kinetic energy fields.

7. The form of Schrodinger’s equation points to an oscillatory phenomenon:

Schrodinger’s equation (SE) in $1D$ formulation reads:

$i\,\hbar \dfrac{\partial \Psi(x,t)}{\partial t} =\ -\, \dfrac{\hbar^2}{2m}\dfrac{\partial^2\Psi(x,t)}{\partial x^2} + V(x,t)\Psi(x,t)$

BTW, when we say SE, we always mean TDSE (time-dependent Schrodinger’s equation). When we want to refer specifically to the time-independent Schrodinger’s equation, we will call it by the short form TISE. In short, TISE is not SE!

Setting the constants to unity, the SE takes this form:
$i\,\dfrac{\partial \Psi(x,t)}{\partial t} =\ -\, \dfrac{\partial^2\Psi(x,t)}{\partial x^2} + V(x,t)\Psi(x,t)$.

Its form is partly comparable to the following two real-valued PDEs:

heat-diffusion equation with internal heat generation:
$\dfrac{\partial T(x,t)}{\partial t} =\ \dfrac{\partial^2 T(x,t)}{\partial x^2} + \dot{Q}(x,t)$,

and the wave equation:
$\dfrac{\partial^2 u(x,t)}{\partial t^2} =\ \dfrac{\partial^2 u(x,t)}{\partial x^2} + V(x,t)u(x,t)$.

Yet, the SE is different from both.

• Unlike the diffusion equation, the SE has the $i$ sticking out on the left-hand side, and a negative sign (think of it as $(i)(i)$) on the first term on the right-hand side. That makes the solution of the SE complex—literally. For quite a long time (years), I pursued the idea, well known to the quantum Monte Carlo community in quantum chemistry, that the SE is the diffusion equation but in imaginary time $it$. It turns out that this idea, while useful in simplifying simulation techniques for problems like determining the bonding energies of molecules, doesn’t really throw much light on the ontology of QM. Indeed, it makes getting at the right ontology more difficult.
• As to the wave equation, it too has only a partial similarity to SE. We mentioned the last time the main difference: In the wave PDE, the time differential is to the second order, whereas in the SE, it is to the first order.

The crucial thing to understand here (and I got it from Lubos Motl’s blog, or his replies on StackExchange, or so) is that even if the time-differential is only to the first order, you still get solutions that oscillate in time—provided the wave variable is regarded as being full-fledged complex-valued.

The important lesson to be drawn: The Schrodinger equation gives the maths of some kind of a vibratory/oscillatory system. The term “wavefunction” is not a misnomer. (Under the diffusion equation analogy, for some time, I had wondered if it shouldn’t be called “diffusionfunction”. That way of looking at it is wrong, misleading, etc.)
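To make the lesson concrete, here is a minimal numerical sketch of the point that a first-order-in-time equation with an $i$ in it still oscillates. The simplest such equation is $i\,\text{d}\psi/\text{d}t = E\,\psi$, whose exact solution is $\psi(t) = e^{-iEt}$; the value of $E$, the time grid, and all the names are illustrative:

```python
import numpy as np

# i dpsi/dt = E psi  has the exact solution  psi(t) = exp(-i E t).
# E and the time grid are arbitrary, chosen only for illustration.
E = 2.0
t = np.linspace(0.0, 10.0, 2001)
psi = np.exp(-1j * E * t)

# the magnitude never decays (contrast: a diffusion solution damps out) ...
mags = np.abs(psi)

# ... while the real and imaginary parts oscillate at angular frequency E
re, im = psi.real, psi.imag
```

The magnitude stays constant at 1 while the real part traces $\cos(Et)$ and the imaginary part $-\sin(Et)$: oscillation, not diffusion, despite the first-order time derivative.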

So, to understand the physics and ontology of the SE better, we need to understand vibrations/oscillations/waves better. I don’t have the time to do it here, so I refer you to David Morin’s online draft book on waves as your best free resource. A good book also seems to be Walter Fox Smith’s “Waves and Oscillations: A Prelude to QM”, though I haven’t gone through all its parts. A slightly “harder” but excellent book, at the UG level, and free, comes from Howard Georgi. Mechanical engineers could equally well open their books on vibrations and the FEM analysis of the same. For real quick notes, see Allan Bower’s UG course notes on this topic, a part of his dynamics course at Brown University.

8. Ontology of the quantum mechanical fields:

8.1. Schrodinger’s equation has complex-valued fields of energies:

OK. To go back to Schrodinger’s equation:

$i\,\hbar \dfrac{\partial \Psi(x,t)}{\partial t} =\ -\, \dfrac{\hbar^2}{2m} \dfrac{\partial^2\Psi(x,t)}{\partial x^2} + V(x,t)\Psi(x,t) = (\text{a real-valued constant}) \Psi(x,t)$.

As seen in the last post, the scheme of derivation of the SE makes it clear that these terms come from: the total internal energy, the kinetic energy, and the potential energy, respectively. Informally, we may refer to them as such. However, notice that whereas $V(x,t)$ by itself is a field, what appears in the SE is the term $V(x,t)$ multiplied by $\Psi(x,t)$, which makes all the energies complex-valued. Further, since $\Psi(x,t)$ is a field, all the energies in the SE also are fields.

If you wish to have real-valued fields of energies, then you have no choice but to divide all the terms in the SE by $\Psi(x,t)$. That’s what we indicated in the last post too. Note, however, that the complex-valued fields still cannot be gotten rid of; they still enter the calculations.

8.2. Potential energy fields only come from the elementary point-charges:

The $V(x,t)$ field itself is the same as in the electrostatics:

$V(x,t) = \dfrac{1}{2} \dfrac{1}{4\,\pi\,\epsilon_0} \sum\limits_{i=1}^{N}\sum\limits_{j \neq i;\; j=1}^{N} \dfrac{q_i\,q_j}{r_{ij}}$,
where $|q_i| = |q_j| = e$, with $e$ being the fundamental electronic charge.
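The double sum above translates directly into code. Here is a sketch, with illustrative names and an assumed two-charge configuration; the $1/2$ factor corrects for each $(i, j)$ pair being counted twice:

```python
import numpy as np

K = 8.9875517873681764e9   # 1/(4 pi eps_0), in N m^2 / C^2
e = 1.602176634e-19        # elementary charge, in C

def potential_energy(charges, positions):
    """Direct transcription of the double sum: (1/2) K sum_i sum_{j != i} q_i q_j / r_ij."""
    N = len(charges)
    V = 0.0
    for i in range(N):
        for j in range(N):
            if j == i:
                continue
            r_ij = np.linalg.norm(positions[i] - positions[j])
            V += 0.5 * K * charges[i] * charges[j] / r_ij
    return V

# one electron and one proton, 1 Angstrom apart (a hypothetical configuration)
V_pair = potential_energy([-e, +e],
                          [np.array([0.0, 0.0, 0.0]),
                           np.array([1e-10, 0.0, 0.0])])
```

For the two-charge case, the double sum collapses to the familiar single-pair value $K q_1 q_2 / r_{12}$, which is negative here since the charges are unlike.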

In our QM ontology we postulate that the above equation is logically complete as far as the potential energy field of QM is concerned.

That is to say, in the basic ontological description of QM, we do not entertain any other sources of potentials (such as gravity or magnetism). Equally important, we also do not entertain arbitrarily specified values for potentials (such as the parabolic potential well of the quantum harmonic oscillator, or the well with the sharply vertical walls of the particle-in-a-box model). Arbitrary potentials are mere mathematical abstractions—approximate models—that help us gain insight into some aspects of the physical phenomena; they do not describe the quantum mechanical reality in full. Only the electrostatic potential that is singularly anchored into elementary charge positions, does.

At least in the basic non-relativistic quantum mechanics, there is no scope to accommodate magnetism. Gravity, being too weak, also is best neglected. Thus, the only potentials allowed are the singular electrostatic ones.

We shall revisit this issue of the potentials after we solve the measurement problem. From our viewpoint, the mainstream QM’s use of arbitrary potentials of arbitrary sources is fine too, as the linear formulation of the mainstream QM turns out to be a limiting case of our nonlinear formulation.

8.3. What physically exists is only the complex-valued internal energy field:

Notice that, according to our QM ontology, what physically exists is only the single field of the complex-valued total internal energy.

Its isolation into different fields (the potential energy field, the kinetic energy field, the momentum field, the wavefunction, etc.) yields mathematically isolated quantities. These fields do have certain direct physical referents, but only as aspects or attributes of the total internal energy field. They do have a physical existence, but their existence is not independent of the total internal energy field.

Finally, note that the total internal energy field itself exists only as a field condition in the aether; it is an attribute of the aether; it cannot exist without the aether.

9. Implications of the complex-valued nature of the internal energy field:

9.1. System-level attributes to spatial fields—real- vs. complex-valued functions:

Consider an isolated system—say the physical universe. In our notation, $E$ denotes the aspatial global attribute of its internal energy. Think of a perfectly isolated box as the system. Then $E$ is like a label, identifying a certain number of joules, slapped onto the box. It has no spatial existence inside the box—nor outside it. It’s just a device of book-keeping.

To convert $E$ into a spatially identifiable object, we multiply it by some field, say $F(x,t)$. Then, $E F(x,t)$ becomes a field.

If $F(x,t)$ is real-valued, then $\int\limits_{\Omega_\text{small CV}} \text{d}\Omega_\text{small CV}\, E\,F(x,t)$ gives you the amount of $E$ present in a small CV (which is just a part of the system, not the whole). To fix ideas, suppose you have a stereo boom-box with two detachable speakers. Then, the volume of the overall boombox is a sum of the volumes of each of its three parts. The volume is a real-valued number, and so, the total volume is the simple sum of its parts $V = V_1 + V_2 + V_3$. Ditto for the weights of these parts. Ditto, for the energy in a volumetric part of a system if the energy forms a real-valued field.

Now, when the field is complex-valued, say denoted $\tilde{F}(x,t)$, the volume integral still applies. $\int\limits_{\Omega_\text{small CV}} \text{d}\Omega_\text{small CV}\, E\,\tilde{F}(x,t)$ still gives you the amount of the complex-valued quantity $E\tilde{F}(x,t)$ present in the CV. But the fact that $\tilde{F}$ is complex-valued means that there actually are two fields of $E$ inside that small CV. Expressing $\tilde{F}(x,t) = a(x,t) + i\,b(x,t)$, there are two real-valued fields, $a(x,t)$ and $b(x,t)$. So, the energy inside the small CV also has two components: $E_R = E\,a(x,t)$ and $E_I = E\,b(x,t)$, which we call “real” and “imaginary”. Actually, physically, both are real-valued. However, the magnitude of their net effect is $|E \tilde{F}(x,t)| \neq E_R + E_I$. Instead, it follows the Pythagorean theorem, with the positive root: $|E \tilde{F}| = +\sqrt{E_R^2 + E_I^2}$. (Aren’t you glad you learnt that theorem!)
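A tiny numerical illustration of this combination rule; all the values here are arbitrary, chosen only to make the Pythagorean point visible:

```python
import math

# Arbitrary illustrative values for the global energy and the field parts.
E = 10.0
a, b = 0.6, 0.8             # real and imaginary parts of F~ at some point

E_R = E * a                 # "real" energy component: 6.0
E_I = E * b                 # "imaginary" energy component: 8.0

naive = E_R + E_I           # 14.0 -- NOT the physical magnitude
mag = math.hypot(E_R, E_I)  # sqrt(6^2 + 8^2) = 10.0: the Pythagorean combination
```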

If you take it in a naive-minded way, then $E$ can be greater or smaller than $E_R + E_I$, and so things won’t sum up to $|E \tilde{F}|$—conservation seems to fail.

But in fact, energy conservation does hold. It’s just that it follows a further detailed law of combining the two field components within a given CV (or the entire system).

In QM, the wavefunction $\Psi(x,t)$ plays the role of $\tilde{F}$ given above. It brings the aspatial energy $E$ from its Platonic mathematical “heaven” and, further, being a field itself, also distributes it in space—thereby giving a complex-valued field of $E$.

We do not know the physical mechanism which manipulates the real and imaginary parts $\Psi_R(x,t)$ and $\Psi_I(x,t)$ so that they come to obey the Pythagorean theorem. But we do know that unless $\Psi(x,t)$ is complex-valued, the book-keeping of the system’s energy does not come out right—in QM, that is.

Since the product $E_{\text{sys}}\Psi(x,t)$ can come up at any time, and since what ontologically exists is a single quantity, not a product of two, it’s better to have a separate notation for it. Accordingly, define:

$\tilde{E}(x,t) = E_{\text{sys}}\,\Psi(x,t)$.

9.2. In QM, the conserved quantity itself is complex-valued:

Note an important difference between pre-quantum mechanics and QM:

The energy conservation principle for the classical (pre-quantum) mechanics says that $E_{\text{sys}} = \int\limits_{\Omega} \text{d}\Omega E(x,t)$ is conserved.
The energy conservation principle for quantum mechanics is that $\tilde{E}_{\text{sys}} = \int\limits_{\Omega} \text{d}\Omega \tilde{E}(x,t)$ is conserved.

No one says it. But it is there, right in the context (the scheme of derivation) of the Schrodinger equation!

For a cyclic change, we started from the classical conservation statement:
$\oint \text{d}E_{\text{sys}} = 0 = \oint \text{d}T_{\text{sys}} + \oint \text{d}\Pi_{\text{sys}}$

Or, in differential terms (for an arbitrary change, not cyclic):
$\text{d}E_{\text{sys}} = 0 = \text{d}T_{\text{sys}} + \text{d}\Pi_{\text{sys}}$.

Or, integrating over the end-points of an arbitrary process,
$E_{\text{sys}} = \text{a constant (real-valued) number}$.

We then multiplied both sides by $\Psi(x,t)$ (remember the quizzical-looking multiplication from the last post?), and only then got to Schrodinger’s equation. In effect, we did:
$\text{d}E_{\text{sys}}\Psi(x,t) = 0 = \text{d}T_{\text{sys}}\Psi(x,t) + \text{d}\Pi_{\text{sys}}\Psi(x,t)$.

That’s nothing but saying, using the notation introduced just above, that:
$\text{d}\tilde{E}(x,t) = 0 = \text{d}\tilde{T}(x,t) + \text{d}\tilde{\Pi}(x,t)$.

Or, integrating over the end-points of an arbitrary process and over the system volume,
$\tilde{E}_{\text{sys}} = \text{a constant complex number}$.

So, what’s conserved is not $E$ but $\tilde{E}$.

The aspatial, global, thermodynamic number for the total internal energy is the complex number $\tilde{E}_{\text{sys}}$ in QM. QM by postulation comes with two coupled real-valued fields together obeying the algebra of complex numbers.

10. Consequences of conservation of complex-valued energy of the universe:

10.1. There is a real-valued measure of quantum-mechanical energy which is conserved too:

In QM, is there a real-valued number that gets conserved too? If not by postulate, then at least by consequence?

Answer: Well, yes, there is. But it loses the richness of the physics of complex-numbers.

To obtain the conserved real-valued number, we follow the same procedure as for “converting” a complex number to a real number, i.e., extracting a real-valued and essential feature of a complex number. We take its absolute magnitude. If $\tilde{E}_{\text{sys}}$ is a constant complex number, then obviously, $|\tilde{E}_{\text{sys}}|$ is a constant number too. Accordingly,

$|\tilde{E}_{\text{sys}}| = \sqrt{\tilde{E}_{\text{sys}}\,\tilde{E}_{\text{sys}}^{*}} = \text{another, real-valued, constant}$.

But obviously, a statement of this kind of a constancy has lost all the richness of QM.

10.2. The normalization condition has its basis in the energy conservation:

Another implication:

Since $|\tilde{E}_{\text{sys}}|$ itself is conserved, so is $|\tilde{E}_{\text{sys}}|^2$.

[An aside to experts: I think we thus have solved the curious problem of the arbitrary phase factors in quantum mechanics, too. Let me know if you disagree.]

It then follows, by definitions of $\tilde{E}_{\text{sys}}$, $\tilde{E}$ and $\Psi(x,t)$, that

$\int\limits_{\Omega}\text{d}\Omega\,\Psi(x,t)\Psi^{*}(x,t) = 1$

Thus, the square-normalization condition follows from the energy conservation principle.

We believe this view places the normalization condition on firm grounds.
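As a quick numerical sanity check (purely illustrative, and using the standard textbook PIB eigenfunctions $\chi_n(x) = \sqrt{2/L}\,\sin(n\pi x/L)$, which we will meet in section 11.), the square-normalization condition can be verified directly:

```python
import numpy as np

# A quick numerical sanity check (illustrative only): the standard textbook
# PIB eigenfunctions chi_n(x) = sqrt(2/L)*sin(n*pi*x/L) on [0, L] satisfy
# the square-normalization condition stated above.
L = 1.0
x = np.linspace(0.0, L, 4001)

def psi(n, x):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

for n in (1, 2, 3):
    f = np.abs(psi(n, x))**2
    norm = float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)  # trapezoid rule
    print(n, round(norm, 6))   # prints: 1 1.0, then 2 1.0, then 3 1.0
```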

The mainstream QM (at least as presented in textbooks) makes reference to (i) Born’s postulate for the probability of finding a particle in an elemental volume, and (ii) conservation of mass for the system (“the electron has to be somewhere in the system”).

In our view, the normalization condition arises because of conservation of energy alone. Conservation of mass is a separate principle, in our opinion. It applies to the mass attribute of the EC Objects (the elementary charges), but not to the aetherial field of $\Psi$. Ontologically, the massive EC Objects and the aether are different entities. Finally, the probabilistic notions of particle position have no relevance in deriving the normalization condition. You don’t have to invoke the measurement theory before imposing the normalization condition. Indeed, the measurement postulate comes way later.

Notice that the total complex-valued number for the energy of the universe remains constant. However, the time-dependence of $\Psi(x,t)$ implies that the aether, and hence the universe, forever remains in a state of oscillatory motion. (In the nonlinear theory, the system remains oscillatory, but the state evolutions are not periodic. Mark the difference between these two ideas.)

10.3. The wavefunction of the universe is always in energy eigenstates.

Another interesting consequence of the energy conservation principle is this:

Consider these two conclusions: (i) The universe is an isolated system; hence, its energy is conserved. (ii) There is only one aether object in the universe; hence, there is only one universal wavefunction.

A direct consequence therefore is this:

For an isolated system, the system wavefunction always remains in energy eigenstates. Hence, every state assumed by the universal wavefunction is an energy eigenstate.

Take a pause to note a few peculiarities about the preceding statement.

No, this statement does not at all reinforce misconceptions (see Dan Styer’s paper, here: [^][Preprint PDF ^]).

The statement refers to isolated systems, including the universe. It does not refer to closed or open systems. When matter and/or energy can cross system boundaries, a mainstream-supposed “wavefunction” of the system itself may not remain in an energy eigenstate. Yet, the universe (system plus environment) always remains in some or the other energy eigenstate.

However, the fact that the universal wavefunction is always in an energy eigenstate does not mean that the universe always remains in a stationary state. Notice that the $V(x,t)$ itself is time-dependent. So, the time-changes in it compel the $\Psi$ to change in time too. (In the language of mainstream QM: The Hamiltonian operator is time-dependent, and yet, at any instant, the state of the universe must be an energy eigenstate.)

In our view, due to nonlinearity, $V(x,t)$ also is an indirect function of the instantaneous $\Psi(x,t)$. Will cover the nonlinearity and the measurement problem the next time. (Yes, I am extending this series by one post.)

Of course, at any instant, the integral over the domain of the algebraic sum of the kinetic and the potential energy fields is always going to come to the single number which is: the aspatial attribute of the total internal energy number for the isolated system.

10.4. The wavefunction $\Psi(x,t)$ is ontic, but only indirectly so—it’s an attribute of the energy field, and hence of the aether, which is ontic:

So, is the wavefunction ontic or epistemic? It is ontic.

An attribute does not have a physical existence independent of, or apart from, the object whose attribute it is. However, this does not mean that an attribute does not have any physical existence at all. Saying so would be a ridiculously simple error. Objects exist, and they exist as identities. The identity of an object refers to all its attributes—known and unknown. So, to say that an object exists is also to say that all its attributes exist (with all their metaphysically existing sizes too). It is true that blueness does not exist without there being a blue object. But if a blue object exists, obviously, its blueness exists in the reality out there too—it exists with all the blue objects. So, “things” such as blueness are part of existence. Accordingly, the wavefunction is ontic.

Yet, the isolation (i.e. identification) of the wavefunction as an attribute of the aether does require a complex chain of reasoning. Ummm… Yes, literally complex too, because it does involve the complex-valued SE.

The aether is a single object. There are no two or more aethers in the universe—or zero. Hence, there is only a single complex-valued field of energy, that of the total internal energy. For this reason, there is only one wavefunction field in the universe—regardless of the number of particles there might be in it. However, the system wavefunction can always be mathematically decomposed into certain components particular to each particle. We will revisit this point when we cover multi-particle quantum systems.

10.5. The wavefunction $\Psi(x,t)$ itself is dimensionless:

In our view, the wavefunction, i.e., $\Psi(x,t)$ itself is dimensionless. We base this conclusion on the fact that while deriving the Schrodinger equation, where $\Psi(x,t)$ gets introduced, each term of the equation is regarded as an energy term. Since each term has $\Psi(x,t)$ also appearing in it (and you cannot get rid of the complex nature of the Schrodinger equation merely by dividing all terms by it), obviously, the multiplying factor of $\Psi(x,t)$ must be taken as being dimensionless. That’s how we in fact have proceeded.

The mainstream view is to assign the dimensions of $\dfrac{1}{\sqrt{\text{(length)}^d}}$, where $d$ is the dimensionality of the embedding space. This interpretation is based on Born’s rule and conservation of matter; for instance, see here [^].

However, as explained in the sub-section 10.2., we arrive at the normalization condition from the energy conservation principle, and not in reference to Born’s postulate at all.

All in all, $\Psi(x,t)$ is dimensionless. It appears in theory only for mathematical convenience. However, once defined, it can be seen as an attribute (aspect) of the complex-valued internal energy field (and its two components, viz. the complex-valued kinetic- and potential-energy fields). In this sense, it is ontic—as explained in the preceding sub-section.

11. Visualizing the wavefunction and the single particle in the PIB model:

11.1. Introductory remarks:

What we will be doing in this section is not ontology, strictly speaking, but only physics and visualization. PIB stands for: Particle-In-a-Box. Study this model from any textbook and only then read further.

The PIB model is unrealistic, but pedagogically useful. It is unrealistic because it uses a potential energy distribution that is not singularly anchored into point-particle positions. So, the potential energy distribution must be seen as a mathematically convenient abstraction. PIB is not real QM, in short. It’s the QM of the moron, in a way—the electron has no “potential” inside the well.

11.2. The potential energy function used in the model:

The model says that there is just one particle in a finite interval of space, and its $V(x,t)$ always stays the same at all times. So, it uses $V(x)$ in place of $V(x,t)$.

The $V(x)$ is defined to be zero everywhere in the domain except at the boundary-points, where the particle is supposed to suddenly acquire an infinite potential energy. Yes, the infinitely tall walls are inside the system, not outside it. The potential energy field is the potential energy of a point-particle, and unless it were to experience an infinity of potential energy while staying within the finite control volume of the system, no non-trivial solution would at all be possible. (The trivial solution for the SE when $V(x) = 0$ is that $\Psi(x,t) = 0$—whether the domain is finite or infinite.) In short, the “side-walls” are included in the shipped package.

If the particle is imagined to be an electron, then why does its singular field not come into picture? Simple: There is only one electron, and a given EC Object (an elementary point-charge) never comes to experience its own field. Thus, the PIB model is unrealistic on another ground: In reality, force-fields due to charges always come in pairs. However, since we consider only one particle in PIB, there are no singular force-fields anchored into a moving particle’s position, in it, at all.

Yes, forces do act on the particle, but only at the side-walls. At the boundary points, it is a forced particle. Everywhere else, it is a free particle. Peculiar.

The domain of the system remains fixed at all times. So, the potential walls remain fixed in space—before, during, and after the particle collides with them.

The impulse exerted on the particle at the time of collision at the boundary is theoretically infinite. But it lasts only for an infinitesimally small patch of space (which is represented as the point of the boundary). Hence, it cannot impart an infinity of velocity or displacement. (An infinitely large force would have to act over a finite interval of space and time before it could possibly result in an infinitely large velocity or displacement.)

OK. Enough about analysis in terms of forces. To arrive at the particular solution of this problem using analytical methods (as with most any other advanced problem), energy-analytical methods are superior. So, we go back to the energy-based analysis, and Schrodinger’s equation.

11.3. TDSE as a continuous sequence of TISE’s:

Note that you can always apply the product ansatz to $\Psi(x,t)$, and thereby split it into two functions:

$\Psi(x,t) = \chi(x)\tau(t)$,

where $\chi(x)$ is the space-dependent part and $\tau(t)$ is the time-dependent part.
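Here is a minimal numerical check of the ansatz at work (a sketch only, with $\hbar = m = 1$ and the sine mode $\chi(x) = \sin \pi x$, $\tau(t) = e^{-i\omega t}$, $\omega = \pi^2/2$, assumed purely for illustration):

```python
import numpy as np

# Minimal numerical check of the product ansatz (hbar = m = 1 assumed, with
# chi(x) = sin(pi x), tau(t) = exp(-i omega t), omega = pi^2/2 taken as an
# illustrative example): Psi = chi*tau should satisfy the TDSE
#   i dPsi/dt = -(1/2) d2Psi/dx2   (V = 0 inside the domain).
omega = np.pi**2 / 2.0
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
t0, dt = 0.7, 1e-6

def Psi(t):
    return np.sin(np.pi * x) * np.exp(-1j * omega * t)

# left side: i dPsi/dt, via a central difference in time
lhs = 1j * (Psi(t0 + dt) - Psi(t0 - dt)) / (2.0 * dt)
# right side: -(1/2) d2Psi/dx2, via a central difference in space
P = Psi(t0)
rhs = -0.5 * (P[2:] - 2.0 * P[1:-1] + P[:-2]) / dx**2
print(np.max(np.abs(lhs[1:-1] - rhs)))   # small: O(dx^2) discretization error
```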

No one tells you, but it is true that:

Even when the Hamiltonian operator is time-dependent, you can still use the product ansatz separately at every instant.

It is just that doing so is not very useful in analytical solution procedures, because both the $\chi(x)$ and $\tau(t)$ themselves change in time. Therefore, you cannot take a single time-dependent function $\tau(t)$ as applying at all times, and thereby simplify the differential equation. You would have to progress the solution in time—somehow—and then again apply the product ansatz to obtain new functions of $\chi(x)$ and $\tau(t)$ which would be valid only for the next instant in the continuous progression of such changes.

So, analytical solution procedures do not at all benefit from the product ansatz when the Hamiltonian operator is time-dependent.

However, when you use numerical approaches, you can always progress the solution in time using suitable methods, and then, whatever $\Psi(x,t)\big|_{t_n}$ you get for the current time $t_n$, you can regard it as if it were solving a TISE which was valid for that instant alone.

In other words, the TDSE is seen as being a continuous progression of different instantaneous TISE’s. Seen this way, each $\Psi(x,t)\big|_{t_n}$ can be viewed as representing an energy eigenstate at every instant.

Not just that, but since there is no heat in QM, the adiabatic approximation always applies. So, for an isolated system or the physical universe:

For an isolated system or the physical universe, the time-dependent part $\tau(t)$ of $\Psi(x,t)$ may not be the same function at all times. Yet, the wavefunction always progresses through a continuous sequence of different $\chi(x)$'s and $\tau(t)$'s.

We saw in the sub-section 10.3. that the universal wavefunction must always be in energy eigenstates. We had reached that conclusion in reference to energy conservation principle and the uniqueness of the aether in the universe. Now, in this sub-section, we saw a more detailed meaning of it.

11.4. PIB anyway uses time-independent potential energy function, and hence, time-independent Hamiltonian:

When $V(x)$ is time-independent, the time-dependent part $\tau(t)$ stays the same for all times. Using this fact, the SE reduces to one and the same pair of $\chi(x)$ and $\tau(t)$. So, the TISE in this case is very simple to solve. See your textbooks on how to solve the TISE for the PIB problem.
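For comparison with the textbook route, here is a minimal numerical sketch (assuming $\hbar = m = L = 1$; the finite-difference discretization here is our own convenience, not the analytical textbook procedure):

```python
import numpy as np

# Numerical sketch of the PIB TISE (hbar = m = L = 1 assumed). The
# Hamiltonian -(1/2) d2/dx2 with Psi = 0 at both walls is discretized on a
# uniform grid, and its lowest eigenvalues compared with E_n = n^2 pi^2/2.
N = 500                                   # interior grid points
dx = 1.0 / (N + 1)
main = np.full(N, 1.0 / dx**2)            # diagonal of -(1/2) d2/dx2
off = np.full(N - 1, -0.5 / dx**2)        # off-diagonals
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E_num = np.linalg.eigvalsh(H)[:3]         # three lowest eigenvalues
E_exact = np.array([1.0, 4.0, 9.0]) * np.pi**2 / 2.0
print(np.round(E_num, 3))                 # close to the exact values below
print(np.round(E_exact, 3))               # [ 4.935 19.739 44.413]
```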

However, make sure to

work through any solution using only the full-fledged complex variables.

The solutions given in most textbooks will prove insufficient for our purposes. For instance, if $\tau(t)$ is the time-dependent part of the solution of the TISE, then don’t substitute $\tau(t) = \cos \omega t$ in place of the full-fledged $\tau(t) = e^{-i\omega t}$.

Let the $\tau(t)$ acquire imaginary parts too, as it evolves in time.

The reason for this insistence on the full complex numbers will soon become apparent.
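A small numerical check makes the point already (a sketch, with $\hbar = E = 1$ assumed for convenience): the separated time equation of the SE, $i\hbar\,\text{d}\tau/\text{d}t = E\,\tau$, is satisfied by the full complex exponential but not by its real-valued truncation.

```python
import numpy as np

# The separated time equation of the SE reads: i*hbar*(d tau/dt) = E*tau.
# With hbar = E = 1 (an assumed convenience), check that tau = exp(-i t)
# satisfies it, while the real-valued truncation tau = cos(t) does not.
hbar = 1.0
E = 1.0
t = np.linspace(0.0, 10.0, 100001)
dt = t[1] - t[0]

def residual(tau):
    # max |i*hbar*(d tau/dt) - E*tau|, via central differences in time
    dtau = (tau[2:] - tau[:-2]) / (2.0 * dt)
    return float(np.max(np.abs(1j * hbar * dtau - E * tau[1:-1])))

full = np.exp(-1j * E * t / hbar)
trunc = np.cos(E * t / hbar)
print(residual(full) < 1e-6)   # True: the full complex exponential works
print(residual(trunc) > 0.5)   # True: cos(t) alone leaves an O(1) residual
```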

11.5. Use the full-fledged $3D$ physical space:

To visualize this solution, realize that as in EM so also in QM, even if the problem is advertised as being $1D$, it still makes sense to see this one dimension as an aspect of the actually existing $3D$ physical space. (In EM, you need to go “up” to $3D$ because the curl demands it. In QM, the reason will become apparent if you do the homework given below.)

Accordingly, we imagine two infinitely large parallel planes for the system boundaries, and the aether filling the space in between them. (Draw a sketch. I won’t. I would have, in a real class-room, but don’t have the enthusiasm to draw pics while writing mere blog-posts. And, whatever happened to your interest in visualization rather than in “math”?) The planes remain fixed in space.

Now, pick up a line passing normally through the two parallel planes. This is our $x$-axis.

11.6. The aetherial momentum field:

Next, consider the aetherial momentum field, defined by:

$\vec{p}(x,t) = i\,\hbar\,\nabla\Psi(x,t)$.

This definition for the complex-valued momentum field is suggested by the form of the complex-valued quantum mechanical kinetic energy field. It has been derived in analogy to the classical expression $T = \dfrac{p^2}{2m}$.

In our PIB model, this field exists not just on the chosen line of the $x$-axis, but also everywhere in the $3D$ space. It’s just that it has no variation along the $y$– and $z$-axes.

11.7. Gaining physical clarity (“intuition”) with analysis in terms of forces, first:

In the PIB model, when the massive point-particle of the electron is at some point $\vec{r}_j$, then it experiences a zero potential force (except at the boundary points).

So, electrostatically speaking, the electron (i.e. the singularity at the EC Object’s position) should not move away from the point where it was placed as part of IC/BCs of the problem. However, the existence of the momentum field implies that it does move.

To see how this happens, consider the fact that $\Psi(x,t)$ involves not just the space-dependent part $\chi(x)$, but also the time-dependent part $\tau(t)$. So,

The total wavefunction $\Psi(\vec{r}_j, t)$ is time-dependent—it continuously changes in time. Even in stationary problems.

Naturally, there should be an aetherial force-field associated with the aetherial momentum field (i.e. the aetherial kinetic energy field) too. It is given by:

$\vec{F}_{T}(x,t) = \dfrac{\partial}{\partial t} \vec{p}_{T}(x,t) = \dfrac{\partial}{\partial t} \left[ i\,\hbar\,\nabla\Psi(x,t) \right]$,

where the subscript $T$ denotes the fact that these quantities refer to their conceptual origins in the kinetic energy field. These $_T$ quantities are over and above those due to the electrostatic force-fields. So, if $V$ were not zero in our model, then there would also be a force-field due to the electrostatic interactions, which we might denote as $\vec{F}_{V}$, where the subscript $_V$ denotes the origin in the potentials.

Anyway, here $V(x) = 0$ at all internal points, and so, only the quantity of force given by $\vec{F}_{T}(\vec{r}_j,t)$ would act on our particle when it strays at the location $\vec{r}_j$. Naturally, it would get whacked! (Feel good?)

The instantaneous local acceleration for the elemental CV of the aether around the point $\vec{r}_j$ is given by $\vec{a}_{T}(\vec{r}_j,t) = \dfrac{1}{m} \dfrac{\partial \vec{p}_{T}(\vec{r}_j,t)}{\partial t}$.

This acceleration should imply a velocity too. It’s easy to see that the velocity so implied is nothing but

$\vec{v}_{T}(\vec{r}_j,t) = \dfrac{1}{m} \vec{p}_{T}(\vec{r}_j,t)$.

Yes, we went through a “circle,” because we basically had defined the force on the basis of momentum, and we had given the more basic definition of momentum itself on the basis of the kinetic energy fields.
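To make the preceding force-based chain concrete, here is a minimal numerical sketch (assuming $\hbar = m = L = 1$ and the standard PIB ground state): it shows that the kinetic-energy-based force-field $\vec{F}_T$ is nonzero at interior points even though $V(x) = 0$ there.

```python
import numpy as np

# Sketch of the force-based chain above, for the standard PIB ground state
# (hbar = m = L = 1 assumed): the kinetic-energy-based force-field
# F_T = d/dt [ i*hbar*dPsi/dx ] is nonzero at interior points although
# V(x) = 0 there.
hbar = m = L = 1.0
omega = (np.pi / L)**2 * hbar / (2.0 * m)   # E_1 / hbar for the ground state

def p_T(x, t):
    # i*hbar*dPsi/dx, with Psi = sqrt(2/L)*sin(pi x/L)*exp(-i omega t)
    dchi = np.sqrt(2.0 / L) * (np.pi / L) * np.cos(np.pi * x / L)
    return 1j * hbar * dchi * np.exp(-1j * omega * t)

def F_T(x, t, dt=1e-6):
    # time-derivative of the momentum field, via a central difference
    return (p_T(x, t + dt) - p_T(x, t - dt)) / (2.0 * dt)

x0 = 0.25 * L   # an interior point, where V(x) = 0
print(abs(F_T(x0, 0.5)) > 0.0)   # True: a nonzero "kinetic" force acts here
```

Note that, for this stationary state, $\vec{F}_T = -i\omega\,\vec{p}_T$ exactly, since the time-dependence sits entirely in the $e^{-i\omega t}$ factor.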

11.8. Representing complex-valued fields as spatial entities is logically consistent with everything we know:

Notice that all the fields we considered in the force-based analysis: the momentum field, the force-field, the acceleration field, and the velocity field are complex-valued. This is where the $3D$-ness of our PIB model comes handy.

Think of an arbitrary $yz$-plane in the domain as representing the mathematical Argand plane. Then, the $\Psi(x,t)$ field at an arbitrary point $\vec{r}_j$ would be a phasor of constant length, but rotating in that same $yz$-plane at a constant angular velocity, given by the time-dependent part $\tau(t)$.

Homework: Write a Python simulation to show an animation of a few representative phasors for a few points in the domain, following the above convention.
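For what it's worth, here is a minimal, non-animated starting point for that homework (a sketch only; $\hbar = m = L = 1$ is assumed, and the plotting/animation part is left out):

```python
import numpy as np

# A minimal, non-animated starting point for the phasor homework (sketch
# only; hbar = m = L = 1 assumed, plotting left out). Convention from the
# text: Re(Psi) is plotted along y, Im(Psi) along z.
L = 1.0
hbar = m = 1.0
n = 1
omega = (n * np.pi / L)**2 * hbar / (2.0 * m)   # E_n / hbar

def psi(x, t):
    chi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)   # space part
    tau = np.exp(-1j * omega * t)                        # time part
    return chi * tau

x_pts = np.array([0.25, 0.50, 0.75]) * L     # a few representative points
for t in np.linspace(0.0, 2.0 * np.pi / omega, 5):
    vals = psi(x_pts, t)
    # (x, y, z) coordinates of the rotating phasor tips:
    tips = np.column_stack([x_pts, vals.real, vals.imag])
    print(np.round(tips, 3))
```

Each phasor tip keeps a constant distance $|\chi(x)|$ from the $x$-axis and rotates at the constant angular velocity $\omega$, exactly as described above.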

11.9. Time evolution, and the spatial directions of the $\Psi(x,t)$-based vector fields:

Consider the changes in the $\Psi(x,t)$ field, distributed in the physical $3D$ space.

As $\tau(t)$ evolves in time, even if the IC had only a real-valued function like $\cos t$ specified for it, then given the full-fledged complex-valued nature of $\tau(t)$, it would soon enough (with the passage of an infinitesimal amount of time) acquire a so-called “imaginary” component.

Following our idea of representing the real- and imaginary-components in the $y$– and $z$-axes, the $\Psi(x,t)$ field no longer remains confined to a variation along the $x$-axis alone. It also has variations along the plane normal to the $x$-axis.

Accordingly, the unit vectors for the grad operator, and hence for all the vector quantities (of momentum, velocity, force and acceleration) also acquire a definite orientation in the physical $3D$ space—without causing any discomfort to the “math” of the mainstream quantum mechanics.

Homework: Consider the case when $\Psi(x,t)$ varies along all three spatial axes. An easy example would be that of the hydrogen atom wavefunction. Verify that the spatial representation of the vector fields (momentum, velocity, force or acceleration) proposed by us causes no harm to the “math” of the mainstream quantum mechanics.

If doing simulations, you can integrate in time (using a suitable time-stepping technique), and come to calculate the instantaneous displacements of the particle, too. Exercise left for the reader.

Homework: Perform both analytical and numerical integration for the PIB model. Verify that your simulation is correct.

Homework: Build an animation for the motion of the point-particle of the EC Object, together with the time-variations of all the complex-valued fields: $\Psi(x,t)$, and all the complex-valued vector fields derived from it.

11.10. Too much of homework?

OK. I’ve been assigning so many pieces for the homework today. Have I completed any one of them for myself? Well, actually not. But read on, anyway.

The locus of all possible particle-positions would converge to a point only at the boundary points (because $\Psi(x,t) = 0$ there). At all the internal points in the domain, the particle-position should be away from the $x$-axis.

That’s my anticipation, but I have not checked it. In fact, I have not built even a single numerical simulation of the sort mentioned here.

So, take this chance to prove me wrong!

Please do the homework and let me know if I am going wrong. Thanks in advance. (I have to finish this series first, somehow!)

12. What the PIB model tells about the wave-particle duality:

What happened to the world-famous wave-particle duality? If you build the animations, you would know!

There is a point-particle of the electron (which we regard as the point of the singularity in the $\vec{\mathcal{F}}$ field), and there is an actual, $3D$ field of internal energy—and hence of $\Psi(x,t)$. And, on our hypothesis of representing complex numbers spatially, all the complex-valued fields (including the vector fields like displacement) acquire a spatial representation too.

The particle motion is governed by both the potential energy-forces and the kinetic energy-forces. That is, the aetherial wavefunction “guides” etc. the particle. In our view, the kinetic energy field too forces the particle.

“Ah, smart!,” you might object. “And what happened to the Born rule? If the wavefunction is a field, then there is a probability for finding the particle anywhere—not just at the position where it is, as predicted in this model. So, your model is obviously dumb!! It’s not quantum mechanics at all!!!”

Hmmm… We have not solved the measurement problem yet, have we?

We will need to cover the many-particle QM first, and then go to the nonlinearity implied by the kinetic energy field-forces, and only then would we be able to present our solution to the measurement problem. Since I got tired of typing (this post is already ~9,500 words), I will cover it in some other post. I will also try to touch on entanglement, because it would come in the flow of the coverage.

But in the meanwhile, try to play with something.

Homework: “Invert” the time-displacement function/relationship you obtain for the PIB model, and calculate the time spent by the particle in each infinitesimally small CV of the $3D$ domain, during a complete round-trip across the domain. Find its $x$-component. See if you can relate the motion, in any way, to the probability rule given by Born (i.e., try to anticipate our next development).

Do that. This way, you will stay prepared to spot if I have made any mistakes in this post, and also if I make any further mistakes in the next—and have made any mistakes in the last post as well.

Really. I could easily have made a mistake or two. … These matters still are quite new to me, and I really haven’t worked out the maths of everything ahead of writing these posts. That’s why I say so.

13. A preview of the things to come:

I had planned to finish this series in this post. In a sense, it is over.

The most crucial ontological aspects have already been given. Starting from the comprehensive list of the QM objects, we also saw that the quantum mechanical aetherial fields are all complex-valued; that there is an additional kinetic energy field too, not just potential; and also saw our new ideas concerning how to visualize the complex-valued fields by regarding the Argand plane as a mathematical abstraction of a real physical plane in $3D$. We also saw how these QM ontological objects come together in a simple but fairly well illustrative problem of the PIB. We even touched on the wave-particle duality.

So, as far as ontology is concerned, even the QM ontology is now essentially over. There might be important repercussions of the ontological points we discussed here (and, also before, in this series). But as far as I can see, these should turn out to be mostly consequences, not any new fundamental points.

Of course, a lot of physics issues still remain to be clarified. I would like to address them too.

So, while I am at it, I would also like to say something about the following topics: (i) Multi-particle quantum systems. (ii) Issue of the $3D$ vs. $3ND$ nature of the wavefunction field. (iii) Physics of entanglement. (iv) Measurement problem.

All these topics use the same ontology as used here. But saying something about them would, I hope, help understand it better. Applications always serve to understand the exact scope and the nuances of a theory. In their absence, a theory, even if well specified, still runs the risk of being misunderstood.

That’s why I would like to pick up the above four topics.

No promises, but I will try to write an “extra” post in this series, and finish off everything needed to understand the points touched upon in the Outline document (which I had uploaded at iMechanica in February this year, see here [^]). Unlike until now, this next post would be mostly geared towards QM experts, and so, it would progress rapidly—even unevenly or in a seeming “broken” manner. (Experts would always be free to get in touch with me; none has, in the 8+ months since the uploading of the Outline document at iMechanica.)

I would like it if this planned post (on the four physics topics from QM) forms the next post on this blog, but then again, as I said, no promises. There might be an interruption with other topics in the meanwhile (though I would try to keep them at bay). Plus, I am plain tired and need a break too. So, no promises regarding the time-frame of when it might come.

OK.

So, do the homework, and think about the whole thing. Also, brush up on the topic of coupled oscillations, say from David Morin/Walter Fox Smith/Howard Georgi, or even as covered in the FEM modeling of idealized spring-mass systems. Do that, so that you are ready for the next post in this series—whenever it comes.

In the meanwhile, sure feel free to drop in a comment or email if you find that I am going wrong somewhere—especially in the maths of it or its implications. Thanks in advance.

Take care, and bye for now.

A song I like:

(Marathi) “aalee kuThoonashee kanee taaLa mrudungaachi dhoona”
Music and Singer: Vasant Ajgaonkar
Lyrics: Sopandev Chaudhari

History:
— First published: 2019.11.05 17:19 IST.
— Added the sub-section 10.5. and the songs section. Corrected LaTeX typos, the same day at 20:31 IST.
— Expanded the section 11. considerably, and also added sub-section titles to it. Revised also the sections 12. and 13. Overall, a further addition of approx. 1,500 words. Also corrected typos. Now, unless there is an acute need even for typo-corrections (i.e. if something goes blatantly in an opposite direction than the meaning I had in mind), I would leave this post in the shape in which it is. 2019.11.06 11:06 IST.

# Should I give up on QM?

After further and deeper studies of the Schrodinger formalism, I have now come to understand the exact position from which the physicists must be coming (I mean the couple of physicists with whom I discussed the ideas of my new approach, as mentioned here [^])—why they must be raising their objections. I came to really understand their positions only now. Here is how it happened.

I was pursuing finding a correspondence between the $3ND$ configuration space of the Schrodinger formalism on the one hand and the $3D$ physical space on the other, when I ran into a subtle point which made everything look completely different. That point is the following:

Textbooks (or lecture notes, or lecturers) don’t ever highlight this point (in fact, indirectly, they actually obfuscate it), but I came to realize that even in the $1D$ cases like the QM harmonic oscillator (QHO), the Schrodinger formalism itself remains defined only on an abstract hyperspace—it’s just that in the case of the QHO, this hyperspace happens to be $1D$ in nature, that’s all.

I came to realize that, even in the simplest $1D$ case like the QHO, the $x$ variable which appears in the Schrodinger equation does not directly refer to the physical space. In the case of the QHO, it refers to the change in the equilibrium separation between the centers of the two atoms.

Physicists and textbooks don’t mention this point, and in fact, the way they present QM, they make it look as if $x$ were the simple position variable. But in reality, it is not. It can be made to look like a position variable (and not a change-in-the-interatomic-distance variable) by fixing the coordinate system to one of the two atoms (i.e. by making it a moving or Lagrangian coordinate system). But doing so leads to losing the symmetry in the motion of the two atoms, and more importantly, it further results in an obfuscation of the real nature of the issue. Mind you, textbook authors are trying to be helpful here. But unwittingly, they end up actually obfuscating the real story.

So, the $x$ variable whose Laplacian you take for the kinetic energy term also does not represent the physical space—not even in the simplest $1D$ cases like the QHO.

This insight, which I gained only now, has made me realize that I need to rethink through the whole thing once again.

In other words, my understanding of QM turned out to have been faulty—though the fault is much more on the part of the textbook authors (and lecturers) than on the part of someone like me—one who has learnt QM only through self-studies.

One implication of this better understanding now is that the new approach as stated in the Outline document isn’t going to work out. Even if there are a lot of good ideas in it (only the Coulomb potentials, the specific nonlinearity proposed in the potential energy term, the ideas concerning measurements, etc.), there are several other ideas in that document which are just so weak that I will have to completely revise my entire approach once again.

Can I do that—take up a complete rethinking once again, and still hope to succeed?

Frankly, I don’t know. Not at this point of time anyway.

I still have not given up. But a sense of tiredness has crept in now. It now seems possible—very easily possible—that QM will end up defeating me, too.

But before outright leaving the fight, I would like to give it just one more try. One last try.

So, I have decided that I will “work” on this issue for just a little while more. Maybe a couple of weeks or so; say, until the month-end (March 2019-end). Unless I make some clearing, some breakthrough, I will not pursue QM beyond this time-frame.

What is going to be my strategy?

The only way an enterprise like mine can work out is if the connection between the $3D$ world of observations and the hyperspace formalism can be put in some kind of a valid conceptual correspondence. (That is to say, not just the measurement postulate but something deeper than that, something right at the level of the basic conceptual correspondence itself).

The only strategy that I will now pursue (before giving up on QM) is this: The Schrodinger formalism is based on the higher-dimensional configuration space not because a physicist like him would go specifically hunting for a higher-dimensional space, but primarily because the formulation of Schrodinger’s theory is based on the ideas from the energetics program, viz., the Leibniz-Lagrange-Euler-Hamilton program, their line(s) of thought.

The one possible opening I can think of as of today is this: The energetics program necessarily implies hyperspaces. However, at least in the classical mechanics, there always is a $1:1$ correspondence between such hyperspaces on the one hand and the $3D$ space on the other. Why should QM be any different? … As far as I am concerned, all the mystification they effected for QM over all these decades still does not supply any reason to believe that QM should necessarily be very different. After all, QM does make predictions about real world as described in $3D$! Why, even the position vectors that go into the potential energy operator $\hat{V}$ are defined only in the $3D$ space. …

… So, naturally, it seems that I just have to understand the nature of the correspondence between the Lagrangian mechanics and the $3D$ mechanics better. There must be some opening in there, based on this idea. In fact my suspicion is stronger: If at all there is a real opening to be found, if at all there is any real way to crack this nutty problem, then its key has to be lying somewhere in this correspondence.

So, I have decided to work on seeing if pursuing this line of thought yields something definitive or not. If it doesn’t, right within the next couple of weeks or so, I think I better throw in the towel and declare defeat.

Now, understanding the energetics program better meant opening up once again the books. But given my style, you know, it couldn’t possibly be the maths books—but only the conceptual ones.

So, this morning, I spent some time opening a couple of the movers-and-packers boxes (in which stuff was still lying as I mentioned before [^]), and also made some space in my room (somehow) by shoving the boxes a bit away to open the wall-cupboard, and brought out a few books I wanted to read / browse through. Here they are.

The one shown opened is what I had mentioned as “the energetics book” in the background material document (see this link [^] in this post [^]). I am going to begin my last shot at QM (the understanding of the $3ND \leftrightarrow 3D$ issue) starting with this book. The others may or may not be helpful, but I just wanted to show off that they are a part of my personal library too!

Wish me luck!

(And suggest me a job in Data Science all the same! [Not having a job is the only thing that gets me (really) angry these days—and it does. So there.])

BTW, I really LOL on the Record of 17 off 71. (Just think what happened in 204!)

A song I like:

(Hindi) “O mere dil ke chain…”
Singer: Kishor Kumar
Music: R. D. Burman
Lyrics: Majrooh Sultanpuri

Minor editing to be done and a song to be added, tomorrow. But feel free to read the post right starting today.

Song added on 2019.03.10 12.09 AM IST. Subject to change if I have run it already.

# An update on my research

28th February is the National Science Day in India.

The story goes that it was on this day (in 1928) that C. V. Raman discovered the effect known by his name.

I don’t believe that great discoveries like that are made in just one single day. There is a whole sequence of many crucially important days involved in them.

Yes, on this day, Raman might have achieved a certain milestone or made a key finding regarding his discovery. However, even if true in this case (which I very much doubt), it’s not true in general. Great discoveries are not made in a single day; they are usually spread over a much longer span of time. A particular instant or day has more of a symbolic value, no matter how sudden the discovery might have looked to someone, including to the discoverer.

There of course was a distinguished moment when Kekule, in his famous dream, saw a snake swallowing its own tail. However, to therefore say that he made the discovery concerning the ring structure of the benzene molecule just in a single moment, or in a single flash of imagination, is quite a bit of a stretch.

Try it out yourself. Think of a one-line statement that encapsulates the findings of a discovery made by a single man. Compare it with another statement which encapsulates any of the previous views regarding the same matter (i.e., before this discovery came along). This way, you can isolate the contributions of a single individual. Then analyze those contributions. You would invariably find that there are several different bits of progress that the discovery connected together, and these bits themselves (i.e., the contributions made individually by the discoverer himself) were not all discovered on the same day. Even if a day or an hour is truly distinctive in terms of the extent of progress made, it invariably has the character of taking an already ongoing process to a state of completion—but not of conducting that entire process. Mystical revelation is never a good metaphor to employ in any context—not even in the spiritual matters, let alone in the scientific ones.

Anyway, it’s nice that they didn’t choose Raman’s birth-day for this Day, but instead chose a day that was related to his most famous work in science. Good sense! And easy to remember too: 28-02-’28.

Let me celebrate this year’s Science Day in my own, small, personal way. Let me note down a bit of an update on my research.

1. I have had a bit of a correspondence, regarding my new approach, with a couple of physicists. Several objections were made by them, but to cut a long story short, neither seemed to know how to get into that mode of thinking which most naturally leads to my main thesis, and hence helps in understanding it.

The typical thought process both these physicists displayed was the one which is required in finding analytical solutions of problems of a certain kind, using an analysis of a specific kind. But it is not the kind of thought process which is typically required in the computational modeling of complex phenomena. Let me remind you that my theory is nonlinear in nature. Nonlinearity, in particular, is best approached only computationally—you would be hopelessly out of your wits if you try to find analytical solutions to a nonlinear system. What you should instead pursue is thinking in terms of the following ingredients: certain objects, an algorithm to manipulate their states, and tracing the run-time evolution of the system. Try this algorithmic way of thinking, and the whole thing (I mean understanding the nature of a nonlinear system) becomes easy. Otherwise, it looks hopelessly complicated, incomprehensible, and therefore, deeply suspicious, if not outright wrong. Both the physicists with whom I interacted seemed to be thinking in terms of the linear theory of QM, thereby restricting their thought modes to only the traditional formalism based on the abstract Hilbert-spaces and linear Hermitian operators. Uh oh! Not good. QM is fundamentally nonlinear; the linear formulations of QM are merely approximations to its true nature. No matter how analytically rigorous you can get in the traditional QM, it’s not going to help you understand the true nature of quantum phenomena, simply because a linear system is incapable of throwing much light on the nonlinear system of which it is an approximation.

I believe it was for this reason—their continuing to think in terms of linear systems defined over hyperspaces and the operator algebra—that one of them raised the objection: if $\Psi$ in MSQM (mainstream QM) is defined on a $3ND$ configuration space, then how could my $\Psi(x,t)$ be defined over the physical $3D$ space? He didn’t realize, even after I supplied the example of the classical $N$-particle molecular dynamics (MD) simulations, that using an abstract higher-dimensional space isn’t the only viable manner in which you can capture the physics of a situation. (And I had indicated right in the Outline document too, that you first try to understand how a Newtonian evolution would work for multiple, charged, point-particles as in classical physics, and only then modify this evolution by introducing the system wavefunction.)

I came to gather that apparently, some people (who follow the Bohmian mechanics doctrine) have tried to find a $3ND \leftrightarrow 3D$ correspondence for a decade, if not more. Apparently, they didn’t succeed. I wonder why, because doing so should be so damn straight-forward (even if it would not be easy). You only have to realize that a configuration space refers to all possible configurations, whereas what an evolution over a $3D$ physical space directly deals with is only one initial configuration at a time. That is what specifying the ICs and the BCs does for you.

In case of MD simulations, you don’t define a function over the entire $3ND$ configuration space in the first place. You don’t try to produce an evolution equation which relies on only those kinds of operators which modify all parts of the entire hyperspace-function in one shot, simultaneously. Since you don’t think in such hyperspace terms in the first place, you also don’t have to think in terms of the projection operators bringing the system dynamics down to $3D$ in particular cases either. You don’t do that in the context of MD simulations, and you don’t do it in the context of my approach either.
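To make the MD analogy concrete, here is a minimal, purely illustrative sketch (not my actual QM scheme; the particle count, force law, and step sizes are all made up for the demo): $N$ classical point-particles with Coulomb-like repulsion, evolved by the standard velocity-Verlet algorithm. Note that the program’s state at every instant is just one configuration—$N$ positions in $3D$—never a function defined over the entire $3ND$ hyperspace.

```python
import numpy as np

# Illustrative MD sketch (all numbers made up). N classical particles
# repel via Coulomb-like 1/r^2 forces between unit charges. The state
# held in memory is ONE configuration: N positions in 3D space.
rng = np.random.default_rng(seed=0)
N, dt, n_steps = 8, 1.0e-3, 100
pos = rng.normal(size=(N, 3))   # the current configuration (the ICs)
vel = np.zeros((N, 3))          # particles start at rest

def forces(pos):
    """Pairwise repulsion: F on i due to j is r_ij / |r_ij|^3."""
    f = np.zeros_like(pos)
    for i in range(N):
        for j in range(N):
            if i != j:
                r = pos[i] - pos[j]
                f[i] += r / np.linalg.norm(r)**3
    return f

# Velocity-Verlet: only this single configuration gets updated in time.
f = forces(pos)
for _ in range(n_steps):
    pos += vel * dt + 0.5 * f * dt**2
    f_new = forces(pos)
    vel += 0.5 * (f + f_new) * dt
    f = f_new

print(pos.shape)   # (8, 3): N points in 3D, not a 3ND-space function
```

The point of the sketch: at no step does the code touch anything resembling a function over all possible configurations; the ICs pick one configuration, and the evolution traces its single trajectory.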

This physicist also didn’t want me to say something using analogies and metaphors, and so I didn’t mention it to him, but I guess I can use an analogy here. It will allow even a layman to get a sense of the issue right.

This physicist was insisting on having a map of an entire territory, and was more or less completely dismissing my approach on the grounds that I only supply the surveying instruments like the theodolite and the triangulation algorithm. He expected to see the map—even when a theory is at a fledgling stage. He nevertheless was confident that I was wrong because I was insisting that each physical object in the actual territory is only at one place at any given instant, that it is not spread all over the map. This analogy is not exact, but it is helpful: it does bring out the difference of focusing on only the actually followed trajectory in the configuration space, vs. an insistence on using the entirety of the configuration space for any description of an evolution. But that guy didn’t get this point either. And he wanted equations, not analogies or metaphors.

Little wonder they have not been successful in finding out what logical connection there is between the abstract $3ND$ hyperspace on the one hand, and the $3D$ physical space on the other hand. Little wonder they don’t progress despite having worked on the problem for a decade or so (as this guy himself said).

Yeah, physicists, work harder, I say! [LOL!]

2. Apart from it all—I mean all those “discussions”—I have also realized that there are several errors or confusing explanations in the Outline document which I uploaded at iMechanica on 11th February 2019. Of course, these errors are more minor in nature. There are many, many really important ideas in that document which are not in error.

The crucially important and new ideas which are valid include, just to cite a few aspects: (i) my insistence on using only those potentials that are singularly anchored into the point-particle charges, (ii) the particular nonlinearity I have proposed for the system evolution, (iii) the idea that during a measurement it is the Instrument whose state undergoes a cascade of bifurcations or catastrophic changes, whereas the System state essentially remains the same (that there is no wavefunction collapse). And, many, many other ideas too. These ideas are not only crucial to my approach but they also are absolutely new and original. (Yes, you can be confident about this part, too—else, Americans would have pointed out the existing precedents by now. (They are just looking to find errors in what(ever) I say.)) All these ideas do remain intact. The confusing or erroneous parts are indeed more minor; they concern more how I tried to explain things. And I am working on removing these errors too.

I have also come to realize that I need to explicitly give a set of governing equations, as well as describe the algorithm that could be used in building the simulations. Yes, the physicist had asked me for an evolution equation. I thought that anyone, given the Schrodinger equation and my further verbal additions / modifications to it, could easily “get” it. But apparently, he could not. So, yes, I will explicitly write down the evolution equation for my approach, as an equation that is separate from Schrodinger’s. In the next revision of the document (or an addition to it) I will not rely on only implicitly understood constraints or modifications to the TDSE.

3. There also are some other issues which I noticed entirely on my own, and I am working on them.

One such issue concerns the way the kinetic energy is captured in the MSQM vs. how my approach ought to handle and capture it.

In MSQM, the kinetic energy consists of a sum of 1-particle Laplacian operators that refer to particle coordinates. Given the fact that my approach has the wavefunction defined over the $3D$ space, how should this aspect be handled? … By the time I wrote my Outline document (version 11 February 2019), I had not thought a lot about the kinetic energy part. Now, I found out, I have to think really deep about it. May be, I will have to abandon the form of Schrodinger’s equation itself to a further extent. Of course, the energy analysis will progress on the same lines (total energy = kinetic + potential), and the de Broglie relations will have to be honored. But the form of the equation may turn out to be a bit different.
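For reference, the standard MSQM kinetic energy operator I am referring to here is the sum of one-particle Laplacians over the particle coordinates (writing the hyperspace wavefunction with all $N$ coordinates shown explicitly):

```latex
\hat{T} \;=\; -\sum_{j=1}^{N} \frac{\hbar^2}{2 m_j}\, \nabla_j^2,
\qquad
\nabla_j^2 \;\equiv\; \frac{\partial^2}{\partial x_j^2}
                    + \frac{\partial^2}{\partial y_j^2}
                    + \frac{\partial^2}{\partial z_j^2},
```

so that the mainstream TDSE reads

```latex
i\hbar\, \frac{\partial \Psi}{\partial t}
\;=\; \left[ \hat{T} + \hat{V} \right]
\Psi(\vec{r}_1, \dots, \vec{r}_N, t).
```

It is precisely this form of $\hat{T}$, tied to the $3ND$ coordinates, that has to be rethought if the wavefunction is to live over the $3D$ space instead.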

You see, what MSQM does is to represent the particles using only the $\Psi(x,t)$ field. The potential energy sure can be constructed in reference to a set of discrete particle positions even in MSQM, but what the $\hat{V}$ operator then yields is just a single number. (In case of time-dependent potentials, the value of this variable varies in time.) The multiplication by the hyperspace-function $\Psi(x,t)$ then serves to distribute this much amount of energy (that single number) over the entire hyperspace. Now realize that $|\Psi(x,t)|^2$ gives the probability. So, in a way, indirectly, even if you can calculate / compute the potential energy of the system starting from a certain set of particle positions, in the MSQM, you then have to immediately abandon them—the idea of the discrete particles. The MSQM formalism doesn’t need it—the particle positions. You deal only with the hyperspace-occupying $\Psi(x,t)$. The formulation of kinetic energy also refers to only the $\Psi(x,t)$ field. Thus, in MSQM, particles are ultimately represented only via the $\Psi(x,t)$ field. The $\Psi(x,t)$ is the particles.

In contrast, in my approach, the particles are represented directly as point-phenomena, and their positions remain significant throughout. The $\Psi(x,t)$ field of my approach connects, and causally interacts with, the particles. But it does not represent the particles. Ontologically, $\Psi(x,t)$ is basically different from particles, even if the background object does interact with the particles. Naturally, why should I represent their kinetic energies via the Laplacian terms? … Got the idea? The single number that is the kinetic energy of the particles need not be regarded as being distributed over the $3D$ space at all, in my approach. But in the 11th February version of the Outline document, I did say that the governing equation is only Schrodinger’s. The modifications required to be made to the TDSE on account of the kinetic energy term are something I had not even thought of, because in writing that version, I was trying to focus on getting as many details regarding the potential energy out as possible. After all, the nonlinear nature of QM occurs due to the potential term, doesn’t it?

So, I need to get issues like these straightened out too.

… All in all, I guess I can say that I am more or less (but not completely) done with the development concerning the spin-less 1-particle systems, esp. the time-independent states. So far, it seems that my approach does work fine with them. Of course, new issues continue to strike me all the time, and I continue finding answers to them as well—as happens in any approach that is completely new. New, right from the stage of the very basic ideation concerning what kind of objects there should be in the theory.

I have just about begun looking into the (spin-less) multi-particle states. That is the natural order in which the theory should progress, and my work is tracing just this same path. But as I said, I might also be revising some parts of the earlier presented theory, as and when necessary.

4. I also realized on my own, but only after the interaction with the physicists was already over, that actually, I need not wait for the entire multi-particle theory to get developed before beginning with simulations. In fact, it should be possible to handle some simple 1-particle $1D$ cases like the particle in a box or the QHO (quantum harmonic oscillator) right away.

I plan to pursue these simulations right in the near future. However, I will not be able to complete pursuing all their aspects in the near future—not even in the simple cases involving just $1D$ simulations. I plan to do a preliminary simulation or two, and then suspend this activity until the time that I land a well-paying job in data science in Pune.
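Just to have a plain baseline that any such $1D$ simulation can be checked against, here is the standard (MSQM, linear, textbook) particle-in-a-box, solved by finite differences; this is the conventional treatment, not my modified approach, and the grid size is an arbitrary choice for the demo:

```python
import numpy as np

# Textbook baseline: 1D particle in a box on [0, L] with hard walls,
# V = 0 inside, solved by a central finite-difference discretization
# of the time-independent Schrodinger equation. Units: hbar = m = 1.
L = 1.0
n = 1000                 # number of interior grid points (demo choice)
dx = L / (n + 1)

# -(1/2) d^2/dx^2 as a tridiagonal matrix; the hard walls enter by
# simply dropping the two boundary points (psi = 0 there).
main = np.full(n, 1.0 / dx**2)
off = np.full(n - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies, states = np.linalg.eigh(H)   # ascending eigenvalues

# Exact eigenvalues for comparison: E_k = (k * pi)^2 / 2, k = 1, 2, ...
for k in (1, 2, 3):
    exact = (k * np.pi)**2 / 2
    print(f"E_{k}: numeric {energies[k - 1]:.4f}, exact {exact:.4f}")
```

The numerically computed ground-state energy should land very close to the exact $E_1 = \pi^2 \hbar^2 / (2 m L^2)$, which is the kind of sanity check any modified $1D$ scheme would also have to pass in the linear limit.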

No songs section this time because I happened to post several entries almost back to back here, and in the process, I seem to have used up all the songs that were both new (not run here before) and also on the top of my mind. … May be I will return later and add a song if one strikes me easily.

Bye for now, and have a happy Science Day!

Minor editing may be done later today. Done by 20:15 hrs the same day.