Entanglement, nonlocality, and the slickness of the MSQM folks

Update: See the end of this post.


0. Context

This post began its life as a comment to Roger Schlafly’s blog post: “Smolin preaches nonlocality nonsense” [^]. However, at 7000+ characters, my comment was almost twice the limit (of 4k characters) there. So, I decided to post my reply here, as a separate entry by itself.

I assume that you have read Schlafly’s post in toto before going any further.


1. Schlafly’s comments:

Schlafly says:

“Once separated, the two particles are independent.”

The two particles remain two different entities, but their future dynamics also remains, in part, governed by a single, initial, entangling, wavefunction.

“Nothing you do to one can possibly have any effect on the other.”

The only possible things you can do to any one (or both) of the entangled particles necessarily involve their shared (single) wavefunction.

Let me explain. Let’s begin at the beginning.


1. System description and notation:

Call the two entangled particles EP1 and EP2.

If you want to imagine two different things physically being done to the two EPs, then you have to have at least two additional particles (APs) with which these EPs eventually interact. APs may be large assemblages of particles like detectors; EPs are regarded as simple single particles, say two electrons.

Imagine a 1D situation. Initially, the EPs interact at the origin of the x-axis. Then they fly apart. EP1 goes to, say, +1000.0 km (or lightyears), and EP2 goes to -1000.0 km (or lightyears). Both points lie on the same x-axis, symmetrically away from the origin.

To physically do something with EP1, suppose you have the additional particle (detector) AP1 already existing at +1000.0 + \epsilon km, and similarly, there is another AP2, exactly at -1000.0 - \epsilon km, where \epsilon is a small distance, say of the order of a millimeter or so.

Homework 1: Check out the distance from the electron emitter to the detector in the single-particle double-slit interference experiments. Alternatively, the size of the relevant chamber inside a TEM (transmission electron microscope).

The overall system thus actually has (and always had) four different particles, and in the ultimate analysis, they all have always had a single, common, universal wavefunction. (Assume, there is nothing else in the universe.)

But for simplicity of talking, we approximated the situation by eking out a two-particle entangled wavefunction for the EPs—just to get the discussion going.

All MSQM (mainstream QM) people blithely jump back and forth between abstractions in this way—between two abstractions having basically different scopes. That’s not the trouble. The trouble is: They never tell you exactly when they are about to do that.

OK. Now, think of the 4-particles system-wavefunction as being built from four different 1-particle wavefunctions (via an appropriate linear superposition of all the appropriate product-states of the four 1-particle wavefunctions, with the proviso that the resulting single wavefunction must have enough generality, and that it obey the appropriate exchange-operator rules etc.).
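As a minimal illustration of what such a composition looks like (my addition; this is just the standard textbook form, written out for only two particles to keep it short):

\Psi(x_1, x_2, t) = \sum\limits_{m}\sum\limits_{n} c_{mn}(t)\, \psi_m(x_1)\, \psi_n(x_2),

where x stands for all of a particle’s coordinates (including, if needed, the spin label), and where, for fermions like electrons, the exchange rule requires c_{mn} = -\,c_{nm}. The state is entangled precisely when this double sum cannot be factored into a single product of the form \phi(x_1)\,\chi(x_2).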


2. The sense in which entangled particles approach independence—in their interactions with the other particles:

Each 1-particle wavefunction has an anchoring point in space.

[MSQM people never tell you that. [Google on “anchoring of” “potentials” or “wavefunctions” in the context of QM.]]

Each such wavefunction very rapidly drops off in intensity from its anchoring point, so as to satisfy the Sommerfeld radiation condition. …Maybe there is a generalization of this principle for many-particle situations; I don’t know. But I do know that if the system-wavefunction has to be square-normalizable, then some condition specifying a rapid decay over space is what Sommerfeld—i.e., nature—ordered.

[MSQM people never remind you of such a condition in any such context. [Google!]]

So, the 1-particle wavefunction for AP1 affects EP1 far, far more than it affects EP2. Similarly, the 1-particle wavefunction for AP2 affects EP2 far, far more than it affects EP1.

Homework 2: Find the de Broglie wavelength for an electron, and for a typical detector. Work it out on your own. Don’t cheat [^][^] !

In this sense, sure, what AP1 does to EP1 (and vice-versa) has an overwhelmingly greater effect than what it does to EP2 (and vice-versa).

So, what Schlafly says (“Nothing you do to one can possibly have any effect on the other”) does have a certain merit to it, but only in a limiting and approximate (“classical-like”) sense.

In a certain limiting sense, the AP1 \Leftrightarrow EP1 and AP2 \Leftrightarrow EP2 interactions do approach full independence.

To use the language that the MSQM people typically use, the reason put forth is that AP1 and AP2 never directly interacted with each other.

Actually, they all have always interacted with all the others—but in this case, only dimly so. So, as we would say to describe the same point: Due to the Sommerfeld radiation condition, the AP1 \Leftrightarrow AP2 interaction always was, remains, and—assuming that they don’t leave their fixed positions at \pm 1000.0 km so as to come nearer to each other—will always remain, very negligibly small.


3. The entangled particles’ dynamics continues to be influenced by the initial entanglement:

However, note that as EP1 and EP2 travel from the origin to their respective points (to their respective positions at \pm 1000.0 km), this entire evolution in their states (consisting of their “travels”/displacements) occurs at all times under the continuing influence of the same, initial, 2-particle entangled part of the 4-particle system wavefunction—under its deterministic time-evolution (as given by the Schrodinger equation).

Since the state evolution for both EP1 and EP2 was guided at each instant by the same 2-particle entangled part of the same wavefunction, the amount of distance does not matter—at all.

Even if their common entangled wavefunction initially has almost zero strength at the distant points \pm 1000.0 km away, once the EP1 and EP2 particles begin moving away from the origin, their states evolve deterministically (obeying the time-dependent Schrodinger equation). As they approach the two \pm 1000.0 km points respectively, the common wavefunction’s strength at these two points accordingly increases (and the strength of that portion of the same wavefunction which lies in the space near the origin progressively decreases). That’s because the common entangling part of the system wavefunction is composed from two 1-particle wavefunctions, one each for EP1 and EP2, and each of these two 1-particle wavefunctions has the respective current position of EP1 or EP2 as its reference (or anchoring) point. Why? Because the potential energy has a singularity at their current point positions, that’s why.

So, all in all, yes, the nature of what EP1 can at all do in its interaction with AP1 is still, in part, being governed by the deterministically evolved state of the initial, single, 2-particle entangling wavefunction. [That’s how even the MSQM folks put it. Actually, it’s a 2-particle part of the 4-particle system wavefunction.]

So, the net result at the +1000.0 km point is that, when seen in an approximate manner, EP1 seems to be interacting with AP1 (or, AP1 with EP1) in a manner that seems to be completely independent of how  EP1 interacts with AP2 and EP2—i.e., there is almost no interaction at all.

Similarly, the net result at the -1000.0 km point is that, when seen in an approximate manner, EP2 seems to be interacting with AP2 (or, AP2 with EP2) in a manner that seems to be completely independent of how  EP2 interacts with AP1 and EP1—i.e., there is almost no interaction at all.


4. The paradox we have to resolve:

We thus have two apparently contradictory ways of summarizing the same situation.

  • Since the two EPs have gone so far apart, and since AP1 and AP2 never “interacted” strongly with each other (or with EP1 and EP2), therefore, EP1’s behaviour should be taken to be “independent” of EP2’s behaviour, when they are at the \pm 1000.0 km points. Their behaviour should have nothing in common.
  • Yet, since EP1 and EP2 were initially entangled, and since both their respective state-evolutions were governed by the common, single wavefunction entangling them, therefore, their behaviour must also have something in common.

Got it?

How do we resolve this paradox?


5. What kind of things actually happen:

Suppose the interaction of AP1 with EP1 is such that we can say that it is EP1’s spin-property which gets measured by AP1.

Here, imagine an assemblage of a large number of particles, acting as a spin-detector, in place of AP1. (We will continue to call it a single “particle”, for the sake of simplicity.)

Suppose that the measurement outcome happens to be such that EP1’s spin is measured at AP1 to be “up” with respect to a certain z-axis (applicable to the entire universe).

Now, remember, measurement is a probabilistic process. Therefore, the correct statement to make here is:

If (and when) AP1 measures EP1’s spin, the outcome is one (and only one) of the two possibilities: either “up”, or “down.”

In other words, it is always possible that EP1 interacts with AP1, and yet, the action of EP1’s spin influencing some large-scale configuration changes within AP1 (an event which we call “measurement”) never actually comes to occur. This is possible too. However, if a measurement does occur, then the outcome is one and only one of those two possibilities.

Now suppose, to take the description further, that AP1 does indeed end up measuring EP1 spin. (That is to say, suppose that such a thing comes to occur as a physical fact, an irreversible change in the universe.)

Assume further—for the sake of pedagogic simplicity—that the EP1’s spin is measured to be “up” (and not “down”).

Suppose further that the interaction of AP2 with EP2 is such that we can say that it is EP2’s spin which is the property that gets measured by AP2—if there at all occurs a measurement when EP2 is near or at AP2. Again, remember, measurement is a probabilistic process. The correct statement now to make is:

If (and when) AP2 ends up measuring EP2’s spin, then, since EP1 and EP2 are entangled, the outcome at the -1000.0 km point has to be: “down” (because we assumed that EP1’s spin was measured as “up” at the +1000.0 km point).

Note, the spin of EP2 is certain to be measured “down” in our case—provided it at all gets measured during the interaction of EP2 with AP2.

But note also that since AP2’s state is not entangled with AP1’s (they were too far away to begin with), just because AP1 does end up measuring EP1’s spin (as “up”) does not mean that AP2 will also necessarily measure EP2’s spin at all—despite the interaction they necessarily go through. (All four particles are, in reality, interacting. Here, AP2 and EP2, being closer, are interacting strongly.)


6. The game that the MSQM people play (with you):

Now, the whole game that MSQM (mainstream QM) physicists play with you is this.

They don’t explain to you, but it is true, that:

The fact

“AP1 interacted with EP1 to measure its spin state”

does not necessitate the conclusion

“AP2 must also measure the spin-state of EP2 in the same experimental trial”.

The latter is not at all necessary. It does not have to physically take place.

If so, then what can we say here? It is this:

But if (and when) AP2 does measure the spin-state (and no other measurable) of EP2, then the measured spin will necessarily be “down”.

The preceding statement is true.

This is because angular momentum conservation implies that if any one of the spins is measured as “up”, then the other has to get measured as “down”. This necessity is built right into the way the single entangling wavefunction is composed from the two 1-particle wavefunctions. It is a property of the initial entangling wavefunction that it has zero net spin-angular momentum. This also gets reflected in the measured read-outs—with equal probability for either assignment—whenever two measurements at all take place at the symmetrically far-away points, because the local patterns of the common wavefunction at those points must themselves be symmetrically opposite. (Only a symmetrically opposite pair of 1-particle wavefunctions can together conserve angular momentum for the 2-particle entangling wavefunction.)
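For reference, the standard spin-singlet form that the mainstream notation uses for exactly this zero-net-spin situation (my addition; the spatial parts are suppressed) is:

\Psi_{\text{spin}} = \dfrac{1}{\sqrt{2}}\left(\, |\uparrow\rangle_{\text{EP1}}\, |\downarrow\rangle_{\text{EP2}} \;-\; |\downarrow\rangle_{\text{EP1}}\, |\uparrow\rangle_{\text{EP2}} \,\right)

Neither term assigns the same spin value to both particles, which is just the composition-level statement of the above symmetry: if one spin at all gets measured as “up” along a given axis, then the other, if it at all gets measured along the same axis, comes out “down”.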

The slickness of MSQM people consists of refusing to make you realize that the common (entangling) wavefunction must, of necessity, arise from such symmetry conditions as just mentioned, and that it must also evolve perfectly preserving this symmetry throughout the Schrodinger evolution. Further, their slickness consists of making you believe that if AP1 does indeed physically measure EP1’s spin as “up”, then AP2 is also mandated to physically end up measuring EP2’s spin, in each and every trial.


7. How the MSQM people maintain their slickness, while presenting experimental data:

When they do experiments, they actually send entangled particles apart, and measure their respective spins at two equidistant and similarly tilted detector-positions.

What their raw data shows is that when AP1 measures EP1 to be in the “up” state, AP2 may not always show any measurement outcome at all. The same goes for the other three possibilities. (AP1 says “down”, nothing at AP2. AP1 says nothing, AP2 says “up”. AP1 says nothing, AP2 says “down”.)

What the MSQM folks do is, effectively, to simply drop all such observations. They retain only those among the raw data-points which have one of the two results:

  • EP1 actually measured (by AP1) to have the spin “up”, and EP2 actually measured (by AP2) to have the spin “down” in a single trial, or
  • EP1 actually measured (by AP1) to have the spin “down”, and EP2 actually measured (by AP2) to have the spin “up” in some other, single, trial.

So, their conclusions never do highlight the previously mentioned four possibilities.
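To make this book-keeping concrete, here is a toy Monte Carlo sketch in Python (my own illustration, not data from any actual experiment; the detection efficiency and trial count are made-up numbers). It generates perfectly anti-correlated outcomes, lets each detector miss a fraction of them, and then keeps only the coincidences—the same kind of post-selection step described above.

import random

random.seed(0)
N_TRIALS = 100000
EFFICIENCY = 0.6    # hypothetical per-detector probability of registering an outcome

kept = 0
dropped = 0
for _ in range(N_TRIALS):
    spin1 = random.choice(("up", "down"))
    spin2 = "down" if spin1 == "up" else "up"    # strict anti-correlation, as above
    detected1 = random.random() < EFFICIENCY     # does AP1 at all register an outcome?
    detected2 = random.random() < EFFICIENCY     # does AP2 at all register an outcome?
    if detected1 and detected2:
        kept += 1        # every kept pair is ("up", "down") or ("down", "up")
    else:
        dropped += 1     # one or both detectors stay silent; the trial gets discarded

print("kept (coincidences):", kept, " dropped:", dropped)

With these made-up numbers, roughly two-thirds of the trials get dropped, and yet the kept trials show perfect anti-correlation—which is the point being made here about what the presented datasets do and do not contain.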

No, they are not doing any data-fudging as such.

The data they present is the actual one, and it does support the theory.

But the as-presented data is not all the data there is—it’s not all there is to these experiments. And, so, it is not the complete story.

And the part dropped out of the final datasets sure tells you more towards demystifying entanglement than the part that is eventually kept in does. It is only the latter—mystifying—part of the data that gets presented in conferences, summarized in textbooks and pop-sci articles (including those on the Quanta Magazine site), and of course, in the pop-sci books (by all authors writing on this subject [Google (verb)!]).

Just hold the above discussion in mind, and see how it straightens out everything.


8. Summary of what we saw thus far:

A measured value is decided only in an act of measurement—if any measurement at all occurs during the ongoing interaction of a particle and a detector.

The respective probabilities for each of the two possible outcomes (in the spin “up” or “down” type of two-state situations) have already been decided by the deterministic time-evolution (the Schrodinger-evolution) of the initial, 2-particle entangling, part of the 4-particle system wavefunction.

If the AP1 detector is oriented to measure EP1’s spin as “up” with a P % probability, then EP2’s spin is necessarily “shaped” by the same wavefunction so as to be inclined to be measured by AP2 as “down”, with the same P % probability—provided that:

  1. AP2 was in all respects identical to AP1 (including their orientations—say, placed in an exact mirror-symmetrical arrangement), and
  2. AP2 does at all end up measuring EP2. It might not, always.

Existence of an entanglement between EP1 and EP2 does not necessitate that if AP1 measures the spin-property of EP1 (w.r.t. a certain axis), then AP2 for the corresponding EP2 (coming from the same trial) must also measure the spin-property of EP2 (w.r.t. the same axis).

But if AP2 undergoes a measurement process too, then the outcome is determined, due to the commonality of the single entangling wavefunction (including the spinor function) which is shared by EP1 and EP2. And it works out as: if the first is “up”, the second must be “down”, or, vice versa.

 


Note: I am not sure if I noted it in the NY resolutions post or not. But I’ve decided that I may not add a songs section every time—but sure enough I will, if one is somewhere at the back of my mind.

This topic is not difficult, but it is intricate. Easy to make typos. Also, very easy to make long-winded statements, and not find the right phrases, ways of expression, metaphors, etc. So, I think I should come back and revise it after a few days. I should also give titles to the sections and all … But, anyway, in the meanwhile, do feel free to read.


History:

— 2020.01.03 12:15 IST: Initial posting.
— 2020.01.03 13:44 IST: Correction of typos, misleading statements. Addition of section titles, and a further section on the comparison with classical diffusion systems.
— 2020.01.03 15:33 IST: Added the section: “One last comment…”.
— 2020.01.03 17:03 IST: Further additions/corrections. Now am going to leave this post in this shape for at least a couple of days or more. But looks like it’s mostly done.
— 2020.01.04 14:18 IST: Nope. In simplifying everything as much as possible, it seems to me that I ended up getting off the track, and thus wrote something which, I now think, was wrong. The error was confined to section 9.

The wrong part was important. I will have to look into the maths involving the spin property once again (and in fact learn more about it and many-particle systems in general), and further, I will have to integrate it with my new approach. Only then would I be able to come back on this point. It may take me quite some time to finalize such an integration, may be weeks, may be months.

My plan so far was to leave the spin property of QM systems alone, and present the new approach only for spin-less systems. (That’s what I did in the Outline document too.) Yet, yesterday, somehow, I got tempted into covering the spin and the new approach together, right on the fly, and ended up writing a bit that inadvertently adopted an ensembles-based interpretation. I thus sounded a bit too much like the Bohmian approach rather than what my approach actually should be like. (I know from some other points of view that there are going to be important differences between my approach and the Bohmian one.)

All this, I realized completely on my own, without anyone prompting me or providing any feedback (not even an indirect one, say through the “follow-up” sort of channels), only this morning. So, I am deleting what earlier was the section 9.

The section 10 was not wrong as such. But its contents were prompted only by the topic covered in section 9. That’s why, though section 10 was essentially correct, I am also deleting it. I will cover both their topics in future.

In case anyone is at all interested in having the original (erroneous) version of this post (with sections 9. and 10.), I could share it. Feel free to approach me via an email or a comment.

As to any other errors/ambiguities/ill-expositions, I will let them be. I am done with this post. Time to move on.

Ontologies in physics—10: Objects in QM. Aetherial fields in QM. Particle-in-a-box.

0. Prologue:

The last time we saw the context for, and the scheme of the inductive derivation of, the Schrodinger equation. In this post, we will see the ontology which it demands—the kind of ontological objects there have to be, so that the physical meaning of the Schrodinger equation can be understood correctly.

I wrote down at least 2 or 3 different ways of presenting the topics for this post. However, either the points weren’t clear enough, or the discussion was going too far away, and I was losing the focus on ontology per se.

That’s why, I have decided to first present the ontology of QM without any justification, and only then to explain why assuming this particular ontology, rather than any other, makes sense. In justifying this ontology, we will have to note the salient peculiarities regarding the mathematical nature of Schrodinger’s equation, as also many relevant quantum mechanical features.

In this post, we will deal with only one-particle quantum systems.

So, let’s get going with the actual ontology first.


1. Our overall view of the QM ontology:

1.1. Introductory remarks:

To specify an ontology of physics is to state the basic types of objects there have to exist in the physical reality, and the basic ways in which they interact, so that the given theory of physics makes sense—the physical phenomena the theory subsumes are identified with appropriate concepts, causal relations, laws, and so, an understanding can be developed for applications, for building new systems that make use of the subsumed phenomena. The basic purpose of physics is to develop understanding so that it can be put to use to build better systems—structures, engines, machines, circuits, devices, gadgets, etc.

Accordingly, we will first give a list of the type of objects that must exist in the physical world so that the quantum mechanical phenomena can be completely described using them. The theory we will assume is Schrodinger’s non-relativistic quantum mechanics of multiple particles, including phenomena like entanglement, but without including the quantum mechanical spin. However, in this post, we will cover those aspects that can be understood (or at least touched upon) using only the single-particle quantum systems.

1.2. The list of objects in our QM ontology:

The list of our QM ontological objects is this:

  • The EC Objects of electrons and protons.
  • A special category of objects called neutrons.
  • The aether filling all of the 3D space where other objects are not, and certain field-conditions present in it; the all-connecting aspect of the physical universe.
  • The photon as a certain kind of a transient condition in the aether, i.e., a virtual object.

Let’s see all of them in detail, one by one, but beginning with the aether first.


2. The aether:

Explaining the concept of the aether and its necessity are full-fledged topics by themselves, and we have already said a lot about the ontology of this background object in the previous posts. So, we will note just a few indicative characteristics of the aether here.

Our idea of the QM aether is exactly the same as that of the EM aether of Lorentz. The only difference is that, when used in QM, the aether is seen as supporting not only the electrostatic fields but also one more type of field: the complex-valued quantum mechanical field.

To note some salient points about the aether:

  • The aether has no inertia that shows up in the electrostatic or quantum-mechanical phenomena. So, in this sense, the aether is non-inertial in nature.
  • It exists in all parts of space where the other QM ontological objects (of electrons, protons and neutrons) are not.
  • It exchanges electrostatic as well as additional quantum-mechanical forces with the electrons and protons, but always by direct contact alone.
  • Apart from the electrostatic and quantum-mechanical forces, there are no other forces that enter into our ontological description. Thus, there is no drag-force exerted by the aether on the electrons, protons or neutrons (basically because the Lorentz aether is not a mechanical aether; it is not an NM-Ontological object). In the non-relativistic QM, we also ignore fields like magnetic, gravitational, etc.
  • All parts of the aether always remain stationary, i.e., no CV of itself translates in space at any time. Even if there is any actual translation going on in the aether, the quantum mechanical phenomena are unable to capture it, and so, a capacity to translate does not enter our ontology.
  • However, unlike in the EM theory, when it comes to QM, we have to assume that there are other motions in aether. In QM, the aether does come to carry a kinetic energy too, whereas in EM, the kinetic energy is a feature of only the massive EC Objects. So, the aether is stationary—but that’s only translation-wise. Yet, even in the absence of net displacements, it does force (and is forced by) the elementary charged objects of the electrons and protons.

We will note further details regarding the fields in the aether as we progress.


3. Electrons and protons:

The view of electrons and protons which we take in the QM ontology is exactly the same as that in the ontology of electrostatics; so see the previous posts in this series for details not again repeated here.

Electrons and protons are seen as elementary point-particles having, up to the algebraic sign, the same amount of electrostatic charge e. They set up certain 3D field conditions in the non-inertial aether, but only by acting in pairs. We may sometimes informally call them point-charges, but it is to be kept in mind that, strictly speaking, in our view, we do not regard the charge to be an attribute of the point-particle, but only of the aether.

For two arbitrary EC objects (electrons or protons) q_i and q_j forming a pair, there are two fields which simultaneously exist in the 3D aether. Neither can exist without the other. These fields may be characterized as force-fields or as potential energy fields.

In the interest of clarity in the multi-particle situations, we will now expand on the notation presented earlier in this series. Accordingly,

\vec{\mathcal{F}}(q_i|q_j) is the 3D force field which exists everywhere in the aether. It gives the Coulomb force that q_j experiences from the aether at its instantaneous position \vec{r}_j via direct contact (between the aether and itself). Thus, in this notation, q_j is the forced charge, and q_i is the field-producing charge. Quantitatively, this force-field is given by Coulomb’s law:

\vec{\mathcal{F}}(q_i|q_j) = \dfrac{1}{4\,\pi\,\epsilon_0}\dfrac{q_i q_A}{r_{iA}^2} \hat{r}_{iA}, where q_A = q_j.

Similarly, \vec{\mathcal{F}}(q_j|q_i) is the aetherial force-field set up by q_j and felt by q_i in the same pair, and is given as:

\vec{\mathcal{F}}(q_j|q_i) = \dfrac{1}{4\,\pi\,\epsilon_0}\dfrac{q_j q_A}{r_{jA}^2} \hat{r}_{jA}, where q_A = q_i.

The fields are singular at the location of the forcing charge, but not at the location of the forced charge. Due to the divergence theorem, a given charge does not experience its own field.

There is no self-interaction problem either, because the EC Object (the point-charge) is ontologically a different object from both the aether and the NM objects. Only an NM Object could possibly explode under the self-field, primarily, because an NM Object is a composite. However, an EC Object (of an electron or a proton) is not an NM Object—it is elementary, not composite.

Notice that the specific forces at the positions of the q_i and q_j are equal in magnitude and opposite in directions. However, these two vectors act on two different objects, and therefore they don’t cancel each other. The two vectors also act at two different locations. In any case, in going from these two vectors to the two vector fields, it’s misleading to keep thinking in terms of one force-field as being the opposite of the other! Their respective anchoring locations (i.e. the two singularities) themselves are different, and they have the same signs too!! They are the same 1/(r^2) fields, but spatially shifted so as to anchor into the two charges of a pair.

When there are N number of elementary charged particles in a system, then a given charge q_j will experience the force fields produced by all the other (N-1) number of charges at its position. We can list them all before the pipe | symbol. For instance, \vec{\mathcal{F}}(q_1, q_3, q_4|q_2) is the net field that q_2 feels at its position \vec{r}_2; it equals the sum of the three force-fields produced by the other three charges because of the three pairs in which they act:
\vec{\mathcal{F}}(q_1, q_3, q_4|q_2) = \vec{\mathcal{F}}(q_1|q_2) + \vec{\mathcal{F}}(q_3|q_2) + \vec{\mathcal{F}}(q_4|q_2).
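A small Python sketch of this pair-wise superposition (my own illustration; the charges’ signs and positions below are hypothetical, chosen only to exercise the notation):

import numpy as np

K = 1.0 / (4.0 * np.pi * 8.8541878128e-12)    # 1/(4 pi eps_0), SI units
e = 1.602176634e-19                           # elementary charge, in coulomb

def force_on_j(q_i, r_i, q_j, r_j):
    # The force that the forced charge q_j (at r_j) feels from the field anchored at q_i.
    d = r_j - r_i
    r = np.linalg.norm(d)
    return K * q_i * q_j / r**2 * (d / r)

charges = {1: -e, 2: +e, 3: -e, 4: +e}        # hypothetical signs
positions = {1: np.array([0.0, 0.0, 0.0]),
             2: np.array([1.0e-10, 0.0, 0.0]),
             3: np.array([0.0, 2.0e-10, 0.0]),
             4: np.array([-1.5e-10, 0.0, 0.0])}

# F(q_1, q_3, q_4 | q_2) = F(q_1|q_2) + F(q_3|q_2) + F(q_4|q_2), evaluated at r_2:
F_net_on_2 = sum(force_on_j(charges[i], positions[i], charges[2], positions[2])
                 for i in (1, 3, 4))
print(F_net_on_2)    # the net force vector on q_2, in newton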

The charges always act pair-wise; hence there always are pairs of fields; a single field cannot exist. Therefore, any analysis that has only one field (e.g., as in the quantum harmonic oscillator problem or the H atom problem) must be regarded as only a mathematical abstraction, not an existent.

The two fields of a given specific pair both are of the same algebraic sign: both + or both -. However, a given charge q_j may come to experience fields of arbitrary signs—depending on the signs of the other q_i‘s forming those particular pairs.

The electrons and protons thus affect each other via the intervening aether.

In electrostatics as well as in non-relativistic QM, the interaction between charges are via direct contact. However, the two fields of any arbitrary pair of charges shift instantaneously in space—the entirety of a field “moves” when the singular point where it is anchored, moves. Thus, there is no action-at-a-distance in this ontology. However, there are instantaneous changes everywhere in space.

A relativistic theory of QM would include magnetic fields and their interactions with the electric fields. It is these interactions which together impose the relativistic speed limit of v < c for all material particles. However, such speed-limiting interactions are absent in the non-relativistic QM theory.

The electron and the proton have the same magnitude of charge, but different masses.

The Coulombic force should result in accelerations of both the charges in a pair. However, since the proton is approx. 1836 times more massive than the electron, the actual accelerations (and hence the net displacements over a finite time interval) undergone by them are vastly different.

There is a viewpoint (originally put forth by Lorentz, I guess) which says that since the entire interaction proceeds through the aether, there is no need to have massive particles of charge at all. This argument in essence says: We took the attribute of the electric charge away from the particle and re-attributed it to the aether. Why not do the same for the mass?

Here we observe that mass can be regarded as an attribute of the interactions of two *singular* fields in the aether. We tentatively choose to keep the instantaneous location of the attribute of the mass only at the distinguished point of the singularity. In short, we have both particles and the aether. If need be, we will revisit this aspect of our ontology later on.

The electrostatic aetherial fields can also be expressed via two physically equivalent but mathematically different formulations: vector force-fields, and scalar energy-fields—also called the “potential” energy fields in the Schrodinger QM.

Notation: The potential energy field seen by q_j due to q_i is from now on noted, and given, as:

V(q_i|q_j) = \dfrac{1}{4\,\pi\,\epsilon_0}\dfrac{q_i\,q_A}{r_{iA}},

where q_A = q_j, and similarly for the other field of the pair, viz., V(q_j|q_i)

See the previous posts from this series for a certain reservation we have for calling them the potential energy fields (and not just internal energy fields). In effect, what we seem to have here is an interesting scenario:

When we have a pair of charges in the physical 3D space (say an infinite domain), then we have two singular fields existing simultaneously, as noted above. Moving the two charges from their definite positions “to” infinity makes the system devoid of all energy. When they are present at definite positions, their singular fields of V noted above imply an infinite amount of energy within the volume of the system. However, since the system-boundaries for a system of charged point-particles can be defined only at the point-locations where they are present, the work that can be extracted from the system is finite—even if the total energy content is infinite. In short, we have a situation in which the addition of two infinities results in a finite quantity.

Does this way of looking at the things provide a clue to solve the problem of cancelling infinities in the re-normalization problem? If yes, and if none has put forth a comparably clear view, please cite this work.


4. Neutrons:

Neutrons are massive objects that do not participate in electrostatic interactions.

From a very basic, ontological viewpoint, they could have presented very tricky situations to deal with.

For instance: When an EC Object (i.e., an electron or a proton) moves through the aether, there is no force over and above the one exerted by the Coulombic field on it. But EC Objects are massive particles. So, a tempting conclusion might be to say that the aether exerts no drag force at all on any massive object, and hence, there should be no drag force on the motion of a free neutron either.

I am not clear on such points. But I have certain reservations and apprehensions about it.

It is clear that the aforementioned tempting conclusion does not carry. It is known that the aether does not exert drag on the EC Objects. But an EC Object is different from a chargeless object like the neutron. Even a forced EC Object still has a field singularly anchored in its own position; it is just that, in experiencing the forces by the field, the component of its own singular field plays no part (due to the divergence theorem). But the neutron, being a chargeless object, has no singular field anchored in its position at all. It doesn’t have a field that is “silent” for its own motions. Since, for a forced particle, the forces are exerted by the aether in its vicinity, I am not clear if the neutron should behave the same. Maybe we could associate a pair of equal and opposite (positive and negative) fields anchored in the neutron’s position (of arbitrary q_N strength, not elementary), so that it again is chargeless, but can be seen to be interacting with the aether. If so, then the neutron could be seen as a special kind of an EC Object—one which has two equal and opposite aetherial fields associated with it. In that case, we can be consistent and say that the neutron will not experience a drag force from the aether for the same reason the electron or the proton does not. I am not clear if I should be adopting such a position. I have to think further about it.

So, overall, my choice is to ignore all such issues altogether, and regard the neutrons, in the non-relativistic QM, as being present only in the atomic nucleus at all times. The nucleus itself is regarded, abstractly, as a charged point-particle in its own right.

Thus, effectively, we come to regard the nuclear neutrons as just additions of constant masses to the total mass of the protons, and consider this extra-massive, positively charged composite as the point-particle of the nucleus.


5. In QM, there is an aetherial field for the kinetic energy:

As stated previously, in addition to the electrostatic fields (mathematically expressed as force-fields or as energy-fields), in QM, the aether also comes to carry a certain time-varying field. The energy associated with these fields is kinetic in nature. That is to say, there should be some motion within the aether which corresponds to this part of the total energy.

We will come to characterize these motions with the complex-valued \Psi(x,t) field. However, as the discussion below will clarify, the wavefunction is only a mathematically isolated attribute of the physically existing kinetic energy field.

We will see that the motion associated with the quantum mechanical kinetic energy does not result in the net displacement of a CV. (It may be regarded as the motion of time-varying strain-fields.)

In our ontology, the kinetic energy field (and hence the field that is the wavefunction) primarily “lives” in the physical 3D space.

However, when the same physics is seen from a higher-level, abstract, mathematical viewpoint, the same field may also be seen as “living” in an abstract 3ND configuration space. Adopting such an abstract view has its advantages in simplifying some of the mathematical manipulations at a more abstract level. However, make a note that doing so also risks losing the richness of the concept of the physical fields, and with it, the opportunity to tackle the unusual features of the quantum mechanical theory right.


6. Photon:

In our view, the photon is neither a spatially discrete particle nor even a condition that is permanently present in the aether.

A photon represents a specific kind of a transient condition in the aetherial quantum mechanical fields which comes to exist only for some finite interval of time.

In particular, it refers to the difference in the two field-conditions corresponding to a change in the energy eigenstates (of the same particle).

In the last sentence, we should have added: “of the same particle” without parentheses; however, doing so requires us to identify what exactly is a particle when the reference is squarely being made to field conditions. A proper discussion of photons cannot actually be undertaken until a good amount of physics preceding it is understood. So, we will develop the understanding of this “particle” only slowly.

For the time being, however, make a note of the fact that:

In our view, all photons always are “virtual” particles.

Photons are attributes of real conditions in the aether, and in this sense, they are not virtual. But they are not spatially discrete particles. They always refer to continuous changes in the field conditions with time. Since these changes are anchored into the positions of the positively charged protons in the atomic nuclei, and since the protons are point-particles, therefore, a photon also has at least one singularity in the electrostatic fields to which its definition refers. (I am still not clear whether we need just one singularity or at least two.) In short, photon does have point-position(s) as the reference points. Its emission/absorption events cannot be specified without making reference to definite points. In this sense, it does have a particle character.

Finally, one more point about photons:

Not all transient changes in the fields refer to photons. The separation vectors between charges are always changing, and they are always therefore causing transient changes in the system wavefunction. But not all such changes result in a change of energy eigenstates. So, not all transient field changes in the aether are photons. Any view of QM that seeks to represent every change in a quantum system via an exchange of photons is deeply suspect, to say the least. Such a view is not justified on the basis of the inductive context or nature of the Schrodinger equation.

We will now develop the context required to identify the exact ontological nature of the quantum mechanical kinetic energy fields.


7. The form of Schrodinger’s equation points to an oscillatory phenomenon:

Schrodinger’s equation (SE) in 1D formulation reads:

i\,\hbar \dfrac{\partial \Psi(x,t)}{\partial t} =\ -\, \dfrac{\hbar^2}{2m}\dfrac{\partial^2\Psi(x,t)}{\partial x^2} + V(x,t)\Psi(x,t)

BTW, when we say SE, we always mean TDSE (time-dependent Schrodinger’s equation). When we want to refer specifically to the time-independent Schrodinger’s equation, we will call it by the short form TISE. In short, TISE is not SE!

Setting constants to unity, the SE shows this form:
i\,\dfrac{\partial \Psi(x,t)}{\partial t} =\ -\, \dfrac{\partial^2\Psi(x,t)}{\partial x^2} + V(x,t)\Psi(x,t).

Its form is partly comparable to the following two real-valued PDEs:

heat-diffusion equation with internal heat generation:
\dfrac{\partial T(x,t)}{\partial t} =\ \dfrac{\partial^2 T(x,t)}{\partial x^2} + \dot{Q}(x,t),

and the wave equation:
\dfrac{\partial^2 u(x,t)}{\partial t^2} =\ \dfrac{\partial^2 u(x,t)}{\partial x^2} + V(x,t)u(x,t).

Yet, the SE is different from both.

  • Unlike the diffusion equation, the SE has the i sticking out on the left-hand side, and a negative sign (think of it as (i)(i)) on the first term on the right-hand side. That makes the solution of the SE complex—literally. For quite a long time (years), I pursued this idea, well known to the Monte Carlo Quantum Chemistry community, that the SE is the diffusion equation but in imaginary time. Turns out that this idea, while useful in simplifying simulation techniques for problems like determining the bonding energy of molecules, doesn’t really help throw much light on the ontology of QM. Indeed, it makes getting at the right ontology more difficult.
  • As to the wave equation, it too has only a partial similarity to the SE. We mentioned the main difference last time: In the wave PDE, the time differential is to the second order, whereas in the SE, it is to the first order.

The crucial thing to understand here is (and I got it from Lubos Motl’s blog or replies on StackExchange or so) that even if the time-differential is to the first-order, you still get solutions that oscillate in time—if the wave variable is regarded as being full-fledged complex-valued.
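A one-line worked example of this point (my addition): drop the spatial term and keep just the first-order time-dependence for a single energy value E (constants set to unity):

i\,\dfrac{\text{d}\Psi(t)}{\text{d}t} = E\,\Psi(t) \;\Rightarrow\; \Psi(t) = \Psi(0)\,e^{-i\,E\,t} = \Psi(0)\left(\cos E t \;-\; i\,\sin E t\right).

The solution neither grows nor decays; it just rotates in the complex plane—i.e., it oscillates in time—even though the time-derivative is only of the first order.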

The important lesson to be drawn: The Schrodinger equation gives the maths of some kind of a vibratory/oscillatory system. The term “wavefunction” is not a misnomer. (Under the diffusion equation analogy, for some time, I had wondered if it shouldn’t be called “diffusionfunction”. That way of looking at it is wrong, misleading, etc.)

So, to understand the physics and ontology of the SE better, we need to understand vibrations/oscillations/waves better. I don’t have the time to do it here, so I refer you to David Morin’s online draft book on waves as your best free resource. A good book also seems to be Walter Fox Smith’s “Waves and Oscillations: A Prelude to QM”, though I haven’t gone through all its parts (but what exactly is his last name?). A slightly “harder” but excellent book, at the UG level, and free, comes from Howard Georgi. Mechanical engineers could equally well open their books on vibrations and FEM analysis of the same. For real quick notes, see Allan Bower’s UG course notes on this topic as a part of his dynamics course at Brown University.


8. Ontology of the quantum mechanical fields:

8.1. Schrodinger’s equation has complex-valued fields of energies:

OK. To go back to Schrodinger’s equation:

i\,\hbar \dfrac{\partial \Psi(x,t)}{\partial t} =\ -\, \dfrac{\hbar^2}{2m} \dfrac{\partial^2\Psi(x,t)}{\partial x^2} + V(x,t)\Psi(x,t) = (\text{a real-valued constant}) \Psi(x,t).

As seen in the last post, the scheme of derivation of the SE makes it clear that these terms have come from: the total internal energy, the kinetic energy, and the potential energy, respectively. Informally, we may refer to them as such. However, notice that whereas V(x,t) by itself is a field, what appears in the SE is the term V(x,t) multiplied by \Psi(x,t), which makes all the energies complex-valued. Further, since \Psi(x,t) is a field, all energies in the SE also are fields.

If you wish to have real-valued fields of energies, then you have no choice but to divide all the terms in the SE by \Psi(x,t). That’s what we indicated in the last post too. However, note that the complex-valued fields still cannot be got rid of; they still enter the calculations.

8.2. Potential energy fields only come from the elementary point-charges:

The V(x,t) field itself is the same as in the electrostatics:

V(x,t) = \dfrac{1}{2}\, \dfrac{1}{4\,\pi\,\epsilon_0} \sum\limits_{i=1}^{N}\sum\limits_{j=1;\, j\neq i}^{N} \dfrac{q_i\,q_j}{r_{ij}},
where |q_i| = |q_j| = e, with e being the fundamental electronic charge.
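A short Python sketch of this double sum, evaluated for one hypothetical configuration of charges (my own illustration; note how the factor of 1/2 corrects for the double counting of each pair):

import numpy as np

K = 1.0 / (4.0 * np.pi * 8.8541878128e-12)    # 1/(4 pi eps_0), SI units
e = 1.602176634e-19
q = np.array([-e, -e, +e])                    # hypothetical charges
r = np.array([[0.0, 0.0, 0.0],
              [1.0e-10, 0.0, 0.0],
              [0.0, 1.5e-10, 0.0]])           # hypothetical positions, in metre

V = 0.0
for i in range(len(q)):
    for j in range(len(q)):
        if j != i:
            V += 0.5 * K * q[i] * q[j] / np.linalg.norm(r[i] - r[j])
print(V, "joule")    # the value of V for this one configuration of the charges

In the multi-particle Schrodinger equation, the particle positions themselves are the configuration variables, so it is this same sum that the V term evaluates at each instantaneous configuration.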

In our QM ontology we postulate that the above equation is logically complete as far as the potential energy field of QM is concerned.

That is to say, in the basic ontological description of QM, we do not entertain any other sources of potentials (such as gravity or magnetism). Equally important, we also do not entertain arbitrarily specified values for potentials (such as the parabolic potential well of the quantum harmonic oscillator, or the well with the sharply vertical walls of the particle-in-a-box model). Arbitrary potentials are mere mathematical abstractions—approximate models—that help us gain insight into some aspects of the physical phenomena; they do not describe the quantum mechanical reality in full. Only the electrostatic potential that is singularly anchored into elementary charge positions, does.

At least in the basic non-relativistic quantum mechanics, there is no scope to accommodate magnetism. Gravity, being too weak, is also best neglected. Thus, the only potentials allowed are the singular electrostatic ones.

We shall revisit this issue of the potentials after we solve the measurement problem. From our viewpoint, the mainstream QM’s use of arbitrary potentials of arbitrary sources is fine too, as the linear formulation of the mainstream QM turns out to be a limiting case of our nonlinear formulation.

8.3. What physically exists is only the complex-valued internal energy field:

Notice that according to our QM ontology, what physically exists is only a single field: the complex-valued total internal energy field.

Its isolation into different fields—the potential energy field, the kinetic energy field, the momentum field, the wavefunction field, etc.—yields only mathematically isolated quantities. These fields do have certain direct physical referents, but only as aspects or attributes of the total internal energy field. They do have a physical existence, but their existence is not independent of the total internal energy field.

Finally, note that the total internal energy field itself exists only as a field condition in the aether; it is an attribute of the aether; it cannot exist without the aether.


9. Implications of the complex-valued nature of the internal energy field:

9.1. System-level attributes to spatial fields—real- vs. complex-valued functions:

Consider an isolated system—say the physical universe. In our notation, E denotes the aspatial global attribute of its internal energy. Think of a perfectly isolated box for a system. Then E is like a label identifying a certain quantity of joule slapped on to it. It has no spatial existence inside the box—nor outside it. It’s just a device of book-keeping.

To convert E into a spatially identifiable object, we multiply it by some field, say F(x,t). Then, E F(x,t) becomes a field.

If F(x,t) is real-valued, then \int\limits_{\Omega_\text{small CV}} \text{d}\Omega_\text{small CV}\, E\,F(x,t) gives you the amount of E present in a small CV (which is just a part of the system, not the whole). To fix ideas, suppose you have a stereo boom-box with two detachable speakers. Then, the volume of the overall boombox is a sum of the volumes of each of its three parts. The volume is a real-valued number, and so, the total volume is the simple sum of its parts V = V_1 + V_2 + V_3. Ditto for the weights of these parts. Ditto, for the energy in a volumetric part of a system if the energy forms a real-valued field.

Now, when the field is complex-valued, say denoted as \tilde{F}(x,t), then the volume integral still applies. \int\limits_{\Omega_\text{small CV}} \text{d}\Omega_\text{small CV}\, E\,\tilde{F}(x,t) still gives you the amount of the complex valued quantity E\tilde{F}(x,t) present in the CV. But the fact that \tilde{F} is complex-valued means that there actually are two fields of E inside that small CV. Expressing \tilde{F}(x,t) = a(x,t) + i b(x,t), there are two real-valued fields, a(x,t) and b(x,t). So, the energy inside the small CV also has two energy components: E_R = E a(x,t) and E_I = E b(x,t), which we call “real” and “imaginary”. Actually, physically, they both are real-valued. However, the magnitude of their net effect |E \tilde{F}(x,t)| \neq E_R + E_I. Instead, it follows the Pythagorean theorem all the way to the positive sign: |E \tilde{F}| = |\sqrt{E_R^2 + E_I^2}|. (Aren’t you glad you learnt that theorem!)
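A quick numerical illustration of that last point (my addition, with made-up numbers): if, inside some small CV, E_R = 3 J and E_I = 4 J, then the naive sum is 7 J, but the magnitude of the net complex-valued energy there is |\sqrt{3^2 + 4^2}| = 5 J.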

If you take it in a naive-minded way, then E_R + E_I can be greater or smaller than |E \tilde{F}|, and so things won’t seem to sum up—conservation seems to fail.

But in fact, energy conservation does hold. It’s just that it follows a further detailed law of combining the two field components within a given CV (or the entire system).

In QM, the wavefunction \Psi(x,t) plays the role of \tilde{F} given above. It brings the aspatial energy E from its Platonic mathematical “heaven” and, further, being a field itself, also distributes it in space—thereby giving a complex-valued field of E.

We do not know the physical mechanism which manipulates the real and imaginary parts \Psi_R(x,t) and \Psi_I(x,t) so that they come to obey the Pythagorean theorem. But we know that unless we have \Psi(x,t) as complex-valued, the book-keeping of the system’s energy does not come out right—in QM, that is.

Since the product E_{\text{sys}}\Psi(x,t) can come up any time, and since what ontologically exists is a single quantity, not a product of two, it’s better to have a different notation for it. Accordingly, define:

\tilde{E}(x,t) = E_{\text{sys}}\,\Psi(x,t)

9.2. In QM, the conserved quantity itself is complex-valued:

Note an important difference between pre-quantum mechanics and QM:

The energy conservation principle for the classical (pre-quantum) mechanics says that E_{\text{sys}} = \int\limits_{\Omega} \text{d}\Omega E(x,t) is conserved.
The energy conservation principle for quantum mechanics is that \tilde{E}_{\text{sys}} = \int\limits_{\Omega} \text{d}\Omega \tilde{E}(x,t) is conserved.

No one says it. But it is there, right in the context (the scheme of derivation) of the Schrodinger equation!

For the cyclic change, we started from the classical conservation statement:
\oint \text{d}E_{\text{sys}} = 0 = \oint \text{d}T_{\text{sys}} + \oint \text{d}\Pi_{\text{sys}}

Or, in differential terms (for an arbitrary change, not cyclic):
\text{d}E_{\text{sys}} = 0 = \text{d}T_{\text{sys}} + \text{d}\Pi_{\text{sys}}.

Or, integrating over the end-points of an arbitrary process,
E_{\text{sys}} = \text{a constant (real-valued) number}.

We then multiplied both sides by \Psi(x,t) (remember the quizzical-looking multiplication from the last post?), and only then got to Schrodinger’s equation. In effect, we did:
\text{d}E_{\text{sys}}\Psi(x,t) = 0 = \text{d}T_{\text{sys}}\Psi(x,t) + \text{d}\Pi_{\text{sys}}\Psi(x,t).

That’s nothing but saying, using the notation introduced just above, that:
\text{d}\tilde{E}(x,t) = 0 = \text{d}\tilde{T}(x,t) + \text{d}\tilde{\Pi}(x,t).

Or, integrating over the end-points of an arbitrary process and over the system volume,
\tilde{E}_{\text{sys}} = \text{a constant complex number}.

So, what’s conserved is not E but \tilde{E}.

The aspatial, global, thermodynamic number for the total internal energy is the complex number \tilde{E}_{\text{sys}} in QM. QM by postulation comes with two coupled real-valued fields together obeying the algebra of complex numbers.


10. Consequences of conservation of complex-valued energy of the universe:

10.1. There is a real-valued measure of quantum-mechanical energy which is conserved too:

In QM, is there a real-valued number that gets conserved too—if not by postulate, then at least by consequence?

Answer: Well, yes, there is. But it loses the richness of the physics of complex-numbers.

To obtain the conserved real-valued number, we follow the same procedure as for “converting” a complex number to a real number, i.e., extracting a real-valued and essential feature of a complex number. We take its absolute magnitude. If \tilde{E}_{\text{sys}} is a constant complex number, then obviously, |\tilde{E}_{\text{sys}}| is a constant number too. Accordingly,

|\tilde{E}_{\text{sys}}| = |\sqrt{\tilde{E}_{\text{sys}}\,\tilde{E}_{\text{sys}}^{*}}| = \text{another, real-valued, constant}.

But obviously, a statement of this kind of a constancy has lost all the richness of QM.

10.2. The normalization condition has its basis in the energy conservation:

Another implication:

Since |\tilde{E}_{\text{sys}}| itself is conserved, so is |\tilde{E}_{\text{sys}}|^2 too.

[An aside to experts: I think we thus have solved the curious problem of the arbitrary phase factors in quantum mechanics, too. Let me know if you disagree.]

It then follows, by definitions of \tilde{E}_{\text{sys}}, \tilde{E} and \Psi(x,t), that

\int\limits_{\Omega}\text{d}\Omega\,\Psi(x,t)\Psi^{*}(x,t) = 1

Thus, the square-normalization condition follows from the energy conservation principle.

We believe this view places the normalization condition on firm grounds.

The mainstream QM (at least as presented in textbooks) makes reference to (i) Born’s postulate for the probability of finding a particle in an elemental volume, and (ii) conservation of mass for the system (“the electron has to be somewhere in the system”).

In our view, the normalization condition arises because of conservation of energy alone. Conservation of mass is a separate principle, in our opinion. It applies to the attribute of mass of the EC Object of elementary charges. But not to the aetherial field of \Psi. Ontologically, the massive EC Objects and the aether are different entities. Finally, the probabilistic notions of particle position have no relevance in deriving the normalization condition. You don’t have to insert the measurement theory before imposing the normalization condition. Indeed, the measurement postulate comes way later.

Notice that the total complex-valued number for the energy of the universe remains constant. However, the time-dependence of \Psi(x,t) implies that the aether, and hence the universe forever remains in a state of oscillatory motions. (In the nonlinear theory, the system remains oscillatory, but the state evolutions are not periodic. Mark the difference between these two ideas.)

10.3. The wavefunction of the universe is always in energy eigenstates.

Another interesting consequence of the energy conservation principle is this:

Consider these two conclusions: (i) The universe is an isolated system; hence, its energy is conserved. (ii) There is only one aether object in the universe; hence, there is only one universal wavefunction.

A direct consequence therefore is this:

For an isolated system, the system wavefunction always remains in energy eigenstates. Hence, every state assumed by the universal wavefunction is an energy eigenstate.

Take a pause to note a few peculiarities about the preceding statement.

No, this statement does not at all reinforce misconceptions (see Dan Styer’s paper, here: [^][Preprint PDF ^])

The statement refers to isolated systems, including the universe. It does not refer to closed or open systems. When matter and/or energy can cross system boundaries, a mainstream-supposed “wavefunction” of the system itself may not remain in an energy eigenstate. Yet, the universe (system plus environment) always remains in some or the other energy eigenstate.

However, the fact that the universal wavefunction is always in an energy eigenstate does not mean that the universe always remains in a stationary state. Notice that the V(x,t) itself is time-dependent. So, the time-changes in it compel the \Psi to change in time too. (In the language of mainstream QM: The Hamiltonian operator is time-dependent, and yet, at any instant, the state of the universe must be an energy eigenstate.)

In our view, due to nonlinearity, V(x,t) also is an indirect function of the instantaneous \Psi(x,t). Will cover the nonlinearity and the measurement problem the next time. (Yes, I am extending this series by one post.)

Of course, at any instant, the integral over the domain of the algebraic sum of the kinetic and the potential energy fields is always going to come to the single number which is: the aspatial attribute of the total internal energy number for the isolated system.

10.4. The wavefunction \Psi(x,t) is ontic, but only indirectly so—it’s an attribute of the energy field, and hence of the aether, which is ontic:

So, is the wavefunction ontic or epistemic? It is ontic.

An attribute does not have a physical existence independent of, or as apart from, the object whose attribute it is. However, this does not mean that an attribute does not have any physical existence at all. Saying so would be a ridiculously simple error. Objects exist, and they exist as identities. The identity of an object refers to all its attributes—known and unknown. So, to say that an object exists is also to say that all its attributes exist (with all their metaphysically existing sizes too). It is true that blueness does not exist without there being a blue object. But if a blue object exist, obviously, its blueness exists in the reality out there too—it exists with all the blue objects. So, “things” such as blueness are part of existence. Accordingly, the wavefunction is ontic.

Yet, the isolation (i.e. identification) of the wavefunction as an attribute of the aether does require a complex chain of reasoning. Ummm… Yes, literally complex too, because it does involve the complex-valued SE.

The aether is a single object. There are no two or more aethers in the universe—or zero. Hence, there is only a single complex-valued field of energy, that of the total internal energy. For this reason, there is only one wavefunction field in the universe—regardless of the number of particles there might be in it. However, the system wavefunction can always be mathematically decomposed into certain components particular to each particle. We will revisit this point when we cover multi-particle quantum systems.

10.5. The wavefunction \Psi(x,t) itself is dimensionless:

In our view, the wavefunction \Psi(x,t) itself is dimensionless. We base this conclusion on the fact that in the derivation of the Schrodinger equation (where \Psi(x,t) gets introduced), each term of the equation is regarded as an energy term. Since \Psi(x,t) appears as a multiplying factor in every term (and you cannot get rid of the complex nature of the Schrodinger equation merely by dividing all terms by it), this factor itself must be taken as being dimensionless. That’s how we have in fact proceeded.

The mainstream view is to assign the dimensions of \dfrac{1}{\sqrt{\text{(length)}^d}}, where d is the dimensionality of the embedding space. This interpretation is based on Born’s rule and conservation of matter; for instance, see here [^].

However, as explained in the sub-section 10.2., we arrive at the normalization condition from the energy conservation principle, and not in reference to Born’s postulate at all.

All in all, \Psi(x,t) is dimensionless. It appears in theory only for mathematical convenience. However, once defined, it can be seen as an attribute (aspect) of the complex-valued internal energy field (and its two components, viz. the complex-valued kinetic- and potential-energy fields). In this sense, it is ontic—as explained in the preceding sub-section.


11. Visualizing the wavefunction and the single particle in the PIB model:

11.1. Introductory remarks:

What we will be doing in this section is not ontology, strictly speaking, but only physics and visualization. PIB stands for: Particle-In-a-Box. Study this model from any textbook and only then read further.

The PIB model is unrealistic, but pedagogically useful. It is unrealistic because it uses a potential energy distribution that is not singularly anchored into point-particle positions. So, the potential energy distribution must be seen as a mathematically convenient abstraction. PIB is not real QM, in short. It’s the QM of the moron, in a way—the electron has no “potential” inside the well.

11.2. The potential energy function used in the model:

The model says that there is just one particle in a finite interval of space, and its V(x,t) always stays the same at all times. So, it uses V(x) in place of V(x,t).

The V(x) is defined to be zero everywhere in the domain except at the boundary-points, where the particle is supposed to suddenly acquire an infinite potential energy. Yes, the infinitely tall walls are inside the system, not outside it. The potential energy field is the potential energy of a point-particle, and unless it were to experience an infinity of potential energy while staying within the finite control volume of the system, no non-trivial solution would at all be possible. (The trivial solution for the SE when V(x) = 0 is that \Psi(x,t) = 0—whether the domain is finite or infinite.) In short, the “side-walls” are included in the shipped package.

If the particle is imagined to be an electron, then why does its singular field not come into picture? Simple: There is only one electron, and a given EC Object (an elementary point-charge) never comes to experience its own field. Thus, the PIB model is unrealistic on another ground: In reality, force-fields due to charges always come in pairs. However, since we consider only one particle in PIB, there are no singular force-fields anchored into a moving particle’s position, in it, at all.

Yes, forces do act on the particle, but only at the side-walls. At the boundary points, it is a forced particle. Everywhere else, it is a free particle. Peculiar.

The domain of the system remains fixed at all times. So, the potential walls remain fixed in space—before, during, and after the particle collides with them.

The impulse exerted on the particle at the time of collision at the boundary is theoretically infinite. But it lasts only for an infinitesimally small patch of space (which is represented as the point of the boundary). Hence, it cannot impart an infinity of velocity or displacement. (An infinitely large force would have to act over a finite interval of space and time before it could possibly result in an infinitely large velocity or displacement.)

OK. Enough about analysis in terms of forces. To arrive at the particular solution of this problem using analytical methods (as with most any other advanced problem), energy-analytical methods are superior. So, we go back to the energy-based analysis, and Schrodinger’s equation.

11.3. TDSE as a continuous sequence of TISE’s:

Note that you can always apply the product ansatz to \Psi(x,t), and thereby split it into two functions:

\Psi(x,t) = \chi(x)\tau(t),

where \chi(x) is the space-dependent part and \tau(t) is the time-dependent part.

No one tells you, but it is true that:

Even when the Hamiltonian operator is time-dependent, you can still use the product ansatz separately at every instant.

It is just that doing so is not very useful in analytical solution procedures, because both the \chi(x) and \tau(t) themselves change in time. Therefore, you cannot take a single time-dependent function \tau(t) as applying at all times, and thereby simplify the differential equation. You would have to progress the solution in time—somehow—and then again apply the product ansatz to obtain new functions of \chi(x) and \tau(t) which would be valid only for the next instant in the continuous progression of such changes.

So, analytical solution procedures do not at all benefit from the product ansatz when the Hamiltonian operator is time-dependent.

However, when you use numerical approaches, you can always progress the solution in time using suitable methods, and then, whatever \Psi(x,t)\big|_{t_n} you get for the current time t_n, you can regard it as if it were solving a TISE which was valid for that instant alone.

In other words, the TDSE is seen as being a continuous progression of different instantaneous TISE’s. Seen this way, each \Psi(x,t)\big|_{t_n} can be viewed as representing an energy eigenstate at every instant.

Not just that, but since there is no heat in QM, the adiabatic approximation always applies. So, for an isolated system or the physical universe:

For an isolated system or the physical universe, the time-dependent part \tau(t) of \Psi(x,t) may not be the same function at all times. Yet, it always progresses through a continuous progression of different \chi(x) and \tau(t)‘s.

We saw in the sub-section 10.3. that the universal wavefunction must always be in energy eigenstates. We had reached that conclusion in reference to energy conservation principle and the uniqueness of the aether in the universe. Now, in this sub-section, we saw a more detailed meaning of it.
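If you want to see this “continuous progression of instantaneous TISE’s” in a concrete numerical form, here is a minimal sketch of my own (not part of any mainstream presentation): it steps a 1D wavefunction in time with the Crank-Nicolson scheme and, at every step, reads off the instantaneous energy expectation value on a grid. The natural units, the grid sizes, and the PIB-like potential are all assumptions made just for illustration.

    # A minimal numerical sketch (illustration only): step the 1D TDSE with the
    # Crank-Nicolson scheme, and at each instant read off the energy expectation
    # value, i.e., treat each time-step as its own "instantaneous TISE" reading.
    # Natural units (hbar = m = 1), the grid, and the PIB-like V are assumptions.
    import numpy as np

    hbar, m = 1.0, 1.0
    L, N, dt = 1.0, 400, 1e-4
    x = np.linspace(0.0, L, N)
    dx = x[1] - x[0]
    V = np.zeros(N)          # PIB: V = 0 inside; walls enforced via Psi = 0 at the ends

    # H = -(hbar^2 / 2m) d^2/dx^2 + V, with a central-difference Laplacian
    lap = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N) + np.diag(np.ones(N - 1), 1)) / dx**2
    H = -(hbar**2 / (2.0 * m)) * lap + np.diag(V)

    # initial condition: the ground-state-like sine, kept as a *complex* array
    psi = np.sin(np.pi * x / L).astype(complex)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

    A = np.eye(N) + 1j * dt / (2.0 * hbar) * H    # Crank-Nicolson matrices
    B = np.eye(N) - 1j * dt / (2.0 * hbar) * H

    for n in range(100):
        psi = np.linalg.solve(A, B @ psi)
        psi[0] = psi[-1] = 0.0                    # hard walls
        E_inst = np.real(np.sum(np.conj(psi) * (H @ psi)) * dx
                         / (np.sum(np.abs(psi)**2) * dx))
        # E_inst is the "instantaneous TISE" energy reading at t = (n + 1) * dt

For the PIB (a time-independent Hamiltonian), E_inst simply stays put at the ground-state energy, consistently with the above discussion; the same scheme, however, keeps working when V depends on time.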

11.4. PIB anyway uses time-independent potential energy function, and hence, time-independent Hamiltonian:

When V(x) is time-independent, the time-dependent part \tau(t) stays the same for all times. Using this fact, the SE reduces to one and the same pair of \chi(x) and \tau(t). So, the TISE in this case is very simple to solve. See your textbooks on how to solve the TISE for the PIB problem.

However, make sure to

work through any solution using only the full-fledged complex variables.

The solutions given in most text-books will prove insufficient for our purposes. For instance, if \tau(t) is the time-dependent part of the solution of TISE, then don’t substitute \tau(t) = \cos \omega t in place of the full-fledged \tau = e^{-i\omega t}.

Let the \tau(t) acquire imaginary parts too, as it evolves in time.

The reason for this insistence on the full complex numbers will soon become apparent.
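For concreteness, here is what “keeping \tau(t) fully complex” looks like for the textbook PIB eigenfunction (natural units are assumed; the function name psi_n is just for illustration):

    # The textbook PIB eigenstate, kept fully complex (do NOT replace tau by cos(omega t)).
    # Natural units (hbar = m = L = 1) are an assumption for illustration.
    import numpy as np

    hbar, m, L = 1.0, 1.0, 1.0

    def psi_n(x, t, n=1):
        """chi_n(x) * tau_n(t), with tau_n(t) = exp(-i E_n t / hbar) kept complex."""
        E_n = (n * np.pi * hbar)**2 / (2.0 * m * L**2)
        chi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
        tau = np.exp(-1j * E_n * t / hbar)    # the imaginary part develops as t progresses
        return chi * tau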

11.5. Use the full-fledged 3D physical space:

To visualize this solution, realize that as in EM so also in QM, even if the problem is advertised as being 1D, it still makes sense to see this one dimension as an aspect of the actually existing 3D physical space. (In EM, you need to go “up” to 3D because the curl demands it. In QM, the reason will become apparent if you do the homework given below.)

Accordingly, we imagine two infinitely large parallel planes for the system boundaries, and the aether filling the space in between them. (Draw a sketch. I won’t. I would have, in a real class-room, but don’t have the enthusiasm to draw pics while writing mere blog-posts. And, whatever happened to your interest in visualization rather than in “math”?) The planes remain fixed in space.

Now, pick up a line passing normally through the two parallel planes. This is our x-axis.

11.6. The aetherial momentum field:

Next, consider the aetherial momentum field, defined by:

\vec{p}(x,t) =\ i\,\hbar\,\nabla\Psi(x,t).

This definition for the complex-valued momentum field is suggested by the form of the complex-valued quantum mechanical kinetic energy field. It has been derived in analogy to the classical expression T = \dfrac{p^2}{2m}.

In our PIB model, this field exists not just on the chosen line of the x-axis, but also everywhere in the 3D space. It’s just that it has no variation along the y– and z-axes.
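As a quick numerical illustration (a sketch under the assumptions of the textbook PIB solution and natural units), the momentum field defined above can be evaluated on a grid as follows. Note that we follow the sign convention exactly as written in this post; mainstream QM writes the momentum operator with a minus sign.

    # Evaluate p(x,t) = i * hbar * dPsi/dx on a grid, per the definition in the text.
    # Textbook PIB ground state at an arbitrary instant t is assumed; natural units.
    import numpy as np

    hbar, m, L, n, t = 1.0, 1.0, 1.0, 1, 0.3
    x = np.linspace(0.0, L, 400)
    E_n = (n * np.pi * hbar)**2 / (2.0 * m * L**2)
    psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L) * np.exp(-1j * E_n * t / hbar)

    p_T = 1j * hbar * np.gradient(psi, x)    # complex-valued momentum field on the grid
    v_T = p_T / m                            # the velocity field it implies (see below)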

11.7. Gaining physical clarity (“intuition”) with analysis in terms of forces, first:

In the PIB model, when the massive point-particle of the electron is at some point \vec{r}_j, then it experiences a zero potential force (except at the boundary points).

So, electrostatically speaking, the electron (i.e. the singularity at the EC Object’s position) should not move away from the point where it was placed as part of IC/BCs of the problem. However, the existence of the momentum field implies that it does move.

To see how this happens, consider the fact that \Psi(x,t) involves not just the space-dependent part \chi(x), but also the time-dependent part \tau(t). So,

The total wavefunction \Psi(\vec{r}_j, t) is time-dependent—it continuously changes in time. Even in stationary problems.

Naturally, there should be an aetherial force-field associated with the aetherial momentum field (i.e. the aetherial kinetic energy field) too. It is given by:

\vec{F}_{T}(x,t) = \dfrac{\partial}{\partial t} \vec{p}_{T}(x,t) = \dfrac{\partial}{\partial t} \left[ i\,\hbar\,\nabla\Psi(x,t) \right],

where the subscript T denotes the fact that these quantities refer to their conceptual origins in the kinetic energy field. These _T quantities are over and above those due to the electrostatic force-fields. So, if V were not to be zero in our model, then there would be a force-field due to the electrostatic interactions as well, which we might denote as \vec{F}_{V}, where the subscript _V denotes the origin in the potentials.

Anyway, here V(x) = 0 at all internal points, and so, only the quantity of force given by \vec{F}_{T}(\vec{r}_j,t) would act on our particle when it strays at the location \vec{r}_j. Naturally, it would get whacked! (Feel good?)

The instantaneous local acceleration for the elemental CV of the aether around the point \vec{r}_j is given by \vec{a}_{T}(\vec{r}_j,t) = \dfrac{1}{m} \dfrac{\partial \vec{p}_{T}(\vec{r}_j,t)}{\partial t}.

This acceleration should imply a velocity too. It’s easy to see that the velocity so implied is nothing but

\vec{v}_{T}(\vec{r}_j,t) = \dfrac{1}{m} \vec{p}_{T}(\vec{r}_j,t).

Yes, we went through a “circle,” because we basically had defined the force on the basis of momentum, and we had given the more basic definition of momentum itself on the basis of the kinetic energy fields.
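To make the chain of definitions concrete, here is a minimal worked instance. It assumes the standard textbook PIB eigenstate \Psi_n(x,t) = \chi_n(x)\,e^{-i\omega_n t}, with \omega_n = E_n/\hbar and \chi_n(x) = \sqrt{2/L}\,\sin(n\pi x/L), and it follows the sign convention used in this post:

\vec{p}_{T}(x,t) = i\,\hbar\,\chi_n'(x)\,e^{-i\omega_n t}\,\hat{x},

\vec{F}_{T}(x,t) = \dfrac{\partial \vec{p}_{T}}{\partial t} = \hbar\,\omega_n\,\chi_n'(x)\,e^{-i\omega_n t}\,\hat{x},

\vec{v}_{T}(x,t) = \dfrac{\vec{p}_{T}(x,t)}{m} = \dfrac{i\,\hbar}{m}\,\chi_n'(x)\,e^{-i\omega_n t}\,\hat{x}.

Each is a complex-valued field whose magnitude profile over x stays fixed in time while its phase rotates at the rate \omega_n.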

11.8. Representing complex-valued fields as spatial entities is logically consistent with everything we know:

Notice that all the fields we considered in the force-based analysis (the momentum field, the force-field, the acceleration field, and the velocity field) are complex-valued. This is where the 3D-ness of our PIB model comes in handy.

Think of any arbitrary yz-planes in the domain as representing the mathematical Argand-plane. Then, the \Psi(x,t) field at an arbitrary point \vec{r}_j would be a phasor of constant length, but rotating in the same yz-plane at a constant angular velocity, given by the time-dependent part \tau(t).

Homework: Write a Python simulation to show an animation of a few representative phasors for a few points in the domain, following the above convention.
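For whoever wants a head-start on this homework, here is a bare-bones sketch, not a finished solution; the PIB ground state, natural units, and matplotlib are assumptions of convenience.

    # A starting point for the phasor homework: a few sample points x_j on the axis,
    # each shown as a rotating phasor in the y-z ("Argand") plane per the convention
    # above. PIB ground state and natural units (hbar = m = L = 1) are assumed.
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    hbar, m, L, n = 1.0, 1.0, 1.0, 1
    E_n = (n * np.pi * hbar)**2 / (2.0 * m * L**2)
    xs = np.array([0.2, 0.4, 0.5, 0.6, 0.8]) * L        # representative points x_j
    chi = np.sqrt(2.0 / L) * np.sin(n * np.pi * xs / L)

    fig, ax = plt.subplots()
    ax.set_xlim(-2, 2)
    ax.set_ylim(-2, 2)
    ax.set_xlabel("Re(Psi)  (the y-axis of the model)")
    ax.set_ylabel("Im(Psi)  (the z-axis of the model)")
    lines = [ax.plot([0, c], [0, 0], marker="o")[0] for c in chi]

    def update(frame):
        t = 0.02 * frame
        psi = chi * np.exp(-1j * E_n * t / hbar)        # constant-length, rotating phasors
        for line, z in zip(lines, psi):
            line.set_data([0, z.real], [0, z.imag])
        return lines

    anim = FuncAnimation(fig, update, frames=300, interval=30, blit=True)
    plt.show()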

11.9. Time evolution, and the spatial directions of the \Psi(x,t)-based vector fields:

Consider the changes in the \Psi(x,t) field, distributed in the physical 3D space.

Consider that, as \tau(t) evolves in time, even if the IC specified only a real-valued function like \cos t for it, the full-fledged complex-valued \tau(t) would soon enough (i.e., with the passage of an infinitesimal amount of time) acquire a so-called “imaginary” component too.

Following our idea of representing the real- and imaginary-components in the y– and z-axes, the \Psi(x,t) field no longer remains confined to a variation along the x-axis alone. It also has variations along the plane normal to the x-axis.

Accordingly, the unit vectors for the grad operator, and hence for all the vector quantities (of momentum, velocity, force and acceleration) also acquire a definite orientation in the physical 3D space—without causing any discomfort to the “math” of the mainstream quantum mechanics.

Homework: Consider the case when \Psi(x,t) varies along all three spatial axes. An easy example would be that of the hydrogen atom wavefunction. Verify that the spatial representation of the vector fields (momentum, velocity, force or acceleration) proposed by us causes no harm to the “math” of the mainstream quantum mechanics.

If doing simulations, you can integrate in time (using a suitable time-stepping technique), and come to calculate the instantaneous displacements of the particle, too. Exercise left for the reader.

Homework: Perform both analytical and numerical integration for the PIB model. Verify that your simulation is correct.

Homework: Build an animation for the motion of the point-particle of the EC Object, together with the time-variations of all the complex-valued fields: \Psi(x,t), and all the complex-valued vector fields derived from it.

11.10. Too much of homework?

OK. I’ve been assigning so many pieces for the homework today. Have I completed any one of them for myself? Well, actually not. But read on, anyway.

The locus of all possible particle-positions would converge to a point only at the boundary points (because \Psi(x,t) = 0 there). At all the internal points in the domain, the particle-position should be away from the x-axis.

That’s my anticipation, but I have not checked it. In fact, I have not built even a single numerical simulation of the sort mentioned here.

So, take this chance to prove me wrong!

Please do the homework and let me know if I am going wrong. Thanks in advance. (I have to finish this series first, somehow!)


12. What the PIB model tells about the wave-particle duality:

What happened to the world-famous wave-particle duality? If you build the animations, you would know!

There is a point-particle of the electron (which we regard as the point of the singularity in the \vec{\mathcal{F}} field), and there is an actual, 3D field of the internal energy fields—and hence of \Psi(x,t). And, under our hypothesis of representing phasors of complex numbers spatially, all the complex-valued fields (including the derived vector fields such as displacement) also have an actual, spatial existence.

The particle motion is governed by both the potential energy-forces and the kinetic energy-forces. That is, the aetherial wavefunction “guides” etc. the particle. In our view, the kinetic energy field too forces the particle.

“Ah, smart!,” you might object. “And what happened to the Born rule? If the wavefunction is a field, then there is a probability for finding the particle anywhere—not just at the position where it is, as predicted in this model. So, your model is obviously dumb!! It’s not quantum mechanics at all!!!”

Hmmm… We have not solved the measurement problem yet, have we?

We will need to cover the many-particle QM first, and then go to the nonlinearity implied by the kinetic energy field-forces, and only then would we be able to present our solution to the measurement problem. Since I got tired of typing (this post is already ~9,500 words), I will cover it in some other post. I will also try to touch on entanglement, because it would come in the flow of the coverage.

But in the meanwhile, try to play with something.

Homework: “Invert” the time-displacement function/relationship you obtain for the PIB model, and calculate the time spent by the particle in each infinitesimally small CV of the 3D domain, during a complete round-trip across the domain. Find its x-component. See if you can relate the motion, in any way, to the probability rule given by Born (i.e., try to anticipate our next development).

Do that. This way, you will stay prepared to spot if I have made any mistakes in this post, and also if I make any further mistakes in the next—and have made any mistakes in the last post as well.

Really. I could easily have made a mistake or two. … These matters still are quite new to me, and I really haven’t worked out the maths of everything ahead of writing these posts. That’s why I say so.


13. A preview of the things to come:

I had planned to finish this series in this post. In a sense, it is over.

The most crucial ontological aspects have already been given. Starting from the comprehensive list of the QM objects, we also saw that the quantum mechanical aetherial fields are all complex-valued; that there is an additional kinetic energy field too, not just potential; and also saw our new ideas concerning how to visualize the complex-valued fields by regarding the Argand plane as a mathematical abstraction of a real physical plane in 3D. We also saw how these QM ontological objects come together in a simple but fairly well illustrative problem of the PIB. We even touched on the wave-particle duality.

So, as far as ontology is concerned, even the QM ontology is now essentially over. There might be important repercussions of the ontological points we discussed here (and, also before, in this series). But as far as I can see, these should turn out to be mostly consequences, not any new fundamental points.

Of course, a lot of physics issues still remain to be clarified. I would like to address them too.

So, while I am at it, I would also like to say something about the following topics: (i) Multi-particle quantum systems. (ii) Issue of the 3D vs. 3ND nature of the wavefunction field. (iii) Physics of entanglement. (iv) Measurement problem.

All these topics use the same ontology as used here. But saying something about them would, I hope, help understand it better. Applications always serve to understand the exact scope and the nuances of a theory. In their absence, a theory, even if well specified, still runs the risk of being misunderstood.

That’s why I would like to pick up the above four topics.

No promises, but I will try to write an “extra” post in this series, and finish off everything needed to understand the points touched upon in the Outline document (which I had uploaded at iMechanica in February this year, see here [^]). Unlike until now, this next post would be mostly geared towards QM experts, and so, it would progress rapidly—even unevenly or in a seeming “broken” manner. (Experts would always be free to get in touch with me; none has, in the 8+ months since the uploading of the Outline document at iMechanica.)

I would like it if this planned post (on the four physics topics from QM) forms the next post on this blog, but then again, as I said, no promises. There might be an interruption with other topics in the meanwhile (though I would try to keep them at bay). Plus, I am plain tired and need a break too. So, no promises regarding the time-frame of when it might come.

OK.

So, do the homework, and think about the whole thing. Also, brush up on the topic of coupled oscillations, say from David Morin/Walter Fox Smith/Howard Georgi, or even as covered in the FEM modeling of idealized spring-mass systems. Do that, so that you are ready for the next post in this series—whenever it comes.

In the meanwhile, sure feel free to drop in a comment or email if you find that I am going wrong somewhere—especially in the maths of it or its implications. Thanks in advance.

Take care, and bye for now.


A song I like:

(Marathi) “aalee kuThoonashee kanee taaLa mrudungaachi dhoona”
Music and Singer: Vasant Ajgaonkar
Lyrics: Sopandev Chaudhari

 


History:
— First published: 2019.11.05 17:19 IST.
— Added the sub-section 10.5. and the songs section. Corrected LaTeX typos. The same day, at 20:31 IST.
— Expanded the section 11. considerably, and also added sub-section titles to it. Revised also the sections 12. and 13. Overall, a further addition of approx. 1,500 words. Also corrected typos. Now, unless there is an acute need even for typo-corrections (i.e. if something goes blatantly in an opposite direction than the meaning I had in mind), I would leave this post in the shape in which it is. 2019.11.06 11:06 IST.

Ontologies in physics—9: Derivation of Schrodinger’s equation: context, and essential steps

Updates (corrections, additions, revisions) have been made by 2019.10.28 10:52 IST. Only one of them is explicitly noted, but the others are still there (too many to note separately). However, the essentials of the basic points are kept as they were.


1. The cavity-radiation spectrum:

The continuous spectral-intensity curve for the cavity radiation was established empirically.

Now, before you jump to the Rayleigh-Jeans efforts, or the counting of the EM normal modes in the abstract space, as modern (esp. American) textbooks are wont to do, please take a moment to note an opinion of mine.

1.1 “Cavity-radiation” as a far more informative term:

I believe that we must first come to appreciate the late 19th-century applied physicists (mostly in Germany) who rightly picked up the cavity radiation as the right phenomenon for understanding the matter-light interactions.

Their motivation in studying the cavity radiation was to have a good datum in theory, a good theoretical standard, so that more efficient incandescent light bulbs could be produced for better profits by private businesses. This motivation ultimately paved the way for the discovery of quantum mechanics.

As to the nomenclature of the object/phenomenon they standardized, in my opinion, the term “cavity radiation” is far more informative than the term typically used in modern textbooks, viz. “black-body radiation”. Two reasons:

(i) Only a negligibly small hole on the surface of the cavity acts as a black-body; the rest of the cavity surface does not. Any other approximate realization of the perfectly black-body using a solid object alone, is not as good a choice, because the spectrum a solid body produces depends on the material of the solid (cf. Kirchhoff). The cavity spectrum, however, is independent of the wall-material; its spectrum is dominated by the effects due to pure aether in the cavity than those by the solid wall-material.

(ii) The “cavity-ness” of the body helps in theoretical analysis too, because unlike a structure-less solid continuum, the cavity has easily demarcated regions for the matter and the aether, i.e., a region each for the material electric oscillators and the light-field. Since the spatial regions occupied by the participating phenomena are generally different, their roles can be idealized away easily. This is exactly how, in theory, we take away the mass of a mechanical spring as also the stresses in a finite ball, and reach the idealized mechanical system of a massless spring attached to a point-mass.

1.2 The problem for the theory:

The late 19th-century physicists did a lot of good experimental studies and arrived at the continuous spectrum of the cavity radiation.

The question now was how to explain this spectrum on the basis of the two most fundamental theories known at the time, viz., classical electrodynamics (relevant, because light was known to be EM waves), and thermodynamics including statistical mechanics (relevant, because the cavity was kept heated uniformly to the same temperature, so as to ensure thermal equilibrium between the light field and the cavity walls).

When classical electromagnetic theory was applied to this cavity radiation problem, it could not reproduce the empirically observed curve.

The theory, by Rayleigh and Jeans, led to the unrealistic prediction of the ultraviolet catastrophe. It predicted that as you go towards the higher-frequency side of the spectrum graph, the power being emitted by a given frequency (or the infinitesimally small band of frequencies around it) would go on increasing without any upper bound.

Physically speaking, the thermal energy used in keeping the cavity walls heated would be drained by the radiation field in such a fantastic way that the absolute temperature of the internal surface of the cavity wall would have to approach zero. (Notice, the cavity wall is a part of the system here; the environment contains the heat source but not the wall.) No amount of heat supplied at the system surface would be enough to fill the hunger of the aether to convert more and more of the thermal energy of the wall into radiation within itself. Mind you, this circumstance was being proposed by an analysis that was based on a thermodynamic equilibrium. Given a finite supply of heat from the surroundings to the system, the total increase in the internal energy of the system would be finite during any time interval. But since all thermal energy of the wall is converted into light of ever higher (even infinitely high) frequencies, none would be left at the internal surface of the wall. In short, a finite wall would develop an infinite temperature gradient at the internal boundary surface.

This was the background when Planck, an accomplished thermodynamicist, picked up this problem for his attention.


2. Planck’s hypothesis:

Note again, in analysis, cavity walls are regarded as being at a thermodynamic equilibrium with the surroundings (so that they maintain a constant and uniform temperature on the entirety of the system boundary just outside of the wall), and also with the light field contained within the cavity (so that an analysis of the electrical oscillators inside the metallic wall of the cavity can provide a clue for the unexpected spectrum of the light).

That’s why, the starting point of Planck’s theorization was not the light field itself but the electrical oscillators in the metal of the wall. The analysis would be conducted for the solid metal, even if, eventually, predictions would be made only for the light field.

On the basis of statistical mechanics, and some pretty abstract “curve-fitting” of sorts [he was not just a skilled “Data Scientist”; he was a gifted one], Planck found that if the energy of the material oscillators were to be, somehow, quantized using the relation:

E = h\nu = \hbar \omega,

then the resulting energy distribution over the various frequencies would be identical to what was observed experimentally. Here, h is the constant Planck used for his abstract “curve-fitting”, \nu is the frequency of the cavity light, \hbar = \dfrac{h}{2\pi} is the modified Planck constant (aka the “reduced” Planck constant because its value is smaller than h), and \omega = 2\pi\nu is the angular frequency of the cavity light.
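As a quick numerical anchor for this relation (the roughly 540 THz figure for green light is an arbitrary assumption):

    # A quick numerical feel for E = h * nu (the ~5.4e14 Hz figure for green light is an assumption).
    h = 6.626e-34            # Planck constant, J*s
    nu = 5.4e14              # frequency, Hz
    E = h * nu               # ~3.6e-19 J per quantum
    print(E / 1.602e-19)     # ~2.2 eV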

Ontologically, the significant fact to be noted here is that this analysis deals with the material oscillators, but ends up making an assertion—a quantitative prediction—about the nature of light. The analysis can be justified because the wall and the aether are assumed to be in a thermodynamic equilibrium.

Planck’s original formula was: E = (n+1/2) h \nu where n = 1, 2, 3, \dots. However, in the interest of simplicity (of isolating the relevant ontological issues), we have for now set n = 1 and ignored the 1/2 constant.

Homework: Try to relate the ignored quantities with phenomena such as quantum superposition, the zero-point energy, etc. Hint: Don’t worry about many particles, whether distinguishable or indistinguishable, or phenomena like entanglement specific to many-particle systems.


3. Photoelectric effect: Einstein’s relation for a monochromatic light:

3.1 The Einstein relation:

On the basis of Planck’s energy-quantization hypothesis, which was eventually regarded as governing the light phenomenon itself (and not just the energies of the electrical oscillators inside the cavity wall), Einstein derived the relation:

\vec{p} = \hbar \vec{k}

where \vec{p} is the momentum associated with a monochromatic light wave, and \vec{k} = \dfrac{2\pi}{\lambda}\hat{e} is the wavevector, \lambda is the wavelength, and \hat{e} is the unit vector in the direction in which the wave travels.

How did Einstein arrive at this relation?

3.2 Einstein postulates particles of light to explain photo-electric effect:

As seen above, what Planck had postulated (ca 1900) was the quantization of energy for the cavity wall oscillators; hence for the wall-to-light field energy exchange; hence for the light-field itself. Yet, Planck did not propose particles of light in place of the continuous field of light in the cavity.

It was Einstein who postulated (ca. 1905) that the spatially continuous field of light be replaced by hypothetical, spatially discrete, particles of light. (It was G. N. Lewis, the then Dean of Chemistry at Berkeley, who, 21 years later, in 1926, coined the term “photon” for them.)

Einstein thought that a particulate nature of light was necessary in order to explain the existence of discrete steps in the phenomenon he studied, viz., the photo-electric effect. [This is not true; the photo-electric effect involves not just light per se, but energy transfers between light and matter; see our comment near the end of this section.]

Einstein then arrived at an expression for the momentum of a photon.

3.3 Momentum of the photon using the light particle postulate:

According to the theory of special relativity (i.e. the classical EM of Maxwell and Lorentz reformulated by Poincare et al. and published ca. 1905 also by Einstein), the energy for a free (unforced) relativistic massive particle is given by the so-called “E = mc^2” equation that even hippies know about; see, for instance, here [^] for clarification of the mass term involved in it:

E = mc^2

So,

E^2 = (m\,c^2)^2 = (\gamma\,m_0\,c^2)^2 = (p\,c)^2 + (m_0\,c^2)^2,

where E is the relativistic energy of a classical massive particle, m is its relativistic mass, c is the speed of light, \gamma = \dfrac{1}{\sqrt{1-(\dfrac{v}{c})^2}} is the Lorentz factor (which indicates physical phenomena like the Lorentz contraction and time dilation, the Lorentz boost, etc.), m_0 is the rest mass of the particle, and p = \gamma m_0\,v is the relativistic momentum of the particle.

According to Einstein, the theory of special relativity must apply to his light particle just as well as it does to the massive bodies. So, he would have the above-given equation govern his light particle’s dynamics, too. However, realizing that such a particle would have to be massless, Einstein set m_0 = 0 for the light particle. Thus, the above equation became, for his photon,

E^2 = (pc)^2, i.e.,

E = pc.

3.4 Momentum of light waves using the Maxwellian EM:

Actually, the same equation can also be derived assuming the electromagnetic wave nature for light (using Poynting’s vector etc.). Then the distinctive character of the EM waves highlighted by this relation becomes apparent.

The classical NM-ontological waves (like the transverse waves on strings) do not result in a net transport of momentum, though they do transport energy. That’s because the scalar of energy varies as the square of the wave displacement, but the vector of momentum varies as the displacement vector itself. That’s in the classical NM-ontological waves.

In contrast, the “classical” EM-ontological waves transport momentum too, not just energy. That’s their distinctive feature. See David Morin’s online book on waves for explanations (I think chapter 8).

In short, the relation E = pc is basically mandated by the Maxwell’s theory itself.

The special relativistic relations are just a direct consequence of Maxwell’s theory. (The epistemological scope of the special relativity is identical to that of “classical” EM, not greater.)

All in all, you don’t have to assume a particle of light to have the energy-momentum relation for light, in short.
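Just to get a numerical feel for E = pc on the wave picture itself (the 1 J pulse energy below is an arbitrary assumption):

    # E = p c for light, on the wave picture: a pulse carrying 1 J of EM energy
    # carries momentum p = E / c. (The 1 J figure is an arbitrary assumption.)
    c = 2.998e8          # speed of light, m/s
    E = 1.0              # pulse energy, J
    p = E / c            # ~3.3e-9 kg*m/s; this is why radiation pressure is so feeble
    print(p)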

Einstein, however, took this relation to apply to the massless particle of the photon hypothesized by him, as detailed in the preceding discussion.

3.5. Einstein lifts the expression for energy of classical waves, and directly uses it for his particles of light—without any pause or explanation:

Now, from the classical wave theory, \omega = ck for a classical wave travelling at the speed c (as light waves do), where k = \dfrac{2\,\pi}{\lambda} is the wavenumber (the magnitude of the wavevector), and \lambda is the wavelength.

Notice that this relation applies to the oscillations of the material oscillators in the cavity wall and to the aether-waves, but not to structureless particles of light. Did it bother Einstein? I think not.

Einstein did not put forth any argument to show why or how his light particle would obey the same relation. He gave no explanation for how \omega is to be interpreted in the context of his photons.

The fact of the matter is, if you assume a structure-less photon in an empty space, then you cannot explain how the frequency—a property of waves—can at all be an attribute of a point-particle of a photon. Mark my words: structure-less. Nature performs no local changes over time unless there is an internal structure to a spatially discrete particle. A solid body may have angular momentum, but each infinitesimal point-particle comprising it doesn’t. Something similar, for the photon. Einstein gave no description of the structure of the photon or the physical mechanism why it should carry a frequency attribute. … I should know, because I followed this Einstein-Feynman approach for too long, including during my PhD. The required maths of the wave-vector additions involved in a photon’s propagation through space won’t work unless you presume some internal structure to the photon, some device of keeping track of the net wave-vector by the photon.

Thus, Einstein had \omega = ck for his particles of light too—somehow.

3.6. Einstein reaches the momentum–wavevector relation known by his name:

Einstein then accepted Planck’s quantum hypothesis as being valid for his light-particles too, including the exact relation Planck had for the oscillatory phenomena (including waves):

E = \hbar \omega.

Einstein then substituted the relativistic light particle‘s equation E = pc on the left hand-side of Planck’s hypothesis (even though in Planck’s theory, this E was for oscillations/waves), and the classical wave relation \omega = ck on the right hand-side. Accordingly, he got, for his particle of light:

pc = \hbar ck,

i.e., cancelling out c,

p = \hbar k.

This is called the Einstein relation in QM.

3.7. Einstein as the physicist who introduced the wave-particle duality in physics:

Go through the subsections 3.5 and 3.6 again, and take a moment to realize the nature of what Einstein had done.

Einstein became the first man to put forth the wave-particle duality as an acceptable feature of a theory (and not just a conjecture). In effect, he put forth this duality as an essential feature of physics, because it was introduced at the most fundamental levels of theory. And, he did so without bothering to explain what he meant by it.

I gather that Einstein did not experience hesitation while doing so. (He was, you know Einstein! (That hair! That smile!! That very scientist-ness!!!))


4. Some comments on Einstein’s hypothesis of light particles:

4.1 Waves can explain the photo-electric effect; you don’t need Einstein’s particles to explain it:

To explain the photoelectric effect, it is enough to suppose that the light absorption process occurs in the following way: (1) Light is continuously spread in space as a field (as a “wave”), but its emission or absorption occurs only in spatially discrete regions—these processes occur at atoms. (2) An instance of an absorption process remains ongoing only for a finite period of time, but during this interval, it occurs continuously throughout. (3) The nature of the absorption process is such that it either goes to full completion, or it completely reverses as if no energy exchange had at all occurred.

To anticipate our development in the next post, and to give a caricature of the actual physics involved here: The energy is continuously transferred to an atom from the surrounding field. (Physically, this means that the infinitely spread field of light gets further concentrated at the nucleus of the atom, which serves as the reference point due to the singularity of fields at it.). After the process of the continuous energy transfer gets going, the process, for some reason to be supplied (by solving the measurement problem), snaps to one of the energy eigenvalues in the end (like an electrical switch snaps to either on or off position). With this snapping, the energy transfer process comes to a definite end. Hence the quantum nature of the eigenvalue-to-eigenvalue snapping-out–snapping-in process.

In this way, the absorption process, if it at all goes to completion, still results in only a certain quantum of energy being imparted to the absorber; an arbitrary amount of energy (say one-third of a quantum of energy) cannot be transferred in such a process.

In short, a quantized energy transfer process can still be realized without there being a spatially delimited particle of light travelling in space—as Einstein imagined.

4.2. No one highlighted Einstein’s error of introducing particles for light, because his theory had the same maths:

Notice that Einstein begins with the relativistic equation for a massive particle. In the cavity radiation set-up, this can only mean the electric charges (like the protons and electrons) in the cavity wall.

Even for waves in classical material media (like acoustic waves through air/metal), inertia is only a parameter, and not a variable of the wave dynamics. It co-determines the wave-speed in the medium, but beyond that, it has no other role to play. Inertia does not determine forces being exchanged by the wave phenomenon.

The aether anyway does not show any inertia in any electromagnetic phenomenon. So, Einstein’s assumption of the zero inertial effects in the expression for the energy of the photon is perfectly OK—if at all there is a particulate nature for light.

The existence of the thermodynamic equilibrium between the oscillating material charges in the wall and the waves in the aether implies that out of the total relativistic energy of a massive charge, only the pc part gets exchanged with the aether (in Einstein’s view, with the photon); the m_0\,c^2 part must remain with the massive charged particle in the wall.

The aether in cavity has no other means to acquire energy except as through an exchange with the EC Objects in the wall.

Therefore, the internal energy of the aether increases by a quantity that is numerically equal to the pc component of the energy lost by the massive charge.

Overall, Einstein’s assumption of a spatially discrete particle of light is not at all justified, even though the maths he proposed on that conceptual basis still makes perfect sense—it gives the same expression as that for light waves. See the Nobel laureate W. E. Lamb’s paper “Anti-photon” for fascinating discussion [^].

And, yes, Einstein is the original inventor of the mystical idea of the wave-particle duality.


5. Some remarks on Bohr:

We will skip going into Bohr’s theory of the hydrogen atom, primarily because there are hardly any ontological remarks to be made in reference to this theory other than that it was a very ad-hoc kind of a model—though it did predict to great accuracy some of the most interesting and salient features of the hydrogen atom.

Bohr’s was not an ordinary achievement. What he built was a good theoretical model in place of, and to explain, the mere algebraic correlations of the atomic spectra series as given by those formulae by Balmer, Paschen, et al.

If Bohr’s contribution to QM were to end at his 1913 model, he would have made for an ontologically very uninteresting figure. Who remembers Jean Perrin when it comes to the ontological discussions of the continuum vs. particles-based viewpoints? Perrin won a physics Nobel for proving the atomic nature of matter, and yet, no one remembers him, because though Perrin did fundamental work, he didn’t raise controversies. The knowledge he created has been silently absorbed in the integrated view of physics. Unlike Bohr and Einstein.

That, precisely, is why it is best to ignore Bohr at this stage: his model does not invite many ontological remarks, other than that it is a very tentative, ad-hoc kind of a model.

Regardless of the physics issues clarified and raised by his Nobel-winning work, we can’t regard Bohr’s model itself as being irritating—certainly not from an ontological viewpoint.

But Bohr, qua a father figure of the mainstream QM as it happened to get developed, of course is very irritating! All in all, any irritation we experience because of him must be located in his other thoughts, not in his model of the hydrogen atom.


6. de Broglie’s hypothesis of matter-waves:

6.1. de Broglie postulates matter waves:

Light had long been thought (since the ca. 1801 experiment of interference of light-waves by Young) to have a wave nature—i.e., a spatially continuous phenomenon. So, following Einstein’s hypothesis, what was always a spatially continuous phenomenon now also acquired a spatially discrete character. Light always was waves, but also became particles.

The atomic nature of matter was well-established by now. Einstein was the leading physicist among those who must be credited with helping this theory gain wide acceptance.

Bohr even had a theory for explaining emission / absorption spectra of the hydrogen atom—a model with spatially discrete nucleus and spatially discrete electrons. So, atoms were not just a hypothesis; they were an established fact. And, all parts of them were spatially discrete and finite in extent too.

Matter was particulate in nature. Discrete clumps of clay etc.

Then, following Einstein’s lead, a young Frenchman by the name of de Broglie put forth the hypothesis, in his PhD thesis, that what is regarded as particulate matter should also have a wave character. Accordingly, there should be waves of matter.

6.2. de Broglie supplies a physical explanation for the stability of the Bohr atom:

de Broglie went even further, and suggested that a massive particle like the electron in the Bohr atom must obey the same relations as are given by the Planck-Einstein relations for light. [Mark this point well; we will shortly make a comment on it. But to continue in the meanwhile…] He then proceeded to do calculations on this basis.

In Bohr’s theory, stationary orbits for the massive electron had been only postulated; they had not been explained on the basis of any physical principle or explanation that was more fundamental or wider in scope. Bohr’s orbits were stationary—by postulate. And, only Bohr’s orbits were stationary—by postulate.

On the basis of his matter-waves hypothesis, de Broglie could now explain the stability of the Bohr orbits (and of only the Bohr orbits). de Broglie pointed out that, the orbits being closed circles, the matter-waves associated with an electron must form standing waves on them. But standing waves are possible only for certain values of radii, which means that only certain values of angular momenta or energies were allowed for the electrons.

de Broglie thus became the first physicist to employ the eigenvalue paradigm for the dynamics of the electron in a stable hydrogen atom.

6.3. de Broglie’s limitations—mathematical, and ontological:

However, as the later theory of Schrodinger would show (which came within a year and a half), de Broglie’s analysis was too simple. de Broglie was wrong on two counts:

(i) The transverse matter-waves, according to de Broglie, existed with reference to a 1D curve (the Bohr circle) embedded in the 3D space as the reference neutral axis. He couldn’t think of filling the entire 3D space with his matter waves—which is what Schrodinger eventually did.

(ii) de Broglie also altered the ontological character of electrons from massive point-particles to the unexplained “hybrid” or “composite” of: massive point-particles and matter-waves.

From the ontological viewpoint, thus, de Broglie is the originator of the wave-particle duality for the massive particles of electrons, just the way Einstein was the originator of the wave-particle duality for the massless particles of light.

Einstein, of course, beat de Broglie by some 19 years in proposing any such a duality in the first place. (That hair! That smile!! That very scientist-ness!!!)

6.4. What no one notices about de Broglie’s relations:

de Broglie’s relations are nothing but the same old Planck- and Einstein-relations, but now seen as being applied to matter waves, not light. Thus the same equations

E = \hbar \omega and

p = \hbar k

are now known as de Broglie’s relations.
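A quick number helps to see why these relations mattered experimentally: for an electron accelerated through about 100 V (an assumed figure), the wavelength h/p comes out to be of the order of an Angstrom, i.e., of the order of inter-atomic spacings in crystals.

    # The de Broglie wavelength of a 100 eV electron (the 100 eV figure is an assumption).
    import math
    h = 6.626e-34                  # Planck constant, J*s
    m_e = 9.109e-31                # electron mass, kg
    E = 100.0 * 1.602e-19          # kinetic energy, J
    p = math.sqrt(2.0 * m_e * E)   # non-relativistic momentum
    lam = h / p                    # ~1.2e-10 m, i.e. about an Angstrom
    print(lam)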

Notice the curious twist here: The equations de Broglie proposed for matter waves were actually derived for the massless phenomenon of light. The m_0-containing term was set to zero in deriving them.

I do not know if any one raised any objections on this basis or not, and whether or how de Broglie answered those objections. However, this issue sure is ontologically interesting. I will leave pursuing the questions it raises as a homework for you, at least for now. [I may cover it in a later post, if required.]

Homework: Why should the equations of a massless phenomenon (viz. the light) apply to the waves of matter?

(Hint: Look at our far too prolonged discussions and seemingly endless repetition of the fact that cavity radiation analysis applies at thermal equilibrium. Also refer to our preference for the term “cavity radiation,” and not “black-body radiation.” … That should be enough of a hint.)

…As a side remark: The quantum theory anyway had begun to get developed at a furious pace by 1924, and a relativistic theory for quantum mechanics would be given by Dirac just a few years later. Relativistic QM is out of the scope of our present series of posts.


7. Before getting into the derivation of Schrodinger’s equation:

7.1. The place of Schrodinger’s equation in the quantum theory:

We now come to the equation that has held sway over physicists’ imagination for almost a century (95 years, to be precise), viz., the linear partial differential equation (PDE) that was inductively derived by Schrodinger.

[Ignore Feynman here when he says that Schrodinger’s procedure is not a derivation. It is a derivation, but it is an inductive derivation, not a deductive one. Feynman artificially constrained the concept of derivation only to deduction. Expected of him.]

Such is the commanding position of Schrodinger’s equation that every valid implication of QM, every non-intuitive feature of it, every interpretational issue about it, every debate in the QM history… all of them trace back to some or the other term in this equation, or to some or the other aspect of it, or to some or the other fact assumed or implied by its analysis scheme, i.e., its overall nature.

If you have a nagging issue about QM, and if you can’t trace it back to Schrodinger’s equation (at least with a form of it, as in the relativistic QM), then the issue, we could even say, does not exist! All the empirical evidence we have so far points in that direction.

Either your issue is there explicitly in Schrodinger’s equation, or at least implicitly in its context, or in one of its concrete or abstract implications. Or, the issue simply isn’t there—physically.

Including the worst riddle of QM, viz., the measurement problem. This problem is a riddle precisely because of a mathematical nature of Schrodinger’s equation, viz., that it’s a linear PDE.

So, we want to highlight this fact:

Even if all that you want to do is to “just” solve the measurement problem, you still have to work with the Schrodinger equation—including its inductive context, the ontology it presupposes (at least implicitly), its mathematical structure and form, and all their implications.

The reason is: there is only one primary-unknown variable in the Schrodinger equation, viz., \Psi(x,t). And, there is only one more field, viz., V(x,t) in it. The rest are either constants or the space- and time-variables over which the fields are defined.

There is no place for any additional variable in QM—known or unknown. The reason for this, in turn, is: Schrodinger’s equation predicts all the known QM phenomena with astounding accuracy. That’s why, there is no place for hidden variables either—the very idea itself is plain wrong. You don’t have to make an appeal to a detail like Bell’s theorem. The mathematical nature of Schrodinger’s equation, and its predictive success, together say that.

Therefore, solving the measurement problem must “only” require some ontological, physical, or mathematical reorganization involving the same old \Psi(x,t) and V(x,t) variables, and the same old constants (not to mention the same x and t variables over which the two fields are defined).

That’s why it is important to develop a good intuition about each term in this equation, about how the terms are put together, etc. To start developing such an intuition (which we will formalize in the ontology of Schrodinger’s QM in the next post), it is necessary to look into the logical scheme of its inductive derivation.

Without any loss of the essential physical meaning, (and perhaps with a greater clarity about physical meaning), we will use only the energy-based analysis in the derivation here, not the full-fledged variationally-based analysis which Schrodinger had originally performed in deriving his equation (by appeal to an analogy of mechanics with geometrical optics). We will more or less directly follow David Morin’s presentation. (It’s the best in the “town” for the purposes of a learner.)

7.2. Energy analysis with single numbers (or the aspatial, system-level, variables):

In energy analysis, the total energy content of an isolated mechanical system of objects that exert only conservative forces on each other, can be given as a sum of the kinetic and potential energies.

E = T + \Pi

where E denotes the total internal energy number of the system (here, not the magnitude of the electric force field), T is the kinetic energy number, and \Pi is the potential energy number.

Notice that as stated just as above, and without any further addition to the equation, this is not a statement of energy conservation principle; it is a statement that the internal energy for an isolated system having conservative forces consists of two and only two forms: kinetic and potential. (We ignore the heat, for instance.)

Speaking properly, a statement for energy conservation here would have been:

\oint \text{d}E = 0 = \oint \text{d}T + \oint \text{d}\Pi.

That is, a cyclic change for the total energy number for an isolated system is zero, and therefore, the sum of cyclic changes in its kinetic and potential energy numbers also must be zero—assuming that potentials are produced by conservative forces (as the electrostatic forces are). Now, for conservative forces, \oint \text{d}\Pi turns out to be zero, and so, the cyclic change in the kinetic energy too must be zero. Thus:

\oint \text{d}E = 0 = \oint \text{d}\Pi =  \oint \text{d}T.

However, notice, for non-cyclic changes, the most informative statement would be a differential equation; it would read:

\text{d}E = 0 = \text{d}T + \text{d}\Pi

This is because \text{d}E = 0 for any change in an isolated system. However, in general, notice that:

\text{d}T = - \text{d}\Pi \neq 0.

By integration between any two arbitrary states, we get

E = T + \Pi = \text{a constant},

which says that the E number stays the same for an isolated system—it is conserved—in any arbitrary change. Notice that this is a statement of energy conservation—due to the addition of the last equality. The addition of the last equality looks trivial, but it is in fact necessary to note it explicitly. We will work with this form of the equation.

Both T and \Pi are still aspatial numbers here. Since we have thrashed out this topic thoroughly in the previous posts of this series, we will not go into the distinction of the aspatial variables vs. the spatially defined quantities/fields once again. We will simply proceed to bring the aspatial variables down from their Platonic Lagrangian “heaven” to our analysis formulated in reference to the physical space (which is, in practice, 3D).
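As a sanity check on the aspatial-variables statement E = T + \Pi = \text{a constant}, here is a tiny numerical sketch for an isolated, conservative 1D system; the unit-mass, unit-stiffness oscillator is assumed purely for illustration.

    # E = T + Pi stays constant for an isolated, conservative system.
    # A unit-mass, unit-stiffness oscillator is assumed; velocity-Verlet time-stepping.
    m, k, dt = 1.0, 1.0, 0.01
    x, v = 1.0, 0.0                       # initial state: E = 0.5 (all potential)
    for _ in range(10_000):
        a = -k * x / m
        x += v * dt + 0.5 * a * dt**2
        a_new = -k * x / m
        v += 0.5 * (a + a_new) * dt
    E = 0.5 * m * v**2 + 0.5 * k * x**2   # remains ~0.5 throughout, up to O(dt^2) error
    print(E)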

7.3. Energy analysis for the mechanics of a massive point-particle moving on a curve:

Now, in classical mechanics, for a massive-point particle,

T(x,t) = \dfrac{1}{2}mv(x,t)^2 = \dfrac{p(x,t)^2}{2m}.

So,

E = \dfrac{p(x,t)^2}{2m} + \Pi(x,t)

In particle mechanics, p(x,t)^2 = \vec{p}(x,t)\cdot\vec{p}(x,t) always lies with the instantaneous position x(t) of the massive particle. In the above equation, \Pi(x,t) should still remain an aspatial variable, but it’s common practice in the Variational/Lagrangian/Hamiltonian mechanics to assume that this function is specified without any particular reference to particle position, and therefore,

\Pi(x,t) is not only a known quantity, it also is independent of p(x,t).

Under this scope-narrowing assumption, it is OK to think of a 1D field for the potential energy function (e.g. a curved wire over which the bead slides under gravity with the time-dependent geometry of the wire not depending on the position or the kinetic energy function of the bead).

Accordingly the potential energy number \Pi(x,t) can be represented via a 1D field of V(x,t).

Thus we have:

E = \dfrac{p(x,t)^2}{2m} + V(x,t),

where  V(x,t) and p(x,t) are not functions of each other. Note,

In classical mechanics of point-particles (and their interactions with fields), though V(x,t) is a field, p(x,t) still remains a point-property at the particle’s position. So, E(x,t) may also be taken to be a point-property of the particle.

This may look like hair-splitting to most modern physicists. However, it is not. The conditions under which an energy can be regarded as an aspatial attribute of the system, and the conditions under which it can be regarded as having an identifiable existence in space (whether at the position of a point-particle or all over the domain as a field), have a crucial bearing on the kind of ontology that is assumed for the objects in the system. That is why we went to such great lengths in identifying them.

In Schrodinger’s equation, it eventually turns out that V(x,t) is assumed to be a field, and, effectively, so is the momentum function p(x,t). In developing Schrodinger’s equation, we also have to be careful not to directly assign the aspatial variable E to successive points of space; thus, hold on before you convert E to E(x,t). The reason is, Schrodinger’s equation deals with fields, not particles. The equation E = T +\Pi applies equally well to systems of particles, to systems of particles and fields, and to systems of only fields. Be careful. (I am correcting some of my own slightly misleading statements below.)


8. Specific steps comprising the essential scheme of Schrodinger’s derivation:

8.1. There should be a wave PDE for de Broglie’s matter-waves:

Schrodinger became intrigued by de Broglie’s theory, and within months, gave a seminar on it at his university. Debye (of the Debye-Scherrer camera fame, among other achievements) was in attendance, and casually remarked:

if the electron is a matter wave, then there must be a wave equation for it.

Debye actually meant a partial differential equation when he said “a wave equation.” A wave equation relates some spatio-temporal changes in the wave variable. The V is a field (following our EM ontology; see previous posts in this series), and the wave variable also must be a 3D field.

8.2. The wave ansatz:

The simplest ansatz to assume for a wave function (i.e. a field) in the 3D physical space is the plane-wave:

\Psi(x,t) = A e^{i(kx - \omega t)}

With the negative sign put only on \omega but not on k, we get a (co)sinusoidal wave that travels to the right (i.e., in the direction of the positive x-axis). We will consider only the plane-wave traveling in the x-direction, for simplicity; however, realize that in the 3D physical space, two more plane-waves, one each in the y- and z-directions, will be required.
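
A quick numerical check of the direction of travel (a sketch; the values of k, \omega, the grid, and the time step are arbitrary choices of mine): the snapshot at a later time equals the earlier snapshot shifted to the right by the phase speed \omega/k times the elapsed time.

```python
import numpy as np

A, k, w = 1.0, 2.0, 5.0                        # arbitrary illustrative values of my own
x = np.linspace(0.0, 20.0, 4001)
psi = lambda X, t: A * np.exp(1j * (k * X - w * t))

c, dt = w / k, 0.3                             # phase speed and an arbitrary elapsed time
# The later snapshot equals the earlier one shifted to the RIGHT by c*dt:
print(np.allclose(psi(x, dt), psi(x - c * dt, 0.0)))   # True: the wave travels toward +x
```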

8.3. The energy conservation equation that reflects the de Broglie relations:

We need to somehow relate k and \omega to the Planck-Einstein relations as used in de Broglie’s theory (i.e. as applying to the massive electron). To do so, take the specialized energy conservation statement explained above, viz.

E = \dfrac{p(x,t)^2}{2m} + V(x,t)

which is an equation for a particle at x(t), with its momentum also located at its position.

We then substitute de Broglie’s relations for matter waves into it, i.e., we use Planck’s equation for E on the left-hand side, and Einstein’s for p on the right-hand side. We thus get:

\hbar \omega = \dfrac{\hbar^2 k^2}{2m} + V(x,t).

Notice, we have brought in the wave-particle duality, implicit in the de Broglie relations, now into an equation which in classical mechanics was only for particles.
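
Just to put some concrete numbers on the de Broglie relations we substituted, here is a small numerical aside (a sketch; the 100 eV free electron is my own illustrative choice, and the constants are the usual SI values):

```python
import numpy as np

hbar = 1.054571817e-34      # J s
m_e  = 9.1093837015e-31     # kg
eV   = 1.602176634e-19      # J

# A free electron (V = 0) with kinetic energy 100 eV -- my illustrative choice.
T = 100.0 * eV
p = np.sqrt(2.0 * m_e * T)            # from T = p^2 / (2m)
k = p / hbar                          # p = hbar k
lam = 2.0 * np.pi / k                 # de Broglie wavelength
omega = T / hbar                      # from E = hbar omega, with V = 0

print(f"lambda = {lam * 1e9:.4f} nm, omega = {omega:.3e} rad/s")   # lambda comes out to about 0.1226 nm
```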

Update: As an after-thought, a better way to look at it is to begin with the aspatial-variables equation:

E = \dfrac{p^2}{2m} + \Pi = \text{a constant},

then make an electrostatic field for \Pi by substituting V(x,t) in its place, so as to arrive at:

E = \dfrac{p^2}{2m} + V(x,t) = \text{a constant},

and then, without worrying about whether E, T or p are defined in the physical space or not, to take this equation as applying to the system as a whole, and proceed to the next step. Accordingly, I am also slightly modifying the discussion below.

8.4. Making the energy conservation equation with the de Broglie terms refer to the wave ansatz:

To relate the above equation to the plane-wave ansatz, multiply all the terms by \Psi(x,t), and get:

\hbar \omega \Psi(x,t) = \dfrac{\hbar^2 k^2}{2m}\Psi(x,t) + V(x,t)\Psi(x,t) = \text{a constant}\Psi(x,t).

The preceding step might look a bit puzzling, but doing so soon comes in handy in keeping the mathematics sensible.

Physically, what the step does is to convert the first or the total term (total energy, which is conserved) from the aspatial variable E (or a system-attribute of \text{a constant}) to a spatially distributed entity—because of the multiplication by \Psi(x,t), a field. \Psi basically distributes the system-wide global variable (an aspatial variable) to all points in the physical space. BTW, this is the basic physical reason why \Psi(x,t) has to be normalized—we don’t want to change the value of the conserved quantity of the total energy.

Similarly, the same step also converts the second term (the kinetic energy, now expressed using the momentum) from the aspatial variable p^2/2m to a spatially distributed field.

The \Psi(x,t) variable refers to matter waves in 1D, but can be easily generalized to 3D—unlike de Broglie’s standing waves on the circles of the Bohr orbits.

Why do we not have to offer a physical mechanism for the multiplications by \Psi(x,t)? The answer: it is the absence of such a multiplication that would be ontologically impossible to interpret. What comes as physically existing in the physical 3D space are the fields, and \Psi(x,t) helps in pinning down their quantities. It is the aspatial variables/numbers that are devices of calculation, not the fields.
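
The normalization bookkeeping mentioned above can be checked with a tiny numerical sketch (Python; the Gaussian profile and the number 3.7 are arbitrary illustrative choices of mine). Strictly speaking, the terms in the equation get multiplied by \Psi itself; the conservation of the total shows up most simply through the squared magnitude, which is exactly what normalization constrains:

```python
import numpy as np

x  = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# An illustrative, square-normalized 1D profile (a Gaussian); any normalized field would do.
psi = np.exp(-x**2 / 2.0) / np.pi**0.25
print("norm  =", np.sum(np.abs(psi)**2) * dx)        # approximately 1.0

E = 3.7                                              # some conserved total (an arbitrary number)
E_density = E * np.abs(psi)**2                       # the total, distributed over space
print("total =", np.sum(E_density) * dx)             # approximately 3.7: distributing leaves the total unchanged
```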

8.5. Transforming the energy conservation equation having the de Broglie terms into a partial differential equation:

Now, to get to the wave equation, we note the partial differentiations of the wave ansatz:

\dfrac{\partial \Psi}{\partial x} = ik\,A e^{i(kx - \omega t)} = ik\,\Psi(x,t),

and so,

\dfrac{\partial^2 \Psi}{\partial x^2} = -k^2\,\Psi

which implies that

k^2\,\Psi = -\,\dfrac{\partial^2\Psi}{\partial x^2}.

On the time-side, the first-order differential is enough, if eliminating \omega is our concern:

\dfrac{\partial \Psi}{\partial t} = -i\,\omega\,A e^{i(kx - \omega t)} = -i\,\omega\,\Psi

which implies that

\omega\Psi(x,t) = \dfrac{1}{-i}\dfrac{\partial \Psi}{\partial t} = i \dfrac{\partial \Psi}{\partial t}

Now, simple: Plug and chug! The energy conservation equation with the de Broglie terms goes from:

\hbar \omega \Psi(x,t) = \dfrac{\hbar^2 k^2}{2m}\Psi(x,t) + V(x,t)\Psi(x,t).

to

i\,\hbar \dfrac{\partial \Psi(x,t)}{\partial t} =\ -\, \dfrac{\hbar^2}{2m}\dfrac{\partial^2\Psi(x,t)}{\partial x^2} + V(x,t)\Psi(x,t)

That’s the time-dependent Schrodinger equation for you (written here in its 1D form)!
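
If you want to double-check the plug-and-chug symbolically, here is a sketch (Python with SymPy; my own addition, and it takes V to be a constant, since a single plane wave solves the equation exactly only for a constant potential): the residual of the equation, for the plane-wave ansatz, equals \hbar\omega - \hbar^2 k^2/(2m) - V, and it vanishes exactly when the de Broglie energy condition of section 8.3 holds.

```python
import sympy as sp

x, t, A, k, w, hbar, m, V = sp.symbols('x t A k omega hbar m V', real=True)
Psi = A * sp.exp(sp.I * (k * x - w * t))

lhs = sp.I * hbar * sp.diff(Psi, t)                          # i hbar dPsi/dt
rhs = -hbar**2 / (2 * m) * sp.diff(Psi, x, 2) + V * Psi      # -(hbar^2/2m) d2Psi/dx2 + V Psi

residual = sp.simplify((lhs - rhs) / Psi)
print(residual)   # equals hbar*omega - hbar**2*k**2/(2*m) - V, up to rearrangement
print(sp.simplify(residual.subs(w, hbar * k**2 / (2 * m) + V / hbar)))   # 0
```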

8.6. A few comments:

1. There is the imaginary unit i on the left-hand side. So, the general solution must be complex-valued.

2. The PDE obtained has the space-derivative to the second order, but the time-derivative only to the first order.

In classical waves (i.e. the NM-ontological waves like the waves on strings, as well as the EM waves), both the space- and the time-derivatives are of the second order. That’s because the classical waves are real-valued. If you have complex-valued waves, then a first-order derivative is enough to get oscillations in time. Complex-valued waves are mandated because we inserted de Broglie’s relations into the energy conservation equation. In a classical, NM-ontological mechanics, we would have kept \Psi real-valued, and so, would have had to take its second-order time derivative. But then, none of the energy terms in the energy conservation equation would obey Planck’s hypothesis, hence Einstein’s relation, and hence de Broglie’s relations. In short, the complex-valued nature of \Psi is mandated, ultimately, by Planck’s hypothesis and a wave ansatz.
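
To see the point about the first-order time derivative concretely, here is a tiny sketch (Python; the frequency and the time grid are arbitrary choices of mine): a complex-valued exp(-i\omega t), which satisfies a first-order equation in time, still oscillates, whereas a real-valued quantity governed by a first-order equation can only decay (or grow) monotonically.

```python
import numpy as np

w = 2.0 * np.pi                      # an arbitrary angular frequency (my choice)
t = np.linspace(0.0, 2.0, 2001)      # two periods

# Complex-valued: psi = exp(-i w t) solves the FIRST-order equation dpsi/dt = -i w psi,
# and yet its real part oscillates in time.
psi = np.exp(-1j * w * t)
print("Re(psi) sign changes:", int(np.sum(np.diff(np.sign(psi.real)) != 0)))   # 4 -- it oscillates

# A real-valued quantity with only a first-order time derivative, dphi/dt = -w phi, merely decays.
phi = np.exp(-w * t)
print("phi sign changes:    ", int(np.sum(np.diff(np.sign(phi)) != 0)))        # 0 -- no oscillation
```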

3. For later reference, note that:

The kinetic energy is a field, given by:

T(x,t) =\ -\,\dfrac{\hbar^2}{2m} \dfrac{1}{\Psi}\dfrac{\partial^2\Psi}{\partial x^2},

which suggests the following definition for the momentum in the system:

\vec{p}(x,t) =\ -\,i\,\hbar \dfrac{1}{\Psi(x,t)} \nabla \Psi(x,t).

Thus, both kinetic energy and momentum are fields in the Schrodinger equation. The mainstream view regards them as fields defined on the abstract, 3ND configuration space.

In contrast, we take all the fields of the Schrodinger equation, viz. V(x,t), \Psi(x,t), and hence T(x,t) as well as \vec{p}(x,t), as 3D fields in the aether. (The last two become fields because their expressions involve \Psi.)

The ontological and mathematical justification for our view that they are fields in the 3D physical space should be simple and obvious by now (at this stage in this series of posts). The only thing left to look into is to justify that the \Psi(x,t) field remains a 3D field even when there are two or more particles. We touch upon this issue in the next post, when we come to the ontology of QM.
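
As a small sanity check on the two field expressions noted in comment 3 (a sketch in Python; the unit values of \hbar and m, the value of k, and the grid are all my own illustrative choices), one can evaluate T(x,t) and \vec{p}(x,t) for the plane-wave ansatz by finite differences and confirm that they reduce to the uniform values \hbar^2 k^2/(2m) and \hbar k:

```python
import numpy as np

hbar, m, k, A = 1.0, 1.0, 3.0, 1.0         # illustrative units and wavenumber of my own choosing
x  = np.linspace(0.0, 10.0, 10001)
dx = x[1] - x[0]
Psi = A * np.exp(1j * k * x)               # spatial part of the plane wave at one instant

d1 = np.gradient(Psi, dx)                  # dPsi/dx  (finite differences)
d2 = np.gradient(d1, dx)                   # d^2 Psi/dx^2

T_field = -hbar**2 / (2.0 * m) * d2 / Psi  # T(x) = -(hbar^2/2m) (1/Psi) d^2Psi/dx^2
p_field = -1j * hbar * d1 / Psi            # p(x) = -i hbar (1/Psi) dPsi/dx

# For a plane wave, both fields are uniform over space:
print(np.real(T_field[5000]), hbar**2 * k**2 / (2.0 * m))   # both approximately 4.5
print(np.real(p_field[5000]), hbar * k)                     # both approximately 3.0
```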


9. Some comments on the development of Schrodinger’s equation, from our ontological viewpoint:

Notice very carefully the funny circling around going on here, with respect to light and matter, waves and particles, massless aether and massive objects. (We touched upon many of these points above and before, so they will get repeated, unfortunately! However, it’s important to put them together in one place for easy reference later on.)

Both Planck and Einstein began with the energy analysis of massive charged objects (roughly, the EC Objects of our EM ontology). They then ascribed the quantities of E = \hbar \omega and p = \hbar k to the massless aether. This procedure is justifiable because of the equality in the energy or momentum exchanges at the thermodynamic equilibrium.

As to the ontology, Planck had his own doubts about the “transfer” of the quantization in the energy states of oscillators to a quantization of the EM fields. He regarded the quantization of the energy of the radiation field as only a hypothesis. With the enormous benefit of hindsight (of more than a century), we can say that his hesitation about quantizing the energy of the fields themselves was not justified. Schrodinger showed how they could be continuous (and continuously changing) entities and still obey the energy-eigenvalue equations for stationary states.

In contrast to Planck, Einstein was daring, actually brazen. He didn’t have any issue with quantization. In fact, he went much beyond, actually overboard in our opinion, and introduced also the spatial quantization to light, by introducing particles of light.

Notice, the sizes of the attributes, i.e. the magnitudes of energy (or momentum), involved in the exchanges between the material point-charges and the aether are the same. However,

The fact that an exchange of energy is possible (that the sizes of the respective attributes can undergo changes in a mutually compensating way) does not alter the very ontological nature of the respective objects which enter into the interactions.

Ontologically, an EC object still remains a point-particle (more on this in the next post), and the aether still remains a spatially spread-out, non-mechanical, object—even when they interact, and therefore, even when their abstract measures like energies can change, and these changes be equated.

In our view, energy and momentum are point-properties when possessed by point-particles of EC Objects; they should be seen as moving in space with these objects (more on it, in the next post); they never exist at any other locations. In contrast, energy and momentum are field properties when possessed by the aether; they remain spread over all space at all times; they never concentrate in one place.

To equate the sizes of attributes is not to change the ontological character.

We do not blur the point-particle into a smear over space, neither do we collapse a field to a point. We simply say that from a more abstract, thermodynamic-systems perspective, the quantities of attributes called energy and momentum, in case of both types of objects, come to have the same magnitudes under equilibrium exchanges, that’s all.

In exchanging some quantity of steel for a piece of gold or vice versa, the respective physical objects remain what they are; they only change hands, that’s all. The quantity of steel does not become golden, nor can a coin of gold be used in building a car, just because they got exchanged.

Einstein however confused this ontological issue and prescribed an exchange of not just quantities but also of the basic ontological characters. He put forth the idea of spatially discrete particles of light (later called photons).

de Broglie then entered the scene, compounded Einstein’s ontological error on the other side of the ontological division, and prescribed an ontological wave character to the matter particles. He in effect smeared out matter into space, and also made the smear dance everywhere as a wave. This was a “symmetrical” counterpart to Einstein’s move: Einstein was the original “inventor” of “anti-smearing” the fields in space into a point-object, and then of making this point-object (the photon) go everywhere while carrying a wave attribute with it, but without any explanation of the internal structure which might lead to its having that wave attribute.

In effect, Einstein and de Broglie were the initiators of the wave-particle duality. In the absence of any satisfactory explanation coming forth from either of them for their ontological transgressions, both their works implied riddles: the riddle of how the wave field “collapses” to the wave-attribute of the photon in Einstein’s theory, and the riddle of how the mass and charge smear out in de Broglie’s theory.

Both must have been influenced by bad elements of philosophy, including ontology. But the Copenhagen camp went further, much further.

The logical-positivistically minded Bohr, Heisenberg et al. of the Copenhagen camp then seized the moment, and formalized the measurement problem via the wavefunction “collapse,” the Complementarity Principle, etc. etc. etc. And despite all the “celebrated” debates, neither Einstein nor de Broglie ever realized that, as far as physics was concerned, it was they who had set the ball rolling in the first place!

Schrodinger didn’t think of questioning these ontological transgressions; neither did anyone else. He merely improved the maths of it, by generalizing the eigenvalue problem from the original de Broglie waves on 1D curves to a similar problem for his wavefunction \Psi, initially in the 3D space. Then, in the absence of sufficient clarity regarding the nature of the Lagrangian abstractions (and their relation to the physical 3D space), Schrodinger even took \Psi (following Lorentz’s objection) to the abstract 3ND configuration space.

A quarter of a century later, John von Neumann used his formidable skills in mathematical abstraction and, equipped as might be expected with a perfect carelessness about ontology, took all the QM-related confusions and cast them in concrete, by situating the entire theoretical structure of QM on the “floating grounds” of an infinite-dimensional Hilbert space, with \Psi and V of course “living” in the abstract 3ND configuration space.

Oh, BTW, regardless of his otherwise well-earned reputation, there were errors in von Neumann’s proofs too. It took decades before a non-mainstream non-American QM physicist, named John S. Bell, discovered an important one. Bell said:

“The proof of von Neumann is not merely false but foolish!” [^].

I am tempted to ask Bell:

“Why just von Neumann, John? Weren’t they all at least partly both?”

The only way to counter all their errors is to clearly understand all the aspects of all such issues—by and for yourself. You must understand the epistemology and ontology involved in the issues (yes, this one, first!), also physics (both “classical” and QM), and then, also the relation of mathematics to physics to ontology and epistemology in general. But once you do that, you find that all their silly errors and objections have evaporated away.


10. Operators are not ontologically important:

I do not know who began to emphasize operators in QM. Dirac? von Neumann? Still others?

But the notion has become entrenched in the mainstream QM. A lot of store is set by the idea that the classical variables must be represented, in the theory, by operators, i.e. by objects that are, in Feynman’s memorable word, forever “hungry.” The operators for the momentum and energy, for instance, are respectively given as:

\hat{p} =\ -\,i\,\hbar \nabla

and

\hat{H} =\ -\ \dfrac{\hbar^2}{2m} \dfrac{\partial^2}{\partial x^2} + V(x,t).

To somehow have everything fit their operator-primacy theory, they also carefully formulated the notion that the operator of a number, a variable, or a field function, when it “acts” on \Psi, results in just a plain multiplication of that mathematical object with \Psi, without any explanation or justification on physical grounds. Why multiply, when Nature does no multiplications without there being a mechanism acting to that effect? Blank out.
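
For what it is worth, here is a small symbolic sketch (Python with SymPy; entirely my own illustration, not part of the mainstream presentations being discussed) of what this “acting” amounts to for the plane-wave ansatz used above: applying the two operators to \Psi and then dividing out \Psi leaves plain multiplicative factors, viz. \hbar k and \hbar^2 k^2/(2m) + V.

```python
import sympy as sp

x, t, A, k, w, hbar, m = sp.symbols('x t A k omega hbar m', real=True)
V = sp.Function('V')(x, t)
Psi = A * sp.exp(sp.I * (k * x - w * t))

p_hat_Psi = -sp.I * hbar * sp.diff(Psi, x)                        # momentum operator acting on Psi
H_hat_Psi = -hbar**2 / (2 * m) * sp.diff(Psi, x, 2) + V * Psi     # Hamiltonian acting on Psi

print(sp.simplify(p_hat_Psi / Psi))   # hbar*k                      -- i.e., a plain multiplication
print(sp.simplify(H_hat_Psi / Psi))   # hbar**2*k**2/(2*m) + V(x, t)
```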

It all is more than just a bit weird, but it’s there—the operator-primacy theory of formulating QM. And I am sure that it has some carefully crafted and elegant-looking mathematical basis intelligently created for it too—complete with carefully noted notations, definitions, lemmas, theorems, proofs, etc. All forming such a hugely abstract and obfuscating structure that errors in proofs are kept well hidden for decades.

For now, just note that the very notion of operators is not itself very important when it comes to ontological discussions. Naturally, the finer distinctions about it, like linear operators, Hermitian operators, etc., are also not at all important. Just my personal opinion. But it’s been reached with a good ontological understanding of the issues, I think.


11. A preview of the things to come next:

OK. In this post, we touched on many of the finer points having ontological implications. The next time, we will provide our answer regarding the proper ontology of QM.

We will refer to the physics of only the simplest quantum system, viz. the hydrogen atom (and comparable quantum models, notably, the particle in the box (PIB), the quantum harmonic oscillator, and similar 1-particle quantum systems). We will make a formal list of all the objects used in the QM ontology, and also indicate the kind of 3D aetherial field there has to be for the system wavefunction. We will also discuss some analogies that help in understanding the nature of the \Psi field. For instance, we will point out the fact that \Psi exists even in the PIB model, i.e., even at places in the domain where V is zero. There are some interesting repercussions arising out of this fact.

We will also touch upon the fact that action-at-a-distance is absent in our EM ontology, and hence, it should also be absent in our QM ontology. However, the presence of only the direct-action (i.e., local action through the aether) does not mean that there cannot be changes that occur simultaneously everywhere in the aether. The two phenomena are slightly different, and we will delineate them. The non-relativistic QM theory, in particular, requires the latter.

We will, however, not touch upon the measurement problem. Understanding the measurement problem requires understanding two new physics topics to a certain depth and with sufficient scope: the QM of many-particle systems, and the physics of nonlinear differential equations. Both are vast topics in themselves. Further, tackling the measurement problem doesn’t change the list of the different ontological objects that are involved in QM, or their basic nature. In fact, the measurement problem is rather a specific, detailed physics problem. Solving it, IMHO, does require a very good clarity on the QM ontology, but the basic ontological scheme remains the same as for the single hydrogen atom. That’s why we won’t be touching on that topic. Experts in QM may refer to the Outline document I have already put out, earlier this year, at iMechanica [^]. All the rest: well, you have to wait, or ask the experts, what else?

One particular aspect of the many-particle quantum systems which is very much in vogue these days is the entanglement. Since we won’t be covering the many-particle systems in this series, we also wouldn’t be touching on the physics of quantum mechanical entanglement. However, at least as of today, I do not very clearly see if the phenomenon of entanglement requires us to make any substantial changes in the ontology of QM. In fact, I think not. Entanglement complicates only the physics of QM, but not its ontology. Hence the planned omission of entanglement from this series.

So, all in all, our description of the QM ontology itself would get completed right in the next post. And, with it, also this ontological series would come to an end.

Of course, my blogging would continue, as usual. So, I might write occasional posts on these topics: many-particle QM systems, the nonlinearity proposed by me in the Schrodinger equation, the measurement problem, quantum entanglement, and then, perhaps, also the quantum spin. However, there won’t be a continuously executed project of a series of posts as such, to cover these topics. I will simply write on these topics on a more or less “random” and occasional basis, whenever I feel like it.

So there. Check out the next—i.e. the last—post in this series, when it comes, say in a week’s time or so. In the meanwhile, go through the previous posts if you have joined late.

Also, have a happy Diwali!

Alright, take care, and bye for now…


A song I like:

(Hindi) “dheere dheere machal ae dil-e-beqarar”
Music: Hemant Kumar
Singer: Lata Mangeshkar
Lyrics: Kaifi Aazmi
[Credits listed in a random order]


History:

— First published: 2019.10.26 18:37 IST
— Corrected typos, added sub-section headings, revised some contents (without touching the points), added a few explanations, etc., by 2019.10.27 11:28 IST. Will now leave this post (~7,500 words!) as is. At least until this series gets over.
Update on 2019.10.28 10:50 IST: Corrected some more misleading passages, added notes for better clarification, corrected typos, etc. (~8,825 words!). Let’s leave it at that. I need to really turn to writing the next post.