Python scripts for simulating QM, part 1: Time evolution of a particle in the infinite potential box

A Special note for the Potential Employers from the Data Science field:

Recently, in April 2020, I achieved a World Rank # 5 on the MNIST problem. The initial announcement can be found here [^], and a further status update, here [^].

All my data science-related posts can always be found here [^].


What’s been happening?

OK, with that special note done, let me now turn my attention to the two of you who regularly read this blog.

… After the MNIST classification problem, I turned my attention to using LSTMs for time-series predictions, simply because I hadn’t tried much on this topic earlier. … So, I implemented a few models. I even seemed to get good accuracy levels.

However, after having worked on Data Science continuously for about 2.5 months, I began feeling like taking a little break from it.

I had grown a bit tired, though I had not realized it while actually going through all those tedious trials. (The fact of my not earning any money might have added to some draining of the energy too.) In fact, I didn’t have the energy to pick up something on the reading side either.

Basically, after a lot of intense effort, I wanted something that was light but engaging.

In the end, I decided to look into Python code for QM.

Earlier, in late 2018, I had written a few scripts on QM. I had also blogged about it; see the “part 0” of this series [^]. (Somehow, it has received an unusually large number of hits after I announced my MNIST result.) However, after a gap of 1.5 years, I could not easily locate those scripts. … So, like any self-respecting programmer, I decided to code them again!

Below is the first result, a movie. … Though a movie, it should be boring to anyone who is not interested in QM.


Movie of the wavefunction of an electron inside an infinite potential box:

 

An electron in an infinite potential box.

An electron inside a 3 cm long 1D box with infinite potentials on the sides. Time evolution of the second excited state (n = 3). In the standard QM, such states are supposed to be “stationary”.
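As a quick sanity check on the time-scale seen in the movie (my own back-of-the-envelope arithmetic, using the standard analytical solution): for an electron (m \approx 9.109 \times 10^{-31} kg) in a box of L = 0.03 m, the energies are E_n = n^2 h^2 / (8 m L^2). For n = 3, this gives E_3 \approx 6.0 \times 10^{-34} J, and hence an oscillation period T = h / E_3 \approx 1.1 s. That is why the phase of the plotted wavefunction takes about a second of simulated time to complete one full cycle.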

 


The Python code: Main file:

Now, the code. It should be boring to anyone who is not a Python programmer.

"""
01.AJ_PIB_1D_Class.py

Written by and copyright (c) Ajit R. Jadhav. All rights reserved.

Particle in a Box.

Solves the Time-Independent Schrodinger Equation in 1D,
using the Finite Difference Method. Eigenvalues are found using
the direct matrix method.

Also shows the changes in the total wavefunction with time.
[The stationarity in the TISE is not static. In the mainstream 
QM, the stationarity is kinematical. In my new approach, it 
has been proposed to be kinetic. However, this simulation concerns
itself only with the standard, mainstream, QM.]

Environment: Developed and tested using:
Python 3.7.6 64-bit. All packages as in Anaconda 4.8.3 
on Ubuntu-MATE 20.04 (focal fossa) LTS of the date (below).

TBD: It would be nice to use sparse matrices. Also, use eigenvalue 
functions from scipy (instead of those from numpy).

History:
This file begun: Friday 2020 May 22 19:50:26 IST 
This version: Saturday 2020 May 23 16:07:52 IST 
"""

import numpy as np 
from scipy.integrate import simps
import matplotlib.pyplot as plt 
from matplotlib.animation import ImageMagickFileWriter

# SEE THE ACCOMPANYING FILE. THE NUMERICAL VALUES OF CONSTANTS 
# ARE DEFINED IN IT.
from FundaConstants import h, hbar, me, mP, eV2J

################################################################################
# THE MAIN CLASS 

class AJ_PIB_1D( object ):

    def __init__( self, nInteriorNodes, dMass, dh ):
        self.nInteriorNodes = nInteriorNodes
        self.nDomainNodes = nInteriorNodes + 2
        self.dMass = dMass # Mass associated with the QM particle
        self.dh = dh # cell-size ( \Delta x ).

        # The following numpy ndarray's get allocated 
        # during computations.
        self.aaT = None 
        self.aaV = None 
        self.aaH = None 

        self.aaPsi = None 
        self.aE_n = None 

        self.A_ana = None 
        self.ak_ana = None 
        self.aE_ana = None 
        self.aPsi_ana = None 
        return 

    # Creates the kinetic energy matrix for the interior points of the
    # domain.
    def ComputeKE( self ):
        self.aaT = np.zeros( (self.nInteriorNodes, self.nInteriorNodes) )
        for i in range( self.nInteriorNodes ):
            self.aaT[ i ][ i ] = -2.0
        for i in range( self.nInteriorNodes-1 ):
            self.aaT[ i ][ i+1 ] = 1.0
        for i in range( 1, self.nInteriorNodes ):
            self.aaT[ i ][ i-1 ] = 1.0 
        dFactorKE = - hbar**2 / ( 2.0 * self.dMass * self.dh**2 )
        self.aaT *= dFactorKE
        return

    # Creates the potential energy matrix for the interior points of the
    # domain. You can supply an arbitrary potential function via the array
    # aV of size = interior points count, and values in joule.
    def ComputePE( self, aV= None ):
        self.aaV = np.zeros( (self.nInteriorNodes, self.nInteriorNodes) )
        if aV is not None:
            for i in range( self.nInteriorNodes ):
                self.aaV[ i ][ i ] = aV[ i ]
        return

    def ComputeHamiltonian( self, aV= None ):
        self.ComputeKE() 
        self.ComputePE( aV )
        self.aaH = self.aaT + self.aaV 
        return 

    # Note, the argument aX has the size = the count of domain points, not 
    # the count of interior points.
    # QM operators are Hermitian. We exploit this fact by using the 
    # numpy.linalg.eigh function here. It is faster than numpy.linalg.eig, 
    # and, unlike the latter, also returns results sorted in the ascending 
    # order. 
    # HOWEVER, NOTE, the eigenvectors returned can have signs opposite 
    # of what the analytical solution gives. The eigh (or eig)-returned 
    # vectors still *are* *eigen* vectors. However, for easier comparison 
    # with the analytical solution, we here provide a quick fix. 
    # See below in this function.
    def ComputeNormalizedStationaryStates( self, aX, bQuickFixForSigns= False ):
        assert( self.nDomainNodes == len( aX ) )
        
        # Use the LAPACK library to compute the eigenvalues
        aEigVals, aaEigVecs = np.linalg.eigh( self.aaH )
        
        # SQUARE-NORMALIZE THE EIGENVECTORS

        # Note:
        # The eigenvectors were found on the interior part of the domain, 
        # i.e., after dropping the boundary points at extreme ends. But the 
        # wavefunctions are defined over the entire domain (with the 
        # Dirichlet condition of 0.0 specified at the boundary points).
        
        nCntVecs = aaEigVecs.shape[ 1 ]
        assert( nCntVecs == self.nInteriorNodes )

        # eigh returns vectors in *columns*. We prefer to store the 
        # normalized vectors in *rows*.
        aaPsi = np.zeros( (self.nInteriorNodes, self.nDomainNodes) )
        for c in range( nCntVecs ):
            aPsi = aaEigVecs[ :, c ]
            # Find the area under the prob. curve
            aPsiSq = aPsi * aPsi
            dArea = simps( aPsiSq, aX[ 1 : self.nDomainNodes-1 ] )
            # Use it to normalize the wavefunction
            aPsi /= np.sqrt( dArea )
            # The analytical solution always has the curve going up 
            # (with a +ve gradient) at the left end of the domain. 
            # We exploit this fact to have a quick fix for the signs.
            if bQuickFixForSigns is True:
                d0 = aPsi[ 0 ]
                d1 = aPsi[ 1 ]
                if d1 < d0:
                    aPsi *= -1
            aaPsi[ c, 1 : self.nDomainNodes-1 ] = aPsi
        self.aaPsi = aaPsi
        self.aE_n = aEigVals
        return 

    # Standard analytical solution. See, e.g., the Wiki article: 
    # "Particle in a box"
    def ComputeAnalyticalSolutions( self, aX ):

        xl = aX[ 0 ]
        xr = aX[ self.nDomainNodes-1 ]
        L = xr - xl 
        A = np.sqrt( 2.0 / L )
        self.A_ana = A 

        # There are as many eigenvalues as there are interior points
        self.ak_ana = np.zeros( self.nInteriorNodes )
        self.aE_ana = np.zeros( self.nInteriorNodes )
        self.aPsi_ana = np.zeros( (self.nInteriorNodes, self.nDomainNodes) )
        for n in range( self.nInteriorNodes ):
            # The 1-based quantum number. (The row index n is 0-based; 
            # using n itself would make the first row the trivial, 
            # identically zero solution.)
            nQN = n + 1
            # The wavevector. (Notice the absence of the factor of '2'. 
            # Realize, the 'L' here refers to half of the wavelength of 
            # the two travelling waves which make up the standing wave. 
            # That's why.)
            k_n = nQN * np.pi / L 
            # The energy.
            E_n = nQN**2 * h**2 / (8.0 * self.dMass * L**2)

            # A simplest coordinate transformation:
            # For x in [0,L], phase angle is given as
            # Phase angle = n \pi x / L = k_n x. 
            # We have arbitrary coordinates for the left- and 
            # right-boundary point. So, 
            # Phase angle = k_n (x - xl)
            ap = k_n * (aX - xl)
            
            aPsiAna = A * np.sin( ap )
            self.ak_ana[ n ] = k_n 
            self.aE_ana[ n ] = E_n 
            # We prefer to store the normalized wavefunction 
            # in rows. (Contrast: linalg.eigh and eig store the 
            # eigen vectors in columns.)
            self.aPsi_ana[ n, : ] = aPsiAna
        return 
        
    # This function gets the value that is the numerical equivalent to the 
    # max wave amplitude 'A', i.e., sqrt(2/L) in the analytical solution. 
    def GetMaxAmplNum( self ):
        dMax = np.max( np.abs(self.aaPsi) )
        return dMax


################################################################################
# Utility functions

# NOTE: SAVING MOVIES CAN TAKE A LOT MORE TIME (7--10 MINUTES).
def Plot( model, n, nTimeSteps, bSaveMovie= False, sMovieFileName= None ):
    # The class computes and stores only the space-dependent part.
    aPsiSpace = model.aaPsi[ n-1 ]

    # Period = 2 \pi / \omega = 1 / \nu
    # Since E = h \nu, \nu = E/h, and so, Period = h/E
    nu = model.aE_n[ n-1 ] / h 
    dPeriod = 1.0 / nu 

    dt = dPeriod / (nTimeSteps-1) 

    # Plotting...

    plt.style.use( 'ggplot' )
    # Plot size is 9 inches X 6 inches. Reduce if you have smaller 
    # screen size.
    fig = plt.figure( figsize=(9,6) ) 

    if bSaveMovie is True:
        movieWriter = ImageMagickFileWriter()
        movieWriter.setup( fig, sMovieFileName )

    dMaxAmpl = model.GetMaxAmplNum() # Required for setting the plot limits.
    dTime = 0.0 # How much time has elapsed in the model?
    for t in range( nTimeSteps ):
        # TIME-DEPENDENT PART: 
        # \psi_t = e^{-i E_n t/\hbar} = e^{-i \omega_n t} = e^{-i 2 \pi \nu t}
        # Compute the phase factor (which appears in the exponent).
        dTheta = 2.0 * np.pi * nu * dTime 
        # The Euler identity. Compute the *complete* wavefunction (space and time)
        # at this instant.
        aPsi_R_t = aPsiSpace * np.cos( dTheta )
        aPsi_I_t = - aPsiSpace * np.sin( dTheta )
        
        plt.clf()
        sTitle = "Particle in an infinite-potential box (n = %d)\n" % (n)
        sTitle += "Domain size: %7.4lf m. Oscillation period: %7.4lf s.\n" % (L, dPeriod)
        sTitle += "Time step: %3d/%3d. Time elapsed in simulation: %7.4lf s." % (t+1, nTimeSteps, dTime) 
        plt.title( sTitle )

        plt.xlabel( "Distance, m" )
        plt.ylabel( "Wavefunction amplitude, $m^{-1/2}$" )

        plt.grid( True )

        plt.xlim( (xl - L/10), (xr + L/10) )
        plt.ylim( -1.1*dMaxAmpl, 1.1*dMaxAmpl )

        plt.plot( aX, aPsi_R_t , color= 'darkcyan', label= r'Re($\Psi$)' )
        plt.plot( aX, aPsi_I_t , color= 'purple', label= r'Im($\Psi$)' )

        plt.legend( loc= 'upper right', shadow= True, fontsize= 'small' ) 
        
        if bSaveMovie is True:
            movieWriter.grab_frame()
        else:
            plt.pause( 0.001 )

        dTime += dt

    if bSaveMovie is True:
        movieWriter.finish()
    else:
        plt.show()


################################################################################
# MAIN DRIVER CODE
# We use the SI system throughout. [This is a program. It runs on a computer.]

# DOMAIN GEOMETRY

xl = -1.0e-02 # Left end (min. x)
xr = 2.0e-02 # Right end (max. x)
L = xr - xl # Length of the domain
xc = (xl + xr )/ 2.0 # Center point

# MESH
# Count of cells = Count of nodes in the domain - 1. 
# It's best to take an odd number for the count of domain nodes. This way, 
# the peak(s) of the wavefunction will not be missed.
nDomainNodes = 101
aX, dh = np.linspace(   start= xl, stop= xr, num= nDomainNodes, 
                        endpoint= True, retstep= True, dtype= float )

# In the PIB model, infinite potential exists at either ends. So, we apply 
# the Dirichlet BC of \Psi(x,t) = 0 at all times. Even if the discretized 
# Laplacian were to be computed for the entire domain, in handling the 
# homogeneous BC, both the boundary-points would get dropped during the
# matrix partitioning. Similarly, V(x) would be infinite there. That's why,
# we allocate the Laplacian and Potential Energy matrices only for the
# interior points. 
nInteriorNodes = nDomainNodes - 2 

# We model the electron here. 
# Constants are defined in a separate file: 'FundaConstants.py'
# Suggestion: Try mP for the proton, and check the \Psi amplitudes.
dMass = me 

# Instantiate the main model class.
model = AJ_PIB_1D( nInteriorNodes, dMass, dh )

# Compute the system Hamiltonian.

# If you want, supply a custom-made potential function as an ndarray of 
# size nInteriorNodes, as an argument. Values should be in joules.
# 'None' means 0 joules everywhere inside the box.
model.ComputeHamiltonian( None )

# Compute the stationary states. For the second argument, see the 
# note in the function implementation.
model.ComputeNormalizedStationaryStates( aX, True )

# You can also have the analytical solution computed. Uncomment the 
# line below. The numerical and analytical solutions are kept in 
# completely different arrays inside the class. However, the plotting
# code has to be careful.
### model.ComputeAnalyticalSolutions( aX )


# PLOT THE STATIONARY STATES, AND SHOW THEIR OSCILLATIONS WITH TIME.
# (TISE *is* dynamic; the stationarity is dynamical.) 

# Note, here, we choose n to be a 1-based index, as is the practice
# in physics. Thus, the ground state is given by n = 1, and not n = 0.
# However, NOTE, the computed arrays of wavefunctions have 0-based 
# indices. If such dual-usage for the indices gets confusing, simple! 
# Just change the code!

n = 3 
# No. of frames to be plotted for a single period of oscillations
# The 0-th and the (nTimeSteps-1)-th frames are identical because the 
# Hamiltonian here is time-independent. 
nTimeSteps = 200 

# You can save a movie, but note, animated GIFs take a lot more time, even 
# ~10 minutes or more, depending on the screen-size and dpi.
# Note, ImageMagickFileWriter will write the temp .png files in the current 
# directory (i.e. the same directory where this Python file resides). 
# In case the program crashes (or you stop the program before it finishes), 
# you will have to manually delete the temporary .png files from the 
# program directory! (Even if you specify a separate directory for the 
# movie, the *temporary* files still get generated in the program directory.)
### Plot( model, n, nTimeSteps, True, './AJ_PIB_e_%d.gif' % (n) )

Plot( model, n, nTimeSteps, False, None )

################################################################################
# EOF
################################################################################
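Regarding the TBD note in the docstring above (sparse matrices, and eigenvalue functions from scipy): since the PIB Hamiltonian assembled here is a real, symmetric, tridiagonal matrix, scipy's dedicated tridiagonal eigensolver can be used without ever forming the dense matrix. Below is a minimal sketch of my own along those lines (not a drop-in replacement for the class method); it assumes a zero potential, i.e., the plain PIB case; a nonzero potential would simply add to the main diagonal:

import numpy as np
from scipy.linalg import eigh_tridiagonal
from FundaConstants import hbar

def SolveTridiagonalPIB( nInteriorNodes, dMass, dh ):
    dFactorKE = - hbar**2 / ( 2.0 * dMass * dh**2 )
    # Main diagonal and off-diagonal of the same Hamiltonian as the one 
    # built in AJ_PIB_1D.ComputeKE(), stored in just two 1D arrays.
    aDiag = np.full( nInteriorNodes, -2.0 * dFactorKE )
    aOffDiag = np.full( nInteriorNodes - 1, dFactorKE )
    # Like np.linalg.eigh: eigenvalues in ascending order, and 
    # eigenvectors in columns.
    aEigVals, aaEigVecs = eigh_tridiagonal( aDiag, aOffDiag )
    return aEigVals, aaEigVecs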


The ancillary file:

The main file imports the following file. It contains nothing but the values of the fundamental physical constants (together with their sources). Here it is:

"""
FundaConstants.py

Written by and copyright (c) Ajit R. Jadhav. All rights reserved.

Begun: Thursday 2020 May 21 20:55:37 IST 
This version: Saturday 2020 May 23 20:39:22 IST 
"""

import numpy as np 

"""
Planck's constant
https://en.wikipedia.org/wiki/Planck_constant 
``The Planck constant is defined to have the exact value h = 6.62607015×10^−34 J⋅s in SI units.'' 
"""
h = 6.62607015e-34 # J⋅s. Exact value.
hbar = h / (2.0 * np.pi) # J⋅s. 

"""
Electron rest mass
https://en.wikipedia.org/wiki/Electron_rest_mass
``9.1093837015(28)×10^−31'' 2018 CODATA value. NIST
"""
me = 9.1093837015e-31 # kg. 


"""
Proton rest mass
https://en.wikipedia.org/wiki/Proton 
1.67262192369(51)×10^−27 kg
"""
mP = 1.67262192369e-27 # kg

"""
eV to Joule
https://en.wikipedia.org/wiki/Electronvolt 
1 eV = 1.602176634×10^−19 J
"""
eV2J = 1.602176634e-19 # Conversion factor
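
To run the simulation (assuming both files sit in the same directory; ImageMagick is needed only if you opt for saving the movie):

python3 01.AJ_PIB_1D_Class.py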

And, that’s about it, folks! No documentation. But I have added a lot of (otherwise unnecessary) comments.

Take care, and bye for now.


A song I like:

(Marathi) सावली उन्हामध्ये (“saavali unaamadhe”)
Music: Chinar Mahesh
Lyrics: Chandrashekhar Sanekar
Singer: Swapnil Bandodkar

Entanglement, nonlocality, and the slickness of the MSQM folks

Update: See at the end of this post.


0. Context

This post began its life as a comment to Roger Schlafly’s blog post: “Smolin preaches nonlocality nonsense” [^]. However, at 7000+ characters, my comment was almost twice the limit (of 4k characters) there. So, I decided to post my reply here, as a separate entry by itself.

I assume that you have read Schlafly’s post in toto before going any further.


1. Schlafly’s comments:

Schlafly says:

“Once separated, the two particles are independent.”

The two particles remain two different entities, but their future dynamics also remains, in part, governed by a single, initial, entangling, wavefunction.

“Nothing you do to one can possibly have any effect on the other.”

The only possible things you can do to any one (or both) of the entangled particles necessarily involve their shared (single) wavefunction.

Let me explain. Let’s begin at the beginning.


2. System description and notation:

Call the two entangled particles EP1 and EP2.

If you want to imagine two different things physically being done to the two EPs, then you have to have at least two additional particles (APs) with which these EPs eventually interact. APs may be large assemblages of particles, like detectors; EPs are regarded as simple single particles, say two electrons.

Imagine a 1D situation. Initially, the EPs interact at the origin of the x-axis. Then they fly apart. EP1 goes to, say, +1000.0 km (or lightyears), and EP2 goes to -1000.0 km (or lightyears). Both points lie on the same x-axis, symmetrically away from the origin.

To physically do something with EP1, suppose you have the additional particle (detector) AP1 already existing at 1000.0 + \epsilon km, and similarly, there is another, AP2, exactly at -1000.0 - \epsilon km, where \epsilon is a small distance, say of the order of a millimeter or so.

Homework 1: Check out the distance from the electron emitter to the detector in the single-particle double-slit interference experiments. Alternatively, check out the size of the relevant chamber inside a TEM (transmission electron microscope).

The overall system thus actually has (and always had) four different particles, and in the ultimate analysis, they all have always had a single, common, universal wavefunction. (Assume, there is nothing else in the universe.)

But for simplicity of talking, we approximated the situation by eking out a two-particle entangled wavefunction for the EPs—just to get the discussion going.

All MSQM (mainstream QM) people blithely jump to and forth between abstractions in this way—between two abstractions having basically different scopes. That’s not the trouble. The trouble is: they never tell you exactly when they are about to do that.

OK. Now, think of the 4-particle system-wavefunction as being built from four different 1-particle wavefunctions (via an appropriate linear superposition of all the appropriate product-states of the four 1-particle wavefunctions, with the proviso that the resulting single wavefunction must have enough generality, and that it obey the appropriate exchange-operator rules etc.).
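For just two particles, a standard textbook form makes this construction concrete (shown here only as an illustration of what “superposition of product-states, with exchange-operator rules” means):

\Psi(\vec{r}_1, \vec{r}_2) = \dfrac{1}{\sqrt{2}} \left[ \psi_a(\vec{r}_1)\,\psi_b(\vec{r}_2) \pm \psi_b(\vec{r}_1)\,\psi_a(\vec{r}_2) \right],

where \psi_a and \psi_b are 1-particle wavefunctions, and the sign is fixed by the exchange rule (the minus sign for fermions such as electrons). The 4-particle construction proceeds along the same lines, only with more product terms.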


3. The sense in which entangled particles approach independence—in their interactions with the other particles:

Each 1-particle wavefunction has an anchoring point in space.

[MSQM people never tell you that. [Google on “anchoring of” “potentials” or “wavefunctions” in the context of QM.]]

Each such wavefunction very rapidly drops off in intensity from its anchoring point, so as to satisfy the Sommerfeld radiation condition. …Maybe there is a generalization of this principle for the many-particle situations; I don’t know. But I know that if the system-wavefunction has to be square-normalizable, then some condition specifying a rapid decay over space is just what Sommerfeld (and nature) ordered.

[MSQM people never remind you of such a condition in any such context. [Google!]]

So, the 1-particle wavefunction for AP1 affects EP1 far, far more than it affects EP2. Similarly, the 1-particle wavefunction for AP2 affects EP2 far, far more than it affects EP1.

Homework 2: Find the de Broglie wavelength for an electron, and for a typical detector. Work it out on your own. Don’t cheat [^][^] !

In this sense, sure, what AP1 does to EP1 (and vice-versa) has overwhelmingly greater effect than what it does to EP2 (and vice-versa).

So, what Schlafly says (“Nothing you do to one can possibly have any effect on the other”) does have a certain merit to it, but only in a limiting and approximate (“classical-like”) sense.

In a certain limiting sense, the AP1 \Leftrightarrow EP1 and AP2 \Leftrightarrow EP2 interactions do approach full independence.

To use the language that the MSQM people typically use, the reason put forth is that AP1 and AP2 never directly interacted with each other.

Actually, they all always had interacted with all the others—but in this case, only dimly so. So, as we would say to describe the same point: Due to the Sommerfeld radiation condition, AP1 \Leftrightarrow AP2 interaction always was, remains, and assuming that they don’t leave their fixed positions at \pm 1000.0 km so as to go nearer to each other, it will also always remain, very negligibly small.


4. The entangled particles’ dynamics continues to be influenced by the initial entanglement:

However, note that as EP1 and EP2 travel from the origin to their respective points (to their respective positions at \pm 1000.0 km), this entire evolution in their states (consisting of their “travel”s/displacements) occurs at all times under the continuing influence of the same, initial, 2-particle entangled part of the 4-particle system wavefunction—via its deterministic time-evolution (as given by the Schrodinger equation).

Since the state evolution for both EP1 and EP2 was guided at each instant by the same 2-particle entangled part of the same wavefunction, the amount of distance does not matter—at all.

Even if their common entangled wavefunction initially has almost a zero strength at the distant points \pm 1000.0 km away, once the EP1 and EP2 particles begin moving away from the origin, their states evolve deterministically (obeying the time-dependent Schrodinger equation). As they approach the two \pm 1000.0 km points respectively, the common wavefunction’s strength at these two points accordingly increases (and the strength of that portion of the same wavefunction which lies in the space near the origin progressively decreases). That’s because the common entangling part of the system wavefunction is composed from two 1-particle wavefunctions, one each for EP1 and EP2, and each of these two 1-particle wavefunctions has the respective current position of EP1 or EP2 as its reference (or anchoring) point. Why? Because the potential energy has a singularity at their current point positions, that’s why.

So, all in all, yes, the nature of what EP1 can at all do in its interaction with AP1 is still, in part, being governed by the deterministically evolved state of the initial, single, 2-particle entangling wavefunction. [That’s how even the MSQM folks put it. Actually, it’s a 2-particle part of the 4-particle system wavefunction.]

So, the net result at the +1000.0 km point is that, when seen in an approximate manner, EP1 seems to be interacting with AP1 (or, AP1 with EP1) in a manner that seems to be completely independent of how  EP1 interacts with AP2 and EP2—i.e., there is almost no interaction at all.

Similarly, the net result at the -1000.0 km point is that, when seen in an approximate manner, EP2 seems to be interacting with AP2 (or, AP2 with EP2) in a manner that seems to be completely independent of how  EP2 interacts with AP1 and EP1—i.e., there is almost no interaction at all.


5. The paradox we have to resolve:

We thus have two apparently contradictory ways of summarizing the same situation.

  • Since the two EPs have gone so far apart, and since AP1 and AP2 never “interacted” strongly with each other (or with EP1 and EP2), therefore, EP1’s behaviour should be taken to be “independent” of EP2’s behaviour, when they are at the \pm 1000.0 km points. Their behaviour should have nothing in common.
  • Yet, since EP1 and EP2 were initially entangled, and since both their respective state-evolutions were governed by the common, single wavefunction entangling them, therefore, their behaviour must also have something in common.

Got it?

How do we resolve this paradox?


6. What kind of things actually happen:

Suppose the interaction of AP1 with EP1 is such that we can say that it is EP1’s spin-property which gets measured by AP1.

Here, imagine an assemblage of a large number of particles, acting as a spin-detector, in place of AP1. (We will continue to call it a single “particle”, for the sake of simplicity.)

Suppose that the measurement outcome happens to be such that EP1’s spin is measured at AP1 to be “up” with respect to a certain z-axis (applicable to the entire universe).

Now, remember, measurement is a probabilistic process. Therefore, the correct statement to make here is:

If (and when) AP1 measures EP1’s spin, the outcome is one (and only one) of the two possibilities: either “up”, or “down.”

In other words, it is always possible that EP1 interacts with AP1, and yet, the action of EP1’s spin influencing some large-scale configuration changes within AP1 (an event which we call “measurement”) never actually comes to occur. This is possible too. However, if a measurement does occur, then the outcome is one and only one of those two possibilities.

Now suppose, to take the description further, that AP1 does indeed end up measuring EP1 spin. (That is to say, suppose that such a thing comes to occur as a physical fact, an irreversible change in the universe.)

Assume further—for the sake of pedagogic simplicity—that the EP1’s spin is measured to be “up” (and not “down”).

Suppose further that the interaction of AP2 with EP2 is such that we can say that it is EP2’s spin which is the property that gets measured by AP2—if there at all occurs a measurement when EP2 is near or at AP2. Again, remember, measurement is a probabilistic process. The correct statement now to make is:

If (and when) AP2 ends up measuring EP2’s spin, then, since EP1 and EP2 are entangled, the outcome at the -1000.0 km point has to be: “down” (because we assumed that EP1’s spin was measured as “up” at the +1000.0 km point).

Note, the spin of EP2 is certain to be measured “down” in our case—provided it at all gets measured during the interaction of EP2 with AP2.

But note also that since AP2’s state is not entangled with AP1’s (they were too far away to begin with), just because AP1 does end up measuring EP1’s spin (as “up”) does not mean that AP2 will also necessarily measure EP2’s spin at all—despite the interaction they necessarily go through. (All four particles are, in reality, interacting. Here, AP2 and EP2, being closer, are interacting strongly.)


7. The game that the MSQM people play (with you):

Now, the whole game that MSQM (mainstream QM) physicists play with you is this.

They don’t explain to you, but it is true, that:

The fact

“AP1 interacted with EP1 to measure its spin state”

does not necessitate the conclusion

“AP2 must also measure the spin-state of EP2 in the same experimental trial”.

The latter is not at all necessary. It does not have to physically take place.

If so, then what can we say here? It is this:

But if (and when) AP2 does measure the spin-state (and no other measurable) of EP2, then the measured spin will necessarily be “down”.

The preceding statement is true.

This is because angular momentum conservation implies that if any one of the spins is measured as “up”, then the other has to get measured as “down”. This necessity is built right in the way the single entangling wavefunction is composed from the two 1-particle wavefunctions. It is the property of the initial entangling wavefunction that it has zero net spin-angular momentum. It gets reflected also in the measured read-outs with equal probability if two measurements at all take place at symmetrically far away points, so that the local patterns of the common wavefunction themselves must be symmetrically opposite. (Only a symmetrically opposite pair of 1-particle wavefunctions can together conserve angular momentum for the 2-particle entangling wavefunction.)
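
In the standard bra-ket notation, the zero net spin state being referred to here is the singlet:

|\Psi\rangle = \dfrac{1}{\sqrt{2}} \left( |\uparrow\rangle_1 |\downarrow\rangle_2 - |\downarrow\rangle_1 |\uparrow\rangle_2 \right),

which assigns equal probability (1/2) to the “up-down” and the “down-up” coincidence read-outs, and zero probability to “up-up” and “down-down” (for detectors oriented along the same axis).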

The slickness of MSQM people consists of refusing to make you realize that the common (entangling) wavefunction must, of necessity, arise from such symmetry conditions as just mentioned, and that it must also evolve perfectly preserving this symmetry throughout the Schrodinger evolution. Further, their slickness consists of making you believe that if AP1 does indeed physically measure EP1’s spin as “up”, then AP2 is also mandated to physically end up measuring EP2’s spin, in each and every trial.


8. How the MSQM people maintain their slickness, while presenting experimental data:

When they do experiments, they actually send entangled particles apart, and measure their respective spins at two equally distant and similarly tilted detector-positions.

What their raw data shows is that when AP1 measures EP1 to be in the “up” state, AP2 may not always show any measurement outcome at all. Similarly for the other three possibilities. (AP1 says “down”, nothing at AP2. AP1 says nothing, AP2 says “up”. AP1 says nothing, AP2 says “down”.)

What the MSQM folks do is, effectively, to simply drop all such observations. They retain only those among the raw data-points which have one of the two results:

  • EP1 actually measured (by AP1) to have the spin “up”, and EP2 actually measured (by AP2) to have the spin “down” in a single trial, or
  • EP1 actually measured (by AP1) to have the spin “down”, and EP2 actually measured (by AP2) to have the spin “up” in some other, single, trial.

So, their conclusions never highlight the previously mentioned four possibilities.
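
The effect of this kind of post-selection is easy to see in a toy simulation (a sketch of my own, not a model of any actual experimental pipeline; the per-detector registration probability p is a made-up parameter):

import numpy as np

rng = np.random.default_rng( 0 )
nTrials = 100000
p = 0.7 # made-up probability that a detector at all registers an outcome

# Perfectly anti-correlated outcomes, as the entangling wavefunction 
# demands for same-axis detectors. 1 means "up"; 0 means "down".
aSpin1 = rng.integers( 0, 2, nTrials ) # outcomes at AP1, if it fires
aSpin2 = 1 - aSpin1                    # the opposite outcomes, at AP2

# Whether each detector actually registers anything in a given trial
aFired1 = rng.random( nTrials ) < p
aFired2 = rng.random( nTrials ) < p

nCoincident = np.sum( aFired1 & aFired2 )
print( "Coincident trials kept: %d of %d" % (nCoincident, nTrials) )
# Among the kept trials, the anti-correlation is perfect by construction.
# All the one-sided trials (one detector silent) simply drop out of the
# final dataset, exactly as described above.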

No, they are not doing any data-fudging as such.

The data they present is the actual one, and it does support the theory.

But the as-presented data is not all the data there is—it’s not all there is to these experiments. And, so, it is not the complete story.

And, the part dropped-out of the final datasets sure tells you more about demystifying entanglement than the part that is eventually kept in does. It is this same—mystifying—data that gets presented in conferences, summarized in textbooks and pop-sci articles (including those on the Quanta Magazine site), and of course, in the pop-sci books (by all authors writing on this subject [Google (verb)!]).

Just hold the above discussion in mind, and see how it straightens out everything.


9. Summary of what we saw thus far:

A measured value is decided only in an act of measurement—if any measurement at all occurs during the ongoing interaction of a particle and a detector.

The respective probabilities for each of the two possible outcomes (in the spin “up” or “down” type of two-state situations) have already been decided by the deterministic time-evolution (the Schrodinger-evolution) of the initial, 2-particle entangling, part of the 4-particle system wavefunction.

If the AP1 detector is oriented to measure EP1’s spin as “up” with a P % probability, then EP2’s spin is necessarily “shaped” by the same wavefunction so as to be inclined to be measured by AP2 as “down”, with the same P % probability—provided that:

  1. AP2 was in all respects identical to AP1 (including their orientations—say, placed in an exact mirror-symmetrical arrangement), and
  2. AP2 does at all end up measuring EP2. It might not, always.

Existence of an entanglement between EP1 and EP2 does not necessitate that if AP1 measures the spin-property of EP1 (w.r.t. a certain axis), then AP2 for the corresponding EP2 (coming from the same trial) must also measure the spin-property of EP2 (w.r.t. the same axis).

But if AP2 undergoes a measurement process too, then the outcome is determined, due to the commonality of the single entangling wavefunction (including the spinor function) which is shared by EP1 and EP2. And it works out as: if the first is “up”, the second must be “down”, or, vice versa.

 


Note: I am not sure if I noted it in the NY resolutions post or not. But I’ve decided that I may not add a songs section every time—though sure enough I will, if one is somewhere at the back of my mind.

This topic is not difficult, but it is intricate. Easy to make typos. Also, very easy to make long-winding statements, not find the right phrases, ways of expression, metaphors, etc. So, I think I should come back and revise it after a few days. I should also give titles to the sections and all … But, anyway, in the meanwhile, do feel free to read.


History:

— 2020.01.03 12:15 IST: Initial posting.
— 2020.01.03 13:44 IST: Correction of typos, misleading statements. Addition of section titles, and a further section on the comparison with classical diffusion systems.
— 2020.01.03 15:33 IST: Added the section: “One last comment…”.
— 2020.01.03 17:03 IST: Further additions/corrections. Now am going to leave this post in this shape for at least a couple of days or more. But looks like it’s mostly done.
— 2020.01.04 14:18 IST: Nope. In simplifying everything as much as possible, it seems to me that I ended up getting off the track, and thus wrote something which, I now think, was wrong. The error was confined to section 9.

The wrong part was important. I will have to look into the maths involving the spin property once again (and in fact learn more about it and many-particle systems in general), and further, I will have to integrate it with my new approach. Only then would I be able to come back on this point. It may take me quite some time to finalize such an integration, may be weeks, may be months.

My plan so far was to leave the spin property of QM systems alone, and present the new approach only for spin-less systems. (That’s what I did in the Outline document too.) Yet, yesterday, somehow, I got tempted into covering the spin and the new approach together, right on the fly, and ended up writing a bit that inadvertently adopted an ensembles-based interpretation. I thus sounded a bit too much like the Bohmian approach, rather than what my approach actually should be like. (I know it from some other points of view that there are going to be important differences between my approach and the Bohmian one.)

All this, I realized completely on my own, without anyone prompting me or providing any feedback (not even an indirect one, say through the “follow-up” sort of channels), only this morning. So, I am deleting what earlier was the section 9.

The section 10 was not wrong as such. But its contents were prompted only by the topic covered in section 9. That’s why, though section 10 was essentially correct, I am also deleting it. I will cover both their topics in the future.

In case anyone is at all interested in having the original (erroneous) version of this post (with sections 9. and 10.), then I could share it. Feel free to approach me via an email or a comment.

As to any other errors/ambiguities/ill-expositions, I will let them be. I am done with this post. Time to move on.

Ontologies in physics—10: Objects in QM. Aetherial fields in QM. Particle-in-a-box.

0. Prologue:

The last time, we saw the context for, and the scheme of the inductive derivation of, the Schrodinger equation. In this post, we will see the ontology which it demands—the kind of ontological objects there have to be, so that the physical meaning of the Schrodinger equation can be understood correctly.

I wrote down at least 2 or 3 different presentations of the topics for this post. However, either the points weren’t clear enough, or the discussion was going too far away, and I was losing the focus on ontology per se.

That’s why, I have decided to first present the ontology of QM without any justification, and only then to explain why assuming this particular ontology, rather than any other, makes sense. In justifying this ontology, we will have to note the salient peculiarities regarding the mathematical nature of Schrodinger’s equation, as also many relevant quantum mechanical features.

In this post, we will deal with only one-particle quantum systems.

So, let’s get going with the actual ontology first.


1. Our overall view of the QM ontology:

1.1. Introductory remarks:

To specify an ontology of physics is to state the basic types of objects there have to exist in the physical reality, and the basic ways in which they interact, so that the given theory of physics makes sense—the physical phenomena the theory subsumes are identified with appropriate concepts, causal relations, laws, and so, an understanding can be developed for applications, for building new systems that make use of the subsumed phenomena. The basic purpose of physics is to develop understanding so that it can be put to use to build better systems—structures, engines, machines, circuits, devices, gadgets, etc.

Accordingly, we will first give a list of the types of objects that must exist in the physical world so that the quantum mechanical phenomena can be completely described using them. The theory we will assume is Schrodinger’s non-relativistic quantum mechanics of multiple particles, including phenomena like entanglement, but without including the quantum mechanical spin. However, in this post, we will cover those aspects that can be understood (or at least touched upon) using only the single-particle quantum systems.

1.2. The list of objects in our QM ontology:

The list of our QM ontological objects is this:

  • The EC Objects of electrons and protons.
  • A special category of objects called neutrons.
  • The aether filling all of the 3D space where other objects are not, and certain field-conditions present in it; the all-connecting aspect of the physical universe.
  • The photon as a certain kind of a transient condition in the aether, i.e., a virtual object.

Let’s see all of them in detail, one by one, but beginning with the aether first.


2. The aether:

Explaining the concept of the aether and its necessity are full-fledged topics by themselves, and we have already said a lot about the ontology of this background object in the previous posts. So, we will note just a few indicative characteristics of the aether here.

Our idea of the QM aether is exactly the same as that of the EM aether of Lorentz. The only difference is that, when used in QM, the aether is seen as supporting not only the electrostatic fields but also one more type of field: the complex-valued quantum mechanical field.

To note some salient points about the aether:

  • The aether has no inertia that shows up in the electrostatic or quantum-mechanical phenomena. So, in this sense, the aether is non-inertial in nature.
  • It exists in all parts of space where the other QM ontological objects (of electrons, protons and neutrons) are not.
  • It exchanges electrostatic as well as additional quantum-mechanical forces with the electrons and protons, but always by direct contact alone.
  • Apart from the electrostatic and quantum-mechanical forces, there are no other forces that enter into our ontological description. Thus, there is no drag-force exerted by the aether on the electrons, protons or neutrons (basically because the Lorentz aether is not a mechanical aether; it is not an NM-Ontological object). In the non-relativistic QM, we also ignore fields like magnetic, gravitational, etc.
  • All parts of the aether always remain stationary, i.e., no control volume (CV) of it translates in space at any time. Even if there is any actual translation going on in the aether, the quantum mechanical phenomena are unable to capture it, and so, a capacity to translate does not enter our ontology.
  • However, unlike in the EM theory, when it comes to QM, we have to assume that there are other motions in aether. In QM, the aether does come to carry a kinetic energy too, whereas in EM, the kinetic energy is a feature of only the massive EC Objects. So, the aether is stationary—but that’s only translation-wise. Yet, even in the absence of net displacements, it does force (and is forced by) the elementary charged objects of the electrons and protons.

We will note further details regarding the fields in the aether as we progress.


3. Electrons and protons:

The view of electrons and protons which we take in the QM ontology is exactly the same as that in the ontology of electrostatics; so see the previous posts in this series for details not again repeated here.

Electrons and protons are seen as elementary point-particles having, up to the algebraic sign, the same amount of electrostatic charge e. They set up certain 3D field conditions in the non-inertial aether, but acting in pairs. We may sometimes informally call them point-charges, but it is to be kept in mind that, strictly speaking, in our view, we do not regard the charge to be an attribute of the point-particle, but only of the aether.

For two arbitrary EC objects (electrons or protons) q_i and q_j forming a pair, there are two fields which simultaneously exist in the 3D aether. Neither can exist without the other. These fields may be characterized as force-fields or as potential energy fields.

In the interest of clarity in the multi-particle situations, we will now expand on the notation presented earlier in this series. Accordingly,

\vec{\mathcal{F}}(q_i|q_j) is the 3D force field which exists everywhere in the aether. It gives the Coulomb force that q_j experiences from the aether at its instantaneous position \vec{r}_j via direct contact (between the aether and itself). Thus, in this notation, q_j is the forced charge, and q_i is the field-producing charge. Quantitatively, this force-field is given by Coulomb’s law:

\vec{\mathcal{F}}(q_i|q_j) = \dfrac{1}{4\,\pi\,\epsilon_0}\dfrac{q_i q_A}{r_{iA}^2} \hat{r}_{iA}, where q_A = q_j.

Similarly, \vec{\mathcal{F}}(q_j|q_i) is the aetherial force-field set up by q_j and felt by q_i in the same pair, and is given as:

\vec{\mathcal{F}}(q_j|q_i) = \dfrac{1}{4\,\pi\,\epsilon_0}\dfrac{q_j q_A}{r_{jA}^2} \hat{r}_{jA}, where q_A = q_i.

The fields are singular at the location of the forcing charge, but not at the location of the forced charge. Due to the divergence theorem, a given charge does not experience its own field.

There is no self-interaction problem either, because the EC Object (the point-charge) is ontologically a different object from both the aether and the NM objects. Only an NM Object could possibly explode under the self-field, primarily, because an NM Object is a composite. However, an EC Object (of an electron or a proton) is not an NM Object—it is elementary, not composite.

Notice that the specific forces at the positions of the q_i and q_j are equal in magnitude and opposite in directions. However, these two vectors act on two different objects, and therefore they don’t cancel each other. The two vectors also act at two different locations. In any case, in going from these two vectors to the two vector fields, it’s misleading to keep thinking in terms of one force-field as being the opposite of the other! Their respective anchoring locations (i.e. the two singularities) themselves are different, and they have the same signs too!! They are the same 1/(r^2) fields, but spatially shifted so as to anchor into the two charges of a pair.

When there are N number of elementary charged particles in a system, then a given charge q_j will experience the force fields produced by all the other (N-1) number of charges at its position. We can list them all before the pipe | symbol. For instance, \vec{\mathcal{F}}(q_1, q_3, q_4|q_2) is the net field that q_2 feels at its position \vec{r}_2; it equals the sum of the three force-fields produced by the other three charges because of the three pairs in which they act:
\vec{\mathcal{F}}(q_1, q_3, q_4|q_2) = \vec{\mathcal{F}}(q_1|q_2) + \vec{\mathcal{F}}(q_3|q_2) + \vec{\mathcal{F}}(q_4|q_2).
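
In code, the pair-wise fields and their superposition at a given charge’s position might be sketched as follows (a minimal illustration of the notation only; the function names and the 4-charge setup are mine, and SI units are assumed):

import numpy as np

epsilon0 = 8.8541878128e-12 # F/m, vacuum permittivity (2018 CODATA)

# The value of the force-field \vec{F}(q_i|q_j), evaluated at the
# instantaneous position \vec{r}_j of the forced charge q_j.
def CoulombForce( qi, ri, qj, rj ):
    r = rj - ri # separation vector, pointing from q_i to q_j
    dDist = np.linalg.norm( r )
    return ( qi * qj ) / ( 4.0 * np.pi * epsilon0 * dDist**3 ) * r

# \vec{F}(q_1, q_3, q_4|q_2) evaluated at \vec{r}_2: a plain vector sum
# of the three pair-wise force-fields, exactly as in the text.
def NetForceOnQ2( aQ, aaR ):
    return sum( CoulombForce( aQ[ i ], aaR[ i ], aQ[ 1 ], aaR[ 1 ] )
                for i in (0, 2, 3) )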

The charges always act pair-wise; hence there always are pairs of fields; a single field cannot exist. Therefore, any analysis that has only one field (e.g., as in the quantum harmonic oscillator problem or the H atom problem) must be regarded as only a mathematical abstraction, not an existent.

The two fields of a given specific pair both are of the same algebraic sign: both + or both -. However, a given charge q_j may come to experience fields of arbitrary signs—depending on the signs of the other q_i‘s forming those particular pairs.

The electrons and protons thus affect each other via the intervening aether.

In electrostatics as well as in non-relativistic QM, the interaction between charges is via direct contact. However, the two fields of any arbitrary pair of charges shift instantaneously in space—the entirety of a field “moves” when the singular point where it is anchored moves. Thus, there is no action-at-a-distance in this ontology. However, there are instantaneous changes everywhere in space.

A relativistic theory of QM would include magnetic fields and their interactions with the electric fields. It is these interactions which together impose the relativistic speed limit of v < c for all material particles. However, such speed-limiting interactions are absent in the non-relativistic QM theory.

Electrons and protons have the same magnitude of charge, but different masses.

The Coulombic force should result in accelerations of both the charges in a pair. However, since the proton is approx. 1836 times more massive than the electron, the actual accelerations (and hence the net displacements over a finite time interval) undergone by them are vastly different.

There is a viewpoint (originally put forth by Lorentz, I guess) which says that since the entire interaction proceeds through the aether, there is no need to have massive particles of charge at all. This argument in essence says: We took the attribute of the electric charge away from the particle and re-attributed it to the aether. Why not do the same for the mass?

Here we observe that mass can be regarded as an attribute of the interactions of two *singular* fields in the aether. We tentatively choose to keep the instantaneous location of the attribute of the mass only at the distinguished point of the singularity. In short, we have both particles and the aether. If need be, we will revisit this aspect of our ontology later on.

The electrostatic aetherial fields can also be expressed via two physically equivalent but mathematically different formulations: vector force-fields, and scalar energy-fields—also called the “potential” energy fields in the Schrodinger QM.

Notation: The potential energy field seen by q_j due to q_i is from now on noted, and given, as:

V(q_i|q_j) = \dfrac{1}{4\,\pi\,\epsilon_0}\dfrac{q_i\,q_A}{r_{iA}},

where q_A = q_j, and similarly for the other field of the pair, viz., V(q_j|q_i)
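
As a quick numerical check of this formula (a sanity check of my own; the constants below are the standard SI/CODATA values, not taken from the earlier post’s FundaConstants.py): for an electron-proton pair separated by one Bohr radius, V comes out at about -27.2 eV:

import numpy as np

e = 1.602176634e-19         # C, elementary charge (exact, SI)
epsilon0 = 8.8541878128e-12 # F/m, vacuum permittivity (2018 CODATA)
a0 = 5.29177210903e-11      # m, Bohr radius (2018 CODATA)
eV = 1.602176634e-19        # J per eV

# V(q_i|q_j) with q_i = +e (the proton), q_A = q_j = -e (the electron),
# and r_{iA} = a0:
V = ( 1.0 / ( 4.0 * np.pi * epsilon0 ) ) * ( e * (-e) ) / a0
print( "V = %.3f eV" % ( V / eV ) ) # prints: V = -27.211 eV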

See the previous posts from this series for a certain reservation we have for calling them the potential energy fields (and not just internal energy fields). In effect, what we seem to have here is an interesting scenario:

When we have a pair of charges in the physical 3D space (say an infinite domain), then we have two singular fields existing simultaneously, as noted above. Moving the two charges from their definite positions “to” infinity makes the system devoid of all energy. When they are present at definite positions, their singular fields of V noted above imply an infinite amount of energy within the volume of the system. However, since the system-boundaries for a system of charged point-particles can be defined only at the point-locations where they are present, the work that can be extracted from the system is finite—even if the total energy content is infinite. In short, we have a situation in which the addition of two infinities results in a finite quantity.

Does this way of looking at things provide a clue to solving the problem of cancelling infinities in the re-normalization problem? If yes, and if no one has put forth a comparably clear view, please cite this work.


4. Neutrons:

Neutrons are massive objects that do not participate in electrostatic interactions.

From a very basic, ontological viewpoint, they could have presented very tricky situations to deal with.

For instance: When an EC Object (i.e., an electron or a proton) moves through the aether, there is no force over and above the one exerted by the Coulombic field on it. But EC Objects are massive particles. So, a tempting conclusion might be to say that the aether exerts no drag force at all on any massive object, and hence, there should be no drag force on the motion of a free neutron either.

I am not clear on such points. But I have certain reservations and apprehensions about it.

It is clear that the aforementioned tempting conclusion does not carry. It is known that the aether does not exert drag on the EC Objects. But an EC Object is different from a chargeless object like the neutron. Even a forced EC Object still has a field singularly anchored in its own position; it is just that in experiencing the forces exerted by the field, the component of its own singular field plays no part (due to the divergence theorem). But the neutron, being a chargeless object, has no singular field anchored in its position at all. It doesn’t have a field that is “silent” for its own motions. Since for a forced particle the forces are exerted by the aether in its vicinity, I am not clear if the neutron should behave the same way. Maybe we could associate a pair of equal and opposite (positive and negative) fields anchored in the neutron’s position (of arbitrary q_N strength, not elementary), so that it again is chargeless, but can be seen to be interacting with the aether. If so, then the neutron could be seen as a special kind of an EC Object—one which has two equal and opposite aetherial-fields associated with it. In that case, we can be consistent and say that the neutron will not experience a drag force from the aether for the same reason the electron or the proton does not. I am not clear if I should be adopting such a position. I have to think further about it.

So, overall, my choice is to ignore all such issues altogether, and regard the neutrons, in the non-relativistic QM, as being present only in the atomic nucleus at all times. The nucleus itself is regarded, abstractly, as a charged point-particle in its own right.

Thus, effectively, we come to regard the nuclear neutrons as just additions of constant masses to the total mass of the protons, and consider this extra-massive, positively charged composite as the point-particle of the nucleus.


5. In QM, there is an aetherial field for the kinetic energy:

As stated previously, in addition to the electrostatic fields (mathematically expressed as force-fields or as energy-fields), in QM, the aether also comes to carry a certain time-varying field. The energy associated with these fields is kinetic in nature. That is to say, there should be some motion within the aether which corresponds to this part of the total energy.

We will come to characterize these motions with the complex-valued \Psi(x,t) field. However, as the discussion below will clarify, the wavefunction is only a mathematically isolated attribute of the physically existing kinetic energy field.

We will see that the motion associated with the quantum mechanical kinetic energy does not result in the net displacement of a CV. (It may be regarded as the motion of time-varying strain-fields.)

In our ontology, the kinetic energy field (and hence the field that is the wavefunction) primarily “lives” in the physical 3D space.

However, when the same physics is seen from a higher-level, abstract, mathematical viewpoint, the same field may also be seen as “living” in an abstract 3ND configuration space. Adopting such an abstract view has its advantages in simplifying some of the mathematical manipulations at a more abstract level. However, make a note that doing so also risks losing the richness of the concept of the physical fields, and with it, the opportunity to tackle the unusual features of the quantum mechanical theory right.


6. Photon:

In our view, the photon is neither a spatially discrete particle nor even a condition that is permanently present in the aether.

A photon represents a specific kind of a transient condition in the aetherial quantum mechanical fields which comes to exist only for some finite interval of time.

In particular, it refers to the difference in the two field-conditions corresponding to a change in the energy eigenstates (of the same particle).

In the last sentence, we should have added: “of the same particle” without parentheses; however, doing so requires us to identify what exactly is a particle when the reference is squarely being made to field conditions. A proper discussion of photons cannot actually be undertaken until a good amount of physics preceding it is understood. So, we will develop the understanding of this “particle” only slowly.

For the time being, however, make a note of the fact that:

In our view, all photons always are “virtual” particles.

Photons are attributes of real conditions in the aether, and in this sense, they are not virtual. But they are not spatially discrete particles. They always refer to continuous changes in the field conditions with time. Since these changes are anchored into the positions of the positively charged protons in the atomic nuclei, and since the protons are point-particles, therefore, a photon also has at least one singularity in the electrostatic fields to which its definition refers. (I am still not clear whether we need just one singularity or at least two.) In short, the photon does have point-position(s) as its reference points. Its emission/absorption events cannot be specified without making reference to definite points. In this sense, it does have a particle character.

Finally, one more point about photons:

Not all transient changes in the fields refer to photons. The separation vectors between charges are always changing, and they are always therefore causing transient changes in the system wavefunction. But not all such changes result in a change of energy eigenstates. So, not all transient field changes in the aether are photons. Any view of QM that seeks to represent every change in a quantum system via an exchange of photons is deeply suspect, to say the least. Such a view is not justified on the basis of the inductive context or nature of the Schrodinger equation.

We will now develop the context required to identify the exact ontological nature of the quantum mechanical kinetic energy fields.


7. The form of Schrodinger’s equation points to an oscillatory phenomenon:

Schrodinger’s equation (SE) in 1D formulation reads:

i\,\hbar \dfrac{\partial \Psi(x,t)}{\partial t} =\ -\, \dfrac{\hbar^2}{2m}\dfrac{\partial^2\Psi(x,t)}{\partial x^2} + V(x,t)\Psi(x,t)

BTW, when we say SE, we always mean TDSE (time-dependent Schrodinger’s equation). When we want to refer specifically to the time-independent Schrodinger’s equation, we will call it by the short form TISE. In short, TISE is not SE!

Setting constants to unity, the SE shows this form:
i\,\dfrac{\partial \Psi(x,t)}{\partial t} =\ -\, \dfrac{\partial^2\Psi(x,t)}{\partial x^2} + V(x,t)\Psi(x,t).

Its form is partly comparable to the following two real-valued PDEs:

heat-diffusion equation with internal heat generation:
\dfrac{\partial T(x,t)}{\partial t} =\ \dfrac{\partial^2 T(x,t)}{\partial x^2} + \dot{Q}(x,t),

and the wave equation:
\dfrac{\partial^2 u(x,t)}{\partial t^2} =\ \dfrac{\partial^2 u(x,t)}{\partial x^2} + V(x,t)u(x,t),

Yet, the SE is different from both.

  • Unlike the diffusion equation, the SE has the i sticking out on the left-hand side, and a negative sign (think of it as (i)(i)) on the first term on the right hand-side. That makes the solution of the SE complex—literally. For quite a long time (years), I pursued the idea, well known to the Monte Carlo Quantum Chemistry community, that the SE is the diffusion equation, but in imaginary time. Turns out that this idea, while useful in simplifying simulation techniques for problems like determining the bonding energy of molecules, doesn’t really help throw much light on the ontology of QM. Indeed, it makes getting at the right ontology more difficult.
  • As to the wave equation, it too has only a partial similarity to SE. We mentioned the last time the main difference: In the wave PDE, the time differential is to the second order, whereas in the SE, it is to the first order.

The crucial thing to understand here is (and I got it from Lubos Motl’s blog or replies on StackExchange or so) that even if the time-differential is to the first-order, you still get solutions that oscillate in time—if the wave variable is regarded as being full-fledged complex-valued.

The important lesson to be drawn: The Schrodinger equation gives the maths of some kind of a vibratory/oscillatory system. The term “wavefunction” is not a misnomer. (Under the diffusion equation analogy, for some time, I had wondered if it shouldn’t be called “diffusionfunction”. That way of looking at it is wrong, misleading, etc.)
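
Here is a quick numerical check of this point. It is only my throwaway sketch: the constant omega below stands in for an arbitrary energy scale, and the time-stepping is just a simple midpoint scheme. With a complex-valued psi, the first-order-in-time equation indeed yields rotation, not decay:

import numpy as np

omega = 2.0                      # arbitrary constant, standing in for E/hbar
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]

# March i dpsi/dt = omega psi, i.e. dpsi/dt = -i omega psi, in time:
psi = np.empty_like(t, dtype=complex)
psi[0] = 1.0 + 0.0j              # a purely real-valued initial condition
for k in range(len(t) - 1):
    k1 = -1j * omega * psi[k]
    k2 = -1j * omega * (psi[k] + 0.5 * dt * k1)
    psi[k + 1] = psi[k] + dt * k2

# psi(t) tracks exp(-i omega t): constant magnitude, oscillating Re and Im parts
print(np.max(np.abs(psi - np.exp(-1j * omega * t))))   # small stepping error
print(np.abs(psi[-1]))                                  # stays ~ 1.0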

So, to understand the physics and ontology of the SE better, we need to understand vibrations/oscillations/waves better. I don’t have the time to do it here, so I refer you to David Morin’s online draft book on waves as your best free resource. Walter Fox Smith’s “Waves and Oscillations, a Prelude to QM” also seems to be a good book, though I haven’t gone through all its parts (but what exactly is his last name?). A slightly “harder” but excellent book, at the UG level, and free, comes from Howard Georgi. Mechanical engineers could equally well open their books on vibrations and FEM analysis of the same. For real quick notes, see Allan Bower’s UG course notes on this topic, as a part of his dynamics course at Brown University.


8. Ontology of the quantum mechanical fields:

8.1. Schrodinger’s equation has complex-valued fields of energies:

OK. To go back to Schrodinger’s equation:

i\,\hbar \dfrac{\partial \Psi(x,t)}{\partial t} =\ -\, \dfrac{\hbar^2}{2m} \dfrac{\partial^2\Psi(x,t)}{\partial x^2} + V(x,t)\Psi(x,t) = (\text{a real-valued constant}) \Psi(x,t).

As seen in the last post, the scheme of derivation of the SE makes it clear that these terms have come from: the total internal energy, the kinetic energy, and the potential energy, respectively. Informally, we may refer to them as such. However, notice that whereas V(x,t) by itself is a field, what appears in the SE is the term of V(x,t) multiplied by \Psi(x,t), which makes all the energies complex-valued. Further, since \Psi(x,t) is a field, all energies in the SE also are fields.

If you wish to have real-valued fields of energies, then you have no choice but to divide all the terms in the SE by \Psi(x,t). That’s what we indicated in the last post too. However, note that the complex-valued fields still cannot be got rid of; they still enter the calculations.

8.2. Potential energy fields only come from the elementary point-charges:

The V(x,t) field itself is the same as in the electrostatics:

V(x,t) = \dfrac{1}{2} \dfrac{1}{4\,\pi\,\epsilon_0} \sum\limits_{i=1}^{N}\sum\limits_{j=1;\, j\neq i}^{N} \dfrac{q_i\,q_j}{r_{ij}},
where r_{ij} is the separation between the i-th and the j-th charges, and |q_i| = |q_j| = e, with e being the fundamental electronic charge.
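
As a simple illustration of the above double sum, here is a minimal sketch in Python. The charge positions below are made up purely for illustration; only \epsilon_0 and e carry their SI values:

import numpy as np

eps0 = 8.8541878128e-12          # vacuum permittivity, in F/m
e = 1.602176634e-19              # fundamental electronic charge, in C

q = np.array([-e, -e, -e])       # say, three electrons
r = np.array([[0.0,     0.0,     0.0],
              [1.0e-10, 0.0,     0.0],
              [0.0,     2.0e-10, 0.0]])   # positions, in metres (made up)

V = 0.0
N = len(q)
for i in range(N):
    for j in range(N):
        if j == i:
            continue                          # no self-interaction
        r_ij = np.linalg.norm(r[i] - r[j])    # separation distance r_ij
        # the 1/2 compensates for counting each (i, j) pair twice
        V += 0.5 * q[i] * q[j] / (4.0 * np.pi * eps0 * r_ij)

print(V)    # total electrostatic potential energy, in joules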

In our QM ontology we postulate that the above equation is logically complete as far as the potential energy field of QM is concerned.

That is to say, in the basic ontological description of QM, we do not entertain any other sources of potentials (such as gravity or magnetism). Equally important, we also do not entertain arbitrarily specified values for potentials (such as the parabolic potential well of the quantum harmonic oscillator, or the well with the sharply vertical walls of the particle-in-a-box model). Arbitrary potentials are mere mathematical abstractions—approximate models—that help us gain insight into some aspects of the physical phenomena; they do not describe the quantum mechanical reality in full. Only the electrostatic potential that is singularly anchored into elementary charge positions, does.

At least in the basic non-relativistic quantum mechanics, there is no scope to accommodate magnetism. Gravity, being too weak, is also best neglected. Thus, the only potentials allowed are the singular electrostatic ones.

We shall revisit this issue of the potentials after we solve the measurement problem. From our viewpoint, the mainstream QM’s use of arbitrary potentials of arbitrary sources is fine too, as the linear formulation of the mainstream QM turns out to be a limiting case of our nonlinear formulation.

8.3. What physically exists is only the complex-valued internal energy field:

Notice that according to our QM ontology, what physically exists is only the single, complex-valued field of the total internal energy.

Its isolation into different fields, such as the potential energy field, the kinetic energy field, the momentum field, or the wavefunction field, yields only mathematically isolated quantities. These fields do have certain direct physical referents, but only as aspects or attributes of the total internal energy field. They do have a physical existence, but their existence is not independent of the total internal energy field.

Finally, note that the total internal energy field itself exists only as a field condition in the aether; it is an attribute of the aether; it cannot exist without the aether.


9. Implications of the complex-valued nature of the internal energy field:

9.1. System-level attributes to spatial fields—real- vs. complex-valued functions:

Consider an isolated system—say the physical universe. In our notation, E denotes the aspatial global attribute of its internal energy. Think of a perfectly isolated box for a system. Then E is like a label identifying a certain number of joules slapped on to it. It has no spatial existence inside the box—nor outside it. It’s just a device of book-keeping.

To convert E into a spatially identifiable object, we multiply it by some field, say F(x,t). Then, E F(x,t) becomes a field.

If F(x,t) is real-valued, then \int\limits_{\Omega_\text{small CV}} \text{d}\Omega_\text{small CV}\, E\,F(x,t) gives you the amount of E present in a small CV (which is just a part of the system, not the whole). To fix ideas, suppose you have a stereo boom-box with two detachable speakers. Then, the volume of the overall boombox is a sum of the volumes of each of its three parts. The volume is a real-valued number, and so, the total volume is the simple sum of its parts V = V_1 + V_2 + V_3. Ditto for the weights of these parts. Ditto, for the energy in a volumetric part of a system if the energy forms a real-valued field.

Now, when the field is complex-valued, say denoted as \tilde{F}(x,t), then the volume integral still applies. \int\limits_{\Omega_\text{small CV}} \text{d}\Omega_\text{small CV}\, E\,\tilde{F}(x,t) still gives you the amount of the complex valued quantity E\tilde{F}(x,t) present in the CV. But the fact that \tilde{F} is complex-valued means that there actually are two fields of E inside that small CV. Expressing \tilde{F}(x,t) = a(x,t) + i b(x,t), there are two real-valued fields, a(x,t) and b(x,t). So, the energy inside the small CV also has two energy components: E_R = E a(x,t) and E_I = E b(x,t), which we call “real” and “imaginary”. Actually, physically, they both are real-valued. However, the magnitude of their net effect |E \tilde{F}(x,t)| \neq E_R + E_I. Instead, it follows the Pythagorean theorem all the way to the positive sign: |E \tilde{F}| = |\sqrt{E_R^2 + E_I^2}|. (Aren’t you glad you learnt that theorem!)

If you take it in a naive-minded way, then E can be greater or smaller than E_R + E_I, and so things won’t sum up to |E \tilde{F}|—conservation seems to fail.

But in fact, energy conservation does hold. It’s just that it follows a further detailed law of combining the two field components within a given CV (or the entire system).
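
A minimal numerical illustration of this point (the numbers below are arbitrary, chosen only to make the arithmetic easy to eye-ball):

E = 5.0                # the aspatial system energy (arbitrary units)
F = 0.6 + 0.8j         # value of the complex-valued field in a small CV

E_R = E * F.real       # the "real" component: 3.0
E_I = E * F.imag       # the "imaginary" component: 4.0

print(E_R + E_I)                # 7.0 -- the naive linear sum
print(abs(E * F))               # 5.0 -- the Pythagorean magnitude
print((E_R**2 + E_I**2)**0.5)   # same thing: sqrt(E_R^2 + E_I^2)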

In QM, the wavefunction \Psi(x,t) plays the role of \tilde{F} given above. It brings the aspatial energy E from its Platonic mathematical “heaven” and, further, being a field itself, also distributes it in space—thereby giving a complex-valued field of E.

We do not know the physical mechanism which manipulates the real and imaginary parts \Psi_R(x,t) and \Psi_I(x,t) so that they come to obey the Pythagorean theorem. But we know that unless we have \Psi(x,t) as complex-valued, the book-keeping of the system’s energy does not come out right—in QM, that is.

Since the product E_{\text{sys}}\Psi(x,t) can come up any time, and since what ontologically exists is a single quantity, not a product of two, it’s better to have a different notation for it. Accordingly, define:

\tilde{E}(x,t) = E_{\text{sys}}\,\Psi(x,t)

9.2. In QM, the conserved quantity itself is complex-valued:

Note an important difference between pre-quantum mechanics and QM:

The energy conservation principle for the classical (pre-quantum) mechanics says that E_{\text{sys}} = \int\limits_{\Omega} \text{d}\Omega E(x,t) is conserved.
The energy conservation principle for quantum mechanics is that \tilde{E}_{\text{sys}} = \int\limits_{\Omega} \text{d}\Omega \tilde{E}(x,t) is conserved.

No one says it. But it is there, right in the context (the scheme of derivation) of the Schrodinger equation!

For the cyclic change, we started from the classical conservation statement:
\oint \text{d}E_{\text{sys}} = 0 = \oint \text{d}T_{\text{sys}} + \oint \text{d}\Pi_{\text{sys}}

Or, in differential terms (for an arbitrary change, not cyclic):
\text{d}E_{\text{sys}} = 0 = \text{d}T_{\text{sys}} + \text{d}\Pi_{\text{sys}}.

Or, integrating over the end-points of an arbitrary process,
E_{\text{sys}} = \text{a constant (real-valued) number}.

We then multiplied both sides by \Psi(x,t) (remember the quizzical-looking multiplication from the last post?), and only then got to Schrodinger’s equation. In effect, we did:
\text{d}\left[E_{\text{sys}}\Psi(x,t)\right] = 0 = \text{d}\left[T_{\text{sys}}\Psi(x,t)\right] + \text{d}\left[\Pi_{\text{sys}}\Psi(x,t)\right].

That’s nothing but saying, using the notation introduced just above, that:
\text{d}\tilde{E}(x,t) = 0 = \text{d}\tilde{T}(x,t) + \text{d}\tilde{\Pi}(x,t).

Or, integrating over the end-points of an arbitrary process and over the system volume,
\tilde{E}_{\text{sys}} = \text{a constant complex number}.

So, what’s conserved is not E but \tilde{E}.

The aspatial, global, thermodynamic number for the total internal energy is the complex number \tilde{E}_{\text{sys}} in QM. QM by postulation comes with two coupled real-valued fields together obeying the algebra of complex numbers.


10. Consequences of conservation of complex-valued energy of the universe:

10.1. There is a real-valued measure of quantum-mechanical energy which is conserved too:

In QM, is there a real-valued number that gets conserved too? If not by postulate, then at least by consequence?

Answer: Well, yes, there is. But it loses the richness of the physics of complex-numbers.

To obtain the conserved real-valued number, we follow the same procedure as for “converting” a complex number to a real number, i.e., extracting a real-valued and essential feature of a complex number. We take its absolute magnitude. If \tilde{E}_{\text{sys}} is a constant complex number, then obviously, |\tilde{E}_{\text{sys}}| is a constant number too. Accordingly,

|\tilde{E}_{\text{sys}}| = |\sqrt{\tilde{E}_{\text{sys}}\,\tilde{E}_{\text{sys}}^{*}}| = \text{another, real-valued, constant}.

But obviously, a statement of this kind of a constancy has lost all the richness of QM.

10.2. The normalization condition has its basis in the energy conservation:

Another implication:

Since |\tilde{E}_{\text{sys}}| itself is conserved, so is |\tilde{E}_{\text{sys}}|^2 too.

[An aside to experts: I think we thus have solved the curious problem of the arbitrary phase factors in quantum mechanics, too. Let me know if you disagree.]

It then follows, by definitions of \tilde{E}_{\text{sys}}, \tilde{E} and \Psi(x,t), that

\int\limits_{\Omega}\text{d}\Omega\,\Psi(x,t)\Psi^{*}(x,t) = 1

Thus, the square-normalization condition follows from the energy conservation principle.
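
If you want to check the square-normalization numerically, here is a minimal sketch for a PIB eigenfunction, using the same simps routine which the main script imports. The instantaneous phase factor below is arbitrary; the result is independent of it (cf. the aside above):

import numpy as np
from scipy.integrate import simps

L = 3.0e-2                  # a 3 cm long box
n = 3                       # the second excited state
x = np.linspace(0.0, L, 1001)

chi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)   # normalized eigenfunction
tau = np.exp(-1j * 2.7)     # an arbitrary instantaneous phase factor
Psi = chi * tau

norm = simps((Psi * np.conj(Psi)).real, x)
print(norm)                 # comes out ~ 1.0, independent of the phase factor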

We believe this view places the normalization condition on firm grounds.

The mainstream QM (at least as presented in textbooks) makes reference to (i) Born’s postulate for the probability of finding a particle in an elemental volume, and (ii) conservation of mass for the system (“the electron has to be somewhere in the system”).

In our view, the normalization condition arises because of conservation of energy alone. Conservation of mass is a separate principle, in our opinion. It applies to the attribute of mass of the EC Object of elementary charges. But not to the aetherial field of \Psi. Ontologically, the massive EC Objects and the aether are different entities. Finally, the probabilistic notions of particle position have no relevance in deriving the normalization condition. You don’t have to insert the measurement theory before imposing the normalization condition. Indeed, the measurement postulate comes way later.

Notice that the total complex-valued number for the energy of the universe remains constant. However, the time-dependence of \Psi(x,t) implies that the aether, and hence the universe, forever remains in a state of oscillatory motions. (In the nonlinear theory, the system remains oscillatory, but the state evolutions are not periodic. Mark the difference between these two ideas.)

10.3. The wavefunction of the universe is always in energy eigenstates.

Another interesting consequence of the energy conservation principle is this:

Consider these two conclusions: (i) The universe is an isolated system; hence, its energy is conserved. (ii) There is only one aether object in the universe; hence, there is only one universal wavefunction.

A direct consequence therefore is this:

For an isolated system, the system wavefunction always remains in energy eigenstates. Hence, every state assumed by the universal wavefunction is an energy eigenstate.

Take a pause to note a few peculiarities about the preceding statement.

No, this statement does not at all reinforce misconceptions (see Dan Styer’s paper, here: [^] [Preprint PDF ^]).

The statement refers to isolated systems, including the universe. It does not refer to closed or open systems. When matter and/or energy can cross system boundaries, a mainstream-supposed “wavefunction” of the system itself may not remain in an energy eigenstate. Yet, the universe (system plus environment) always remains in some or the other energy eigenstate.

However, the fact that the universal wavefunction is always in an energy eigenstate does not mean that the universe always remains in a stationary state. Notice that the V(x,t) itself is time-dependent. So, the time-changes in it compel the \Psi to change in time too. (In the language of mainstream QM: The Hamiltonian operator is time-dependent, and yet, at any instant, the state of the universe must be an energy eigenstate.)

In our view, due to nonlinearity, V(x,t) also is an indirect function of the instantaneous \Psi(x,t). Will cover the nonlinearity and the measurement problem the next time. (Yes, I am extending this series by one post.)

Of course, at any instant, the integral over the domain of the algebraic sum of the kinetic and the potential energy fields is always going to come to the single number which is: the aspatial attribute of the total internal energy number for the isolated system.

10.4. The wavefunction \Psi(x,t) is ontic, but only indirectly so—it’s an attribute of the energy field, and hence of the aether, which is ontic:

So, is the wavefunction ontic or epistemic? It is ontic.

An attribute does not have a physical existence independent of, or as apart from, the object whose attribute it is. However, this does not mean that an attribute does not have any physical existence at all. Saying so would be a ridiculously simple error. Objects exist, and they exist as identities. The identity of an object refers to all its attributes—known and unknown. So, to say that an object exists is also to say that all its attributes exist (with all their metaphysically existing sizes too). It is true that blueness does not exist without there being a blue object. But if a blue object exists, obviously, its blueness exists in the reality out there too—it exists with all the blue objects. So, “things” such as blueness are part of existence. Accordingly, the wavefunction is ontic.

Yet, the isolation (i.e. identification) of the wavefunction as an attribute of the aether does require a complex chain of reasoning. Ummm… Yes, literally complex too, because it does involve the complex-valued SE.

The aether is a single object. There are no two or more aethers in the universe—or zero. Hence, there is only a single complex-valued field of energy, that of the total internal energy. For this reason, there is only one wavefunction field in the universe—regardless of the number of particles there might be in it. However, the system wavefunction can always be mathematically decomposed into certain components particular to each particle. We will revisit this point when we cover multi-particle quantum systems.

10.5. The wavefunction \Psi(x,t) itself is dimensionless:

In our view, the wavefunction, i.e., \Psi(x,t) itself is dimensionless. We base this conclusion on the fact that while deriving the Schrodinger equation, where \Psi(x,t) gets introduced, each term of the equation is regarded as an energy term. Since each term has \Psi(x,t) also appearing in it (and you cannot get rid of the complex nature of the Schrodinger equation merely by dividing all terms by it), obviously, the multiplying factor of \Psi(x,t) must be taken as being dimensionless. That’s how we in fact have proceeded.

The mainstream view is to assign the dimensions of \dfrac{1}{\sqrt{\text{(length)}^d}}, where d is the dimensionality of the embedding space. This interpretation is based on Born’s rule and conservation of matter; for instance, see here [^].

However, as explained in the sub-section 10.2., we arrive at the normalization condition from the energy conservation principle, and not in reference to Born’s postulate at all.

All in all, \Psi(x,t) is dimensionless. It appears in theory only for mathematical convenience. However, once defined, it can be seen as an attribute (aspect) of the complex-valued internal energy field (and its two components, viz. the complex-valued kinetic- and potential-energy fields). In this sense, it is ontic—as explained in the preceding sub-section.


11. Visualizing the wavefunction and the single particle in the PIB model:

11.1. Introductory remarks:

What we will be doing in this section is not ontology, strictly speaking, but only physics and visualization. PIB stands for: Particle-In-a-Box. Study this model from any textbook and only then read further.

The PIB model is unrealistic, but pedagogically useful. It is unrealistic because it uses a potential energy distribution that is not singularly anchored into point-particle positions. So, the potential energy distribution must be seen as a mathematically convenient abstraction. PIB is not real QM, in short. It’s the QM of the moron, in a way—the electron has no “potential” inside the well.

11.2. The potential energy function used in the model:

The model says that there is just one particle in a finite interval of space, and its V(x,t) always stays the same at all times. So, it uses V(x) in place of V(x,t).

The V(x) is defined to be zero everywhere in the domain except at the boundary-points, where the particle is supposed to suddenly acquire an infinite potential energy. Yes, the infinitely tall walls are inside the system, not outside it. The potential energy field is the potential energy of a point-particle, and unless it were to experience an infinity of potential energy while staying within the finite control volume of the system, no non-trivial solution would at all be possible. (The trivial solution for the SE when V(x) = 0 is that \Psi(x,t) = 0—whether the domain is finite or infinite.) In short, the “side-walls” are included in the shipped package.

If the particle is imagined to be an electron, then why does its singular field not come into picture? Simple: There is only one electron, and a given EC Object (an elementary point-charge) never comes to experience its own field. Thus, the PIB model is unrealistic on another ground: In reality, force-fields due to charges always come in pairs. However, since we consider only one particle in PIB, there are no singular force-fields anchored into a moving particle’s position, in it, at all.

Yes, forces do act on the particle, but only at the side-walls. At the boundary points, it is a forced particle. Everywhere else, it is a free particle. Peculiar.

The domain of the system remains fixed at all times. So, the potential walls remain fixed in space—before, during, and after the particle collides with them.

The impulse exerted on the particle at the time of collision at the boundary is theoretically infinite. But it acts only over an infinitesimally small patch of space (which is represented as the point of the boundary). Hence, it cannot impart an infinity of velocity or displacement. (An infinitely large force would have to act over a finite interval of space and time before it could possibly result in an infinitely large velocity or displacement.)

OK. Enough about analysis in terms of forces. To arrive at the particular solution of this problem using analytical methods (as with most any other advanced problem), energy-analytical methods are superior. So, we go back to the energy-based analysis, and Schrodinger’s equation.

11.3. TDSE as a continuous sequence of TISE’s:

Note that you can always apply the product ansatz to \Psi(x,t), and thereby split it into two functions:

\Psi(x,t) = \chi(x)\tau(t),

where \chi(x) is the space-dependent part and \tau(t) is the time-dependent part.

No one tells you, but it is true that:

Even when the Hamiltonian operator is time-dependent, you can still use the product ansatz separately at every instant.

It is just that doing so is not very useful in analytical solution procedures, because both the \chi(x) and \tau(t) themselves change in time. Therefore, you cannot take a single time-dependent function \tau(t) as applying at all times, and thereby simplify the differential equation. You would have to progress the solution in time—somehow—and then again apply the product ansatz to obtain new functions of \chi(x) and \tau(t) which would be valid only for the next instant in the continuous progression of such changes.

So, analytical solution procedures do not at all benefit from the product ansatz when the Hamiltonian operator is time-dependent.

However, when you use numerical approaches, you can always progress the solution in time using suitable methods, and then, whatever \Psi(x,t)\big|_{t_n} you get for the current time t_n, you can regard it as the solution of a TISE valid for that instant alone.

In other words, the TDSE is seen as being a continuous progression of different instantaneous TISE’s. Seen this way, each \Psi(x,t)\big|_{t_n} can be viewed as representing an energy eigenstate at every instant.

Not just that, but since there is no heat in QM, the adiabatic approximation always applies. So, for an isolated system or the physical universe:

For an isolated system or the physical universe, the time-dependent part \tau(t) of \Psi(x,t) may not be the same function at all times. Yet, the system wavefunction always progresses through a continuous progression of different \chi(x)‘s and \tau(t)‘s.

We saw in the sub-section 10.3. that the universal wavefunction must always be in energy eigenstates. We had reached that conclusion in reference to energy conservation principle and the uniqueness of the aether in the universe. Now, in this sub-section, we saw a more detailed meaning of it.

11.4. PIB anyway uses time-independent potential energy function, and hence, time-independent Hamiltonian:

When V(x) is time-independent, the time-dependent part \tau(t) stays the same for all times. Using this fact, the SE reduces to one and the same pair of \chi(x) and \tau(t). So, the TISE in this case is very simple to solve. See your textbooks on how to solve the TISE for the PIB problem.
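
For concreteness, here is a stripped-down sketch of the direct matrix method for the PIB TISE. It is written in the spirit of the main script given at the top of this post, but it is not that script: constants are set to unity here (\hbar = m = 1), and the grid size is arbitrary:

import numpy as np

N = 500                          # number of interior grid points (arbitrary)
L = 1.0                          # box length, in hbar = m = 1 units
x = np.linspace(0.0, L, N + 2)   # grid, including the two boundary points
dx = x[1] - x[0]

# Discretize -(1/2) d^2/dx^2 on the interior points. Psi = 0 at the
# walls is imposed simply by leaving the boundary points out.
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, chi = np.linalg.eigh(H)       # eigenvalues come out in ascending order

# Compare with the exact PIB energies E_n = (n pi / L)^2 / 2:
for n in (1, 2, 3):
    print(E[n - 1], 0.5 * (n * np.pi / L)**2)

# Time evolution of the n = 3 state: Psi(x,t) = chi_3(x) exp(-i E_3 t).
# (eigh returns l2-normalized columns; divide by sqrt(dx) if you want
# the integral of |chi|^2 dx to come out as 1.)
t = 0.7
Psi_t = chi[:, 2] * np.exp(-1j * E[2] * t)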

However, make sure to

work through any solution using only the full-fledged complex variables.

The solutions given in most text-books will prove insufficient for our purposes. For instance, if \tau(t) is the time-dependent part of the solution of TISE, then don’t substitute \tau(t) = \cos \omega t in place of the full-fledged \tau = e^{-i\omega t}.

Let the \tau(t) acquire imaginary parts too, as it evolves in time.

The reason for this insistence on the full complex numbers will soon become apparent.

11.5. Use the full-fledged 3D physical space:

To visualize this solution, realize that as in EM so also in QM, even if the problem is advertised as being 1D, it still makes sense to see this one dimension as an aspect of the actually existing 3D physical space. (In EM, you need to go “up” to 3D because the curl demands it. In QM, the reason will become apparent if you do the homework given below.)

Accordingly, we imagine two infinitely large parallel planes for the system boundaries, and the aether filling the space in between them. (Draw a sketch. I won’t. I would have, in a real class-room, but don’t have the enthusiasm to draw pics while writing mere blog-posts. And, whatever happened to your interest in visualization rather than in “math”?) The planes remain fixed in space.

Now, pick up a line passing normally through the two parallel planes. This is our x-axis.

11.6. The aetherial momentum field:

Next, consider the aetherial momentum field, defined by:

\vec{p}(x,t) =\ i\,\hbar\,\nabla\Psi(x,t).

This definition for the complex-valued momentum field is suggested by the form of the complex-valued quantum mechanical kinetic energy field. It has been derived in analogy to the classical expression T = \dfrac{p^2}{2m}.

In our PIB model, this field exists not just on the chosen line of the x-axis, but also everywhere in the 3D space. It’s just that it has no variation along the y– and z-axes.

11.7. Gaining physical clarity (“intuition”) with analysis in terms of forces, first:

In the PIB model, when the massive point-particle of the electron is at some point \vec{r}_j, then it experiences a zero potential force (except at the boundary points).

So, electrostatically speaking, the electron (i.e. the singularity at the EC Object’s position) should not move away from the point where it was placed as part of IC/BCs of the problem. However, the existence of the momentum field implies that it does move.

To see how this happens, consider the fact that \Psi(x,t) involves not just the space-dependent part \chi(x), but also the time-dependent part \tau(t). So,

The total wavefunction \Psi(\vec{r}_j, t) is time-dependent—it continuously changes in time. Even in stationary problems.

Naturally, there should be an aetherial force-field associated with the aetherial momentum field (i.e. the aetherial kinetic energy field) too. It is given by:

\vec{F}_{T}(x,t) = \dfrac{\partial}{\partial t} \vec{p}_{T}(x,t) = \dfrac{\partial}{\partial t} \left[ i\,\hbar\,\nabla\Psi(x,t) \right],

where the subscript T denotes the fact that these quantities refer to their conceptual origins in the kinetic energy field. These _T quantities are over and above those due to the electrostatic force-fields. So, if V were not to be zero in our model, then there would also be a force-field due to the electrostatic interactions, which we might denote as \vec{F}_{V}, where the subscript _V denotes the origin in the potentials.

Anyway, here V(x) = 0 at all internal points, and so, only the quantity of force given by \vec{F}_{T}(\vec{r}_j,t) would act on our particle when it strays at the location \vec{r}_j. Naturally, it would get whacked! (Feel good?)

The instantaneous local acceleration for the elemental CV of the aether around the point \vec{r}_j is given by \vec{a}_{T}(\vec{r}_j,t) = \dfrac{1}{m} \dfrac{\partial \vec{p}_{T}(\vec{r}_j,t)}{\partial t}.

This acceleration should imply a velocity too. It’s easy to see that the velocity so implied is nothing but

\vec{v}_{T}(\vec{r}_j,t) = \dfrac{1}{m} \vec{p}_{T}(\vec{r}_j,t).

Yes, we went through a “circle,” because we basically had defined the force on the basis of momentum, and we had given the more basic definition of momentum itself on the basis of the kinetic energy fields.
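
To make the above definitions concrete, here is a minimal sketch that computes the momentum, velocity and force fields for a PIB eigenstate by finite differences. It follows the +i\hbar\nabla\Psi convention used in this post, with \hbar = m = 1; the instant t and the state n below are arbitrary:

import numpy as np

# hbar = m = 1 units; PIB of length L, eigenstate n = 3, instant t arbitrary
L, n, m = 1.0, 3, 1.0
x = np.linspace(0.0, L, 1001)
dx = x[1] - x[0]
E_n = 0.5 * (n * np.pi / L)**2

def Psi(t):
    # total wavefunction chi(x) tau(t) for the chosen eigenstate
    chi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
    return chi * np.exp(-1j * E_n * t)

t, dt = 0.3, 1.0e-6
p_T = 1j * np.gradient(Psi(t), dx)      # momentum field: i hbar dPsi/dx
v_T = p_T / m                           # velocity field: p/m
F_T = (1j * np.gradient(Psi(t + dt), dx) - p_T) / dt   # force field: dp/dt

print(p_T[500], v_T[500], F_T[500])     # all three are complex-valued fields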

11.8. Representing complex-valued fields as spatial entities is logically consistent with everything we know:

Notice that all the fields we considered in the force-based analysis: the momentum field, the force-field, the acceleration field, and the velocity field are complex-valued. This is where the 3D-ness of our PIB model comes handy.

Think of any arbitrary yz-planes in the domain as representing the mathematical Argand-plane. Then, the \Psi(x,t) field at an arbitrary point \vec{r}_j would be a phasor of constant length, but rotating in the same yz-plane at a constant angular velocity, given by the time-dependent part \tau(t).

Homework: Write a Python simulation to show an animation of a few representative phasors for a few points in the domain, following the above convention.
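
In case you want a starting point for this homework, here is a bare-bones sketch. It is unpolished, and it does not pretend to meet the full requirements of the homework; the parameters are arbitrary:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

L, n = 1.0, 3
omega = 0.5 * (n * np.pi / L)**2              # E_n, with hbar = m = 1
xs = np.linspace(0.1, 0.9, 5) * L             # a few representative points
chis = np.sqrt(2.0 / L) * np.sin(n * np.pi * xs / L)

fig, axes = plt.subplots(1, len(xs), figsize=(12, 3))
lines = []
for ax, chi in zip(axes, chis):
    # each subplot stands for the "yz" (Argand) plane at one point x_k
    ax.set_xlim(-1.5, 1.5)
    ax.set_ylim(-1.5, 1.5)
    ax.set_aspect('equal')
    line, = ax.plot([0, chi], [0, 0], 'o-')   # the phasor at t = 0
    lines.append(line)

def update(frame):
    t = 0.002 * frame
    for line, chi in zip(lines, chis):
        z = chi * np.exp(-1j * omega * t)     # rotating, constant length
        line.set_data([0, z.real], [0, z.imag])
    return lines

anim = FuncAnimation(fig, update, frames=400, interval=30, blit=True)
plt.show()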

11.9. Time evolution, and the spatial directions of the \Psi(x,t)-based vector fields:

Consider the changes in the \Psi(x,t) field, distributed in the physical 3D space.

Consider that as \tau(t) evolves in time, even if the IC had only a real-valued function like \cos t specified for it, then, considering the full-fledged complex-valued nature of \tau(t), it would soon enough (with the passage of an infinitesimal amount of time) acquire a so-called “imaginary” component.

Following our idea of representing the real- and imaginary-components in the y– and z-axes, the \Psi(x,t) field no longer remains confined to a variation along the x-axis alone. It also has variations along the plane normal to the x-axis.

Accordingly, the unit vectors for the grad operator, and hence for all the vector quantities (of momentum, velocity, force and acceleration) also acquire a definite orientation in the physical 3D space—without causing any discomfort to the “math” of the mainstream quantum mechanics.

Homework: Consider the case when \Psi(x,t) varies along all three spatial axes. An easy example would be that of the hydrogen atom wavefunction. Verify that the spatial representation of the vector fields (momentum, velocity, force or acceleration) proposed by us causes no harm to the “math” of the mainstream quantum mechanics.

If doing simulations, you can integrate in time (using a suitable time-stepping technique), and come to calculate the instantaneous displacements of the particle, too. Exercise left for the reader.

Homework: Perform both analytical and numerical integration for the PIB model. Verify that your simulation is correct.

Homework: Build an animation for the motion of the point-particle of the EC Object, together with the time-variations of all the complex-valued fields: \Psi(x,t), and all the complex-valued vector fields derived from it.

11.10. Too much of homework?

OK. I’ve been assigning so many pieces for the homework today. Have I completed any one of them for myself? Well, actually not. But read on, anyway.

The locus of all possible particle-positions would converge to a point only at the boundary points (because \Psi(x,t) = 0 there). At all the internal points in the domain, the particle-position should be away from the x-axis.

That’s my anticipation, but I have not checked it. In fact, I have not built even a single numerical simulation of the sort mentioned here.

So, take this chance to prove me wrong!

Please do the homework and let me know if I am going wrong. Thanks in advance. (I have to finish this series first, somehow!)


12. What the PIB model tells about the wave-particle duality:

What happened to the world-famous wave-particle duality? If you build the animations, you would know!

There is a point-particle of the electron (which we regard as the point of the singularity in the \vec{\mathcal{F}} field), and there is an actual, 3D field of the internal energy—and hence of \Psi(x,t). And, assuming our hypothesis of representing the phasors of complex numbers spatially, there are actual 3D representations of all the complex-valued fields—including the vector fields like displacement.

The particle motion is governed by both the potential energy-forces and the kinetic energy-forces. That is, the aetherial wavefunction “guides” etc. the particle. In our view, the kinetic energy field too forces the particle.

“Ah, smart!,” you might object. “And what happened to the Born rule? If the wavefunction is a field, then there is a probability for finding the particle anywhere—not just at the position where it is, as predicted in this model. So, your model is obviously dumb!! It’s not quantum mechanics at all!!!”

Hmmm… We have not solved the measurement problem yet, have we?

We will need to cover the many-particle QM first, and then go to the nonlinearity implied by the kinetic energy field-forces, and only then would we be able to present our solution to the measurement problem. Since I got tired of typing (this post is already ~9,500 words), I will cover it in some other post. I will also try to touch on entanglement, because it would come in the flow of the coverage.

But in the meanwhile, try to play with something.

Homework: “Invert” the time-displacement function/relationship you obtain for the PIB model, and calculate the time spent by the particle in each infinitesimally small CV of the 3D domain, during a complete round-trip across the domain. Find its x-component. See if you can relate the motion, in any way, to the probability rule given by Born (i.e., try to anticipate our next development).

Do that. This way, you will stay prepared to spot any mistakes I may have made in this post or the last one, and any further mistakes I might make in the next.

Really. I could easily have made a mistake or two. … These matters still are quite new to me, and I really haven’t worked out the maths of everything ahead of writing these posts. That’s why I say so.


13. A preview of the things to come:

I had planned to finish this series in this post. In a sense, it is over.

The most crucial ontological aspects have already been given. Starting from the comprehensive list of the QM objects, we also saw that the quantum mechanical aetherial fields are all complex-valued; that there is an additional kinetic energy field too, not just potential; and also saw our new ideas concerning how to visualize the complex-valued fields by regarding the Argand plane as a mathematical abstraction of a real physical plane in 3D. We also saw how these QM ontological objects come together in a simple but fairly well illustrative problem of the PIB. We even touched on the wave-particle duality.

So, as far as ontology is concerned, even the QM ontology is now essentially over. There might be important repercussions of the ontological points we discussed here (and, also before, in this series). But as far as I can see, these should turn out to be mostly consequences, not any new fundamental points.

Of course, a lot of physics issues still remain to be clarified. I would like to address them too.

So, while I am at it, I would also like to say something about the following topics: (i) Multi-particle quantum systems. (ii) Issue of the 3D vs. 3ND nature of the wavefunction field. (iii) Physics of entanglement. (iv) Measurement problem.

All these topics use the same ontology as used here. But saying something about them would, I hope, help understand it better. Applications always serve to understand the exact scope and the nuances of a theory. In their absence, a theory, even if well specified, still runs the risk of being misunderstood.

That’s why I would like to pick up the above four topics.

No promises, but I will try to write an “extra” post in this series, and finish off everything needed to understand the points touched upon in the Outline document (which I had uploaded at iMechanica in February this year, see here [^]). Unlike until now, this next post would be mostly geared towards QM experts, and so, it would progress rapidly—even unevenly or in a seeming “broken” manner. (Experts would always be free to get in touch with me; none has, in the 8+ months since the uploading of the Outline document at iMechanica.)

I would like it if this planned post (on the four physics topics from QM) forms the next post on this blog, but then again, as I said, no promises. There might be an interruption with other topics in the meanwhile (though I would try to keep them at bay). Plus, I am plain tired and need a break too. So, no promises regarding the time-frame of when it might come.

OK.

So, do the homework, and think about the whole thing. Also, brush up on the topic of coupled oscillations, say from David Morin/Walter Fox Smith/Howard Georgi, or even as covered in the FEM modeling of idealized spring-mass systems. Do that, so that you are ready for the next post in this series—whenever it comes.

In the meanwhile, sure feel free to drop in a comment or email if you find that I am going wrong somewhere—especially in the maths of it or its implications. Thanks in advance.

Take care, and bye for now.


A song I like:

(Marathi) “aalee kuThoonashee kanee taaLa mrudungaachi dhoona”
Music and Singer: Vasant Ajgaonkar
Lyrics: Sopandev Chaudhari

 


History:
— First published: 2019.11.05 17:19 IST.
— Added the sub-section 10.5. and the songs section. Corrected LaTeX typos, the same day at 20:31 IST.
— Expanded the section 11. considerably, and also added sub-section titles to it. Revised also the sections 12. and 13. Overall, a further addition of approx. 1,500 words. Also corrected typos. Now, unless there is an acute need even for typo-corrections (i.e. if something goes blatantly in an opposite direction than the meaning I had in mind), I would leave this post in the shape in which it is. 2019.11.06 11:06 IST.