Python scripts for simulating QM, part 2: Vectorized code for the H atom in a 1D/2D/3D box. [Also, un-lockdown and covid in India.]

A Special note for the Potential Employers from the Data Science field:

Recently, in April 2020, I achieved a World Rank # 5 on the MNIST problem. The initial announcement can be found here [^], and a further status update, here [^].

All my data science-related posts can always be found here [^].


1. The first cut for the H atom in a 3D box:

The last time [^], I spoke of an enjoyable activity, namely, how to make the tea (and also how to have it).

Talking of other, equally enjoyable things, I have completed the Python code for simulating the H atom in a 3D box.

In the first cut for the 3D code (as also in the previous code in this series [^]), I used NumPy’s dense matrices, and the Python “for” loops. Running this preliminary code, I obtained the following colourful diagrams, and tweeted them:


H atom in a 3D box of 1 angstrom sides. Ground state (i.e., 1s, eigenvalue index = 0). All contours taken together show a single stationary state. Contour surfaces plotted with wiremesh. Plotted with Mayavi’s mlab.contour3d().

 


H atom in a 3D box of 1 angstrom sides. A ‘p’ state (eigenvalue index = 2). All contours taken together show a single stationary state. Contour surfaces with the Gouraud interpolation. Plotted with Mayavi’s mlab.contour3d().

 


H atom in a 3D box of 1 angstrom sides. A ‘p’ state (eigenvalue index = 2). All contours taken together show a single stationary state. Contour surfaces with wiremesh. Plotted with Mayavi’s mlab.contour3d().

 


H atom in a 3D box of 1 angstrom sides. Another ‘p’ state (eigenvalue index = 3). All contours taken together show a single stationary state. Contour surfaces with the Gouraud interpolation. Plotted with Mayavi’s mlab.contour3d().

 

OK, as far as many (most?) of you are concerned, the enjoyable part of this post is over. So, go read something else on the ‘net.


Coming back to my enjoyment…

2. Sparse matrices. Vectorization of code:

After getting to the above plots with dense matrices and Python “for” loops, I then completely rewrote the whole code using SciPy’s sparse matrices, and put vectorized code in place of the Python “for” loops. (As a matter of fact, in the process of rewriting, I seem to have deleted the plots too. So, today, I took the above plots from my Twitter account!)

2.1 My opinions about vectorizing code, other programmers, interviews, etc:

Vectorization was not really necessary in this problem (an eigenvalue problem), because even if you incrementally build the FD-discretized Hamiltonian matrix, doing so takes less than 1 percent of the total execution time; 99% of the execution time is spent in the SciPy library calls.
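To make that point concrete, here is a minimal, illustrative sketch (not the code of this post; the matrix size and entries are assumptions, for illustration only) of the same 1D FD matrix assembled once with a plain “for” loop and once with vectorized NumPy calls. Either way, it is the eigensolver call at the end that dominates the run time.

import numpy as np

N = 500  # number of interior nodes; an arbitrary size, for illustration only

# 1. Assembly with a plain "for" loop
T_loop = np.zeros((N, N))
for i in range(N):
    T_loop[i, i] = -2.0
    if i > 0:
        T_loop[i, i - 1] = 1.0
    if i < N - 1:
        T_loop[i, i + 1] = 1.0

# 2. Vectorized assembly of the same matrix
T_vec = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)

assert np.allclose(T_loop, T_vec)

# The library call below is where almost all of the execution time goes.
eigvals, eigvecs = np.linalg.eigh(T_vec)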

Python programmers have a habit of always looking down on the simple “for” loops, and hence on anyone who writes them. So, I decided to write this special note.

The first thing you do about vectorization is not to begin wondering how best to implement it for a particular problem. The first thing you do is to ask yourself: Is vectorization really necessary here? Ditto, for lambda expressions. Ditto, for list comprehension. Ditto for itertools. Ditto for almost anything that is a favourite of the dumb interviewers (which means, most Indian Python programmers).

Vectorized code might earn you brownie points among Python programmers (including those who interview you for jobs). But such code is more prone to bugs, harder to debug, and definitely much harder to understand, even by you, after a gap of time. Why?

That’s because, practically speaking, while writing in Python, you hardly ever define C-struct-like things. Python does have classes. But these are rather primitive classes. You are not expected to write code around classes, and then put objects into containers. Technically, you can do that, but it’s not at all efficient. So, practically speaking, you are almost always into using the NumPy ndarrays, or similar things (like pandas, xarray, Dask, etc.).

Now, once you have these array-like thingies, indexing becomes important. Why? Because, in Python, it is the design of a number of arrays, and the relationships among their indexing schemes, which together “define” the operative data structures. Python, for all its glory, has this un-removable flaw: the design of the data structures is always implicit, never directly visible through a language construct.

So, in Python, it’s the indexing scheme which plays the same part that classes, inheritance, and genericity play in C++. But it’s implicit. So, how you implement the indexing features becomes of paramount importance.

And here, in my informed opinion, the Python syntax for the slicing and indexing operations has been made unnecessarily intricate. I, for one, could easily design an equally powerful semantics that comes with a syntax that’s much easier on the eye.

In case some professionally employed data scientist (especially a young Indian one) takes offence at my above claim: Yes, I do mean what I say above. And, I also know what I am talking about.

Though I no longer put it on my CV, once, in the late 1990s, I had implemented a Yacc-like tool to output table-driven parsers for LALR(1) languages (like Java and C++). It would take a language specification in the EBNF (Extended Backus-Naur Form) as the input file, and produce the tables for table-driven parsing of that language. I had implemented this thing completely on my own, looking just at the Dragon Book (Aho, Sethi, Ullman). I haven’t had a CS university education. So, I taught myself the compilers theory, and then went straight to implementing it.

I looked at no previous code. And even if I were to look at something, it would have been horrible. These were ancient projects, written in C, not in C++, and written using arrays, no STL containers like “map”s. A lot of hard-coding, pre-proc macros, and all that. Eventually, I did take a look at the others’ code, but it was only in the verification stage. How did my code fare? Well, I didn’t have to change anything critical.

I had taken about 8 months for this exercise (done part time, on evenings, as a hobby). The closest effort was by some US mountain-time university group (consisting of a professor, one or two post-docs, and four-five graduate students). They had taken some 4 years to reach roughly the same place. To be fair, their code had many more features. But yes, both their code and mine addressed those languages which belonged to the same class of grammar specification, and hence, both would have had the same parsing complexity.

I mention all of it here mainly in order to “assure” the Indian / American programmers (you know, those BE/BS CS guys, the ones who are right now itching to fail me in any interview, should their HR arrange one for me in the first place) that I do know a bit about what I am talking about when I distinguish between the actual computing operations on the one hand, and the mere syntax for those operations on the other. There are a lot of highly paid Indian IT professionals who never do learn this difference (but who do take care to point out that your degree isn’t from the CS field).

So, my conclusion is that despite all its greatness (and I do actually love Python), its syntax does have some serious weaknesses. Not just idiosyncrasies (which are fine) but actual weaknesses. The syntax for slicing and indexing is a prominent part of it.

Anyway, coming back to my present code (for the H atom in the 3D box, using finite difference method), if the execution time was so short, and if vectorization makes a code prone to bugs (and difficult to maintain), why did I bother implementing it?

Two reasons:

  1. I wanted to have a compact-looking code. I was writing this code mainly for myself, so maintenance wasn’t an issue.
  2. In case some programmer / manager interviewing me began acting over-smart, I wanted to have something which I could throw at his face. (Recently, I ran into a woman who easily let out: “What’s there in a PoC (proof of concept, in case you don’t know)? Any one can do a PoC…” She ranted on a bit, but it was obvious that though she has been a senior manager and all, and lists managing innovations and all, she doesn’t know. There are a lot of people like her in the Indian IT industry: people who act over-smart. An already implemented vectorized code, especially one they find difficult to read, would be a nice projectile to have handy.)

2.2 Sparse SciPy matrices:

Coming back to the present code for the H atom: As I was saying, though vectorization was not necessary, I have anyway implemented the vectorization part.

I also started using sparse matrices.

In case you don’t know: the calls you make on SciPy‘s sparse matrices look almost identical to the calls you make on NumPy‘s dense arrays, but they go through different underlying implementations.

From what I have gathered, it seems safe to conclude this much: As a general rule, if doing some serious work, use SciPy’s calls, not NumPy’s. (But note, I am still learning this part.)

With sparse matrices, I can now easily go to a 50 × 50 × 50 domain. I haven’t empirically tested the upper limit on my laptop, though an even bigger mesh should be easily possible. In contrast, earlier, with dense matrices, I was stuck at, at most, a 25 × 25 × 25 mesh. The execution time, too, reduced drastically.

In my code, I have used only dok_matrix() to build the sparse matrices, and only the tocsr() and tocoo() calls for faster matrix computations in the SciPy eigenvalue calls. These are the only functions I’ve used; I haven’t tried all the pathways that SciPy opens up. However, I think that I have a pretty fast-running code, and that the execution time wouldn’t improve to any significant degree by using some other combination of calls.
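For whatever it is worth, here is a hedged, illustrative sketch of that kind of a workflow, i.e., dok_matrix() for incremental building, followed by tocsr() before handing the matrix to a sparse eigensolver. The 1D stencil, the mesh size, and the request for the lowest six eigenpairs are my own assumptions for illustration; this is not the code of this post.

from scipy.sparse import dok_matrix
from scipy.sparse.linalg import eigsh

N = 50 * 50 * 50              # e.g., a flattened 50 x 50 x 50 mesh
H = dok_matrix((N, N))

# Build the matrix incrementally (a 1D second-difference stencil, for brevity).
for i in range(N):
    H[i, i] = 2.0
    if i > 0:
        H[i, i - 1] = -1.0
    if i < N - 1:
        H[i, i + 1] = -1.0

H = H.tocsr()                 # the format the sparse eigensolver works well with

# Lowest 6 eigenvalues/eigenvectors, via shift-invert about sigma = 0.
vals, vecs = eigsh(H, k=6, sigma=0, which='LM')
print(vals)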

2.3 A notable curiosity:

I also tried, and succeeded to a great degree, in having an exactly identical code for all dimensions: 1D, 2D, 3D, and then even, in principle, ND. That is to say, no “if–else” statements that lead to different execution paths depending on the dimensionality.

If you understand what I just stated, then you sure would want to have a look at my code, because nothing similar exists anywhere on the ‘net (i.e., within the first 10 pages thrown up by Google during several differently phrased searches covering many different domains).

However, eventually, I abandoned this approach, because it made things too complicated, especially while dealing with computing the Coulomb fields. The part dealing with the discretized Laplacian was, in contrast, easier to implement, and it did begin working fully well, which was when I decided to abandon this entire approach. In case you know a bit about this territory: I had to liberally use numpy.newaxis.

Eventually, I came to abandon this insistence on having only a single set of code lines regardless of the dimensionality, because my programmer’s/engineer’s instincts cried against it. (Remember I don’t even like the slicing syntax of Python?) And so, I scrapped it. (But yes, I do have a copy, just in case someone wants to have a look.)
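For the curious: I am not reproducing the scrapped code here. But just to indicate one standard way in which a single code path can cover 1D, 2D and 3D at all, here is a hedged sketch that builds the FD Laplacian as a Kronecker sum of 1D operators. This is not the numpy.newaxis-based approach mentioned above; the grid shapes and the spacing below are assumptions for illustration.

from scipy.sparse import csr_matrix, diags, identity, kron

def laplacian_nd(shape, h):
    """Sparse N-D FD Laplacian (homogeneous Dirichlet BCs) for a grid of the given shape."""
    def lap_1d(n):
        # The standard 1D second-difference operator, scaled by 1/h^2.
        return diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2

    L = csr_matrix((1, 1))    # a 1x1 zero matrix serves as the "seed" operator
    for n in shape:
        I_prev = identity(L.shape[0], format='csr')
        # Kronecker sum: L <- L (x) I_n + I_prev (x) lap_1d(n)
        L = kron(L, identity(n, format='csr')) + kron(I_prev, lap_1d(n))
    return L.tocsr()

# The same function, unchanged, for 1D or 3D:
Lap1D = laplacian_nd((101,), h=1.0)
Lap3D = laplacian_nd((50, 50, 50), h=1.0e-10 / 49)   # ~1 angstrom box, 50 nodes per side
print(Lap1D.shape, Lap3D.shape)    # (101, 101) (125000, 125000)

The price you pay is that the node ordering is now implicit in the Kronecker construction, which is exactly the kind of implicitness I complained about earlier.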

2.4 When to use the “for” loops and when to use slicing + vectorization: A good example:

I always try to lift code if a suitable one is available ready made. So, I did a lot of search for Python/MatLab code for such things.

As far as the FD implementations of the Laplacian go, IMO, the best piece of Python code I saw (for this kind of a project) was that by Prof. Christian Hill [^]. His code is available for free from the site for a book he wrote; see here [^] for an example involving the finite difference discretization of the Laplacian.

Yes, Prof. Hill has wisely chosen to use only the Python “for” loops when it comes to specifying the initial condition (IC). Thus, he reserves vectorization only for the time-stepping part of the code.

Of course, unlike Prof. Hill’s code (transient diffusion), my code involves only eigenvalue computations—no time-stepping. So, one would be even more amply justified in using only the “for” loops for building the Laplacian matrix. Yet, as I noted, I vectorized everything in my code, merely because I felt like doing so. It’s during vectorization that the problem of differing dimensionality came up, which I solved, and then abandoned.

2.5 Use of indexing matrices:

While writing my code, I figured out that a simple trick with using index matrices and arrays makes the vectorization part even more compact (and less susceptible to bugs). So, I implemented this approach—indexing matrices and arrays.

“Well, this is a very well known approach. What’s new?” you might ask. The new part is the use of matrices for indexing, not arrays. Very well known, sure. But very few people use it anyway.

Again, I was cautious. I wrote the code, looked at it again a couple of days later, and made sure that using the indices really made the code easier to understand (to me, of course). Only then did I decide to retain it.

By using the indexing matrices, the code indeed becomes very clean-looking. It certainly looks far better (i.e., its structure is easier to grasp) than the first lines of code in Prof. Hill’s “do_timestep” function [^].
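Since I am not posting my own code (see the next sub-section), here is a generic, hedged sketch of the kind of thing an indexing matrix makes possible; the details are my illustrative assumptions, not the code of this post. A 3D array of flat node numbers lets you state the whole 7-point coupling pattern through slices alone, with no “for” loops:

import numpy as np
from scipy.sparse import coo_matrix

nx = ny = nz = 50
IDX = np.arange(nx * ny * nz).reshape(nx, ny, nz)    # the indexing matrix

rows, cols, vals = [], [], []

def couple(a, b, value):
    # Record the entries A[a, b] = value for whole slabs of nodes at once.
    rows.append(a.ravel())
    cols.append(b.ravel())
    vals.append(np.full(a.size, value))

couple(IDX,            IDX,            -6.0)   # the diagonal of the 7-point stencil
couple(IDX[1:, :, :],  IDX[:-1, :, :],  1.0)   # -x neighbours
couple(IDX[:-1, :, :], IDX[1:, :, :],   1.0)   # +x neighbours
couple(IDX[:, 1:, :],  IDX[:, :-1, :],  1.0)   # -y neighbours
couple(IDX[:, :-1, :], IDX[:, 1:, :],   1.0)   # +y neighbours
couple(IDX[:, :, 1:],  IDX[:, :, :-1],  1.0)   # -z neighbours
couple(IDX[:, :, :-1], IDX[:, :, 1:],   1.0)   # +z neighbours

N = nx * ny * nz
# Scale by 1/h**2 separately, as needed; homogeneous Dirichlet BCs are implied.
Lap = coo_matrix((np.concatenate(vals),
                  (np.concatenate(rows), np.concatenate(cols))),
                 shape=(N, N)).tocsr()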

2.6 No code-drop:

During my numerous (if not exhaustive) searches, I found that no one posts a 3D code for quantum simulation that also uses finite differences (i.e. the simplest numerical technique).

Note, people do post codes for 3D, but these are only for more complicated approaches like FDTD (finite-difference time-domain), FEM, (pseudo)spectral methods, etc. People also post code for FDM when the domain is 1D. But no one posts code that is both FD and 2D/3D. People only post the maths for such a case. Some rare times, they also post the results of the simulations. But they don’t post the 3D FDM code. I don’t know the reason for this.

Maybe there is some money to be made if you keep some such tricks all to yourself?

Once this idea occurred to me, it was impossible for me not to take it seriously. … You know that I have been going jobless for two years by now. And, further, I did have to invest a definite amount of time and effort in getting those indexing matrices working right so that the vectorization part becomes intuitive.

So, I too have decided not to post my 3D code anywhere on the ‘net for free. Not immediately, anyway. Let me think about it for a while before I go and post my code.


3. Covid in India:

The process of un-locking down has begun in India. However, the numbers simply aren’t right for anyone to relax (except for the entertainment sections of the Indian media, like the Times of India, Yahoo!, etc.).

In India, we are nowhere near turning the corner. The data about India are such that the time when the flattening might occur is not just hard to predict; with the current data, it is impossible to predict.

Yes, I said impossible. I could put forward reasoning grounded in sound logic and good mathematics (e.g., things like Shannon’s theorem, von Neumann’s error analysis, etc.), if you want. But I think that, to anyone who really knows a fair amount of maths, it’s not necessary. I think they will understand my point.

Let me repeat: The data about India are such that the time when the flattening might occur is not just hard to predict; with the current data, it is impossible to predict.

India’s data present a certain unique kind of a challenge for the data scientist, and the situation definitely calls for some serious apprehension by everyone concerned. The data themselves are such that predictions have to be made very carefully.

If anyone is telling you that India will cross (or has already crossed), say, more than 20 lakh cases, then know that he/she is not speaking from the data, the population size, the social structures, the particular diffusive dynamics of this country, etc. He/she is talking purely from imagination, or from very poor maths.

Ditto, if someone tells you that there are going to be so many cases in this city or that, by this date or that, if the date runs into, say, August.

Given the actual data, in India, projections about number of cases in the future are likely to remain very tentative (having very big error bands).

Of course, you may still make some predictions, like those based on the doubling rate. You would even be justified in using this measure, but only for a very short time-span into the future. The reason is that India’s data carry these two peculiarities:

  1. The growth rate has been, on a large enough scale, quite steady for a relatively long period of time. In India, there has been no exponential growth with a very large log-factor, not even initially (which I attribute to an early enough lock-down). There also has been no flattening (for whatever reasons; but see the next point).
  2. The number of cases per million population still remains small.

Because of 1., the doubling rate can serve as a good short-term estimator when it comes to activities like large-scale resource planning (but it would be valid only for the short term). You will have to continuously monitor the data, and be willing to adjust your plans. Yet, the fact is also that the doubling rate has remained steady long enough that it can certainly be used for short-term planning (including by corporates).
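To make the “short-term estimator” bit concrete (with purely hypothetical numbers, not a prediction): if the count has lately been doubling every T_d days, then over a short horizon the projection simply reads N(t) ≈ N_0 × 2^(t/T_d). For instance, with N_0 = 2 lakh cases and T_d = 15 days, two weeks out this gives about 2 × 2^(14/15) ≈ 3.8 lakh. But notice how rapidly the error band around any such figure widens as t grows, which is why the estimate is good only for the short term.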

However, because of 2., everyone will have to revise their estimates starting from the third week of June, when the effects of the un-locking down begin to become visible (not just in the hospitals or the quarantine centres, but also in terms of aggregated numbers).

Finally, realize that 1. matters only to the policy-makers (whether in government or in corporate sectors).

What matters to the general public at large is this one single question: Have we turned the corner already? If not, when will we do that?

The short answers are: “No” and  “Can’t Tell As of Today.”

In India’s case the data themselves are such that no data scientist worth his salt would be able to predict the time of flattening with any good accuracy—as of today. Nothing clear has emerged, even after 2.5 months, in the data. Since this sentence is very likely to be misinterpreted, let me explain.

I am not underestimating the efforts of the Indian doctors, nurses, support staff, police, and even other government agencies. If they were not to be in this fight, the data would’ve been far simpler to analyse—and far more deadly.

Given India’s population size, its poverty, its meagre medical resources, the absence of civic discipline, the illiteracy (which makes using symbols for political parties indispensable at the time of elections)… Given all such factors, the very fact that India’s data even today (after 2.5 months) still manage to remain hard to analyse suggests, to my mind, this conclusion:

There has been a very hard tussle going on between man and the virus so that no definitive trend could emerge either way.

There weren’t enough resources for the flattening to have occurred by now. If you held that expectation to begin with, you were ignoring reality.

However, in India, the fight has been such that it must have been very tough on the virus too; otherwise, the exponential function is simply too bad for us, and too easy for the virus.

The inability to project the date by which the flattening might be reached, must be seen in such a light.

The picture will become much clearer starting from two weeks in the future, because it would then begin reflecting the actual effects that the unlocking is producing right now.

So, if you are in India, take care even if the government has now allowed you to step out, go to office, and all that. But remember, you have to take even more care than you did during the lock-down, at least for the next month or so, until some definitely discernible trends, even if faint, do begin to emerge, objectively speaking.

I sincerely hope that everyone takes precautions so that we begin to see even just an approach towards the flattening. Realize, the number of cases and the number of deaths keep increasing until the flattening occurs. So, take extra care, now that the diffusivity of people has increased.

Good luck!


A song I like:

(Western, instrumental): Mozart, Piano concerto 21, k. 467, second movement (andante in F major).

Listen, e.g., at this [^] YouTube video.

[ I am not too much into Western classical music, though I have listened to a fair deal of it. I would spend hours in UAB’s excellent music library listening to all sorts of songs, though mostly Western classical. I would also sometimes make on-the-fly requests to the classical music channel of UAB’s radio station (or was it a local radio station? I no longer remember). I didn’t always like what I listened to, but I continued listening a lot anyway.

Then, as I grew older, I began discovering that, as far as the Western classical music goes, very often, I actually don’t much appreciate even some pieces that are otherwise very highly regarded by others. Even with a great like Mozart, there often are places where I can’t continue to remain in the flow of the music. Unknowingly, I come out of the music, and begin wondering: Here, in this piece, was the composer overtaken by a concern to show off his technical virtuosity rather than being absorbed in the music? He does seem to have a very neat tune somewhere in the neighbourhood of what he is doing here. Why doesn’t he stop tinkling the piano or stretching the violin, stop, think, and resume? I mean, he was composing music, not just blogging, wasn’t he?

The greater the composer or the tune suggested by the piece, the greater is this kind of a disappointment on my part.

Then, at other times, these Western classical folks do the equivalent of analysis-paralysis. They get stuck into the same thing for seemingly forever. If composing music is difficult, composing good music in the Western classical style is, IMHO, exponentially more difficult. That’s the reason why despite showing a definite “cultured-ness,” purely numbers-wise, most Western classical music tends to be boring. … Most Indian classical music also tends to be very boring. But I will cover it on some other day. Actually, one day won’t be enough. But still, this post is already too big…

Coming to the Western classical, Mozart, and the song selected for this time: I think that if Mozart were to do something different with his piano concerto no. 20 (k. 466), then I might have actually liked it as much as k. 467, perhaps even better. (For a good YouTube video on k. 466, see here [^].)

But as things stand, it’s k. 467. It is one of the rarest Western (or Eastern) classical pieces that can auto ride on my mind at some unpredictable moments; also one of the rare pieces that never disappoint me when I play it. Maybe that’s because I don’t play it unless I am in the right mood. A mood that’s not at all bright; a mood that suggests as if someone were plaintively raising the question “why? But why?”. (Perhaps even: “But why me?”) It’s a question not asked to any one in particular. It’s a question raised in the midst of continuing to bear either some tragic something, or, may be, a question raised while in the midst of having to suffer the consequences of someone else’s stupidity or so. … In fact, it’s not even a question explicitly raised. It’s to do with some feeling which comes before you even become aware of it, let alone translate it into a verbal question.  I don’t know, the mood is something like that. … I don’t get in that kind of a mood very often. But sometimes, this kind of a mood is impossible to avoid. And then, if the outward expression of such a mood also is this great, sometimes, you even feel like listening to it… The thing here is, any ordinary composer can evoke pathos. But what Mozart does is in an entirely different class. He captures the process of forming that question clearly, alright. But he captures the whole process in such a subdued manner. Extraordinary clarity, and extraordinary subdued way of expressing it. That’s what appeals to me in this piece… How do I put it?… It’s the epistemological clarity or something like that—I don’t know. Whatever be the reason, I simply love this piece. Even if I play it only infrequently.

Coming back to the dynamic k. 466 vs. the quiet, sombre, even plaintive k. 467, I think, the makers of the “Elvira Madigan” movie were smart; they correctly picked up the k. 467, and only the second movement, not others. It’s the second movement that’s musically extraordinary. My opinion, anyway…

Bye for now.

]


 

Python scripts for simulating QM, part 1: Time evolution of a particle in the infinite potential box

A Special note for the Potential Employers from the Data Science field:

Recently, in April 2020, I achieved a World Rank # 5 on the MNIST problem. The initial announcement can be found here [^], and a further status update, here [^].

All my data science-related posts can always be found here [^].


What’s been happening?

OK, with that special note done, let me now turn my attention to the two of you who regularly read this blog.

… After the MNIST classification problem, I turned my attention to using LSTMs for time-series predictions, simply because I hadn’t tried much on this topic any time earlier. … So, I implemented a few models. I even seemed to get good accuracy levels.

However, after having continuously worked on Data Science straight for about 2.5 months, I began feeling like taking a little break from it.

I had grown tired a bit though I had not realized it while actually going through all those tedious trials. (The fact of my not earning any money might have added to some draining of the energy too.) In fact, I didn’t have the energy to pick up something on the reading side either.

Basically, after a lot of intense effort, I wanted something that was light but engaging.

In the end, I decided to look into Python code for QM.

Earlier, in late 2018, I had written a few scripts on QM. I had also blogged about it; see the “part 0” of this series [^]. (Somehow, it has received an unusually large number of hits after I announced my MNIST result.) However, after a gap of 1.5 years, I could not easily locate those scripts. … So, like any self-respecting programmer, I decided to code them again!

Below is the first result, a movie. … Though a movie, it should be boring to any one who is not interested in QM.


Movie of the wavefunction of an electron inside an infinite potential box:

 

An electron in an infinite potential box.

An electron inside a 3 cm long 1D box with infinite potentials on the sides. Time evolution of the second excited state (n = 3). In the standard QM, such states are supposed to be “stationary”.

 


The Python code: Main file:

Now, the code. It should be boring to any one who is not a Python programmer.

Update on 2020.05.30 16:00 IST: The code below has been updated to fix a bug in the square-normalization procedure. Earlier, the code was performing the Simpson integration over the interior lattice points alone, which was incorrect. Now, the Simpson integration is performed over the entire domain, which is the correct procedure. Now, the numerical and analytical solutions are truly close.

"""
01.AJ_PIB_1D_Class.py

Written by and copyright (c) Ajit R. Jadhav. All rights reserved.

Particle in a Box.

Solves the Time-Independent Schrodinger Equation in 1D,
using the Finite Difference Method. Eigenvalues are found using
the direct matrix method.

Also shows the changes in the total wavefunction with time.
[The stationarity in the TISE is not static. In the mainstream 
QM, the stationarity is kinematical. In my new approach, it 
has been proposed to be kinetic. However, this simulation concerns
itself only with the standard, mainstream, QM.]

Environment: Developed and tested using:
Python 3.7.6 64-bit. All packages as in Anaconda 4.8.3 
on Ubuntu-MATE 20.04 (focal fossa) LTS of the date (below).

TBD: It would be nice to use sparse matrices, and to use 
eigenvalue functions from scipy (instead of those from numpy).

History:
0. This file begun: Friday 2020 May 22 19:50:26 IST 
1. Initially posted on the blog: Saturday 2020 May 23 16:07:52 IST 
2. This version: Saturday 2020 May 30 15:53:13 IST 
    -- Corrected a bug in ComputeNormalizedStationaryStates() 
    Earlier, the code was performing the Simpson integration over 
    the *interior* lattice points alone, which was incorrect. Now, the 
    Simpson integration is performed over the entire domain, which is 
    the correct procedure. Now, numerical and analytical solutions 
    are truly close.
"""

import numpy as np 
from scipy.integrate import simps
import matplotlib.pyplot as plt 
from matplotlib.animation import ImageMagickFileWriter

# SEE THE ACCOMPANYING FILE. THE NUMERICAL VALUES OF CONSTANTS 
# ARE DEFINED IN IT.
from FundaConstants import h, hbar, me, mP, eV2J

################################################################################
# THE MAIN CLASS 

class AJ_PIB_1D( object ):

    def __init__( self, nInteriorNodes, dMass, dh ):
        self.nInteriorNodes = nInteriorNodes
        self.nDomainNodes = nInteriorNodes + 2
        self.dMass = dMass # Mass associated with the QM particle
        self.dh = dh # cell-size ( \Delta x ).

        # The following numpy ndarray's get allocated 
        # during computations.
        self.aaT = None 
        self.aaV = None 
        self.aaH = None 

        self.aaPsi = None 
        self.aE_n = None 

        self.A_ana = None 
        self.ak_ana = None 
        self.aE_ana = None 
        self.aPsi_ana = None 
        return 

    # Creates the kinetic energy matrix for the interior points of the
    # domain.
    def ComputeKE( self ):
        self.aaT = np.zeros( (self.nInteriorNodes, self.nInteriorNodes) )
        for i in range( self.nInteriorNodes ):
            self.aaT[ i, i ] = -2.0
        for i in range( self.nInteriorNodes-1 ):
            self.aaT[ i, i+1 ] = 1.0
        for i in range( 1, self.nInteriorNodes ):
            self.aaT[ i, i-1 ] = 1.0 
        dFactorKE = - hbar**2 / ( 2.0 * self.dMass * self.dh**2 )
        self.aaT *= dFactorKE
        return

    # Creates the potential energy matrix for the interior points of the
    # domain. You can supply an arbitrary potential function via the array
    # aV of size = interior points count, and values in joule.
    def ComputePE( self, aV= None ):
        self.aaV = np.zeros( (self.nInteriorNodes, self.nInteriorNodes) )
        if aV is not None:
            for i in range( self.nInteriorNodes ):
                self.aaV[ i, i ] = aV[ i ]
        return

    def ComputeHamiltonian( self, aV= None ):
        self.ComputeKE() 
        self.ComputePE( aV )
        self.aaH = self.aaT + self.aaV 
        return 

    # Note, the argument aX has the size = the count of domain points, not 
    # the count of interior points.
    # QM operators are Hermitian. We exploit this fact by using the 
    # numpy.linalg.eigh function here. It is faster than numpy.linalg.eig, 
    # and, unlike the latter, also returns results sorted in the ascending 
    # order. 
    # HOWEVER, NOTE, the eigenvectors returned can have signs opposite 
    # of what the analytical solution gives. The eigh (or eig)-returned 
    # vectors still *are* *eigen* vectors. However, for easier comparison 
    # with the analytical solution, we here provide a quick fix. 
    # See below in this function.
    def ComputeNormalizedStationaryStates( self, aX, bQuickFixForSigns= False ):
        assert( self.nDomainNodes == len( aX ) )
        
        # Use the LAPACK library to compute the eigenvalues
        aEigVals, aaEigVecs = np.linalg.eigh( self.aaH )
        
        # SQUARE-NORMALIZE THE EIGENVECTORS

        # Note:
        # The eigenvectors were found on the interior part of the domain, 
        # i.e., after dropping the boundary points at extreme ends. But the 
        # wavefunctions are defined over the entire domain (with the 
        # Dirichlet condition of 0.0 specified at the boundary points).
        
        nCntVecs = aaEigVecs.shape[ 1 ]
        assert( nCntVecs == self.nInteriorNodes )

        # eigh returns vectors in *columns*. We prefer to store the 
        # normalized vectors in *rows*.
        aaPsi = np.zeros( (nCntVecs, self.nDomainNodes) )
        for c in range( nCntVecs ):
            aPsi = aaPsi[ c ] # For ease of readability
            aPsi[ 1 : self.nDomainNodes-1 ] = aaEigVecs[ :, c ]
            # Find the area under the prob. curve
            aPsiSq = aPsi * aPsi
            dArea = simps( aPsiSq, aX )
            # Use it to normalize the wavefunction
            aPsi /= np.sqrt( dArea )
            # The analytical solution always has the curve going up 
            # (with a +ve gradient) at the left end of the domain. 
            # We exploit this fact to have a quick fix for the signs.
            if bQuickFixForSigns is True:
                d0, d1 = aPsi[ 0 ], aPsi[ 1 ]
                if d1 < d0:
                    aPsi *= -1
            aaPsi[ c ] = aPsi
        self.aaPsi = aaPsi
        self.aE_n = aEigVals
        return 

    # Standard analytical solution. See, e.g., the Wiki article: 
    # "Particle in a box"
    def ComputeAnalyticalSolutions( self, aX ):

        xl = aX[ 0 ]
        xr = aX[ self.nDomainNodes-1 ]
        L = xr - xl 
        A = np.sqrt( 2.0 / L )
        self.A_ana = A 

        # There are as many eigenvalues as there are interior points.
        # Note: in the analytical arrays below, the row index doubles as the
        # quantum number n. So, row 0 (n = 0) is identically zero, and the
        # ground state sits in row 1. This differs from the numerical
        # eigenvectors, where row 0 is the ground state; the plotting code
        # must account for this offset.
        self.ak_ana = np.zeros( self.nInteriorNodes )
        self.aE_ana = np.zeros( self.nInteriorNodes )
        self.aPsi_ana = np.zeros( (self.nInteriorNodes, self.nDomainNodes) )
        for n in range( self.nInteriorNodes ):
            # The wavevector. (Notice the absence of the factor of '2'. 
            # Realize, the 'L' here refers to half of the wavelength of 
            # the two travelling waves which make up the standing wave. 
            # That's why.)
            k_n = n * np.pi / L 
            # The energy.
            E_n = n**2 * h**2 / (8.0 * self.dMass * L**2)

            # A simplest coordinate transformation:
            # For x in [0,L], phase angle is given as
            # Phase angle = n \pi x / L = k_n x. 
            # We have arbitrary coordinates for the left- and 
            # right-boundary point. So, 
            # Phase angle = k_n (x - xl)
            ap = k_n * (aX - xl)
            
            aPsiAna = A * np.sin( ap )
            self.ak_ana[ n ] = k_n 
            self.aE_ana[ n ] = E_n 
            # We prefer to store the normalized wavefunction 
            # in rows. (Contrast: linalg.eigh and eig store the 
            # eigen vectors in columns.)
            self.aPsi_ana[ n, : ] = aPsiAna
        return 
        
    # This function gets the value that is the numerical equivalent to the 
    # max wave amplitude 'A', i.e., sqrt(2/L) in the analytical solution. 
    def GetMaxAmplNum( self ):
        dMax = np.max( np.abs(self.aaPsi) )
        return dMax


################################################################################
# Utility functions

# NOTE: SAVING MOVIES CAN TAKE A LOT MORE TIME (7--10 MINUTES).
def Plot( model, n, nTimeSteps, bSaveMovie= False, sMovieFileName= None ):
    # The class computes and stores only the space-dependent part.
    aPsiSpace = model.aaPsi[ n-1 ]

    # Period = 2 \pi / \omega = 1 / \nu
    # Since E = h \nu, \nu = E/h, and so, Period = h/E
    nu = model.aE_n[ n-1 ] / h 
    dPeriod = 1.0 / nu 

    dt = dPeriod / (nTimeSteps-1) 

    # Plotting...

    plt.style.use( 'ggplot' )
    # Plot size is 9 inches X 6 inches. Reduce if you have smaller 
    # screen size.
    fig = plt.figure( figsize=(9,6) ) 

    if bSaveMovie is True:
        movieWriter = ImageMagickFileWriter()
        movieWriter.setup( fig, sMovieFileName )

    dMaxAmpl = model.GetMaxAmplNum() # Required for setting the plot limits.
    dTime = 0.0 # How much time has elapsed in the model?
    for t in range( nTimeSteps ):
        # TIME-DEPENDENT PART: 
        # \psi_t = e^{-i E_n t / \hbar} = e^{-i \omega_n t} = e^{-i 2 \pi \nu t}
        # Compute the phase factor (which appears in the exponent).
        dTheta = 2.0 * np.pi * nu * dTime 
        # The Euler identity. Compute the *complete* wavefunction (space and time)
        # at this instant.
        aPsi_R_t = aPsiSpace * np.cos( dTheta )
        aPsi_I_t = - aPsiSpace * np.sin( dTheta )
        
        plt.clf()
        sTitle = "Particle in an infinite-potential box (n = %d)\n" % (n)
        sTitle += "Domain size: %7.4lf m. Oscillation period: %7.4lf s.\n" % (L, dPeriod)
        sTitle += "Time step: %3d/%3d. Time elapsed in simulation: %7.4lf s." % (t+1, nTimeSteps, dTime) 
        plt.title( sTitle )

        plt.xlabel( "Distance, m" )
        plt.ylabel( "Wavefunction amplitude, $m^{-1/2}$" )

        plt.grid( True )

        plt.xlim( (xl - L/10), (xr + L/10) )
        plt.ylim( -1.1*dMaxAmpl, 1.1*dMaxAmpl )

        plt.plot( aX, aPsi_R_t , color= 'darkcyan', label= r'Re($\Psi$)' )
        plt.plot( aX, aPsi_I_t , color= 'purple', label= r'Im($\Psi$)' )

        plt.legend( loc= 'upper right', shadow= True, fontsize= 'small' ) 
        
        if bSaveMovie is True:
            movieWriter.grab_frame()
        else:
            plt.pause( 0.001 )

        dTime += dt

    if bSaveMovie is True:
        movieWriter.finish()
    else:
        plt.show()


################################################################################
# MAIN DRIVER CODE
# We use the SI system throughout. [This is a program. It runs on a computer.]

# DOMAIN GEOMETRY

xl = -1.0e-02 # Left end (min. x)
xr = 2.0e-02 # Right end (max. x)
L = xr - xl # Length of the domain
xc = (xl + xr )/ 2.0 # Center point

# MESH
# Count of cells = Count of nodes in the domain - 1. 
# It's best to take an odd number for the count of domain nodes. This way, 
# the peak(s) of the wavefunction will not be missed.
nDomainNodes = 101
aX, dh = np.linspace(   start= xl, stop= xr, num= nDomainNodes, 
                        endpoint= True, retstep= True, dtype= float )

# In the PIB model, infinite potential exists at either ends. So, we apply 
# the Dirichlet BC of \Psi(x,t) = 0 at all times. Even if the discretized 
# Laplacian were to be computed for the entire domain, in handling the 
# homogeneous BC, both the boundary-points would get dropped during the
# matrix partitioning. Similarly, V(x) would be infinite there. That's why,
# we allocate the Laplacian and Potential Energy matrices only for the
# interior points. 
nInteriorNodes = nDomainNodes - 2 

# We model the electron here. 
# Constants are defined in a separate file: 'FundaConstants.py'
# Suggestion: Try mP for the proton, and check the \Psi amplitudes.
dMass = me 

# Instantiate the main model class.
model = AJ_PIB_1D( nInteriorNodes, dMass, dh )

# Compute the system Hamiltonian.

# If you want, supply a custom-made potential function as an ndarray of 
# size nInteriorNodes, as an argument. Values should be in joules.
# 'None' means 0 joules everywhere inside the box.
model.ComputeHamiltonian( None )

# Compute the stationary states. For the second argument, see the 
# note in the function implementation.
model.ComputeNormalizedStationaryStates( aX, True )

# You can also have the analytical solution computed. Uncomment the 
# line below. The numerical and analytical solutions are kept in 
# completely different arrays inside the class. However, the plotting
# code has to be careful.
### model.ComputeAnalyticalSolutions( aX )


# PLOT THE STATIONARY STATES, AND SHOW THEIR OSCILLATIONS WITH TIME.
# (TISE *is* dynamic; the stationarity is dynamical.) 

# Note, here, we choose n to be a 1-based index, as is the practice
# in physics. Thus, the ground state is given by n = 1, and not n = 0.
# However, NOTE, the computed arrays of wavefunctions have 0-based 
# indices. If such dual-usage for the indices gets confusing, simple! 
# Just change the code!

n = 3 
# No. of frames to be plotted for a single period of oscillations
# The 0-th and the (nTimeSteps-1)-th frames are identical because the 
# time steps span exactly one full period of the oscillation (the 
# Hamiltonian here is time-independent, so a stationary state is exactly 
# periodic).
nTimeSteps = 200 

# You can save a movie, but note, animated GIFs take a lot more time, even 
# ~10 minutes or more, depending on the screen-size and dpi.
# Note, ImageMagickFileWriter will write the temp .png files in the current 
# directory (i.e. the same directory where this Python file resides). 
# In case the program crashes (or you stop the program before it finishes), 
# you will have to manually delete the temporary .png files from the 
# program directory! (Even if you specify a separate directory for the 
# movie, the *temporary* files still get generated in the program directory.)
### Plot( model, n, nTimeSteps, True, './AJ_PIB_e_%d.gif' % (n) )

Plot( model, n, nTimeSteps, False, None )

################################################################################
# EOF
################################################################################


The ancillary file:

The main file imports the following file. It has nothing but the values of fundamental physical constants noted in it (together with the sources). Here it is:

"""
FundaConstants.py

Written by and copyright (c) Ajit R. Jadhav. All rights reserved.

Begun: Thursday 2020 May 21 20:55:37 IST 
This version: Saturday 2020 May 23 20:39:22 IST 
"""

import numpy as np 

"""
Planck's constant
https://en.wikipedia.org/wiki/Planck_constant 
``The Planck constant is defined to have the exact value h = 6.62607015×10^−34 J⋅s in SI units.'' 
"""
h = 6.62607015e-34 # J⋅s. Exact value.
hbar = h / (2.0 * np.pi) # J⋅s. 

"""
Electron rest mass
https://en.wikipedia.org/wiki/Electron_rest_mass
``9.1093837015(28)×10^−31'' 2018 CODATA value. NIST
"""
me = 9.1093837015e-31 # kg. 


"""
Proton rest mass
https://en.wikipedia.org/wiki/Proton 
1.67262192369(51)×10^−27 kg
"""
mP = 1.67262192369e-27 # kg

"""
eV to Joule
https://en.wikipedia.org/wiki/Electronvolt 
1 eV = 1.602176634×10^−19 J
"""
eV2J = 1.602176634e-19 # Conversion factor

And, that’s about it, folks! No documentation. But I have added a lot of (otherwise unnecessary) comments.

Take care, and bye for now.


A song I like:

(Marathi) सावली उन्हामध्ये (“saavali unaamadhe”)
Music: Chinar Mahesh
Lyrics: Chandrashekhar Sanekar
Singer: Swapnil Bandodkar

Equations using the matrix notation for a simple artificial neural network

Update on 2020.01.30 16:58 IST:

I have taken the document offline. Yes, it was too incomplete, tentative, and in fact also had errors. The biggest and most obvious error was about the error vector. 🙂

No, no one pointed any of the errors or flaws to me.

Yes, I will post an expanded and revised version later, hopefully in the first week of February. (I started work on this document on 24th Jan., but also was looking into other issues.) When I am done, I will delete this entire post, and make a new entry to announce the availability of the corrected, expanded and revised document.

The original post appears below.


Go, read [^].

Let me know about the typos. Also, errors. [Though I don’t expect you to do that. [I will eat my estimate of your moral character, on this count, at least for the time being.]]

I can, and might, take it out of the published domain, any time I want. [Yes, I am irresponsible, careless, unreliable, etc. [Also, “imposing” type. Without “team-spirit”. One who looks down on his colleagues as being way, way, inferior to me.]]

I will also improve on it—I mean the document. In fact, I even intend to expand it, with some brief notes to be added on various activation- and loss-functions.

Eventually, I may even publish it at GitHub. Or, at arXiv. [If they let me do that. But then, another consideration: there are physicists there!]

 


A song I like:

(Hindi) “कभी तो मिलेगी” (kabhee to milegi)
Singer: Lata
Music: Roshan
Lyrics: Majrooh Sultanpuri

[Credits happily listed in a mostly random order. [The issue was only with the last two; it was clear which one had to appear first.]