Do you really need a QC in order to have a really unpredictable stream of bits?

0. Preliminaries:

This post has reference to Roger Schlafly’s recent post [^] in which he refers to Prof. Scott Aaronson’s post touching on the issue of the randomness generated by a QC vis-a-vis that obtained using the usual classical hardware [^], in particular, to Aaronson’s remark:

“the whole point of my scheme is to prove to a faraway skeptic—one who doesn’t trust your hardware—that the bits you generated are really random.”

I do think (based on my new approach to QM [(PDF) ^]) that building a scalable QC is an impossible task.

I wonder if they (the QC enthusiasts) haven’t already begun realizing the hopelessness of their endeavours, and thus haven’t slowly begun preparing for a graceful exit, say via the QC-as-a-RNG route.

While Aaronson’s remarks also saliently involve the element of the “faraway” skeptic, I will mostly ignore that consideration here in this post. I mean to say, initially, I will ignore the scenario in which you have to transmit random bits over a network, and still have to assure the skeptic that what he was getting at the receiving end was something coming “straight from the oven”—something which was not tampered with, in any way, during the transit. The skeptic would have to be specially assured in this scenario, because a network is inherently susceptible to a third-party attack wherein the attacker seeks to exploit the infrastructure of the random keys distribution to his advantage, via injection of systematic bits (i.e. bits of his choice) that only appear random to the intended receiver. A system that quantum-mechanically entangles the two devices at the two ends of the distribution channel, does logically seem to have a very definite advantage over a combination of ordinary RNGs and classical hardware for the network. However, I will not address this part here—not for the most part, and not initially, anyway.

Instead, for most of this post, I will focus on just one basic question:

Can anyone be justified in thinking that an RNG that operates at the QM level might have even the slightest possible advantage, at least logically speaking, over another RNG that operates at the CM level? Note, the QM-level RNG need not always be a general-purpose and scalable QC; it can be any simple or special-purpose device that exploits, and at its core operates at, the specifically QM level.

Even though I am a 100% skeptic of the scalable QC, I think that the answer on this latter count is: yes, perhaps you could argue that way. But then, I also think, your argument would still be pointless.

Let me explain, following my approach, why I say so.


2. RNGs as based on nonlinearities. Nonlinearities in QM vs. those in CM:

2.1. Context: QM involves IAD:

QM does involve either IAD (instantaneous action at a distance), or very, very large (decidedly super-relativistic) speeds for the propagation of local changes over all distant regions of space.

From the experimental evidence we have, it seems that there have to be very, very high speeds of propagation, for even the smallest changes that can take place in the \Psi and V fields. The Schrodinger equation assumes infinitely large speeds for them. Such obviously cannot be the case—it is best to take the infinite speeds as just an abstraction (a mathematical approximation) of the reality of very, very high actual speeds. However, the experimental evidence also indicates that even if there has to be some upper bound or other to the speeds v, with v \gg c, the speeds still have to be so high as to seemingly approach infinity, if the Schrodinger formalism is to be employed. And, of course, as you know, Schrodinger’s formalism is pretty well understood, validated, and appreciated [^]. (For more on the speed limits and IAD in general, see the addendum at the end of this post.)
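For reference, the equation in question is the standard non-relativistic, single-particle, time-dependent Schrodinger equation; nothing non-standard is assumed in writing it down here:

i \hbar \frac{\partial \Psi(\vec{r}, t)}{\partial t} = -\frac{\hbar^2}{2m} \nabla^2 \Psi(\vec{r}, t) + V(\vec{r}, t) \, \Psi(\vec{r}, t)

Notice that V enters the equation at each instant t as a whole: a change in V anywhere shows up in \partial \Psi / \partial t everywhere, at that same instant t. This is the sense in which the formalism builds in the infinitely large propagation speeds mentioned above.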

I don’t know the relativity theory or the relativistic QM. But I guess that since the electric fields of massive QM particles are non-uniform (they are in fact singular), their interactions with \Psi must be such that the system has to suddenly snap out of some one configuration and, in the same process, snap into one of the many alternative possible configurations. Since there is a huge (astronomically large) number of particles in the universe, the number of alternative configurations would be {astronomically large}^{very large}—after all, the particles’ positions and motions are continuous. Thus, we couldn’t hope to calculate the propagation speeds for the changes in the local features of a configuration in terms of all those irreversible snap-out and snap-in events taken individually. We must take them in an ensemble sense. Further, the electric charges are massive, identical, and produce singular and continuous fields. Overall, it is the ensemble-level effects of these individual quantum mechanical snap-out and snap-in events whose end-result would be: the speed-of-light limitation of the special relativity (SR). After all, SR holds on the gross scale; it is a theory from classical electrodynamics. The electric and magnetic fields of classical EM can be seen as being produced by the quantum \Psi field (including the spinor function) of large ensembles of particles in the limit that the number of their configurations approaches infinity, and the classical EM waves i.e. light are nothing but the second-order effects in the classical EM fields.

I don’t know. I was just loud-thinking. But it’s certainly possible to have IAD for the changes in \Psi and V, and thus to have instantaneous energy transfers via photons across two distant atoms in a QM-level description, and still end up with a finite limit for the speed of light (c) for large collections of atoms.

OK. Enough of setting up the context.

2.2. The domain of dependence for the nonlinearity in QM vs. that in CM:

If QM is not linear, i.e., if there is a nonlinearity in the \Psi field (as I have proposed), then to evaluate the merits of the QM-level and CM-level RNGs, we have to compare the two nonlinearities: those in the QM vs. those in the CM.

The classical RNGs are always based on the nonlinearities in CM. For example:

  • the nonlinearities in the atmospheric electricity (the “static”) [^], or
  • the fluid-dynamical nonlinearities (as shown in the lottery-draw machines [^], or the lava lamps [^]), or
  • some or the other nonlinear electronic circuits (available for less than $10 in hardware stores)
  • etc.

All of them are based on two factors: (i) a large number of components (in the core system generating the random signal, not necessarily in the part that probes its state), and (ii) nonlinear interactions among all such components.
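To make these two factors concrete, here is a minimal toy sketch in Python (the function names and parameters are mine, chosen just for illustration): a ring of coupled logistic maps, i.e., many components with a nonlinear local rule and nonlinear interactions among them, with one bit read off a probed component per iteration. It is emphatically not a secure or statistically validated RNG; it only illustrates the two factors.

```python
import numpy as np

def toy_chaotic_rng(n_bits, n_cells=64, eps=0.1):
    """Illustration only: a ring of coupled logistic maps. Many
    components + nonlinear interactions -> practically unpredictable
    bits. NOT a validated (let alone cryptographic) RNG."""
    r = 3.99                                  # deep in the chaotic regime
    x = np.random.default_rng(42).uniform(0.1, 0.9, n_cells)
    bits = []
    for _ in range(n_bits):
        fx = r * x * (1.0 - x)                # the local nonlinearity
        # diffusive coupling of each cell with its two ring neighbours:
        x = (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))
        bits.append(1 if x[0] > 0.5 else 0)   # probe just one component
    return bits

print(''.join(str(b) for b in toy_chaotic_rng(64)))
```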

The number of variables in the QM description is anyway always larger: a single classical atom is seen as composed of tens, even hundreds, of quantum mechanical charges. Further, due to the IAD present in the QM theory, the domain of dependence (DoD) [^] in QM remains, at all times, literally the entire universe—all charges are included in it, and the entire \Psi field too.

On the other hand, the DoD in the CM description remains limited to only that finite region which is contained in the relevant past light-cone. Even when a classical system is nonlinear, and thus gets crazy very rapidly with even small increases in the number of degrees of freedom (DOFs), its DoD still remains finite and rather very small at all times. In contrast, the DoD of QM is the whole universe—all physical objects in it.

2.3. Implication for the RNGs:

Based on the above-mentioned argument (which, in my limited reading and knowledge, neither Aaronson nor anyone else has ever presented, basically because they all continue to believe in von Neumann’s characterization of QM as a linear theory), an RNG operating at the QM level does seem to have, “logically” speaking, an upper hand over an RNG operating at the CM level.

Then why do I still say that arguing for the superiority of a QM-level RNG is still pointless?


3. The MVLSN principle, and its epistemological basis:

If you apply a proper epistemology (and I have in my mind here the one by Ayn Rand), then the supposed “logical” difference between the two descriptions becomes completely superfluous. That’s because the quantities whose differences are being examined, themselves begin to lose any epistemological standing.

The reason for that, in turn, is what I call the MVLSN principle: the law of the Meaninglessness of the Very Large or very Small Numbers (or scales).

What the MVLSN principle says is that if your argument crucially depends on the use of very large (or very small) quantities and relationships between them, i.e., if the fulcrum of your argument rests on some great extrapolations alone, then it begins to lose all cognitive merit. “Very large” and “very small” are contextual terms here, to be used judiciously.

Roughly speaking, if this principle is applied to our current situation, what it says is that when in your thought you cross a certain limit of DOFs and hence a certain limit of complexity (which anyway is sufficiently large as to be much, much beyond the limit of any and every available and even conceivable means of predictability), then any differences in the relative complexities (here, of the QM-level RNGs vs. the CM-level RNGs) ought to be regarded as having no bearing at all on knowledge, and therefore, as having no relevance in any practical issue.

Both QM-level and CM-level RNGs would be far too complex for you to devise any algorithm or machine that might be able to predict the sequence of the bits coming out of either. Really. The complexity levels already grow so huge, even with just the classical systems, that it’s pointless trying to predict the bits, or trying to compare the complexity of the classical RNGs with that of the quantum RNGs.

A clarification: I am not saying that there won’t be any systematic errors or patterns in the otherwise random bits that a CM-based RNG produces. Sure enough, due statistical testing and filtering are absolutely necessary. For instance, what the radio stations or cell-phone towers transmit are, from the viewpoint of an RNG based on radio noise, systematic disturbances that do affect its randomness. See random.org [^] for further details. I am certainly not denying this part.
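To indicate the kind of statistical testing meant here, the following is a minimal sketch of just one such test, the frequency (“monobit”) test along the lines of NIST SP 800-22; a serious RNG would have to pass a whole battery of such tests:

```python
import math

def monobit_test(bits):
    """Frequency (monobit) test in the style of NIST SP 800-22: checks
    whether the counts of 0s and 1s are close enough to equal.
    Returns a p-value; p < 0.01 is conventionally taken as failure."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)    # +1 per 1-bit, -1 per 0-bit
    return math.erfc(abs(s) / math.sqrt(n) / math.sqrt(2))

# A perfectly alternating sequence passes this particular test (it is
# balanced), though it would fail others in the battery, e.g. the runs test:
print(monobit_test([0, 1] * 500))
```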

All that I am saying is that the sheer number of DOFs involved is itself so huge that the very randomness of the bits produced even by a classical RNG is beyond every reasonable doubt.

BTW, in this context, do see my previous couple of posts dealing with probability, indeterminism, randomness, and the all-important system vs. the law distinction here [^], and here [^].


4. To conclude my main argument here…:

In short, even “purely” classical RNGs can be way, way too complex for any one to be concerned in any way about their predictability. They are unpredictable. You don’t have to go chase the QM level just in order to ensure unpredictability.

Just take one of those WinTV lottery draw machines [^], start the air flow, get your prediction algorithm running on your computer (whether classical or quantum), and try to predict the next ball that would come out once the switch is pressed. Let me be generous. Assume that the switch gets pressed at exactly predictable intervals.

Go ahead, try it.


5. The Height of the Tallest Possible Man (HTPM):

If you still insist on the supposedly “logical” superiority of the QM-level RNGs, make sure to understand the MVLSN principle well.

The issue here is somewhat like asking this question:

What could possibly be the upper limit to the height of man, taken as a species? Not any other species (like the legendary “yeti”), but human beings, specifically. How tall can any man at all get? Where do you draw the line?

People could perhaps go on arguing, with at least some fig-leaf of epistemological legitimacy, over numbers like 12 feet vs. 14 feet as the true limit. (The world record mentioned in the Guinness Book is slightly under 9 feet [^]. The ceiling in a typical room is about 10 feet high.) Why, they could even perhaps go like: “Ummmm… may be 12 feet is more likely a limit than 24 feet? whaddaya say?”

Being very generous of spirit, I might still describe this as a borderline case of madness. The reason is, in the act of undertaking even just a probabilistic comparison like that, the speaker has already agreed to assign non-zero probabilities to all the numbers belonging to that range. Realize, no one would invoke the ideas of likelihood or probability theory if he thought that the probability for an event, however calculated, was always going to be zero. He would exclude certain kinds of ranges from his analysis to begin with—even for a stochastic analysis. … So, madness it is, even if, in my most generous mood, I might regard it as a borderline madness.

But if you assume that a living being has all the other characteristics of only a human being (including being naturally born to human parents), and if you still say that, between the two statements: (A) a man could perhaps grow to be 100 feet tall, and (B) a man could perhaps grow to be 200 feet tall, it is statement (A) which is relatively and logically more reasonable, then what the principle (MVLSN) says is this: “you basically have lost all your epistemological bearing.”

That’s nothing but a complex (actually, philosophic) way of saying that you have gone mad, full-stop.

The law of the meaninglessness of the very large or very small numbers does have a certain basis in epistemology. It goes something like this:

Abstractions are abstractions from the actually perceived concretes. Hence, even while making just conceptual projections, the range over which a given abstraction (or concept) can remain relevant is determined by the actual ranges in the direct experience from which they were derived (and the nature, scope and purpose of that particular abstraction, the method of reaching it, and its use in applications including projections). Abstractions cannot be used in disregard of the ranges of the measurements over which they were formed.

I think that after having seen the sort of crazy things that even the simplest nonlinear systems with the fewest variables and parameters can do (for instance, which weather agency in the world can make predictions (to the accuracy demanded by newspapers) beyond 5 days? who can predict which way the first vortex is going to be shed even in a single-cylinder experiment?), it’s very easy to conclude that the CM-level vs. QM-level RNG distinction is comparable to the argument about the greater reasonableness of a 100 feet tall man vs. that of a 200 feet tall man. It’s meaningless. And, madness.
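As a minimal sketch of that weather-related point, take the Lorenz system, a drastically simplified, three-variable convection model: two trajectories started a mere 10^{-9} apart in one coordinate become macroscopically different within a few tens of time units. (A crude forward-Euler integrator is used below, purely for brevity.)

```python
import numpy as np

def lorenz_step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations (crude, but
    good enough to exhibit the exponential divergence)."""
    x, y, z = s
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])          # all but identical start
for step in range(1, 40001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 10000 == 0:
        print(f"t = {step / 1000:4.0f}: separation = {np.linalg.norm(a - b):.3e}")
```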


6. Aaronson’s further points:

To be fair, much of the above write-up was not meant for Aaronson; he does readily grant the CM-level RNGs validity. What he says, immediately after the quote mentioned at the beginning of this post, is that if you don’t have the requirement of distributing bits over a network,

…then generating random bits is obviously trivial with existing technology.

However, since Aaronson believes that QM is a linear theory, he does not even consider making a comparison of the nonlinearities involved in QM and CM.

I thought that it was important to point out that even the standard (i.e., Schrodinger’s equation-based) QM is nonlinear, and further, that even if this fact leads to some glaring differences between the two technologies (based on the IAD considerations), such differences still do not lead to any advantages whatsoever for the QM-level RNG, as far as the task of generating random bits is concerned.

As to the task of transmitting them over a network, Aaronson then notes:

If you do have the requirement, on the other hand, then you’ll have to do something interesting—and as far as I know, as long as it’s rooted in physics, it will either involve Bell inequality violation or quantum computation.

Sure, it will have to involve QM. But then, why does it have to be only a QC? Why not have just special-purpose devices that are quantum mechanically entangled over wires / EM-waves?

And finally, let me come to yet another issue: why would you have to have that requirement at all—of having to transmit the keys over a network, rather than by any other means?

Why does something as messy as a network have to get involved for a task that is as critical and delicate as distribution of some super-specially important keys? If 99.9999% of your keys-distribution requirements can be met using “trivial” (read: classical) technologies, and if you can also generate random keys using equipment that costs less than $100 at most, then why do you have to spend billions of dollars in just distributing them to distant locations of your own offices / installations—especially if the need for changing the keys is going to be only on an infrequent basis? … And if bribing or murdering a guy who physically carries a sealed box containing a thumb-drive having secret keys is possible, then what makes the guys manning the entangled stations suddenly go all morally upright and also immortal?

From what I have read, Aaronson does consider such questions even if he seems to do so rather infrequently. The QC enthusiasts, OTOH, never do.

As I said, this QC as an RNG thing does show some marks of trying to figure out a respectable exit-way out of the scalable QC euphoria—now that they have already managed to wrest millions and billions in their research funding.

My two cents.


Addendum on speed limits and IAD:

Speed limits are needed out of the principle that infinity is a mathematical concept and cannot metaphysically exist. However, the nature of the ontology involved in QM compels us to rethink many issues right from the beginning. In particular, we need to carefully distinguish between all the following situations:

  1. The transportation of a massive classical object (a distinguishable, i.e. finite-sized, bounded piece of physical matter) from one place to another, in literally no time.
  2. The transmission of the momentum or changes in it (like forces or changes in them) being carried by one object, to a distant object not in direct physical contact, in literally no time.
  3. Two mutually compensating changes in the local values of some physical property (like momentum or energy) suffered at two distant points by the same object, a circumstance which may be viewed from some higher-level or abstract perspective as transmission of the property in question over space but in no time. In reality, it’s just one process of change affecting only one object, but it occurs in a special way: in mutually compensating manner at two different places at the same time.

Only the first really qualifies to be called spooky. The second is curious but not necessarily spooky—not if you begin to regard two planets as just two regions of the same background object, or alternatively, as two clearly different objects which are being pulled in various ways at the same time and in mutually compensating ways via some invisible strings or fields that shorten or extend appropriately. The third one is not spooky at all—the object that effects the necessary compensations is not even a third object (like a field). Both the interacting “objects” and the “intervening medium” are nothing but different parts of one and the same object.

What happens in QM is the third possibility. I have been describing such changes as occurring with an IAD (instantaneous action at a distance), but now I am not too sure if such a usage is really correct or not. I now think that it is not. The term IAD should be reserved only for the second category—it’s an action that gets transported there. As to the first category, a new term should be coined: ITD (instantaneous transportation to distance). As to the third category, the new term could be IMCAD (instantaneous and mutually compensating actions at a distance). However, this all is an afterthought. So, in this post, I only have ended up using the term IAD even for the third category.

Some day I will think more deeply about it and straighten out the terminology, maybe invent some new terms to describe all the three situations with adequate directness, and then choose the best… Until then, please excuse me and interpret what I am saying in reference to the context. Also, feel free to suggest good alternative terms. Also, let me know if there are any further distinctions to be made, i.e., if the above classification into three categories is not adequate or refined enough. Thanks in advance.


A song I like:

[A wonderful “koLi-geet,” i.e., a fisherman’s song. Written by a poet who hailed not from the coastal “konkaN” region but from the interior “desh.” But it sounds so authentically coastal… Listening to it today instantly transported me back to my high-school days.]

(Marathi) “suTalaa vaadaLi vaaraa…”
Singing, Music and Lyrics: Shaahir Amar Sheikh



History: Originally published on 2019.07.04 22:53 IST. Extended and streamlined considerably on 2019.07.05 11:04 IST. The songs section added: 2019.07.05 17:13 IST. Further streamlined, and also added a new section (no. 6) on 2019.07.05 22:37 IST. … Am giving up on this post now. It grew from about 650 words (in a draft for a comment at Schlafly’s blog) to 3080 words as of now. Time to move on.

Still made further additions and streamlining for a total of ~3500 words, on 2019.07.06 16:24 IST.


A neat experiment concerning quantum jumps. Also, an update on the data science side.

1. A new paper on quantum jumps:

This post has a reference to a paper published yesterday in Nature by Z. K. Minev and pals [^]; h/t Ash Joglekar’s twitter feed (he finds this paper “fascinating”). The abstract follows; the emphasis in bold is mine.

In quantum physics, measurements can fundamentally yield discrete and random results. Emblematic of this feature is Bohr’s 1913 proposal of quantum jumps between two discrete energy levels of an atom[1]. Experimentally, quantum jumps were first observed in an atomic ion driven by a weak deterministic force while under strong continuous energy measurement[2,3,4]. The times at which the discontinuous jump transitions occur are reputed to be fundamentally unpredictable. Despite the non-deterministic character of quantum physics, is it possible to know if a quantum jump is about to occur? Here we answer this question affirmatively: we experimentally demonstrate that the jump from the ground state to an excited state of a superconducting artificial three-level atom can be tracked as it follows a predictable ‘flight’, by monitoring the population of an auxiliary energy level coupled to the ground state. The experimental results demonstrate that the evolution of each completed jump is continuous, coherent and deterministic. We exploit these features, using real-time monitoring and feedback, to catch and reverse quantum jumps mid-flight—thus deterministically preventing their completion. Our findings, which agree with theoretical predictions essentially without adjustable parameters, support the modern quantum trajectory theory[5,6,7,8,9] and should provide new ground for the exploration of real-time intervention techniques in the control of quantum systems, such as the early detection of error syndromes in quantum error correction.

Since the paper was behind the paywall, I quickly did a bit of googling and then (very) rapidly browsed through the following three: [^], [^] and [(PDF) ^].

Since I didn’t find the words “modern quantum trajectory theory” explained in simple enough terms in these references, I did some further googling on “quantum trajectory theory”, high-speed browsed through the results a bit, in the process jumping through [^], [^], and landed first at [^], then at the BKS paper [(PDF) ^]. Then, after further googling on “H. J. Carmichael”, I high-speed browsed through the Wiki on Prof. Carmichael [^], and from there, through the abstract of his paper [^], and finally took the link to [^] and to [^].

My initial and rapid judgment:

Ummm… Minev and pals might have concluded that their experimental work lends “support” to “the modern quantum trajectory theory” [MQTT for short.] However, unfortunately, MQTT itself is not sufficiently deep a theory.

…  As an important aside, despite the word “trajectory,” thankfully, MQTT is, as far as I gather it, not Bohmian in nature either. [Lets out a sigh of relief!]

Still, to come back to the point: MQTT is not deep enough. And quite naturally so… After all, MQTT is a theory that focuses only on the optical phenomena. However, IMO, a proper quantum mechanical ontology would have the photon as a derived object—i.e., a higher-level abstraction of an object. This is precisely the position I adopted in my Outline document as well [^].

Realize, there can be no light in an isolated system if there are no atoms in it. Light is always emitted from, and absorbed in, some or the other atoms—by phenomena that are centered around nuclei, basically. However, there can always be atoms in an isolated system even if there never occurs any light in it—e.g., in an extremely rarefied gas of inert-gas atoms, each of which is in the ground state (kept in an isolated system, to repeat).

Naturally, photons are the derived, or higher-level, objects. And that’s why any optical theory has to assume some theory of electrons lying at an even deeper level. That’s the reason why MQTT cannot be at the deepest level.

So, my overall judgment is that, yes, Minev and pals’ work is interesting. Most important, they don’t take Bohr’s quantum jumps as being in principle un-analyzable, and this part is absolutely delightful. Still, if you ask me, for the reasons given above, this work also does not deal with the quantum mechanical reality at its deepest possible level. …

So, in that sense, it’s not as fascinating as it sounds on the first reading. … Sorry, Ash, but that’s how the things are here!

…Today was the first time in a couple of weeks or so that I read anything regarding QM. And, after this brief rendezvous with it in this post, I am once again choosing to close that subject right here. … In the absence of people interacting with me on QM (computational QChem, really speaking), and having already reached a very definite point of development concerning my new approach, I don’t find QM to be all that interesting these days.

Addendum on 2019.06.06:

For some good pop-sci-level coverage of the paper, see Chris Lee’s post at his ArsTechnica blog [^], and Philip Ball’s story at the Quanta Magazine [^].


2. An update on the Data Science side:

As you know, these days, I have been pursuing data science full-time.

Earlier, in the second half of 2018, I had gone through Michael Nielsen’s online book on ANNs and DL [^]. At that time, I had also posted a few entries here on this blog concerning ANNs and DL [^]. For instance, see my post explaining, with real-time visualization, why deep learning is hard [^].

Now, in the more recent times, I have been focusing more on the other (“canonical”) machine learning techniques in general—things like (to list them in a more or less random order) regression, classification, clustering, dimensionality reduction, etc. It’s been fun. In particular, I have come to love scikit-learn. It’s a neat library. More about it all later—maybe I should post some of the toy Python scripts which I tried.
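Just to indicate the kind of toy scripts meant here, a minimal sketch using the standard scikit-learn API (the dataset and the estimator are picked arbitrarily, for illustration alone):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# The classic iris dataset: 150 flowers, 4 features, 3 classes.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```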

… BTW, I am also searching for one or two good, “industrial scale” projects from data science. So, if you are from industry and are looking for some data-science related help, then feel free to get in touch. If the project is of the right kind, I may even work on it on a pro bono basis.

… Yes, the fact is that I am actively looking out for a job in data science. (Have uploaded my resume at naukri.com too.) However, at the same time, if a topic is interesting enough, I don’t mind lending some help on a pro bono basis either.

The project topic could be anything from applications in manufacturing engineering (e.g. NDT techniques like radiography, ultrasonics, eddy current, etc.) to financial time-series predictions, to some recommendation problem, to… I am open for virtually anything in data science. It’s just that I have to find the project to be interesting enough, that’s all… So, feel free to get in touch.

… Anyway, it’s time to wrap up. … So, take care and bye for now.


A song I like

(Western, pop) “Money, money, money…”
Band: ABBA


Determinism, Indeterminism, Probability, and the nature of the laws of physics—a second take…

After I wrote the last post [^], several points struck me. Some of the points that were mostly implicit needed to be addressed systematically. So, I began writing a small document containing these after-thoughts, focusing more on the structural side of the argument.

However, I don’t find time to convert these points + statements into a proper write-up. At the same time, I want to get done with this topic, at least for now, so that I can better focus on some other tasks related to data science. So, let me share the write-up in whatever form it is in, currently. Sorry for its uneven tone and all (compared to even my other writing, that is!)


Causality as a concept is very poorly understood by present-day physicists. They typically understand only one sense of the term: evolution in time. But causality is a far broader concept. Here I agree with Ayn Rand / Leonard Peikoff (OPAR). See the Ayn Rand Lexicon entry, here [^]. (However, I wrote the points below without re-reading it, and instead, relying on whatever understanding I have already come to develop starting from my studies of the same material.)

Physical universe consists of objects. Objects have identity. Identity is the sum total of all characteristics, attributes, properties, etc., of an object. Objects act in accordance with their identity; they cannot act otherwise. Interactions are not primary; they do not come into being without there being objects that undergo the interactions. Objects do not change their respective identities when they take actions—not even during interactions with other objects. The law of causality is a higher-level view taken of this fact.

In the cause-effect relationship, the cause refers to the nature (identity) of an object, and the effect refers to an action that the object takes (or undergoes). Both refer to one and the same object. TBD: Trace the example of one moving billiard ball undergoing a perfectly elastic collision with another billiard ball. Bring out how the interaction—here, the pair of the contact forces—is a name for each ball undergoing an action in accordance with its nature. An interaction is a pair of actions.


A physical law as a mapping (e.g., a function, or even a functional) from inputs to outputs.

The quantitative laws of physics often use the real number system, i.e., quantification with infinite precision. An infinite precision is a mathematical concept, not physical. (Expect physicists to eternally keep on confusing between the two kinds of concepts.)

Application of a physical law traces the same conceptual linkages as are involved in the formulation of the law, but in the reverse direction.

In both formulation of a physical law and in its application, there is always some regime of applicability which is at least implicitly understood for both inputs and outputs. A pertinent idea here is: range of variations. A further idea is the response of the output to small variations in the input.


Example: Prediction by software of whether a cricket ball would have hit the stumps or not, in an LBW situation.

The input position being used by the software in a certain LBW decision could be off from reality by millimeters, or at least, by a fraction of a millimeter. Still, the law (the mapping) is such that it produces predictions that are within small limits, so that it can be relied on.

Two input values, each theoretically infinitely precise, but differing by a small magnitude from each other, may be taken to define an interval or zone of input variations. As to the zone of the corresponding output, it may be thought of as an oval produced in the plane of the stumps, using the deterministic method used in making predictions.

The nature of the law governing the motion of the ball (even after factoring in aspects like effects of interaction with air and turbulence, etc.) itself is such that the size of the O/P zone remains small enough. (It does not grow exponentially.) Hence, we can use the software confidently.

That is to say, the software can be confidently used for predicting—i.e., determining—the zone of possible landing of the ball in the plane of the stumps.
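Here is a minimal sketch of the point being made, assuming, purely for illustration, a drag-free parabolic flight from the point of pitching to the plane of the stumps (the real ball-tracking software, of course, models far more physics than this):

```python
import numpy as np

def height_at_stumps(y0, vy=2.0, vx=30.0, x_stumps=2.0, g=9.81):
    """Ball height at the plane of the stumps under idealized,
    drag-free projectile motion (illustration only)."""
    t = x_stumps / vx                  # time of flight to the stumps
    return y0 + vy * t - 0.5 * g * t**2

# The input zone: the measured bounce height, known only to +/- 1 mm.
y0_zone = np.linspace(0.100 - 0.001, 0.100 + 0.001, 101)
out_zone = height_at_stumps(y0_zone)

print("output zone width at the stumps: %.3f mm"
      % (1000.0 * (out_zone.max() - out_zone.min())))
# ~2 mm: the output zone stays as small as the input zone; this law
# does not blow small input variations up into a stadium-sized oval.
```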


Overall, here are three elements that must be noted: (i) Each of the input positions lying at the extreme ends of the input zone of variations itself does have an infinite precision. (ii) Further, the mapping (the law) has theoretically infinite precision. (iii) Each of the outputs lying at extreme ends of the output zone also itself has theoretically infinite precision.

Existence of such infinite precision is a given. But it is not at all the relevant issue.

What matters in applications is something more than these three. It is the fact that applications always involve zones of variations in the inputs and outputs.

Such zones are then used in error estimates. (Also for engineering control purposes, say as in automation or robotic applications.) But the fact that quantities being fed to the program as inputs themselves may be in error is not the crux of the issue. If you focus too much on errors, you will simply get into an infinite regress of error bounds for error bounds for error bounds…

Focus, instead, on the infinity of precision of the three kinds mentioned above, and on the fact that, in addition to those infinitely precise quantities, the application procedure does involve having zones of possible variations in the input, and it also involves the problem of estimating how large the corresponding zone of variations in the output is—whether it is sufficiently small for the law and for a particular application procedure or situation.

In physics, such details of application procedures are kept merely understood. They are hardly, if ever, mentioned and discussed explicitly. Physicists again show their poor epistemology. They discuss such things in terms not of the zones but of “error” bounds. This already inserts the wedge of dichotomy: infinitely precise laws vs. errors in applications. This dichotomy is entirely uncalled for. But, physicists simply aren’t that smart, that’s all.


An “indeterministic mapping,” for the above example (LBW decisions), would be the one in which the ball can be mapped as going anywhere over, and perhaps even beyond, the stadium.

Such a law and the application method (including the software) would be useless as an aid in the LBW decisions.

However, phenomenologically, the very dynamics of the cricket ball’s motion itself is simple enough that it leads to a causal law whose nature is such that for a small variation in the input conditions (a small input variations zone), the predicted zone of the O/P also is small enough. It is for this reason that we say that predictions are possible in this situation. That is to say, this is not an indeterministic situation or law.


Not all physical situations are exactly like the example of predicting the motion of the cricket ball. There are physical situations which show a certain common—and confusing—characteristic.

They involve interactions that are deterministic when occurring between two (or few) bodies. Thus, the laws governing a simple interaction between one or two bodies are deterministic—in the above sense of the term (i.e., in terms of infinite precision for mapping, and an existence of the zones of variations in the inputs and outputs).

But these physical situations also involve: (i) a nonlinear mapping, (ii) a sufficiently large number of interacting bodies, and further, (iii) coupling of all the interactions.

It is these physical situations which produce such an overall system behaviour that it can produce an exponentially diverging output zone even for a small zone of input variations.

So, a small change in I/P is sufficient to produce a huge change in O/P.

However, note the confusing part. Even if the system behaviour for a large number of bodies does show an exponential increase in the output zone, the mapping itself is such that when it is applied to only one pair of bodies in isolation of all the others, then the output zone does remain non-exponential.

It is this characteristic which tricks people into forming two camps that go on arguing eternally. One side says that it is deterministic (making reference to a single-pair interaction), the other side says it is indeterministic (making reference to a large number of interactions, based on the same law).
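A minimal numerical sketch of this two-camps situation, in toy units with G = 1, a softened force, and a crude leapfrog integrator (all of these choices are mine, for brevity alone): one and the same inverse-square law, applied to two bodies, keeps two nearly identical runs nearly identical; applied to three coupled bodies, it lets the separation between two nearly identical runs grow at a qualitatively faster rate. Compare the two printed separations.

```python
import numpy as np

def accel(pos, masses, soft=1e-3):
    """Pairwise inverse-square accelerations (G = 1), softened so that
    a close encounter does not blow up this crude demo."""
    a = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                d = pos[j] - pos[i]
                a[i] += masses[j] * d / (d @ d + soft**2) ** 1.5
    return a

def evolve(pos, vel, masses, dt=1e-3, steps=20000):
    for _ in range(steps):                     # leapfrog: kick-drift-kick
        vel += 0.5 * dt * accel(pos, masses)
        pos += dt * vel
        vel += 0.5 * dt * accel(pos, masses)
    return pos

def divergence(pos, vel, masses):
    """Final separation between a run and the same run with one
    coordinate perturbed by 1e-9."""
    ref = evolve(pos.copy(), vel.copy(), masses)
    pos[0, 0] += 1e-9                          # the tiny input variation
    return np.linalg.norm(ref - evolve(pos, vel, masses))

two_p = np.array([[-0.5, 0.0], [0.5, 0.0]])
two_v = np.array([[0.0, -0.5], [0.0, 0.5]])
print("2-body divergence:", divergence(two_p, two_v, np.ones(2)))

three_p = np.array([[-0.5, 0.0], [0.5, 0.0], [0.0, 0.8]])
three_v = np.array([[0.0, -0.5], [0.0, 0.5], [0.1, 0.0]])
print("3-body divergence:", divergence(three_p, three_v, np.ones(3)))
```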

The fallacy arises out of confusing a characteristic of the application method or model (variations in input and output zones) with the precision of the law or the mapping.


Example: N-body problem.

Example: NS equations as capturing a continuum description (a nonlinear one) of a very large number of bodies.

Example: Several other physical laws entering the coupled description, apart from the NS equations, in the bubbles collapse problem.

Example: Quantum mechanics


The Law vs. the System distinction: What is indeterministic is not a law governing a simple interaction taken abstractly (in which context the law was formed), but the behaviour of the system. A law (a governing equation) can be deterministic, but still, the system behavior can become indeterministic.


Even indeterministic models or system designs, when they are described using a different kind of maths (the one which is formulated at a higher level of abstractions, and, relying on the limiting values of relative frequencies i.e. probabilities), still do show causality.

Yes, probability is a notion which itself is based on causality—after all, it uses limiting values for the relative frequencies. The ability to use the limiting processes squarely rests on there being some definite features which, by being definite, do help reveal the existence of the identity. If such features (enduring, causal) were not part of the identity of the objects that are abstractly seen to act probabilistically, then no application of a limiting process would be possible, and so not even a definition of probability or randomness would be possible.
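As a minimal illustration of the phrase “limiting values for the relative frequencies”: the running relative frequency of heads in a long sequence of simulated fair-coin tosses settles down toward a definite number, and it is this settling-down that the probability concept rests on.

```python
import numpy as np

tosses = np.random.default_rng(0).integers(0, 2, size=1_000_000)

# Running relative frequency of heads after the first n tosses:
running = np.cumsum(tosses) / np.arange(1, tosses.size + 1)
for n in (10, 100, 10_000, 1_000_000):
    print(f"after {n:>9,} tosses: relative frequency = {running[n - 1]:.5f}")
```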

The notion of probability is more fundamental than that of randomness. Randomness is an abstract notion that idealizes the notion of the absence of every form of order. … You can use the axioms of probability even when sequences are known to be not random, can’t you? Also, hierarchically, order comes before randomness does. Randomness is defined as the absence of (all applicable forms of) orderliness; orderliness is not defined as the absence of randomness—it is defined via the “some but any” principle, in reference to various more concrete instances that show some or the other definable form of order.

But expect not just physicists but also mathematicians, computer scientists, and philosophers, to eternally keep on confusing the issues involved here, too. They all are dumb.


Summary:

Let me now mention a few important take-aways (though some new points not discussed above also crept in, sorry!):

  • Physical laws are always causal.
  • Physical laws often use the infinite precision of the real number system, and hence, they do show the mathematical character of infinite precision.
  • The solution paradigm used in physics requires specifying some input numbers and calculating the corresponding output numbers. If the physical law is based on the real number system, then all the numbers used, too, are supposed to have infinite precision.
  • Applications always involve a consideration of the zone of variations in the input conditions and the corresponding zone of variations in the output predictions. The relation between the sizes of the two zones is determined by the nature of the physical law itself. If for a small variation in the input zone the law predicts a sufficiently small output zone, people call the law itself deterministic.
  • Complex systems are not always composed from parts that are in themselves complex. Complex systems can be built by arranging essentially very simpler parts that are put together in complex configurations.
  • Each of the simpler parts may be governed by a deterministic law. However, when the input-output zones are considered for the complex system taken as a whole, the system behaviour may show an exponential increase in the size of the output zone. In such a case, the system must be described as indeterministic.
  • Indeterministic systems still are based on causal laws. Hence, with appropriate methods and abstractions (including mathematical ones), they can be made to reveal the underlying causality. One useful theory is that of probability. The theory turns the supposed disadvantage (a large number of interacting bodies) on its head, and uses limiting values of relative frequencies, i.e., probability. The probability theory itself is based on causality, and so are indeterministic systems.
  • Systems may be deterministic or indeterministic, and in the latter case, they may be described using the maths of probability theory. Physical laws are always causal. However, if they have to be described using the terms of determinism or indeterminism, then we will have to say that they are always deterministic. After all, if the physical laws showed exponentially large output zone even when simpler systems were considered, they could not be formulated or regarded as laws.

In conclusion: Physical laws are always causal. They may also always be regarded as being deterministic. However, if systems are complex, then even if the laws governing their simpler parts were all deterministic, the system behaviour itself may turn out to be indeterministic. Some indeterministic systems can be well described using the theory of probability. The theory of probability itself is based on the idea of causality, albeit with measures defined over a large number of instances, thereby exploiting the fact that there are far too many objects interacting in a complex manner.


A song I like:

(Hindi) “ho re ghungaroo kaa bole…”
Singer: Lata Mangeshkar
Music: R. D. Burman
Lyrics: Anand Bakshi


Wrapping up my research on QM—without having to give up on it

Guess I am more or less ready to wrap up my research on QM. Here is the exact status as of today.


1. The status today:

I have convinced myself that my approach (viz. the idea of singular potentials anchored into electronic positions, and with a 3D wave-field) is entirely correct, as far as QM of non-interacting particles is concerned. That is to say, as far as the abstract case of two particles in a 0-potential 1D box, or a less abstract but still hypothetical case of two non-interacting electrons in the helium atom, and similar cases are concerned. (A side note: I have worked exclusively with the spinless electrons. I don’t plan to include spin right away in my development—not even in my first paper on it. Other physicists are welcome to include it, if they wish to, any time they like.)

As to the actual case of two interacting particles (i.e., the interaction term in the Hamiltonian for the helium atom), I think that my approach should come to reproduce the same results as those obtained using the perturbation theory or the variational approach. However, I need to verify this part via discussions with physicists.

All in all, I do think that the task which I had intended to complete (and to cross-check) before this month-end, is already over—and I find that I don’t have to give up on QM (as suspected earlier [^]), because I don’t have to abandon my new approach in the first place.


2. A clarification on what had to be worked out and what had to be left alone:

To me, the crucial part at this stage (i.e., for the second-half of March) was verifying whether working with the two ideas of (i) a 3D wavefield, and (ii) electrons as “particles” having definite positions (or more correctly, as points of singularities in the potential field), still leads to the same mathematical description as in the mainstream (linear) quantum mechanics or not.

I now find that my new approach leads to the same maths—at least for the QM of the non-interacting particles. And further, I also have very definite grounds to believe that my new approach should also work out for two interacting particles (as in the He atom).

The crucial part at this stage (i.e., for the second half of March) didn’t have so much to do with the specific non-linearity which I have proposed earlier, or the details of the measurement process which it implies. Working out the details of these ideas would have been impossible—certainly beyond the capacities of any single physicist, and over such a short period. An entire team of PhD physicists would be needed to tackle the issues arising in pursuing this new approach, and to conduct the simulations to verify it.

BTW, in this context, I do have some definite ideas regarding how to hasten this process of unraveling the many particular aspects of the measurement process. I would share them once physicists show readiness to pursue this new approach. [Just in case I forget about it in future, let me note just a single cue-word for myself: “DFT”.]


3. Regarding revising the Outline document issued earlier:

Of course, the Outline document (which was earlier uploaded at iMechanica, on 11th February 2019) [^] needs to be revised extensively. A good deal of corrections and modifications are in order, and so are quite a few additions to be made too—especially in the sections on ontology and entanglement.

However, I will edit this document at my leisure later; I will not allocate a continuous stretch of time exclusively for this task any more.

In fact, a good idea here would be to abandon that Outline document as is, and to issue a fresh document that deals with only the linear aspects of the theory—with just a sketchy conceptual idea of how the measurement process is supposed to progress in a broad background context. Such a document could then be converted into a good contribution to a good journal like Nature, Science, or PRL.


4. The initial skepticism of the physicists:

Coming to the skepticism shown by the couple of physicists (with whom I had had some discussions by emails), I think that, regardless of their objections (hollers, really speaking!), my main thesis still does hold. It’s they who don’t understand the quantum theory—and let me hasten to add that by the words “quantum theory,” here I emphatically mean the mainstream quantum theory.

It is the mainstream QM which they themselves don’t understand as well as they should. What my new approach then does is merely to uncover some of these weaknesses, that’s all. … Their weakness pertains to a lack of understanding of the 3D \Leftrightarrow 3ND correspondence in general, for any kind of physics: classical or quantum. … Why, I even doubt whether they understand even just the classical vibrations themselves—coupled vibrations under variable potentials, that is—to the extent and depth to which they should.

In short, it is now easy for me to leave their skepticism alone, because I can now clearly see where they failed to get the physics right.


5. Next action-item:

In the near future, I would like to make short trips to some Institutes nearby (viz., in no particular order, one or more of the following: IIT Bombay, IISER Pune, IUCAA Pune, and TIFR Mumbai). I would like to have some face-to-face discussions with physicists on this one single topic: the interaction term in the Hamiltonian for the helium atom. The discussions will be held strictly in the context that is common to us, i.e., in reference to the higher-dimensional Hilbert space of the mainstream QM.

In case no one from these Institutes responds to my requests, I plan to go and see the heads of these Institutes (i.e. Deans and Directors)—in person, if necessary. I might also undertake other action items. However, I also sincerely hope and think that such things would not at all be necessary. There is a reason why I think so. Professors may or may not respond to an outsider’s emails, but they do entertain you if you just show up in their cabin—and if you yourself are smart, courteous, direct, and well… also experienced enough. And if you are capable of holding discussions on the “common” grounds alone, viz. in terms of the linear, mainstream QM as formulated in the higher-dimensional spaces (I gather it’s John von Neumann’s formulation), that is to say, the “Copenhagen interpretation.” (After doing all my studies—and, crucially, after the development of what to me is a satisfactory new approach—I now find that I no longer am as against the Copenhagen interpretation as some of the physicists seem to be.) … All in all, I do hope and think that seeing Diro’s and all won’t be necessary.

I also equally sincerely hope that my approach comes out unscathed during / after these discussions. … Though the discussions externally would be held in terms of mainstream QM, I would also be simultaneously running a second movie of my approach, in my mind alone, cross-checking whether it holds or not. (No, they wouldn’t even suspect that I was doing precisely that.)

I will be able to undertake editing of the Outline document (or leaving it as is and issuing a fresh document) only after these discussions.


6. The bottom-line:

The bottom-line is that my main conceptual development regarding QM is more or less over now, though further developments, discussions, simulations, paper-writing and all can always go on forever—there is never an end to it.


7. Data Science!

So, I now declare that I am free to turn my main focus to the other thing that interests me, viz., Data Science.

I already have a few projects in mind, and would like to initiate work on them right away. One of the “projects” I would like to undertake in the near future is: writing very brief notes, written mainly for myself, regarding the mathematical techniques used in data science. Another one is regarding applying ML techniques to NDT (nondestructive testing). Stay tuned.


A song I like:

(Western, instrumental) “Lara’s theme” (Doctor Zhivago)
Composer: Maurice Jarre


The self-field, and the objectivity of the classical electrostatic potentials: my analysis

This blog post continues from my last post, and has become overdue by now. I had promised to give my answers to the questions raised last time. Without attempting to explain too much, let me jot down the answers.


1. The rule of omitting the self-field:

This rule arises in electrostatic interactions basically because the Coulombic field has a spherical symmetry. The same rule would also work out in any field that has a spherical symmetry—not just the inverse-separation fields, and not necessarily only the singular potentials, though Coulombic potentials do show both these latter properties too.

It is helpful here to think in terms of not potentials but of forces.

Draw any arbitrary curve. Then, hold one end of the curve fixed at the origin, and sweep the curve through all possible angles around it, to get a 3D field. This 3D field has a spherical symmetry, too. Hence, gradients at the same radial distance on opposite sides of the origin are always equal and opposite.

Now, you know that the negative gradient of the potential gives you a force. Since for any spherical potential the gradients are equal and opposite, they cancel out. So, the forces cancel out too.

Realize here that in calculating the force exerted by a potential field on a point-particle (say an electron), the force cannot be calculated in reference to just one point. The very definition of the gradient refers to two different points in space, even if they be only infinitesimally separated apart. So, the proper procedure is to start with a small sphere centered around the given electron, calculate the gradients of the potential field at all points on the surface of this sphere, calculate the sum of the forces thereby exerted on the domain contained inside the spherical surface, and then take the sphere to the limit of vanishing size. The sum of the forces thus exerted is the net force acting on that point-particle.

In the case of the Coulombic potentials, the forces thus calculated on the surface of any sphere (centered on that particle) turn out to be zero. This fact holds true for spheres of all radii. It is true that the gradients (and forces) progressively increase as the size of the sphere decreases—in fact they increase beyond all bounds for singular potentials. However, the aforementioned cancellation holds true at any stage in the limiting process. Hence, it holds true for the entirety of the self-field.
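A quick numerical check of this cancellation is easy to sketch (toy units, with the 1/(4 \pi \epsilon_0) factor suppressed): sample the sphere at random points together with their antipodes, so that the equal-and-opposite pairing of the argument is respected, and sum up the forces. The net force comes out as zero (exactly so, pair by pair), even while the individual force magnitudes blow up as the sphere shrinks.

```python
import numpy as np

def coulomb_force(r_vec):
    """Force on a unit test charge in the field V = 1/r of a unit
    charge at the origin: F = -grad V = r_vec / |r_vec|^3."""
    r = np.linalg.norm(r_vec, axis=-1, keepdims=True)
    return r_vec / r**3

dirs = np.random.default_rng(1).normal(size=(5000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # random unit vectors

for radius in (1.0, 1e-3, 1e-6):                      # ever smaller spheres
    # Antipodal points give exactly equal-and-opposite contributions:
    net = (coulomb_force(radius * dirs) + coulomb_force(-radius * dirs)).sum(axis=0)
    print(f"R = {radius:g}: per-point |F| ~ {1 / radius**2:.1e}, "
          f"net |F| = {np.linalg.norm(net):.2e}")
```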

In calculating motions of a given electron, what matters is not whether its self-field exists or not, but whether it exerts a net force on the same electron or not. The self-field does exist (at least in the sense explained later below) and in that sense, yes, it does keep exerting forces at all times, also on the same electron. However, due to the spherical symmetry, the net force that the field exerts on the same electron turns out to be zero.

In short:

Even if you were to include the self-field in the calculations, if the field is spherically symmetric, then the final net force experienced by the same electron would still have no part coming from its own self-field. Hence, to economize calculations without sacrificing exactitude in any way, we discard it out of considerations.

The rule of omitting the self-field is thus just a matter of economizing calculations; it is not a fundamental law characterizing what field may be objectively said to exist. If the potential field due to other charges exists, then, in the same sense, the self-field too exists. It’s just that for the motions of the self-field-generating electron, it is as good as non-existent.

However, the question of whether a potential field physically exists or not, turns out to be more subtle than what might be thought.


2. Conditions for the objective existence of electrostatic potentials:

It once again helps to think of forces first, and only then of potentials.

Consider two electrons in an otherwise empty spatial region of an isolated system. Suppose the first electron (e_1) is at a position x_1, and a second electron e_2 is at a position x_2. What Coulomb’s law now says is that the two electrons mutually exert equal and opposite forces on each other. The magnitudes of these forces are proportional to the inverse square of the distance which separates the two. For like charges, the force is repulsive; for unlike charges, it is attractive. The amounts of the electrostatic forces thus exerted do not depend on mass; they depend only on the amounts of the respective charges.
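In the standard notation (stated here just for reference, in SI units), the magnitude of this mutual force is:

F = \frac{1}{4 \pi \epsilon_0} \frac{|q_1 q_2|}{r^2}

where r = |x_2 - x_1| is the separation between the two charges, and q_1, q_2 are the two charges; the force on each charge is directed along the line joining the two.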

The potential energy of the system for this particular configuration is given by (i) arbitrarily assigning a zero potential to infinite separation between the two charges, and (ii) imagining as if both the charges have been brought from infinity to their respective current positions.

It is important to realize that the potential energy for a particular configuration of two electrons does not form a field. It is merely a single number.

However, it is possible to imagine that one of the charges (say e_1) is held fixed at a point, say at \vec{r}_1, and the other charge is successively taken, in any order, to every other point \vec{r}_2 in the infinite domain. A single number is thus generated for each pair (\vec{r}_1, \vec{r}_2). Thus, we can obtain a mapping from the set of positions for the two charges to a set of potential energy numbers. This second set can be regarded as forming a field—in the 3D space.

However, notice that thus defined, the potential energy field is only a device of calculations. It necessarily refers to a second charge—the one which is imagined to be at one point in the domain at a time, with the procedure covering the entire domain. The energy field cannot be regarded as a property of the first charge alone.

Now, if the potential energy field U thus obtained is normalized by dividing it with the electric charge of the second charge, then we get the potential energy for a unit test-charge. Another name for the potential energy obtained when a unit test-charge is used for the second charge is: the electrostatic potential (denoted as V).
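A minimal sketch of this construction, in toy units with q_1 at the origin and the 1/(4 \pi \epsilon_0) factor suppressed: sweep a test charge over a grid of positions \vec{r}_2, record one interaction-energy number per position, and then normalize by the test charge.

```python
import numpy as np

q1, q2 = 1.0, 1.0              # the fixed charge and the unit test charge

# A 2D slice of positions \vec{r}_2 for the test charge:
xs = np.linspace(-2.0, 2.0, 201)
X, Y = np.meshgrid(xs, xs)
r = np.hypot(X, Y)
r[r < 1e-9] = np.nan           # mask the singular point at the fixed charge

U = q1 * q2 / r                # one energy number per test-charge position
V = U / q2                     # normalized: the electrostatic potential

# Each entry records one two-charge configuration; the array as a whole
# is the "field", i.e., a device of calculation.
print(V[100, 150])             # V at (x, y) = (1.0, 0.0) -> approx. 1.0
```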

But still, in classical mechanics, the potential field also is only a device of calculations; it does not exist as a property of the first charge, because the potential energy itself does not exist as a property of that fixed charge alone. What does exist is the physical effect that there are those potential energy numbers for those specific configurations of the fixed charge and the test charge.

This is the reason why the potential energy field, and therefore the electrostatic potential, of a single charge in an otherwise empty space does not exist. Mathematically, it is regarded as zero (though it could have been assigned any other arbitrary constant value).

Potentials arise only out of interaction of two charges. In classical mechanics, the charges are point-particles. Point-particles exist only at definite locations and nowhere else. Therefore, their interaction also must be seen as happening only at the locations where they do exist, and nowhere else.

If that is so, then in what sense can we at all say that the potential energy (or electrostatic potential) field does physically exist?

Consider a single electron in an isolated system, again. Assume that its position remains fixed.

Suppose there were something else in the isolated system—something, some object—every part of which undergoes an electrostatic interaction with the fixed (first) electron. If this second object were to be spread all over the domain, and if every part of it were able to interact with the fixed charge, then we could say that the potential energy field exists objectively—as an attribute of this second object. Ditto, for the electric potential field.

Note four crucially important points now.

2.1. The second object is not the usual classical object.

You cannot regard the second (spread-out) object as a mere classical charge distribution. The reason is this.

If the second object were actually a classical object, then any given part of it would have to electrostatically interact with every other part of itself, too. You couldn't possibly say that a volume element in this second object interacts only with the "external" electron. But if the second object were also to be self-interacting, then what would come to exist would not be the simple inverse-distance potential energy field in reference to that single "external" electron; the space would instead be filled with a very weird field. If the local charge in the second object is admitted the property of motion, then every locally present charge would soon redistribute itself back "to" infinity (if it is negative), or it would all collapse into the origin (if the charge on the second object were positive, because the fixed electron's field is singular at its own position). And if we allow no charge redistributions while the second object still remains classical (i.e. capable of self-interaction), then the field of the second object would have to have singularities everywhere. Very weird. That's why:

If you want to regard the potential field as objectively existing, you have to also posit (i.e. postulate) that the second object itself is not classical in nature.

Classical electrostatics, if it has to regard a potential field as objectively (i.e. physically) existing, must therefore come to postulate a non-classical background object!

2.2. Assuming you do posit such a (non-classical) second object (one which becomes “just” a background object), then what happens when you introduce a second electron into the system?

You would run into another seeming contradiction. You would find that this second electron has no job left to do, as far as interacting with the first (fixed) electron is concerned.

If the potential field exists objectively, then the second electron would have to just passively register the pre-existing potential in its vicinity (because it is the second object which is doing all the electrostatic interactions—all the mutual forcings—with the first electron). So, the second electron would do nothing of consequence with respect to the first electron. It would just become a receptacle for registering the force being exchanged by the background object in its local neighborhood.

But the seeming contradiction here is that as far as the first electron is concerned, it does feel the potential set up by the second electron! It may be seen to do so once again via the mediation of the background object.

Therefore, both electrons have to be simultaneously regarded as being active and passive with respect to each other. They are active as agents that establish their own potential fields, together with an interaction with the background object. But they also become passive in the sense that they are mere point-masses that only feel the potential field in the background object and experience forces (accelerations) accordingly.

The paradox is thus resolved by having each electron set up a field as a result of an interaction with the background object—while having no interaction with the other electron at all.

2.3. Note carefully what agency is assigned to what object.

The potential field has a singularity at the position of that charge which produces it. But the potential field itself is created either by the second charge (by imagining it to be present at various places), or by a non-classical background object (which, in a way, is nothing but an objectification of the potential field-calculation procedure).

Thus, there arises a duality of a kind—a double-agent nature, so to speak. The potential energy is calculated for the second charge (the one that is passive), in the sense that the potential energy is relevant for calculating the motion of the second charge. That's because the self-field cancels out for all motions of the first charge. However, note:

 The potential energy is calculated for the second charge. But the field so calculated has been set up by the first (fixed) charge. Charges do not interact with each other; they interact only with the background object.

2.4. If the charges do not interact with each other, and if they interact only with the background object, then it is worth considering this question:

Can’t the charges be seen as mere conditions—points of singularities—in the background object?

Indeed, this seems to be the most reasonable approach to take. In other words,

All effects due to point charges can be regarded as field conditions within the background object. Thus, paradoxically enough, a non-classical distributed field comes to represent the classical, massive and charged point-particles themselves. (The mass becomes just a parameter of the interactions of singularities within a 3D field.) The charges (like electrons) do not exist as classical massive particles, not even in the classical electrostatics.


3. A partly analogous situation: The stress-strain fields:

If the above situation seems too paradoxical, it might be helpful to think of the stress-strain fields in solids.

Consider a horizontally lying thin plate of steel with two rigid rods welded to it at two different points. Suppose horizontal forces of mutually opposite directions are applied through the rods (either compressive or tensile). As you know, as a consequence, stress-strain fields get set up in the plate.

From an external viewpoint, the two rods are regarded as interacting with each other (exchanging forces with each other) via the medium of the plate. However, in reality, each rod interacts only with the object that is the plate. The direct interaction, thus, is only between a rod and the plate. A rod is forced; it interacts with the plate; the plate sets up a stress-strain field everywhere; the local stress field near the second rod interacts with it; and the second rod registers a force—which balances out the force applied at its end. Conversely, the force applied at the second rod also can be seen as getting transmitted to the first rod via the stress-strain field in the plate material.

There is no contradiction in this description, because we attribute the stress-strain field to the plate itself, and always treat this stress-strain field as if it came into existence due to both the rods acting simultaneously.

In particular, we do not try to isolate a single-rod attribute out of the stress-strain field, the way we try to ascribe a potential to the first charge alone.

Come to think of it, if we have only one rod and we apply a force to it, no stress-strain field would result (neglecting the inertia effects of the steel plate); the plate would simply move in the rigid-body mode. Accordingly, in solid mechanics, we never try to visualize a stress-strain field associated with a single rod alone.

It is a fallacy of our thinking that when it comes to electrostatics, we try to ascribe the potential to the first charge, and altogether neglect the abstract procedure of placing the test charge at various locations, or the postulate of positing a non-classical background object which carries that potential.

In the interest of completeness, it must be noted that the stress-strain fields are tensor fields (they are based on the gradients of vector fields), whereas the electrostatic force-field is a vector field (it is based on the gradient of the scalar potential field). A more relevant analogy for the electrostatic field, therefore, might be the forces exchanged by two point-vortices existing in an ideal fluid.


4. But why bother with it all?

The reason I went into all this discussion is that all these issues become important in the context of quantum mechanics. Even in quantum mechanics, when you have two charges interacting with each other, you run into these same issues, because the Schrodinger equation does have a potential energy term in it. Consider the following situation.

If an electrostatic potential is regarded as being set up by a single charge (as is the case with the proton in the nucleus of the hydrogen atom), but if it is also to be regarded as an actually existing and spread-out entity (as a 3D field, the way Schrodinger's equation assumes it to be), then a question arises: What is the role of the second charge (e.g., that of the electron in a hydrogen atom)? What happens when the second charge (the electron) is represented quantum mechanically? In particular:

What happens to the potential field if it represents the potential energy of the second charge, but the second charge itself is now being represented only via the complex-valued wavefunction?

And worse: What happens when there are two electrons, both interacting with each other via electrostatic repulsion, and both required to be represented quantum mechanically—as in the case of the two electrons in a helium atom?

Can a charge be regarded as having a potential field as well as a wavefunction field? If so, what happens to the point-specific repulsions as are mandated by the Coulomb law? How precisely is the V(\vec{r}_1, \vec{r}_2) term to be interpreted?
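For reference, in the mainstream QM, the potential energy term entering the Schrodinger equation for the helium atom (with an infinitely heavy, fixed nucleus of charge +2e) is:

V(\vec{r}_1, \vec{r}_2) = \dfrac{1}{4\pi\varepsilon_0} \left( -\dfrac{2e^2}{r_1} - \dfrac{2e^2}{r_2} + \dfrac{e^2}{|\vec{r}_1 - \vec{r}_2|} \right)

The last (repulsion) term is a function on the 6D configuration space, not an ordinary field in the 3D physical space, which is precisely where the above interpretational questions begin to bite.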

I was thinking about these things when these issues occurred to me: the issue of the self-field, and the question of the physical vs. merely mathematical existence of the potential fields of two or more quantum-mechanically interacting charges.

Guess I am inching towards my full answers. In fact, guess I have already reached my answers; I only need to have them verified by some physicists.


5. The help I want:

As a part of my answer-finding exercises (to be finished by this month-end), I might be contacting a second set of physicists soon enough. What I want to learn from them is the following:

How exactly do they do computational modeling of the helium atom using the finite difference method (FDM), within the context of the standard (mainstream) quantum mechanics?

That is the question. Once I understand this part, I would be done with the development of my new approach to understanding QM.

I do have some ideas regarding the highlighted question. It's just that I want to have these ideas confirmed by some physicists before (or alongside) implementing the FDM code. So, I might be approaching someone—possibly you!

Please note my question once again. I don’t want to do perturbation theory. I would also like to avoid the variational method.

Yes, I am very comfortable with the finite element method, which is based on the variational calculus. So, given a good (detailed enough) account of the variational method for the He atom, it should be possible to translate it into FEM terms.

However, ideally, what I would like to do is to implement it as an FDM code.
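To indicate the kind of thing I mean, here is a minimal sketch (not a serious calculation): a 1D "toy helium", i.e. two electrons on a line with softened-Coulomb interactions in atomic units, discretized by FDM and solved as a sparse eigenvalue problem. The grid size, the box width, and the softening parameter are all arbitrary assumptions for illustration:

```python
# A 1D "toy helium": two electrons on a line, softened-Coulomb potentials,
# atomic units. FDM discretization; ground state via a sparse eigensolver.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

N, L = 64, 10.0                     # grid points per electron; half-width of box
x = np.linspace(-L, L, N)
h = x[1] - x[0]

# 1D kinetic-energy operator, central differences: -(1/2) d^2/dx^2
T = -0.5 * sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / h**2
I = sp.identity(N)

a = 1.0                             # softening parameter (assumed value)
Vnuc = sp.diags(-2.0 / np.sqrt(x**2 + a))    # nucleus of charge +2, softened

# Electron-electron repulsion: a diagonal operator on the (x1, x2) product grid
X1, X2 = np.meshgrid(x, x, indexing="ij")
Vee = sp.diags((1.0 / np.sqrt((X1 - X2)**2 + a)).ravel())

# Two-particle Hamiltonian assembled via Kronecker products
H = sp.kron(T + Vnuc, I) + sp.kron(I, T + Vnuc) + Vee

E, psi = eigsh(H.tocsc(), k=1, which="SA")   # lowest eigenpair
print("toy ground-state energy (a.u.):", E[0])
```

The real atom would, of course, demand the full 6D configuration-space treatment (or a symmetry-based reduction of it), which is exactly the part on which I want to consult the physicists.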

So there.

Please suggest good references and / or people working on this topic, if you know any. Thanks in advance.


A song I like:

[… Here I thought that there was no song that Salil Chowdhury had composed and I had not listened to. (Well, at least when it comes to his Hindi songs). That’s what I had come to believe, and here trots along this one—and that too, as a part of a collection by someone! … The time-delay between my first listening to this song, and my liking it, was zero. (Or, it was a negative time-delay, if you refer to the instant that the first listening got over). … Also, one of those rare occasions when one is able to say that any linear ordering of the credits could only be random.]

(Hindi) “mada bhari yeh hawaayen”
Music: Salil Chowdhury
Lyrics: Gulzaar
Singer: Lata Mangeshkar