No entanglement is possible in one-particle QM systems. [A context-specific reply touching on superposition and entanglement.]

Update alert: Several addenda have been inserted inline on 21 and 22 May 2021, IST.

Special Note: This post is just a reply to a particular post made by Dr. Roger Schlafly at his blog.

Those of you who’ve come here to check out the general happenings from my side, please see my previous post (below this one); I posted it just a couple of days ago.

1. Context for this post:

This is an unplanned post. In fact, it’s a reply to an update to a post by Dr. Roger Schlafly. His own post can be found here [^]. Earlier, I had made a couple of comments below that post. Then, later on, Schlafly added an update to the same post, in order to clarify how he was thinking.

As I began writing a reply to that update, at his blog, my write-up became way too big. Also, I couldn’t completely avoid LaTeX. So, I decided to post my reply here, with a link to be noted at Schlafly’s blog too. …

… I could’ve, perhaps, shortened my reply and posted it right at Schlafly’s blog. However, I also think that the points being discussed here are of a more general interest too.

Many beginners in QM carry exactly the same or very similar kind of misconceptions concerning superposition and entanglement. Further, R&D investments in the field of Quantum Computers have grown very big, especially in the recent few years. Many of the QC enthusiasts come with a CS background and almost nothing on the QM side. In any case, a lot of them seem to be carrying similar misconceptions. Even pop-sci write-ups about quantum computing show a similar lack of understanding—all too often.

Hence this separate, albeit very context-specific, post. … This post does not directly focus on the difference between superposition and entanglement (which will take a separate post/document). However, it does touch upon many points concerning the two related, but separate, phenomena.

2. What Dr. Schlafly said in his update:

Since Schlafly’s update is fairly stand-alone, let me copy-paste it here for ease of reference. However, it’s best if you also go through the entirety of his post, and also the earlier replies, for the total context.

Anyway, the update Schlafly noted is this:

Update: Reader Ajit suggests that I am confusing entanglement with superposition. Let me explain further. Consider the double-slit experiment with electrons being fired thru a double-slit to a screen, and the screen is divided into ten regions. Shortly before an electron hits the screen, there is an electron-possibility-thing that is about to hit each of the ten regions. Assuming locality, these electron-possibility-things cannot interact with each other. Each one causes an electron-screen detection event to be recorded, or disappears. These electron-possibility-things must be entangled, because each group of ten results in exactly one event, and the other nine disappear. There is a correlation that is hard to explain locally, as seeing what happens to one electron-possibility-thing tells you something about what will happen to the others. You might object that the double-slit phenomenon is observed classically with waves, and we don’t call it entanglement. I say that when a single electron is fired, that electron is entangled with itself. The observed interference pattern is the result.

Let me cite some excerpts from this passage as we go along…

3. My reply:

3.1. I will state how the mainstream QM (MSQM) conceptualizes the scenario Schlafly describes, and leave any comments from the viewpoint of my own new approach, for some other day (after my document is done)…

So, let’s get going with MSQM (I mean the non-relativistic version, unless otherwise noted):



“Consider the double-slit experiment with electrons being fired thru a double-slit to a screen, and the screen is divided into ten regions.”

To simplify our discussion, let’s assume that the interference chamber forms an isolated system. Then we can prescribe the system wavefunction \Psi to be zero outside the chamber.

(MSQM can handle open systems, but doing so only complicates the maths involved; it doesn’t shed any additional light on the issues under the discussion. OTOH, MSQM agrees that there is no negative impact if we make this simplification.)

So, let’s say that we have an isolated system.

Electrons are detected at the screen in spatially and temporally discrete events. In MSQM, detectors are characterized classically, and so, these can be regarded as being spatially finite. (The “particle” aspect.)

Denote the time interval between two consecutive electron detection events as T. In experiments, such durations (between two consecutive detections) appear to be randomly distributed. So, let T be a random variable. The PDF (probability density function) which goes with T can reasonably be modeled by a distribution having a rapidly decaying but long tail. For bosons (e.g. photons), the detection events are independent; the event counts then follow a Poisson distribution, and the inter-arrival times T follow an exponential distribution. For electrons (fermions), the Poisson statistics won’t strictly apply. Yet, when the electron “gas” is so thin as to have just a few electrons in a volume that is \gg the scale of the wavelength of the electrons as in the experiment, the tail of the PDF is very long, indefinitely long.

That’s why, when you detect some electron at the screen, you can never be 100% sure that the next electron hadn’t already been emitted and hadn’t made its way into the interference chamber.

Practically, however, observing that the distribution decays rapidly, people take the average (i.e. expectation) value of the time-gap T, and choose some multiple of it that is reasonably large. In other words, a lot of “screening” is effected (by applying an opposing potential) after the electron gun, before the electrons enter the big interference chamber proper. (Five sigma? I don’t know the exact criterion!)

Thus, assuming a large enough time-gap between consecutive events, we can make a further simplifying assumption: There is only one electron in the chamber at a time.
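As an aside, the inter-arrival statistics just described can be played with numerically. A minimal sketch (the detection rate is an assumed toy value; independent events, i.e. a Poisson process with exponentially distributed gaps, are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

rate = 1.0e3            # assumed mean detection rate, events per second
mean_gap = 1.0 / rate   # expectation value of the time-gap T

# For independent detection events (a Poisson process), the gaps T
# between consecutive events follow an exponential distribution.
gaps = rng.exponential(scale=mean_gap, size=100_000)

# How often is the gap shorter than k multiples of the mean gap?
for k in (1, 3, 5):
    frac = np.mean(gaps < k * mean_gap)
    print(f"P(T < {k} * mean) ~ {frac:.4f}   (exact: {1 - np.exp(-k):.4f})")
```

The exact probability that a gap is shorter than k mean gaps is 1 - e^{-k}; even at k = 5 it is not quite 1, which is precisely the “never 100% sure” point above.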



“Shortly before an electron hits the screen, there is an electron-possibility-thing that is about to hit each of the ten regions.”

In the MSQM, before the lone electron hits the screen, the state of the electron is described by a wavefunction of the form: \Psi(\vec{x},t).

If, statistically, there are two electrons in the chamber at the same time (i.e. a less effective screening), then the assumed system wavefunction would have the form:

\Psi(\vec{x}_1, \vec{x}_2, t),

where \vec{x}_1 and \vec{x}_2 are not the positions of the two electrons, but the two 3D vector coordinates of the configuration space (i.e. six degrees of spatial freedom in all).

Should we assume some such thing?

If you literally apply MSQM to the universe, then in principle, all electrons in the universe are always interacting with each other, no matter how far apart. Further, in the non-relativistic QM, all the interactions are instantaneous. In the relativistic QM the interactions are not instantaneous, but we need not consider relativity here, simply because the chamber is so small in extent. [I am not at all sure about this part though! I don’t have any good intuition about relativity; in fact I don’t know it! I should have just said: Let’s ignore the relativistic considerations, as a first cut!]

So, keeping out relativity, the electron-to-electron interactions are modeled via the Coulomb force. This force decays rapidly with distance, and hence, is considered negligibly small if the distance is of the order of the chamber (i.e., practically speaking, the internal cavity of a TEM (transmission electron microscope)).

Aside: In scenarios where the interaction is not negligibly small, the two-particle state \Psi(\vec{x}_1, \vec{x}_2, t) cannot be expressed as a tensor product of two one-particle states \Psi_1(\vec{x}_1,t) \otimes \Psi_2(\vec{x}_2,t). In other words, the entanglement between the two electrons can no longer be neglected.

Let us now assume that in between emission and absorption there is only one electron in the chamber. 

Now, sometimes, it can so happen that, due to some statistical fluke, there are two (or even three, four…) electrons in the chamber. However, we now have a stronger argument for assuming that there is always only one particle in the chamber when detection occurs. Reason: What we are now saying is that the magnitude of the interaction between the two electrons (the one which was intended to be in the chamber, and the additional one(s) which came by fluke) is so small that these interactions can be taken to be zero. We can make that assumption simply because the electrons are so far apart in the TEM chamber, as compared to their wavelengths as realized in this experiment.

So, at this point, we assume that a wavefunction of the form \Psi(\vec{x},t) applies.

Note, the configuration space now has a single variable vector \vec{x}, and so, there is no problem interpreting it as the coordinate of the ordinary physical space. So, we can say that the wavefunction (which describes a wave, i.e. a distributed entity) is, in this case, defined right over the physical space (the same space as is used in NM / EM). Note: We still aren’t interpreting this \vec{x} as the particle-position of the electron!



“Assuming locality, these electron-possibility-things cannot interact with each other.”

The wavefunction for the lone electron \Psi(\vec{x},t) always acts as a single entity over the entire 3D domain at the same time. (The “wave” aspect.)

The wavefunction has support all over the domain, and the evolution of each of the energy eigenstates comprising it occurs, by Fourier theory, at all points of space simultaneously.

In short: The wavefunction evolution is necessarily “global”. That’s how the theory works—I mean, the classical theory of Fourier’s.

[Addendum made on 2021.05.21: BTW, there can be no interaction between the energy eigenstates comprising the total wavefunction \Psi(\vec{x},t): the Schrodinger evolution is linear, so each eigenstate evolves independently (only its phase rotates), and the eigenfunctions of a given basis always remain orthogonal to each other. Addendum over.]
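For the curious, the orthogonality of energy eigenfunctions and the phase-only evolution of each eigenstate can be checked numerically. A minimal sketch for a particle in a 1D box (the box width, grid, superposition coefficients, and energy units are toy assumptions of this sketch):

```python
import numpy as np

L = 1.0                       # box width (arbitrary units, assumed)
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def phi(n):
    """n-th energy eigenfunction of a particle in a 1D box of width L."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def inner(f, g):
    """Discrete approximation to the L2 inner product <f|g>."""
    return np.sum(np.conj(f) * g) * dx

overlap = inner(phi(1), phi(2))   # orthogonality: should be ~ 0
norm1 = inner(phi(1), phi(1))     # normalization: should be ~ 1

# Schrodinger evolution multiplies each eigenstate, at every x at once,
# by a pure phase exp(-i E_n t / hbar); here E_n is in units of E_1, so
# E_1 = 1 and E_2 = 4 (E_n proportional to n^2).
c = np.array([1.0, 1.0]) / np.sqrt(2.0)   # a two-term superposition
t = 0.7
psi_t = (c[0] * np.exp(-1j * 1.0 * t) * phi(1)
         + c[1] * np.exp(-1j * 4.0 * t) * phi(2))
norm_t = inner(psi_t, psi_t).real          # norm is conserved under evolution

print(f"<1|2> ~ {abs(overlap):.2e}, <1|1> ~ {norm1:.6f}, <psi(t)|psi(t)> ~ {norm_t:.6f}")
```

The components never mix: the cross term between phi(1) and phi(2) integrates to zero at every t, which is the “no interaction between eigenstates” point of the addendum.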


“Each one causes an electron-screen detection event to be recorded, or disappears.”

Great observation! I mean this part: “or disappears”. Most people (maybe 99.9999% or more, including some PhD physicists) would miss it!


Assume that the detector efficiency is 100%.

Assuming a less-than-perfect detector-efficiency doesn’t affect the foundational arguments in any way; it only makes the maths a bit more complicated. Not much, but a shade more complicated. Like, by a multiplying factor of the square-root of something… But why have any complications if we can avoid them?

[Addendum made on 2021.05.21: Clarification: Maybe I mis-interpreted Schlafly’s write-up here. He could easily be imagining that there are ten components in the total wavefunction of a single electron, and that only one component remains while the others disappear. OTOH, I took the “disappearing” part to be the electron itself, and not the components entering into that superposition which is the system wavefunction \Psi(\vec{x},t). … So, please read these passages accordingly. The explanation I wrote has anyway covered decomposing the system wavefunction \Psi(\vec{x},t) into two different eigenbases: (i) that of the total energy (i.e. the Hamiltonian) operator, and (ii) that of the position operator. Addendum over.]



“These electron-possibility-things must be entangled, because each group of ten results in exactly one event, and the other nine disappear.”


Bohr, invoking his Correspondence principle, insisted that the detector be described classically (i.e. using the ideas of classical EM). (BTW, Correspondence is not the same idea as the Complementarity principle. Also, IMO, the abstract idea of the Correspondence principle is good, though not how it is concretely applied, as we shall soon touch upon.)

This is the reason why the MSQM does not describe the ten detectors at the screen quantum mechanically, to begin with.

MSQM also cannot. Even if we were to describe the ten detectors quantum mechanically, problems would remain.

According to MSQM, the quantum-mechanical system would now consist of {1 electron + 10 detectors (with all their constituent quantum mechanical particles)}.

This entire huge system would be described via a single wavefunction. Just keep adding \vec{x}_i, as many of them as needed. Since there no longer is a classical-mechanical detector in the description, the system would forever go on oscillating, with its evolution exactly as dictated by the Schrodinger equation. Which implies that there won’t be this one-time big change of a detection event, in such a description. MSQM cannot accommodate an irreversible change in the state of the {1 electron + 10 detectors} system. By postulation, the evolution is linear. (Show some love to Bohr, Dirac, and von Neumann, will you?)

Following the lead supplied by Bohr (and all Nobel laureates since), the MSQM models our situation as follows:

There is a single quantum-mechanically described electron. It is described by a wavefunction which evolves according to the Schrodinger equation. Then, there are those 10 classical detectors that do not quantum mechanically interact with the electron (the system wavefunction) at all, for any and all instants, until the detection event actually happens.

Then, the detection event happens, and it occurs at one and only one detector. Which detector in particular? “At random”. What is the mechanism to describe it? Blank-out!

But let’s continue with the official view (i.e. MSQM)…

The detection event has two parts: (1) The randomly “chosen” detector irreversibly changes its always-classical state, from “quiescent” to “detected”. At the same time, (2) the quantum-mechanical wavefunction “collapses” into that particular eigenfunction of the position operator (a Dirac delta) which is situated at the detector that happened to undergo the detection event.

What is a collapse? It refers to a single eigenfunction remaining out of the superposition of all the eigenfunctions that was there before the detection. (The wave was spread out, i.e. it comprised an infinity of Dirac deltas at all positions; after the collapse, only a single Dirac delta remains.)

What happened to the other numerous (here, an infinity of) eigenfunctions that were not selected? Blank out.

What is the mechanism for the collapse? Blank out. (No, really!)

How much time does it take for the detection event to occur? Blank out. (No, really!)

To my limited knowledge, MSQM is actually silent about the time lapse. Even Bohr himself, I think, skirted around the issue in his more official pronouncements. However, he also never gave up the idea of those sudden “quantum jumps”—which idea Schrodinger hated. 

So, MSQM is silent on the time taken for collapse. But people (especially the PhD physicists) easily rush in, and will very confidently tell you: “infinitesimally small”. Strictly speaking, that’s their own interpretation. (Check out the QM Postulates document [^], or the original sources.)
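To make the above description concrete, here is a toy numerical sketch of the collapse postulate, with ten detector regions and assumed amplitudes. Note what it does and does not contain: the Born rule appears only as a sampling rule; no mechanism and no time lapse are modeled, which is exactly the “blank out” point:

```python
import numpy as np

rng = np.random.default_rng(42)

# Ten detector regions; psi holds toy (assumed) complex amplitudes of
# the wavefunction coarse-grained over each region.
psi = np.array([1, 2, 3, 4, 5, 5, 4, 3, 2, 1], dtype=complex)
psi /= np.linalg.norm(psi)

p = np.abs(psi) ** 2        # Born rule: per-region detection probabilities

# One detection event: exactly one region fires, "at random"; MSQM
# supplies no mechanism, only this sampling rule.
hit = rng.choice(10, p=p)

# "Collapse": the post-measurement state is the single position
# eigenvector (a discrete stand-in for the Dirac delta at that detector).
psi_after = np.zeros(10, dtype=complex)
psi_after[hit] = 1.0

print(f"detector {hit} fired; probabilities were {np.round(p, 3)}")
```

There is one event, not ten: the other nine regions never held “events”, only amplitudes.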

One more point.

Carefully note: Nowhere in the above description were there ten events existing prior to a detection. That’s why the question of nine of them then disappearing simply cannot arise. MSQM doesn’t describe the scenario the way Schlafly has presented it (and the way many people believe it does), at all.

IMO, MSQM does that with good reason. You can’t equate a potential event with an actual event.

Perhaps, one possible source of the confusion is this: People often seem to think that probabilities superpose. But it’s actually only the complex amplitudes (the wavefunctions) that superpose.

[Addendum made on 2021.05.21: Clarification: Even if we assume that by ten things we mean ten components of the wavefunction and not ten events, the rest of the write-up adequately indicates the decomposition of \Psi(\vec{x},t) into the eigenbasis of the Hamiltonian (total energy) operator as well as that of the position operator. Addendum over.]



“There is a correlation that is hard to explain locally, as seeing what happens to one electron-possibility-thing tells you something about what will happen to the others.”

There are no ten events in the first place; there is only one. So, there is no correlation to speak of.

[Addendum made on 2021.05.21:  Clarification. Just in case the ten things refer to the ten components (not a complete eigenbasis, but components in their own right, nevertheless) of the wavefunction and not ten events, there still wouldn’t be correlations to speak of between them, because all of them would collapse to a single Dirac’s delta at the time of the single detection event. Addendum over.]

That’s why, we can’t even begin talking of any numerical characteristics (or relative “strengths”) of the so-supposed correlations. Not in single-particle experiments.

In one-particle situations, we can’t even address issues like: whether the correlations are of the same strength as what QM predicts for entangled particles; or weaker than what QM predicts (which is what happens with predictions made using NM- / EM-inspired “classical” models of the kind Bell indicated, i.e. with the NM / EM ontologies); or stronger than what QM predicts. (Experiments say that the correlations are not stronger either!)
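For contrast, once (at least) two particles are present, correlation strengths do become quantifiable. A minimal sketch of the standard CHSH combination, using the textbook singlet-state correlation E(a, b) = -cos(a - b) and the conventional analyzer angles (both are standard choices, not anything specific to this post):

```python
import numpy as np

# MSQM prediction for the spin-singlet correlation between two
# analyzer angles a and b (a standard result): E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# The usual CHSH angle choices: 0 and 90 degrees on one side,
# 45 and 135 degrees on the other.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(f"CHSH S = {S:.4f}; classical bound 2, quantum bound {2 * np.sqrt(2):.4f}")
```

S comes out as 2\sqrt{2} \approx 2.828: Bell-type classical models cap S at 2, QM predicts up to 2\sqrt{2}, and experiments see neither less nor more than the QM value.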

Correlations become possible once you have at least two electrons at the same time in a system.

Even if, in MSQM, the two electrons have a single wavefunction governing their evolution, the configuration space then has two 3D vectors as independent variables. That’s how the theory changes (in going from one particle to two particles).

As to experiments: There is always only one detection event per particle. Also, all detection events must occur—i.e. all particles must get detected—before the presence or absence of entanglement can be demonstrated.

One final point. Since all particles in the universe are always interconnected, they are always interacting. So, the “absence of entanglement” is only a theoretical abstraction. The world is not like that. When we say that entanglement is absent, all that we say is that the strength of the correlation is so weak that it can be neglected.

[Addendum made on 2021.05.21:

BTW, even in the classical theories like the Newtonian gravity, and even the Maxwell-Lorentz EM, all particles in the universe are always interconnected. In Newtonian gravity, the interactions are instantaneous. In EM (and even in GR for that matter), the interactions are time-delayed, but the amount of delay for any two particles a finite distance apart is always finite, not infinite.

So, the idea of the universe as being fully interconnected is not special to QM.

One classical analog for the un-entangled particles is this: Kepler’s law says that each planet moves around the Sun in a strictly elliptical orbit. If we model this empirical law with the Newtonian mechanics, we have to assume that the interactions in between the planets are to be neglected (because they are relatively so small). We also neglect the interactions of the planets with everything else in the universe like the distant stars and galaxies. In short, each planet independently interacts with the Sun and only with the Sun.

So, even in classical mechanics, for the first cut in our models, for simplification, we do neglect some interactions even if they are present in reality. Such models are abstractions, not reality. Ditto, for the un-entangled states. They are abstractions, not reality.

Addendum over.]

4. But what precisely is the difference?

This section (# 4.) is actually a hurriedly written addendum. It was not there in my comment/reply. I added it only while writing this post.

I want to make only this point:

All non-trivial entangled states are superposition states. But superposition does not necessarily mean entanglement. Entanglement is a special kind of a superposition.

Here is a brief indication of how it goes, in reference to a concrete example.

Consider the archetypical example of an entangled state involving the spins of two electrons (e.g., as noted in this paper [^], which was mentioned in Prof. (and Nobel laureate) Frank Wilczek’s Quanta Magazine article [^]). Suppose the spin-related system state is given as:

|\Psi_{\text{two electrons}}\rangle = \tfrac{1}{\sqrt{2}} \left(\ |\uparrow \downarrow\rangle \ +\  |\downarrow \uparrow\rangle \ \right)               [Eq. 1].

The state of the system, noted on the left hand-side of the above equation, is an entangled state. It consists of a linear superposition of the following two states, each of which, taken by itself, is un-entangled:

|\uparrow \downarrow\rangle = |\uparrow\rangle \otimes |\downarrow\rangle,           [Eq. 2.1]


| \downarrow \uparrow \rangle = |\downarrow\rangle \otimes |\uparrow\rangle           [Eq. 2.2].

The preceding two states are un-entangled because, as the right hand-sides of the above two equations directly show, each can be expressed (in fact, each is defined) as a tensor product of two one-particle states, namely |\uparrow\rangle and |\downarrow\rangle. Thus, the states entering into the superposition are themselves factorizable into one-particle states; so, they themselves are un-entangled. But once we superpose them, the resulting state (given on the left hand-side) turns out to be an entangled state.

So, the entangled state in this example is a superposition state.

Let’s now consider a superposition state that is not also an entangled state. Simple!

|\Psi_{\text{one particle}}\rangle = \tfrac{1}{\sqrt{2}} \left(\ |\uparrow\rangle + |\downarrow\rangle\ \right)            [Eq. 3].

This state is in a superposition of two states; it is a spin-related analog of the single-particle double-slit interference experiment.

So, what is the essential difference between entangled states and “just”-superposition states?

If the “total” state of a two- (or more-) particle system can be expressed as a single tensor product of two (or more) one-particle states (as in Eqs. 2.1 and 2.2), i.e., if the total state is “separable” / “factorizable” into one-particle states, then it is an independent, i.e. un-entangled, state.

All other two-particle states (like that in Eq. 1) are entangled states.

Finally, all one-particle states (including the superpositions states as in Eq. 3) are un-entangled states.

One last thing:

The difference between the respective superpositions involved in the two-particle states vs. one-particle states is this:

The orthonormal eigenbasis vectors for a two-particle system are themselves not one-particle states.

The eigenvectors of any two-particle system (including one of theoretically non-interacting particles) are, always, themselves two-particle states.

[Addendum made on 2021.05.22:

But why bother with this difference? I mean, the one between superpositions of two-particle states vs. superpositions of one-particle states?

Recall the postulates. The state of the system prior to measurement can always be expressed as a superposition of the eigenstates of any suitable operator. Then, in any act of measurement of an observable, the only states that can at all be observed are the eigenstates of the operator associated with that particular observable. Further, in any single measurement, one and only one of these eigenstates can ever be observed. That’s what the postulates say (and everyone else tells you anyway).

Since every eigenfunction for a two-particle system is a two-particle state, what a theoretically single measurement picks out is not a one-particle state like |\uparrow\rangle or |\downarrow\rangle, but a two-particle state like |\uparrow\downarrow\rangle or |\downarrow\uparrow\rangle. Only one of them, but it’s a two-particle state.

So, the relevant point (which no one ever tells you) is this:

A theoretically (i.e. postulates-wise) single measurement, on a two-particle system, itself refers to two distinct observations made in the actual experiment, one for each of the two particles. For an N-particle system, N one-particle detections are involved, for what the theory calls a single measurement!
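The point about one theoretical measurement vs. two actual detection events can be illustrated with the state of Eq. 1. A toy sketch (the amplitudes come from Eq. 1; the sampling is just the Born rule, nothing more):

```python
import numpy as np

rng = np.random.default_rng(7)

# The two eigenstates entering Eq. 1, in the spin-z basis, with their
# amplitudes: |Psi> = ( |up down> + |down up> ) / sqrt(2)
outcomes = [("up", "down"), ("down", "up")]
amps = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Each theoretically single measurement picks ONE two-particle eigenstate...
picks = rng.choice(2, size=10_000, p=np.abs(amps) ** 2)

# ...but in the lab each pick shows up as TWO detection events, one per
# particle; for this state the two spins are perfectly anti-correlated.
for i in picks[:3]:
    d1, d2 = outcomes[i]
    print(f"detector 1 sees {d1}, detector 2 sees {d2}")

anti = all(outcomes[i][0] != outcomes[i][1] for i in picks)
print("always anti-correlated:", anti)
```

One eigenstate per measurement, but two detector clicks per measurement: that is the point the text makes.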

In entanglement studies, detectors are deliberately kept as far apart as the experimenters can manage. Often, the detectors are on the two opposite sides of the initial (source) point. But this need not always be the case. The theory does not demand it. The two detectors could be spatially anywhere (wherever the spatial part of the total wavefunction is defined). The detectors could be right next to each other. The theory is completely silent about how far apart the detectors should be.

In short:

All that the theory says is:

Even for an N-particle system, the state which is picked out in a single measurement itself is one of the eigenstates (of the operator in question).

But you are supposed to also know that:

Every eigenstate for such a system necessarily is an N-particle state.

Hence the implication is:

For a single observation during an actual experiment, you still must record N separate detection events, anyway!


There are N particles and N events. But the theory is still going to conceptualize it as a single measurement of a single eigenfunction.

Everyone knows it, but no one tells you: certainly not in textbooks / lecture notes / tutorials / YouTube videos / blogs / Twitter / FaceBook / Instagram / whatever. [Yes, please feel challenged. Please do bring to my notice any source which tells it like it is on this issue.]

Addendum over.]

For a more general discussion of the mathematical criterion for un-entangled (i.e. factorizable) vs. entangled states (a discussion which also is simple enough, i.e. not involving the most general case that can arise in QM), check out the section “Pure states” in the Wikipedia article on “Quantum entanglement”, here [^].

And, another, last-last, thing!:

Yes, the states comprising the eigenbasis of any two non-interacting particles always consist of tensor-product states (i.e. they are separable, i.e. non-entangled).

However, when it comes to interacting particles, especially systems of a large number of interacting particles, and speaking of their “total” wavefunctions (including both the spatial Schrodinger wavefunctions defined over an infinite spatial domain, and their spinor functions), I am not sure whether all the eigenvectors for all the observables are always representable as tensor-product states. … I mean to say, I am not clear whether the Schmidt decomposition always applies or not. My studies fall short. The status of my knowledge is such that I am unable to take a definitive position here (for uncountably infinite-dimensional Hilbert spaces of a very large number of particles). Maybe there is some result that proves something one way or the other, but I am not sure.

That’s why, let me now stop acting smart, and instead turn back to my studies!


5. To conclude this post…

….Phew!… So, that was (supposed to be) my “comment”, i.e. “reply”. … Actually, the first draft of my “reply” was “only” about 1,500 words long. By the time of publication, this post has become more than 3,300 words long…

If there is any further correspondence, I plan to insert it too, right here, by updating this post.

… I will also update this post if (and when!) I spot any typos or even conceptual / mathematical errors in my reply. [Always possible!] Also, if you spot any error(s), thanks in advance for letting me know.

OK, take care and bye for now…

[No songs section this time around. Will return with the next post.]

— 2021.05.20 21:00 IST: Originally published
— 2021.05.21 17:17 IST: Added some clarifications inline. Streamlined a bit. Corrected some typos.
— 2021.05.22 13:15 and then also 22:00 IST: Added one more inline explanation, in section 4. Also added a confession of ignorance about relativity, and a missing normalization constant. …Now I am going to leave this post in whatever shape it is in; I am done with it…

“Simulating quantum ‘time travel’ disproves butterfly effect in quantum realm”—not!

A Special note for the Potential Employers from the Data Science field:

Recently, in April 2020, I achieved a World Rank # 5 on the MNIST problem. The initial announcement can be found here [^], and a further status update, here [^].

All my data science-related posts can always be found here [^].

This post is based on a series of tweets I made today. The original Twitter thread is here [^]. I have made quite a few changes in posting the same thought here. Further, I am also noting some addenda here (which are not there in the original thread).

Anyway, here we go!

1. The butterfly effect and QM: a new paper that (somehow) caught my fancy:

1.1. Why this news item interested me in the first place:

Nonlinearity in the wavefunction \Psi, as proposed by me, forms the crucial ingredient in my new approach to solving the QM measurement problem. So, when I spotted this news item [^] today, it engaged my attention immediately.

The opening line of the news item says:

Using a quantum computer to simulate time travel, researchers have demonstrated that, in the quantum realm, there is no “butterfly effect.”

[Emphasis in bold added by me.]

The press release by LANL itself is much better worded (PDF) [^]. In the meanwhile, I also tried to go through the arXiv version of the paper, here [^].

I don’t think I understand the paper in its entirety. (QC and all is not a topic of my main interests.) However, I do think that the following analogy applies:

1.2. A (way simpler) analogy to understand the situation described in the paper:

The whole thing is to do with your passport-size photo, called “P”.

Alice begins with “P”, which is given in the PNG/BMP format. [Should the usage be the Alice? I do tend to think so! Anyway…]

She first applies a 2D FFT to it, and saves the result, called “FFT-P”, in a folder called “QC” on her PC. Aside: FFT’ed photos look like dots that show a “+”-like visual structure. Note, Alice saves both the real and the imaginary parts of the FFT-ed image. This assumption is important.

She then applies a further sequence of linear, lossless, image transformations to “FFT-P”. Let’s call this ordered set of transformations “T”. Note, “T” is applied to “FFT-P”, not to “P” itself.

As a result of applying the “T” transformations, she obtains an image which she saves to a file called “SCR-FFT-P”. This image totally looks like random dots to the rest of us, because the “T” transformations are such that they scramble whatever image is fed to them. Hence the prefix “SCR”, short for “scrambled”, in the file-name.

But Alice knows! She can always apply the same sequence of transformations, but in the reverse direction. Let’s call this reverse transformation “T-inverse”.

Each step of “T” is reversible; that’s what a “linear, lossless” transformation means! (In contrast, operations like “hi-pass” or “low-pass” filtering, or operators like the gradient or the Laplacian, discard information and hence are not lossless, even though they too are linear.)

Since “T” is reversible, starting with “SCR-FFT-P”, Alice can always apply “T-inverse”, and get back to the original 2D FFT representation, i.e., to “FFT-P”.

All this is the normal processing—whether in the forward direction or in the reverse direction.

1.3. Enter [the [?]] Bob:

As is customary in the literature on the QC/entanglement, Bob enters the scene now! Alice and Bob work together.

Bob hates you. That’s because he believes in every claim made about QC, but you don’t. That’s why, he experiences an inner irrepressible desire to do some damage to your photograph, during its processing.

So, to, err…, “express” himself, Bob comes early to office, gains access to Alice’s “QC” folder, and completely unknown to her, he modifies a single pixel of the “FFT-P” image stored there, and even saves it. Remember, this is the FFT-ed version of your original photo “P”.

Let’s call the tampered version: “B-FFT-P”. On the hard-disk, it still carries the name “FFT-P”. But its contents are modified, and so, we need another name to denote this change of the state of the image.

1.4. What happens during Alice’s further processing?

Alice comes to the office a bit later, and soon begins her planned work for the day, which consists of applying the “T” transformation to the “FFT-P” image. But since the image has been tampered with by Bob, what she ends up manipulating is actually the “B-FFT-P” image. As a result of applying the (reversible) scrambling operations of “T”, she obtains a new image, and saves it to the hard-disk as “SCR-B-FFT-P”.

But something is odd, she feels. So, just to be sure, she decides to check that everything is OK, before going further.

So, she applies the “T-inverse” operation to the “SCR-B-FFT-P” file, and obtains the “B-FFT-P” image back, which she saves to a file of name “Recovered FFT-P”. Observe, contents-wise, it is exactly the same as “B-FFT-P”, though Alice still believes it is identical to “FFT-P”.

Now, on a spur of the moment, she decides also to apply the reverse-FFT operation to “Recovered FFT-P”, i.e., to the Bob-tampered version of the FFT-ed version of your original photo. She saves the fully reversed image as “Recovered P”.

Just to be sure, she then runs some command that does a binary bit-wise comparison between “Recovered P” and the original “P”.

We know that they are not the same. Alice discovers this fact, but only at this point of time.

1.5. The question that the paper looks into:

If I understand it right, what the paper now ponders over is this question:

How big or small is the difference between the two images: “Recovered P” and the original “P”?

The expected answer, of course, is:

Very little.

The reason to keep such an expectation is this: the FFT distributes the original information of any locality over the entire domain in the FFT-ed image. Hence, during reverse processing, each single pixel in the FFT-ed image maps back to all the pixels in the original image. [Think “holographic” whatever.] Therefore, tampering with just one pixel of the FFT-ed representation does not have much of an effect on the recovered original image.

Hence, Alice is going to recover most of the look and feel of your utterly lovely, Official, passport-size visage! That is what is going to happen even if she in reality starts only from the scrambled tampered state “SCR-B-FFT-P”, and not the scrambled un-tampered state “SCR-FFT-P”. You would still be very much recognizable.

In fact, due to the way FFT works, the difference between the original photo and the recovered photo goes on reducing as the sheer pixel size of the original image goes on increasing. That’s because, regardless of the image size, Bob always tampers only one pixel at a time. So, the percentage tampering goes on reducing with an increase in the resolution of the original image.
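This expectation is easy to check numerically. Here is a toy sketch in Python with NumPy; the image, the choice of the tampered coefficient, and the size of the tampering are all my own hypothetical choices, not taken from the paper:

```python
import numpy as np

def max_tamper_error(N, delta=100.0, seed=0):
    """Tamper with one FFT coefficient of an N x N image; return the
    worst per-pixel error in the recovered image."""
    rng = np.random.default_rng(seed)
    P = rng.random((N, N))            # the original photo "P"
    F = np.fft.fft2(P)                # "FFT-P"
    F[3, 5] += delta                  # Bob's single-pixel tampering
    recovered = np.fft.ifft2(F).real  # Alice's "Recovered P"
    return np.abs(recovered - P).max()

# The inverse FFT spreads the spike of size delta over all N*N pixels,
# so the worst per-pixel error is delta / N^2.
e32, e64 = max_tamper_error(32), max_tamper_error(64)
assert abs(e32 - 100.0 / 32**2) < 1e-9
assert e64 < e32   # higher resolution => proportionately smaller damage
```

For the same single-pixel tampering, the 64×64 image thus suffers only a quarter of the worst-case per-pixel error of the 32×32 one.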

1.6. The conclusion that the paper draws from the above:

Let’s collect the indisputable facts together:

  • There is very little difference between the recovered image, and the original image.
  • Whatever the difference, it goes on reducing as the size of the original image increases.

The paper now says, IMO quite properly, that Bob’s tampering of the single pixel is analogous to his making a QM measurement, and thereby causing a permanent change to the concerned (“central”) qubit.

But then, the paper draws the following conclusion:

The Butterfly Effect does not apply to QM as such; it applies only to classical mechanics.

Actually, the paper is a bit more technical than that. In fact, I didn’t go through it fully because even if I were to, I wouldn’t understand all of it. QC is not a topic of my primary research interests, and I have never studied it systematically.

But still, yes, I do think that the above is the sort of logic on which the paper relies, to draw the conclusion which it draws.

2. My take on the paper:

2.1. It’s an over-statement:

Based on what I know, and on my first take above, I do think that:

The paper makes an over-statement. The press release then highlights this “over” part. Finally, the news item fully blows up the same, “over” part.

Why do I think so? Here is my analysis:

If the Butterfly Effect produced due to nonlinearity is fully confined to making an irreversible (or at least exponentially divergent) change to just a single pixel in the FFT representation of the original image (or, alternatively, even in the analogy in which what Bob tampers with is the original photograph, while each subsequent processing step involves only an FFT-ed version), then any and all of the further steps of linear and reversible transformations wouldn’t magnify the said tampering.

Why not?

Because all the further steps are prescribed to be linear (and in fact even reversible), that’s why!

In other words, what the paper says boils down to a redundancy (or, a re-statement of the same facts):

A linear and reversible transformation is emphatically not a non-linear and exponentially divergent one (as in the butterfly effect).

That’s what the whole point of the paper seems to be!
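The non-magnification is easy to verify numerically too. In the sketch below (Python with NumPy; my own stand-in, of course), a random orthogonal matrix plays the role of a linear, reversible “T”: by linearity, the effect of the tampering e passes through as T(e), and by orthogonality its norm is exactly preserved, never amplified:

```python
import numpy as np

rng = np.random.default_rng(0)
# A random orthogonal matrix: a linear *and* reversible transformation.
Q, _ = np.linalg.qr(rng.standard_normal((100, 100)))

x = rng.standard_normal(100)          # the un-tampered data
e = 1e-8 * rng.standard_normal(100)   # Bob's tiny tampering

# Linearity: T(x + e) - T(x) = T(e). Orthogonality: ||T(e)|| = ||e||.
diff = Q @ (x + e) - Q @ x
assert np.allclose(np.linalg.norm(diff), np.linalg.norm(e))
```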

2.2. The actual processing described in the paper does not at all involve the butterfly effect:

Realize, the only place the butterfly effect can at all occur during the entire processing is as a mechanism by which Bob might tamper with that single pixel.

Now, of course, the paper doesn’t say so. The paper only says that there is a tampering of a qubit via a measurement effected on it (with all other qubits, constituting “the bath” being left alone).

But, yes, I have proposed this idea that the measurement process itself progresses, within the detector, via the butterfly effect. I identified it as such in the Outline document I posted at iMechanica, here (PDF) [^].

Of course, I stand ready to be corrected, if I am wrong anywhere in the fundamentals of my analysis.

2.3. I didn’t say anything about the “time-travel” part:

That’s right. The reason is: there is no real time-travel here anyway!

Hmmm… Explaining why would unnecessarily consume my time. … Forget it! Just remember: There is no time-travel here, not even a time-reversal, for that matter. In the first half of the processing by Alice (and may be with tampering by Bob), each step occurs some finite time after the completion of previous step. In the second half of the processing, again, each step of the inverse-processing occurs some finite time after the completion of the previous step. What reverses is the sequence of operators, but not time. Time always flows steadily in the forward direction.

Enough said.

2.4. Does my critique reflect on the paper taken as a whole?

I did manage to avoid Betteridge’s law [^] thus far, but can’t any more!

The answer seems to be: “no”, or at least: “I didn’t mean that“.

The thing is this: This is a paper from the field of Quantum Computer/Quantum Information Science—which is not at all a field of my knowledge (let alone expertise). The paper reports on a simulation the authors conducted. I am unable to tell how valuable this particular simulation is, in the overall framework of QInfoScience.

However, as a computational modelling and simulation engineer myself, I can tell this much: Sometimes, even a simple-looking (even stupid-looking!) simulation actually is implemented merely in order to probe some aspect that no one else has thought of. The simulation is not an end in itself, but merely a step in furthering research. The idea is to explore a niche and to find / highlight some gap in knowledge. In topics that are quite complicated, isolation of one aspect at a time, afforded by a simulation, can be of great help.

(I can cite an example of a very simple-looking simulation, actually a stupid-looking one, from my own PhD-time research: I had a conference paper on simulating a potential field using random walks and comparing its results with a self-implemented FEM solver. The rather colourful Gravatar icon which you see (the one which appears in the browser bar when you view my posts here) was actually one of the results I had reported in this preliminary exploration of what eventually became my PhD research.)
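Just to give a flavour of what such a simple-looking simulation can be like, here is a generic toy along broadly similar lines (only an illustration of the technique, not my actual PhD-time code): the value of a potential field at an interior point can be estimated by averaging the boundary values at which unbiased random walks, started from that point, exit the domain.

```python
import random

def walk_estimate(n_walks=2000, M=10, seed=0):
    """Estimate the potential at the centre of the unit square by random
    walks on an (M+1) x (M+1) grid. The boundary is held at u = x, whose
    harmonic extension into the interior is simply u(x, y) = x."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_walks):
        i = j = M // 2                # start at the centre
        while 0 < i < M and 0 < j < M:
            di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += i / M                # boundary value at the exit point
    return total / n_walks

est = walk_estimate()
assert abs(est - 0.5) < 0.05          # exact answer: u(0.5, 0.5) = 0.5
```

The trick works because the expected exit value of an unbiased walk equals the harmonic (i.e., Laplace-equation) solution at the starting point.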

Coming back to this paper, it’s not just possible but quite likely that the authors are reporting something that has implications for much more “heavy-duty” topics, say topics like quantum error correction: where and when it is necessary, the minimum efficiency it must possess, in what kind of architecture/processing, and whatnot. I can’t tell, but this is the nature of simulations. Sometimes, they look simple, but their implications can be quite profound. I am in no position to judge the merits of this paper, from this viewpoint.

At the same time, I also think that probing this idea of measuring just one qubit and tracing its effects on the nearby “bath” of qubits can have good merits. (I vaguely recall the discussions, some time ago, of “pointer states” and all that.)

Yet, of course, I do have a critical comment to make regarding this paper. But my comment is entirely limited to what the paper says regarding the foundational aspects of QM and the relevance of chaos / nonlinear science in QM. With the kind of nonlinearity in \Psi which I have proposed [^], I can clearly see that you can’t say that, just because the mainstream QM theory is linear, everything about quantum phenomena has to be linear. No, this is an unwarranted assumption. It was from this viewpoint that I thought that the implication concerning the foundational aspects was not acceptable. That’s why (and how) I wrote the tweets-series and this post.

All in all, my critique is limited to saying that a nonlinearity in \Psi, and hence the butterfly effect, is not only possible in QM, but is crucial to correctly addressing the measurement problem. I don’t have any other critique to offer regarding any of the other aspects of the reported work.

Hope this clarifies.

And, to repeat: Of course, I stand ready to be corrected, if I have gone wrong anywhere in the fundamentals of my analysis regarding the foundational issues too.

3. An update on my own research:

3.1. My recent tweets:

Recently (on 23 July 2020), I also tweeted a series [^] regarding the on-going progress in my new approach. Let me copy-paste the tweets (not because the wording is great, but because I have to finish writing this post, somehow!). I have deleted the tweet-continuation numbers, but otherwise kept the content as is:

Regarding my new approach to QM. I think I still have a lot of work to do. Roughly, these steps:

1. Satisfy myself that in simplest 1D toy systems (PIB, QHO), x-axis motion of the particle (charge-singularity) occurs as it should, i.e., such that operators for momentum, position, & energy have *some* direct physical meaning.

2. Using these ideas, model the H atom in a box with PBCs (i.e., an infinite lattice of repeating finite volumes/cells), and show that the energy of electron obtained in new approach is identical to that in the std. QM (which uses reduced mass, nucleus-relative x, no explicit particle positions, only electron’s energy).

3. Possibly, some other work/model.

4. Repeat 2. for modelling two *interacting* electrons (or the He atom) in a box with PBCs.

Turned out that I got stuck for the past one month+ right at step no. 1!

However, looks like I might have finally succeeded in putting things together right—with one e- in a box, at least.

In the process, found some errors in my post from the ontologies series, part 10.

Will post the corrections at my blog a bit later.

Tentatively, have decided to try and wrap up everything within 4–6 weeks.

So, either I report success with my new approach by, say 1st week of September or so, or I give up (i.e. stop work on QM for at least few months).

But yes, there does seem to be something to it—to the modelling ideas I tried recently. Worth pursuing for a few weeks at least.

… At least I get energy and probability right, which in a way means also position. But I am not fully happy with momentum, even though I get the right numerical values for it, and so, the thinking required is rather on the conceptual–physical side … There *are* “small” issues like these.

But yes,

(1) I’m happy to have spotted definite errors in my own previous documentation—Ontologies series, part 10, as also in the Outline doc. (PDF here [^]) ,
(2) I’m happy to have made a definite progress in the modelling with the new approach.

Bottomline: I don’t have to give up my new approach. Not right away. Have to work on it for another month at least.

3.2. Some comments on the tweets:

I need to highlight the fact that I have spotted some definite errors, both in the Ontologies series (part 10), and in the Outline document.

In particular:

3.2.1. In the Ontologies series, part 10, I had put forth an argument that it’s a complex-valued energy that gets conserved. I am not so sure of that any more, and am going over the whole presentation once again. (The part covering the PIB modelling in that post is more or less OK.)

3.2.2. In the Outline document, I had said:

“The measurement process is nondestructive of the state of the System. It produces catastrophic changes only in the Instrument”

I now think that this description is partly wrong. Yes, a measurement has to produce catastrophic changes in the Instrument. But now, the view I am developing amounts to saying that the state of the System also undergoes a permanent change during measurement, though such a change is only partial.

3.3. Status of my research:

I am working through the necessary revision of all such points. I am also working through simulations and all. I hope to have another document and a small set of simulations (as spelt out in the immediately preceding Twitter thread) soon. The document would still be preliminary, but it is going to be more detailed.

In particular, I would be covering the topic of the differences between the tensor product states (e.g. two non-interacting electrons) in a box vs. the entangled states of two electrons. Alternatively, may be, treating the proton as a quantum object (having its own wavefunction), and thus, simulating only the Hydrogen atom, but with my new approach. Realize, when you treat the proton quantum mechanically, allowing its singularity to move, it becomes a two-particle system.

So, a two-particle system is the minimum required for validation of my new approach.

For convenience of simulation, i.e. especially to counter the difficulties (introduced by boundaries due to discretization of space via FDM mesh), I am going to put the interacting pair of particles inside a box but with periodic boundary conditions (PBCs for short). Ummm… This word, “PBCs”, has been used in two different senses: 1. To denote a single, representative, finite-sized unit cell from an infinitely repeated lattice of such cells, and 2. Ditto, but with further physical imagination that this constitutes a “particle on a ring” so that computations of the orbital angular momentum too must enter the simulation. Here, I am going to stick to PBCs in the sense 1., and not in the sense 2.
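To make sense 1. concrete, here is a small sketch (my own toy, not my actual simulation code) of a 1D second-difference FDM operator with PBCs, in Python with NumPy. With the wrap-around couplings in place, the matrix is circulant, so its eigenvalues match the discrete plane-wave formula exactly:

```python
import numpy as np

N, h = 32, 0.1   # grid points in the unit cell, and the mesh spacing

# Second-difference operator with periodic boundary conditions.
D = (np.diag(np.full(N, -2.0))
     + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
D[0, -1] = D[-1, 0] = 1.0        # the periodic "wrap-around" couplings
D /= h**2

# Discrete plane waves exp(2*pi*i*k*n/N) are exact eigenvectors, with
# eigenvalues -(2 - 2*cos(2*pi*k/N)) / h^2.
k = np.arange(N)
analytic = -(2.0 - 2.0 * np.cos(2.0 * np.pi * k / N)) / h**2
assert np.allclose(np.sort(np.linalg.eigvalsh(D)), np.sort(analytic))
```

This is precisely the convenience: the cell has no walls, and the discrete operator’s spectrum is known in closed form, which makes validation straightforward.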

I hope to have something within a few weeks. Maybe 3–4 weeks, perhaps as early as 2–3 weeks. The trouble is, as I implement some simulation, some new conceptual aspects crop up, and by the time I finish ironing out the wrinkles in the conceptual framework, the current implementation turns out to be not very suitable to accommodate further changes, and so, I have to implement much of the whole thing afresh again. Re-implementation is not a problem, at least not a very taxing one (though it can get tiring). The real problems are the conceptual ones.

For instance, it’s only recently that I’ve realized that there is actually a parallel in my approach to Feynman’s idea of an electron “smelling” its neighbourhood around. In Feynman’s version, the electron not only “smells” but also “runs everywhere” at the same time, with the associated “amplitudes” cancelling out / reinforcing at various places. So, he had a picture of the electron that is not a localized particle and yet smells only a local neighbourhood at each point in the domain. He could not remove this contradiction.

I thought that I had fully removed such contradictions, but only to realize, at this (relatively “late”) stage that while “my electron” is a point-particle (in the sense that the singularity in the potential energy field is localized at a point), it still retains the sense of the “smell”. The difference being, now it can smell the entire universe (action at a distance, i.e. IAD). I knew that so long as I use the Fourier theory the IAD would be there. But it was part-surprise and part-delight for me to notice that even “my” electron must have such a “nose”.

Another thing I learnt was that even though I am addressing only the spinless electron, my framework, it looks like, very easily and naturally incorporates the spin too, at least so long as I remain in 1D. I had just realized it when, soon after (within days), came Dr. Woit’s post “What is “spin”?” [^]. I don’t understand his post fully, but now that I see this way of putting things, that’s another detour for me.

All in all, working out conceptual aspects is taking time. Further, simply due to the rich inter-connections of concepts, I am afraid that even if I publish a document, it’s not going to be “complete”, in the sense that I wouldn’t be able to insert in it everything that I have understood by now. So, I am aiming to simply put out something new, rather than something comprehensive. (I am not even thinking of having anything “well polished” for months, even a year or so!)

Alright, so there. Maybe I won’t be blogging for a couple of weeks. But hopefully, I will have something to put out within a month’s time or so…

In the meanwhile, take care, and bye for now…

A song I like:

(Hindi) जादूगर तेरे नैना, दिल जायेगा बच के कहाँ (“jaadugar tere nainaa, dil jaayegaa…”)
Singers: Kishore Kumar, Lata Mangeshkar
Music: Laxmikant-Pyarelal
Lyrics: Rajinder Krishen

[Another song from my high-school days that somehow got thrown up during the recent lockdowns. … When there’s a lockdown in Pune, the streets (and the traffic) look (and “hear”) more like the small towns of my childhood. May be that’s why!]

[Some very minor editing may be effected, but I really don’t have much time—rather, any enthusiasm—for it! So, drop a line if you find something confusing… Take care and bye for now…]


Do you really need a QC in order to have a really unpredictable stream of bits?

0. Preliminaries:

This post has reference to Roger Schlafly’s recent post [^] in which he refers to Prof. Scott Aaronson’s post touching on the issue of the randomness generated by a QC vis-a-vis that obtained using the usual classical hardware [^], in particular, to Aaronson’s remark:

“the whole point of my scheme is to prove to a faraway skeptic—one who doesn’t trust your hardware—that the bits you generated are really random.”

I do think (based on my new approach to QM [(PDF) ^]) that building a scalable QC is an impossible task.

I wonder if they (the QC enthusiasts) haven’t already begun realizing the hopelessness of their endeavours, and thus haven’t slowly begun preparing for a graceful exit, say via the QC-as-a-RNG route.

While Aaronson’s remarks also saliently involve the element of the “faraway” skeptic, I will mostly ignore that consideration here in this post. I mean to say, initially, I will ignore the scenario in which you have to transmit random bits over a network, and still have to assure the skeptic that what he was getting at the receiving end was something coming “straight from the oven”—something which was not tampered with, in any way, during the transit. The skeptic would have to be specially assured in this scenario, because a network is inherently susceptible to a third-party attack wherein the attacker seeks to exploit the infrastructure of the random keys distribution to his advantage, via injection of systematic bits (i.e. bits of his choice) that only appear random to the intended receiver. A system that quantum-mechanically entangles the two devices at the two ends of the distribution channel, does logically seem to have a very definite advantage over a combination of ordinary RNGs and classical hardware for the network. However, I will not address this part here—not for the most part, and not initially, anyway.

Instead, for most of this post, I will focus on just one basic question:

Can any one be justified in thinking that an RNG that operates at the QM-level might have even a slightest possible advantage, at least logically speaking, over another RNG that operates at the CM-level? Note, the QM-level RNG need not always be a general purpose and scalable QC; it can be any simple or special-purpose device that exploits, and at its core operates at, the specifically QM-level.

Even if I am a 100% skeptic of the scalable QC, I also think that the answer on this latter count is: yes, perhaps you could argue that way. But then, I think, your argument would still be pointless.

Let me explain, following my approach, why I say so.

2. RNGs as based on nonlinearities. Nonlinearities in QM vs. those in CM:

2.1. Context: QM involves IAD:

QM does involve either IAD (instantaneous action at a distance), or very, very large (decidedly super-relativistic) speeds for propagation of local changes over all distant regions of space.

From the experimental evidence we have, it seems that there have to be very, very high speeds of propagation, for even smallest changes that can take place in the \Psi and V fields. The Schrodinger equation assumes infinitely large speeds for them. Such obviously cannot be the case—it is best to take the infinite speeds as just an abstraction (as a mathematical approximation) to the reality of very, very high actual speeds. However, the experimental evidence also indicates that even if there has to be some or the other upper bound to the speeds v, with v \gg c, the speeds still have to be so high as to seemingly approach infinity, if the Schrodinger formalism is to be employed. And, of course, as you know it, Schrodinger’s formalism is pretty well understood, validated, and appreciated [^]. (For more on the speed limits and IAD in general, see the addendum at the end of this post.)

I don’t know the relativity theory or the relativistic QM. But I guess that since the electric fields of massive QM particles are non-uniform (they are in fact singular), their interactions with \Psi must be such that the system has to suddenly snap out of some one configuration and in the same process snap into one of the many alternative possible configurations. Since there is a huge (astronomically large) number of particles in the universe, the alternative configurations would be {astronomically large}^{very large}—after all, the particles’ positions and motions are continuous. Thus, we couldn’t hope to calculate the propagation speeds for the changes in the local features of a configuration in terms of all those irreversible snap-out and snap-in events taken individually. We must take them in an ensemble sense. Further, the electric charges are massive, identical, and produce singular and continuous fields. Overall, it is the ensemble-level effects of these individual quantum mechanical snap-out and snap-in events whose end-result would be: the speed-of-light limitation of the special relativity (SR). After all, SR holds on the gross scale; it is a theory from classical electrodynamics. The electric and magnetic fields of classical EM can be seen as being produced by the quantum \Psi field (including the spinor function) of large ensembles of particles in the limit that the number of their configurations approaches infinity, and the classical EM waves i.e. light are nothing but the second-order effects in the classical EM fields.

I don’t know. I was just loud-thinking. But it’s certainly possible to have IAD for the changes in \Psi and V, and thus to have instantaneous energy transfers via photons across two distant atoms in a QM-level description, and still end up with a finite limit for the speed of light (c) for large collections of atoms.

OK. Enough of setting up the context.

2.2. The domain of dependence for the nonlinearity in QM vs. that in CM:

If QM is not linear, i.e., if there is a nonlinearity in the \Psi field (as I have proposed), then to evaluate the merits of the QM-level and CM-level RNGs, we have to compare the two nonlinearities: those in the QM vs. those in the CM.

The classical RNGs are always based on the nonlinearities in CM. For example:

  • the nonlinearities in the atmospheric electricity (the “static”) [^], or
  • the fluid-dynamical nonlinearities (as shown in the lottery-draw machines [^], or the lava lamps [^]), or
  • some or the other nonlinear electronic circuits (available for less than $10 in hardware stores)
  • etc.

All of them are based on two factors: (i) a large number of components (in the core system generating the random signal, not necessarily in the part that probes its state), and (ii) nonlinear interactions among all such components.

The number of variables in the QM description is anyway always larger: a single classical atom is seen as composed from tens, even hundreds of quantum mechanical charges. Further, due to the IAD present in the QM theory, the domain of dependence (DoD) [^] in QM remains, at all times, literally the entire universe—all charges are included in it, and the entire \Psi field too.

On the other hand, the DoD in the CM description remains limited to only that finite region which is contained in the relevant past light-cone. Even when a classical system is nonlinear, and thus gets crazy very rapidly with even small increases in the number of degrees of freedom (DOFs), its DoD still remains finite and rather very small at all times. In contrast, the DoD of QM is the whole universe—all physical objects in it.
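The contrast between the two kinds of dynamics is easy to show with the simplest possible toy models (my own illustration, of course): iterate a chaotic map and a linear map side by side, starting each from two initial conditions a mere 1e-10 apart.

```python
# Chaotic logistic map x -> 4x(1 - x) vs. a linear map x -> 0.9x.
x, y = 0.3, 0.3 + 1e-10      # two initial conditions, a hair apart
u, v = 0.3, 0.3 + 1e-10
chaos_gap = linear_gap = 0.0
for _ in range(100):
    x, y = 4.0 * x * (1.0 - x), 4.0 * y * (1.0 - y)
    u, v = 0.9 * u, 0.9 * v
    chaos_gap = max(chaos_gap, abs(x - y))
    linear_gap = max(linear_gap, abs(u - v))

assert chaos_gap > 0.1       # the butterfly effect: the gap blows up to O(1)
assert linear_gap <= 1e-10   # a linear map never magnifies the difference
```

It is this exponential divergence of nearby trajectories, not mere complicatedness, that makes a nonlinear core a good engine for an RNG.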

2.3. Implication for the RNGs:

Based on the above-mentioned argument (which, in my limited reading and knowledge, Aaronson has never presented, and neither has anyone else, basically because they all continue to believe in von Neumann’s characterization of QM as a linear theory), an RNG operating at the QM level does seem to have, “logically” speaking, an upper hand over an RNG operating at the CM level.

Then why do I still say that arguing for the superiority of a QM-level RNG is still pointless?

3. The MVLSN principle, and its epistemological basis:

If you apply a proper epistemology (and I have in my mind here the one by Ayn Rand), then the supposed “logical” difference between the two descriptions becomes completely superfluous. That’s because the quantities whose differences are being examined, themselves begin to lose any epistemological standing.

The reason for that, in turn, is what I call the MVLSN principle: the law of the Meaninglessness of the Very Large or very Small Numbers (or scales).

What the MVLSN principle says is that if your argument crucially depends on the use of very large (or very small) quantities and relationships between them, i.e., if the fulcrum of your argument rests on some great extrapolations alone, then it begins to lose all cognitive merit. “Very large” and “very small” are contextual terms here, to be used judiciously.

Roughly speaking, if this principle is applied to our current situation, what it says is that when in your thought you cross a certain limit of DOFs and hence a certain limit of complexity (which anyway is sufficiently large as to be much, much beyond the limit of any and every available and even conceivable means of predictability), then any differences in the relative complexities (here, of the QM-level RNGs vs. the CM-level RNGs) ought to be regarded as having no bearing at all on knowledge, and therefore, as having no relevance in any practical issue.

Both QM-level and CM-level RNGs would be far too complex for you to devise any algorithm or a machine that might be able to predict the sequence of the bits coming out of either. Really. The complexity levels already grow so huge, even with just the classical systems, that it’s pointless trying to predict the bits. Or, to try and compare the complexity of the classical RNGs with the quantum RNGs.

A clarification: I am not saying that there won’t be any systematic errors or patterns in the otherwise random bits that a CM-based RNG produces. Sure enough, due statistical testing and filtering is absolutely necessary. For instance, what the radio-stations or cell-phone towers transmit are, from the viewpoint of an RNG based on radio noise, systematic disturbances that do affect its randomness. See [^] for further details. I am certainly not denying this part.

All that I am saying is that the sheer number of DOF’s involved itself is so huge that the very randomness of the bits produced even by a classical RNG is beyond every reasonable doubt.
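By “due statistical testing” I mean, at its very simplest, something like the monobit frequency check (in the spirit of the NIST SP 800-22 suite). A sketch, with a seeded software PRNG standing in merely as a stand-in for the physical bit-stream:

```python
import random

random.seed(123)                 # a seeded stdlib PRNG as the bit source
n = 100_000
ones = sum(random.getrandbits(1) for _ in range(n))

# Under the null hypothesis of fair bits, the count of 1s has mean n/2
# and standard deviation 0.5 * sqrt(n); compute the z-score.
z = abs(ones - n / 2.0) / (0.5 * n ** 0.5)
assert z < 5.0                   # a fair stream exceeds this far less
                                 # than once in a million runs
```

A radio-noise RNG sitting next to a cell-phone tower would, without filtering, fail exactly this kind of check.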

BTW, in this context, do see my previous couple of posts dealing with probability, indeterminism, randomness, and the all-important system vs. the law distinction here [^], and here [^].

4. To conclude my main argument here…:

In short, even “purely” classical RNGs can be way, way too complex for anyone to be concerned in any way about their predictability. They are unpredictable. You don’t have to go chase the QM level just in order to ensure unpredictability.

Just take one of those WinTV lottery draw machines [^], start the air flow, get your prediction algorithm running on your computer (whether classical or quantum), and try to predict the next ball that would come out once the switch is pressed. Let me be generous. Assume that the switch gets pressed at exactly predictable intervals.

Go ahead, try it.

5. The Height of the Tallest Possible Man (HTPM):

If you still insist on the supposedly “logical” superiority of the QM-level RNGs, make sure to understand the MVLSN principle well.

The issue here is somewhat like asking this question:

What could possibly be the upper limit to the height of man, taken as a species? Not any other species (like the legendary “yeti”), but human beings, specifically. How tall can any man at all get? Where do you draw the line?

People could perhaps go on arguing, with at least some fig-leaf of epistemological legitimacy, over numbers like 12 feet vs. 14 feet as the true limit. (The world record mentioned in the Guinness Book is slightly under 9 feet [^]. The ceiling in a typical room is about 10 feet high.) Why, they could even perhaps go like: “Ummmm… may be 12 feet is more likely a limit than 24 feet? whaddaya say?”

Being very generous of spirit, I might still describe this as a borderline case of madness. The reason is, in the act of undertaking even just a probabilistic comparison like that, the speaker has already agreed to assign non-zero probabilities to all the numbers belonging to that range. Realize, no one would invoke the ideas of likelihood or probability theory if he thought that the probability for an event, however calculated, was always going to be zero. He would exclude certain kinds of ranges from his analysis to begin with—even for a stochastic analysis. … So, madness it is, even if, in my most generous mood, I might regard it as a borderline madness.

But if you assume that a living being has all the other characteristic of only a human being (including being naturally born to human parents), and if you still say that in between the two statements: (A) a man could perhaps grow to be 100 feet tall, and (B) a man could perhaps grow to be 200 feet tall, it is the statement (A) which is relatively and logically more reasonable, then what the principle (MVLSN) says is this: “you basically have lost all your epistemological bearing.”

That’s nothing but a complex (actually, philosophic) way of saying that you have gone mad, full-stop.

The law of the meaninglessness of the very large or very small numbers does have a certain basis in epistemology. It goes something like this:

Abstractions are abstractions from the actually perceived concretes. Hence, even while making just conceptual projections, the range over which a given abstraction (or concept) can remain relevant is determined by the actual ranges in the direct experience from which it was derived (and the nature, scope and purpose of that particular abstraction, the method of reaching it, and its use in applications including projections). Abstractions cannot be used in disregard of the ranges of the measurements over which they were formed.

I think that after having seen the sort of crazy things that even the simplest nonlinear systems with the fewest variables and parameters can do (for instance, which weather agency in the world can make predictions (to the accuracy demanded by newspapers) beyond 5 days? who can predict which way the first vortex is going to be shed even in a single-cylinder experiment?), it’s very easy to conclude that the CM-level vs. QM-level RNG distinction is comparable to the argument about the greater reasonableness of a 100 feet tall man vs. that of a 200 feet tall man. It’s meaningless. And, madness.
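To make the point about simple nonlinear systems concrete, here is a minimal sketch of my own (the logistic map is my illustrative choice, not something from the discussion above): a one-variable, one-parameter nonlinear map, in which two initial conditions differing by one part in ten billion end up on completely different trajectories within a few dozen iterations.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x),
# a nonlinear system with a single variable and a single parameter.
# Illustrative sketch only; the specific map and numbers are my choices.

def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 for the given number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)  # perturb the initial condition minutely

# The two orbits track each other at first, then diverge to order 1.
for n in (0, 5, 25, 50):
    print(n, abs(a[n] - b[n]))
```

The point of the sketch: even with perfect knowledge of the deterministic law, any finite uncertainty in the initial condition gets amplified until prediction fails, which is why a classical nonlinear device can serve perfectly well as a practical source of unpredictable bits.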

6. Aaronson’s further points:

To be fair, much of the above write-up was not meant for Aaronson; he does readily grant the CM-level RNGs validity. What he says, immediately after the quote mentioned at the beginning of this post, is that if you don’t have the requirement of distributing bits over a network,

…then generating random bits is obviously trivial with existing technology.

However, since Aaronson believes that QM is a linear theory, he does not even consider making a comparison of the nonlinearities involved in QM and CM.

I thought that it was important to point out that even the standard (i.e., Schrodinger’s equation-based) QM is nonlinear, and further, that even if this fact leads to some glaring differences between the two technologies (based on the IAD considerations), such differences still do not lead to any advantages whatsoever for the QM-level RNG, as far as the task of generating random bits is concerned.

As to the task of transmitting them over a network, Aaronson then notes:

If you do have the requirement, on the other hand, then you’ll have to do something interesting—and as far as I know, as long as it’s rooted in physics, it will either involve Bell inequality violation or quantum computation.

Sure, it will have to involve QM. But then, why does it have to be only a QC? Why not have just special-purpose devices that are quantum mechanically entangled over wires / EM-waves?
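As an aside, for readers who want to see what “Bell inequality violation” means quantitatively, here is a short sketch of my own (not part of the original exchange). It uses the standard QM prediction for the spin-singlet correlation, E(a, b) = -cos(a - b), and the standard CHSH angle choices:

```python
import math

# CHSH quantity: S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2).
# Local hidden-variable theories require |S| <= 2 (the Bell/CHSH bound);
# QM predicts |S| = 2*sqrt(2) for the singlet state at optimal angles.

def E(a, b):
    """Singlet-state correlation for analyzer angles a and b (radians)."""
    return -math.cos(a - b)

# The standard optimal analyzer settings:
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2), approximately 2.828, which exceeds 2
```

Nothing in this calculation requires a scalable quantum computer; it only requires a pair of entangled particles and two analyzer stations, which is exactly why a special-purpose entangled device would suffice for the task.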

And finally, let me come to yet another issue: why would you at all have to have that requirement, viz., of having to transmit the keys over a network, and not using any other means?

Why does something as messy as a network have to get involved for a task that is as critical and delicate as distribution of some super-specially important keys? If 99.9999% of your key-distribution requirements can be met using “trivial” (read: classical) technologies, and if you can also generate random keys using equipment that costs less than $100, then why do you have to spend billions of dollars on just distributing them to distant locations of your own offices / installations—especially if the need for changing the keys is going to be only on an infrequent basis? … And if bribing or murdering a guy who physically carries a sealed box containing a thumb-drive having secret keys is possible, then what makes the guys manning the entangled stations suddenly go all morally upright and also immortal?
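To underline how “trivial” the generation side is with existing classical technology, here is a minimal sketch (my own illustration, not from the post; it uses the operating system’s entropy pool via Python’s standard `secrets` module rather than any particular $100 device):

```python
import secrets

# Generate a 256-bit secret key using ordinary, classical, commodity
# hardware: the OS-level entropy pool behind the `secrets` module.
# Illustrative sketch only; a dedicated hardware RNG would work the same way.

key = secrets.token_bytes(32)  # 32 bytes = 256 bits
print(key.hex())               # hex-encoded: 64 hex characters
```

A key generated this way can then be carried on a thumb-drive in a sealed box, which is precisely the “trivial” distribution channel being contrasted with the billion-dollar quantum one.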

From what I have read, Aaronson does consider such questions even if he seems to do so rather infrequently. The QC enthusiasts, OTOH, never do.

As I said, this QC as an RNG thing does show some marks of trying to figure out a respectable exit-way out of the scalable QC euphoria—now that they have already managed to wrest millions and billions in their research funding.

My two cents.

Addendum on speed limits and IAD:

Speed limits are needed out of the principle that infinity is a mathematical concept and cannot metaphysically exist. However, the nature of the ontology involved in QM compels us to rethink many issues right from the beginning. In particular, we need to carefully distinguish between all the following situations:

  1. The transportation of a massive classical object (a distinguishable, i.e. finite-sized, bounded piece of physical matter) from one place to another, in literally no time.
  2. The transmission of the momentum or changes in it (like forces or changes in them) being carried by one object, to a distant object not in direct physical contact, in literally no time.
  3. Two mutually compensating changes in the local values of some physical property (like momentum or energy) suffered at two distant points by the same object, a circumstance which may be viewed from some higher-level or abstract perspective as transmission of the property in question over space but in no time. In reality, it’s just one process of change affecting only one object, but it occurs in a special way: in mutually compensating manner at two different places at the same time.

Only the first really qualifies to be called spooky. The second is curious but not necessarily spooky—not if you begin to regard two planets as just two regions of the same background object, or alternatively, as two clearly different objects which are being pulled in various ways at the same time and in mutually compensating ways via some invisible strings or fields that shorten or extend appropriately. The third one is not spooky at all—the object that effects the necessary compensations is not even a third object (like a field). Both the interacting “objects” and the “intervening medium” are nothing but different parts of one and the same object.

What happens in QM is the third possibility. I have been describing such changes as occurring with an IAD (instantaneous action at a distance), but now I am not too sure if such a usage is really correct or not. I now think that it is not. The term IAD should be reserved only for the second category—it’s an action that gets transported there. As to the first category, a new term should be coined: ITD (instantaneous transportation to distance). As to the third category, the new term could be IMCAD (instantaneous and mutually compensating actions at a distance). However, all this is an afterthought. So, in this post, I have ended up using the term IAD even for the third category.

Some day I will think more deeply about it and straighten out the terminology, maybe invent some new terms to describe all the three situations with adequate directness, and then choose the best… Until then, please excuse me and interpret what I am saying in reference to context. Also, feel free to suggest good alternative terms. Also, let me know if there are any further distinctions to be made, i.e., if the above classification into three categories is not adequate or refined enough. Thanks in advance.

A song I like:

[A wonderful “koLi-geet,” i.e., a fisherman’s song. Written by a poet who hailed not from the coastal “konkaN” region but from the interior “desh.” But it sounds so authentically coastal… Listening to it today instantly transported me back to my high-school days.]

(Marathi) “suTalaa vaadaLi vaaraa…”
Singing, Music and Lyrics: Shaahir Amar Sheikh


History: Originally published on 2019.07.04 22:53 IST. Extended and streamlined considerably on 2019.07.05 11:04 IST. The songs section added: 2019.07.05 17:13 IST. Further streamlined, and also further added a new section (no. 6.) on 2019.07.05 22:37 IST. … Am giving up on this post now. It grew from about 650 words (in a draft for a comment at Schlafly’s blog) to 3080 words as of now. Time to move on.

Still made further additions and streamlining for a total of ~3500 words, on 2019.07.06 16:24 IST.