# Still loitering around…

As noted in the last post, I’ve been browsing a lot. However, I find that the signal-to-noise ratio is, in a way, too low. There are too few things worth writing home about. Of course, OTOH, some of these things are so deep that they can keep one occupied for a long time.

Anyway, let me give many (almost all?) of the interesting links I found since my last post. These are being noted in no particular order. In most cases, the sub-title says it all, and so, I need not add comments. However, for a couple of videos related to QM, I do add a significant amount of comments. … BTW, too many hats to do the tipping to. So let me skip that part and directly give you the URLs…

“A ‘digital alchemist’ unravels the mysteries of complexity”:

“Computational physicist Sharon Glotzer is uncovering the rules by which complex collective phenomena emerge from simple building blocks.” [^]

“Up and down the ladder of abstraction. A systematic approach to interactive visualization.” [^]

The tweet that pointed to this URL had this preface: “One particular talent stands out among the world-class programmers I’ve known—namely, an ability to move effortlessly between different levels of abstraction.”—Donald Knuth.

My own thinking processes are such that I use visualization a lot. Nay, I must. That’s the reason I appreciated this link. Incidentally, it also is the reason why I did not play a lot with the interactions here! (I put it in the TBD / Some other day / Etc. category.)

“The 2021 AI index: major growth despite the pandemic.”

“This year’s report shows a maturing industry, significant private investment, and rising competition between China and the U.S.” [^]

“Science relies on constructive criticism. Here’s how to keep it useful and respectful.” [^]

The working researcher, esp. the one who blogs / interacts a lot, probably already knows almost all of this stuff. But for students, it might be useful to have such tips collected in one place.

“How to criticize with kindness: Philosopher Daniel Dennett on the four steps to arguing intelligently.” [^].

Ummm… Why four, Dan? Why not, say, twelve? … Also, what if one honestly thinks that retards aren’t ever going to get any part of it?… Oh well, let me turn to the next link though…

“Susan Sontag on censorship and the three steps to refuting any argument” [^]

I just asked about four steps, and now comes Sontag. She comes down to just three steps, and also generalizes the applicability of the advice to any argument… But yes, she mentions a good point about censorship. Nice.

“The needless complexity of modern calculus: How 18th century mathematicians complicated calculus to avoid the criticisms of a bishop.” [^]

Well, the article does have a point, but if you ask me, there’s no alternative to plain hard work. No alternative to taking a good text-book or two (like Thomas and Finney, as also Resnick and Halliday (yes, for maths)), paper and pen / pencil, and working your way through. No alternative to that… But if you do that once for some idea, then every idea which depends on it, does become so simple—for your entire life. A hint or a quick reference is all you need, then. [Hints for the specific topic of this piece: the Taylor series, and truncation thereof.] But yes, the article is worth a fast read (if you haven’t read / used calculus in a while). … Also, Twitterati who mentioned this article also recommended the wonderful book from the next link (which I had forgotten)…

The above link is to the Wiki article, which in turn gives the link to the PDF of the book. Check out the preface of the book, first thing.
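Since the hint above points at the Taylor series and truncation thereof, here is a quick sketch (my own illustration in Python, not something from the article) of how the truncated series for $e^x$ converges as you keep more terms:

```python
import math

def taylor_exp(x, n_terms):
    """Truncated Taylor series of e^x about 0: sum of x^k / k! for k < n_terms."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

# Keeping more terms shrinks the truncation error:
x = 1.0
for n in (2, 4, 8, 16):
    print(n, taylor_exp(x, n), abs(taylor_exp(x, n) - math.exp(x)))
```

Working through exactly this kind of thing once, with pen and paper, is the “plain hard work” meant above.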

“The first paper published in the first overlay journal (JTCAM) in Solid Mechanics” [^]

It’s too late for me (I have left mechanics as a full-time field quite a while ago) but I do welcome this development. … A few years ago, Prof. Timothy Gowers had begun an overlay journal in maths, and then, there also was an overlay journal for QM, and I had welcomed both these developments back then; see my blog post here [^].

“The only two equations that you should know: Part 1” [^].

Dr. Joglekar makes many good points, but I am not sure if my choice for the two equations is going to be the same.

[In fact, I don’t even like the restriction that there should be just two equations. …And, what’s happening? Four steps. Then, three steps. Now, two equations… How long before we summarily turn negative, any idea?]

But yes, a counter-balance like the one in this article is absolutely necessary. The author touches on $E = mc^2$ and Newton’s laws, but I will go ahead and add topics like the following too: Big Bang, Standard Model, (and, Quantum Computers, String Theory, Multiverses, …).

“Turing award goes to creators of computer programming building blocks” [^] “Jeffrey Ullman and Alfred Aho developed many of the fundamental concepts that researchers use when they build new software.”

Somehow, there wasn’t as much excitement this year as the Turing award usually generates.

Personally, though, I could see why the committee might have decided to recognize Aho and Ullman’s work. I had once built a “yacc”-like tool that would generate the tables for a table-driven parser, given the abstract grammar specification in the extended Backus–Naur form (EBNF). I did it as a matter of hobby, working in the evenings. The only resource I used was the “dragon book”, which was written by Profs. Aho, Sethi, and Ullman. It was a challenging but neat book. (I am not sure why they left Sethi out. However, my knowledge of the history of development of this area is minimal. So, take it as an idle wondering…)
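That old tool of mine is long gone, so here is just a toy sketch (my own, in Python; the grammar and names are made up for illustration) of the table-driven idea from the dragon book: a generic driver consults an LL(1) parse table, so changing the grammar means regenerating only the table, never the driver:

```python
# Toy LL(1) table-driven parser for the grammar:  S -> 'a' S 'b' | epsilon
# (i.e. the language a^n b^n). Only the table encodes the grammar;
# the driver below is completely generic.

TABLE = {
    ('S', 'a'): ['a', 'S', 'b'],  # on lookahead 'a', expand S -> a S b
    ('S', 'b'): [],               # on lookahead 'b', expand S -> epsilon
    ('S', '$'): [],               # at end of input, expand S -> epsilon
}

def parse(tokens):
    """Return True iff `tokens` is in the language a^n b^n."""
    stack = ['$', 'S']
    tokens = list(tokens) + ['$']   # '$' marks end of input
    i = 0
    while stack:
        top = stack.pop()
        tok = tokens[i]
        if top == tok:               # terminal on stack matches input
            i += 1
        elif (top, tok) in TABLE:    # nonterminal: consult the table
            stack.extend(reversed(TABLE[(top, tok)]))
        else:
            return False             # no table entry: syntax error
    return i == len(tokens)

print(parse("aabb"), parse("aab"))  # True False
```

A real generator (like the one described in the dragon book) computes FIRST and FOLLOW sets from the EBNF to fill in `TABLE` automatically; that part is what my old tool did and what is omitted here.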

Congratulations to Profs. Aho and Ullman.

“Stop calling everything AI, machine-learning pioneer says” [^] “Michael I. Jordan explains why today’s artificial-intelligence systems aren’t actually intelligent”

Well, “every one” knows that, but the fact is, it still needs to be said (and even explained!)

“How a gene for fair skin spread across India” [^] “A study of skin color in the Indian subcontinent shows the complex movements of populations there.”

No, the interesting thing about this article, IMO, was not that it highlighted Indians’ fascination / obsession with fairness—the article actually doesn’t even passingly mention this part. The real interesting thing, to me, was: the direct visual depiction, as it were, of Indian Indologists’ obsession with just one geographical region of India, viz., the Saraswati / Ghaggar / Mohan Ja Daro / Dwaarkaa / Pakistan / Etc. And, also the European obsession with the same region! … I mean check out how big India actually is, you know…

H/W for those interested: Consult good Sanskrit dictionaries and figure out the difference between निल (“nila”) and नील (“neela”). Hint: One of the translations for one of these two words is “black” in the sense “dark”, but not “blue”, and vice-versa for the other. You only have to determine which one stands for what meaning.

Want some more H/W? OK… Find out the most ancient painting of कृष्ण (“kRSNa”) or even राम (“raama”) that is still extant. What is the colour of the skin as shown in the painting? Why? Has the painting been dated to the times before the Europeans (Portuguese, Dutch, French, Brits, …) arrived in India (say in the second millennium AD)?

“Six lessons from the biotech startup world” [^]

Dr. Joglekar again… Here, I think every one (whether connected with a start-up or not) should go through the first point: “It’s about the problem, not about the technology”.

Too many engineers commit this mistake, and I guess this point can be amplified further—the tools vs. the problem. …It’s but one variant of the “looking under the lamp” fallacy, but it’s an important one. (Let me confess: I tend to repeat the same error too, though with experience, one does also learn to catch the drift in time.)

“The principle of least action—why it works.” [^].

Neat article.

I haven’t read the related book [“The lazy universe: an introduction to the principle of least action”], but looking at the portions available at Google [^], even though I might have objections to raise (or at least comments to make) on the positions taken by the author in the book, I am definitely going to add it to the list of books I recommend [^].

Let me mention the position from which I will be raising my objections (if any), in the briefest (and completely on-the-fly) words:

The principle of the least action (PLA) is a principle that brings out what is common to calculations in a mind-bogglingly large variety of theoretical contexts in physics. These are the contexts which involve either the concept of energy, or some suitable mathematical “generalizations” of the same concept.

As such, PLA can be regarded as a principle for a possible organization of our knowledge from a certain theoretical viewpoint.

However, PLA itself has no definite ontological content; whatever ontological content you might associate with PLA would go on changing as per the theoretical context in which it is used. Consequently, PLA cannot be seen as capturing an actual physical characteristic existing in the world out there; it is not a “thing” or “property” that is shared in common by the objects, facts or phenomena in the physical world.

Let me give you an example. The differential equation for heat conduction has exactly the same form as that for diffusion of chemical species. Both are solved using exactly the same technique, viz., the Fourier theory. Both involve a physical flux which is related to the gradient vector of some physically existing scalar quantity. However, this does not mean that both phenomena are produced by the same physical characteristic or property of the physical objects. The fact that both are parabolic PDEs can be used to organize our knowledge of the physical world, but such organization proceeds by making appeal to what is common to methods of calculations, and not in reference to some ontological or physical facts that are in common to both.
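To make the “same form, same technique” point concrete, here is a minimal sketch (my own illustration, with made-up numbers): one explicit finite-difference routine serves both equations, and only the physical reading of the coefficient $D$ changes—thermal diffusivity in one case, chemical diffusivity in the other:

```python
import math

def step_parabolic(u, D, dx, dt):
    """One forward-Euler step of u_t = D * u_xx (end values held fixed).
    Whether D is a thermal or a chemical diffusivity, the code is
    identical -- exactly the commonality in the methods of calculation
    noted above, with no shared ontology implied."""
    r = D * dt / dx**2            # must satisfy r <= 1/2 for stability
    return ([u[0]] +
            [u[i] + r * (u[i+1] - 2*u[i] + u[i-1])
             for i in range(1, len(u) - 1)] +
            [u[-1]])

# Same routine, two physical readings of D (numbers are made up):
n, dx, dt = 51, 0.02, 0.1
u = [math.exp(-100 * (i*dx - 0.5)**2) for i in range(n)]  # hot spot / blob
for _ in range(200):
    u = step_parabolic(u, D=0.001, dx=dx, dt=dt)          # r = 0.25, stable
```

The initial peak spreads and decays in exactly the same way whether `u` is read as temperature or as concentration; the distinction lives entirely in the physics, not in the mathematics.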

Further, it must also be noted, PLA does not apply to all of physics, but only to the more fundamental theories in it. In particular, try applying it to situations where the governing differential equation is not of the second-order, but is of the first- or the third-order [^]. Also, think about the applicability of PLA for dissipative / path-dependent processes.
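To spell out the second-order point (a standard derivation, added here only for context): stationarity of the action $S = \int L(q, \dot{q}, t)\,dt$ yields the Euler–Lagrange equation

$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0,$$

which, for the usual $L = \tfrac{1}{2} m \dot{q}^2 - V(q)$, is second-order: $m\ddot{q} = -V'(q)$. A first-order dissipative law such as $\dot{q} = -kq$ has no comparable variational ancestry (not without tricks like doubling the variables), which is one way to see the limitation just mentioned.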

… I don’t know whether the author (Dr. Jennifer Coopersmith) covers points like these in the book or not… But even if she doesn’t (and despite any differences I anticipate as of now, and indeed might come to keep also after reading the book), I am sure, the book is going to be extraordinarily enlightening in respect of an array of topics. … Strongly recommended.

Muon $g-2$.

I will give some of the links I found useful. (Not listed in any particular order.)

• Dennis Overbye covers it for the NYT [^],
• Natalie Wolchover for the Quanta Mag [^],
• Dr. Luboš Motl for his blog [^],
• Dr. Peter Woit for his blog [^],
• Dr. Adam Falkowski (“Jester”) for his blog [^],
• Dr. Ethan Siegel for the Forbes [^], and,
• Dr. Sabine Hossenfelder for Sci-Am [^].

If you don’t want to go through all these blog-posts, and only are looking for the right outlook to adopt, then check out the concluding parts of Hossenfelder’s and Siegel’s pieces (which conveniently happen to be the last two in the above list).

As to the discussions: The Best Comment Prize is hereby being awarded, after splitting it equally into two halves, to “Manuel Gnida” for this comment [^], and to “Unknown” for this comment [^].

The five-man quantum mechanics (aka “super-determinism”):

By which, I refer to this video on YouTube: “Warsaw Spacetime Colloquium #11 – Sabine Hossenfelder (2021/03/26)” [^].

In this video, Dr. Hossenfelder talks about… “super-determinism.”

Incidentally, this idea (of super-determinism) had generated a lot of comments at Prof. Dr. Scott Aaronson’s blog. See the reader comments following this post: [^]. In fact, Aaronson had to say in the end: “I’m closing this thread tonight, honestly because I’m tired of superdeterminism discussion.” [^].

Hossenfelder hasn’t yet posted this video at her own blog.

There are five people in the entire world who do research in super-determinism, Hossenfelder seems to indicate. [I know, I know, not all of them are men. But I still chose to say the five-man QM. It has a nice ring to it—if you know a certain bit from the history of QM.]

Given the topic, I expected to browse through the video really rapidly, like a stone that goes on skipping on the surface of water [^], and thus, being done with it right within 5–10 minutes or so.

Instead, I found myself listening to it attentively, not skipping even a single frame, and finishing the video in the sequence presented. Also, going back over some portions for the second time…. And that’s because Hossenfelder’s presentation is so well thought out. [But where is the PDF of the slides?]

It’s only after going through this video that I got to understand what the idea of “super-determinism” is supposed to be like, and how it differs from the ordinary “determinism”. Spoiler: Think “hidden variables”.

My take on the video:

No, the idea (of super-determinism) isn’t at all necessary to explain QM.

However, it still was neat to get to know what (those five) people mean by it, and also, more important: why these people take it seriously.

In fact, given Hossenfelder’s sober (and intelligent!) presentation of it, I am willing to give them a bit of a rope too. …No, not so long that they can hang themselves with it, but long enough that they can perform some more detailed simulations. … I anticipate that when they conduct their simulations, they themselves are going to understand the query regarding the backward causation (raised by a philosopher during the interactive part of the video) in a much better manner. That’s what I anticipate.

Another point. Actually, “super-determinism” is supposed to be “just” a theory of physics, and hence, it should not have anything to say about topics like consciousness, free-will, etc. But I gather that at least some of them (out of the five) do seem to think that the free-will would have to be denied, maybe as a consequence of super-determinism. Taken in this sense, my mind has classified “super-determinism” as being the perfect foil to (or the other side of) panpsychism. … As to panpsychism, if interested, check out my take on it, here [^].

All along, I had always thought that super-determinism is going to turn out to be a wrong idea. Now, after watching this video, I know that it is a wrong idea.

However, precisely for the same reason (i.e., coming to know what they actually have in mind, and also, how they are going about it), I am not going to attack them, their research program. … Not necessary… I am sure that they would want to give up their program on their own, once (actually, some time after) I publish my ideas. I think so. … So, there…

“Video: Quantum mechanics isn’t weird, we’re just too big” YouTube video at: [^]

The speaker is Dr. Philip Ball; the host is Dr. Zlatko Minev. Let me give some highlights of their bios: Ball has a bachelor’s in chemistry from Oxford and a PhD in physics from Bristol. He was an editor at Nature for two decades. Minev has a BS in physics from Berkeley and a PhD in applied physics from Yale. He works in the field of QC at IBM (which used to be the greatest company in the computers industry (including software)).

The abstract given at the YouTube page is somewhat misleading. Ignore it, and head towards the video itself.

The video can be divided into two parts: (i) the first part, ~47 minutes long, is a presentation by Ball; (ii) the second part is a chat between the host (Minev) and the guest (Ball). IMO, if you are in a hurry, you may ignore the second part (the chat).

The first two-thirds of the first part (the presentation) is absolutely excellent. I mean the first 37 minutes. This excellent portion gets over once Ball goes to the slide which says “Reconstructing quantum mechanics from informational rules”, at around the 37-minute mark. From this point onward, Ball’s rigour dilutes a bit, though he does recover by the 40:00 mark or so. But from ~45:00 to the end (~47:00), it’s all downhill (IMO). Maybe Ball was making a small concession to his compatriots.

However, the first 37 minutes are excellent (or super-excellent).

But even if you are absolutely super-pressed for time, then I would still say: Check out at least the first 10 odd minutes. … Yes, I agree 101 percent with Ball, when it comes to the portion from ~5:00 through 06:44 through 07:40.

Now, a word about the mistakes / mis-takes:

Ball says, in a sentence that begins at 08:10, that Schrodinger devised the equation in 1924. This is a mistake / slip of the tongue. Schrodinger developed his equation in late 1925, and published it in 1926, certainly not in 1924. I wonder how come it slipped past Ball.

Also, the title of the video is somewhat misleading. “Bigness” isn’t really the distinguishing criterion in all situations. Large-distance QM entanglements have been demonstrated; in particular, photons are (relativistic) QM phenomena. So, size isn’t necessarily always the issue (even if the ideas of multi-scaling must be used for bridging between “classical” mechanics and QM).

And, oh yes, one last point… People five-and-a-half feet tall also are big enough, Phil! Even the new-borns, for that matter…

A personal aside: Listening to Ball, somehow, I got reminded of some old English English movies I had seen long back, maybe while in college. Somehow, my registration of the British accent seems to have improved a lot. (Or maybe the Brits these days speak with a more easily understandable accent.)

Status of my research on QM:

If I have something to note about my research, especially that related to the QM spin and all, then I will come back a while later and note something—may be after a week or two. …

As of today, I still haven’t finished taking notes and thinking about it. In fact, the status actually is that I am kindaa “lost”, in the sense: (i) I cannot stop browsing so as to return to the study / research, and (ii) even when I do return to the study, I find that I am unable to “zoom in” and “zoom out” of the topic (by which, I mean, switching the contexts at will, in between all: the classical ideas, the mainstream QM ideas, and the ideas from my own approach). Indeed (ii) is the reason for (i). …

If the same thing continues for a while, I will have to rethink whether I want to address the QM spin right at this stage or not…

You know, there is a very good reason for omitting the QM spin. The fact of the matter is, in the non-relativistic QM, the spin can only be introduced on an ad-hoc basis. … It’s only in the relativistic QM that the spin comes out as a necessary consequence of certain more basic considerations (just the way in the non-relativistic QM, the ground-state energy comes out as a consequence of the eigenvalue nature of the problem; you don’t have to postulate a stable orbit for it as in the Bohr theory). …

So, it’s entirely possible that my current efforts to figure out a way to relate the ideas from my own approach to the mainstream QM treatment of the spin are, after all, a basically pointless exercise. Even if I do think hard and figure out some good and original ideas / path-ways, they aren’t going to be enough, because they aren’t going to be general enough anyway.

At the same time, I know that I am not going to get into the relativistic QM, because it has to be a completely distinct development—and it’s going to require a further huge effort, perhaps a humongous effort. And, it’s not necessary for solving the measurement problem anyway—which was my goal!

That’s why, I have to really give it a good thought—whether I should be spending so much time on the QM spin or not. Maybe giving some sketchy ideas (rather, making some conceptual-level statements) is really enough… No one throws so much material in just one paper, anyway! Even the founders of QM didn’t! … So, that’s another line of thought that often pops up in my mind. …

My current plan, however, is to finish taking the notes on the mainstream QM treatment of the spin anyway—at least to the level of Eisberg and Resnick, though I can’t finish it, because this desire to connect my approach to the mainstream idea also keeps on interfering…

All in all, it’s a weird state to be in! … And, that’s what the status looks like, as of now…

… Anyway, take care and bye for now…

A song I, ahem, like:

It was while browsing that I gathered, a little while ago, that there is some “research” which “explains why” some people “like” certain songs (like the one listed below) “so much”.

The research in question was this paper [^]; it was mentioned on Twitter (where else?). Someone else, soon thereafter, also pointed out a c. 2014 pop-sci level coverage [^] of a book published even earlier [c. 2007].

From the way this entire matter was now being discussed, it was plain and obvious that the song had been soul-informing for some, not just soul-satisfying. The song in question is the following:

(Hindi) सुन रुबिया तुम से प्यार हो गया (“sun rubiyaa tum se pyaar ho gayaa”)
Music: Anu Malik
Lyrics: Prayag Raj
Singers: S. Jaanaki, Shabbir Kumar

Given the nature of this song, it would be OK to list the credits in any order, I guess. … But if you ask me why I too, ahem, like this song, then recourse must be made not just to the audio of this song [^] but also to its video. Not any random video but the one that covers the initial sequence of the song to an adequate extent; e.g., as in here [^].

History:
2021.04.09 19:22 IST: Originally published.
2021.04.10 20:47 IST: Revised considerably, especially in the section related to the principle of the least action (PLA), and the section on the current status of my research on QM. Also minor corrections and streamlining. Guess now I am done with this post.

# Do you really need a QC in order to have a really unpredictable stream of bits?

0. Preliminaries:

This post has reference to Roger Schlafly’s recent post [^] in which he refers to Prof. Scott Aaronson’s post touching on the issue of the randomness generated by a QC vis-a-vis that obtained using the usual classical hardware [^], in particular, to Aaronson’s remark:

“the whole point of my scheme is to prove to a faraway skeptic—one who doesn’t trust your hardware—that the bits you generated are really random.”

I do think (based on my new approach to QM [(PDF) ^]) that building a scalable QC is an impossible task.

I wonder if they (the QC enthusiasts) haven’t already begun realizing the hopelessness of their endeavours, and thus haven’t slowly begun preparing for a graceful exit, say via the QC-as-a-RNG route.

While Aaronson’s remarks also saliently involve the element of the “faraway” skeptic, I will mostly ignore that consideration here in this post. I mean to say, initially, I will ignore the scenario in which you have to transmit random bits over a network, and still have to assure the skeptic that what he was getting at the receiving end was something coming “straight from the oven”—something which was not tampered with, in any way, during the transit. The skeptic would have to be specially assured in this scenario, because a network is inherently susceptible to a third-party attack wherein the attacker seeks to exploit the infrastructure of the random keys distribution to his advantage, via injection of systematic bits (i.e. bits of his choice) that only appear random to the intended receiver. A system that quantum-mechanically entangles the two devices at the two ends of the distribution channel, does logically seem to have a very definite advantage over a combination of ordinary RNGs and classical hardware for the network. However, I will not address this part here—not for the most part, and not initially, anyway.

Instead, for most of this post, I will focus on just one basic question:

Can anyone be justified in thinking that an RNG that operates at the QM level might have even the slightest possible advantage, at least logically speaking, over another RNG that operates at the CM level? Note, the QM-level RNG need not always be a general-purpose and scalable QC; it can be any simple or special-purpose device that exploits, and at its core operates at, the specifically QM level.

Even if I am a 100% skeptic of the scalable QC, I also think that the answer on this latter count is: yes, perhaps you could argue that way. But then, I think, your argument would still be pointless.

Let me explain, following my approach, why I say so.

2. RNGs as based on nonlinearities. Nonlinearities in QM vs. those in CM:

QM does involve either IAD (instantaneous action at a distance), or very, very large (decidedly super-relativistic) speeds for propagation of local changes over all distant regions of space.

From the experimental evidence we have, it seems that there have to be very, very high speeds of propagation, for even the smallest changes that can take place in the $\Psi$ and $V$ fields. The Schrodinger equation assumes infinitely large speeds for them. Such obviously cannot be the case—it is best to take the infinite speeds as just an abstraction (as a mathematical approximation) to the reality of very, very high actual speeds. However, the experimental evidence also indicates that even if there has to be some or the other upper bound to the speeds $v$, with $v \gg c$, the speeds still have to be so high as to seemingly approach infinity, if the Schrodinger formalism is to be employed. And, of course, as you know it, Schrodinger’s formalism is pretty well understood, validated, and appreciated [^]. (For more on the speed limits and IAD in general, see the addendum at the end of this post.)
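For reference, the equation in question, in its standard non-relativistic form:

$$i\hbar\,\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\,\nabla^2 \Psi + V\Psi.$$

Note that $V$ enters the right-hand side at each instant over all of space at once; a local change in $V$ thus shows up in $\partial \Psi / \partial t$ everywhere immediately—which is the precise sense in which the formalism assumes infinite propagation speeds.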

I don’t know the relativity theory or the relativistic QM. But I guess that since the electric fields of massive QM particles are non-uniform (they are in fact singular), their interactions with $\Psi$ must be such that the system has to suddenly snap out of some one configuration and in the same process snap into one of the many alternative possible configurations. Since there are huge (astronomically large) numbers of particles in the universe, the alternative configurations would be $\{\text{astronomically large}\}^{\text{very large}}$ in number—after all, the particles’ positions and motions are continuous. Thus, we couldn’t hope to calculate the propagation speeds for the changes in the local features of a configuration in terms of all those irreversible snap-out and snap-in events taken individually. We must take them in an ensemble sense. Further, the electric charges are massive, identical, and produce singular and continuous fields. Overall, it is the ensemble-level effects of these individual quantum mechanical snap-out and snap-in events whose end-result would be: the speed-of-light limitation of the special relativity (SR). After all, SR holds on the gross scale; it is a theory from classical electrodynamics. The electric and magnetic fields of classical EM can be seen as being produced by the quantum $\Psi$ field (including the spinor function) of large ensembles of particles in the limit that the number of their configurations approaches infinity, and the classical EM waves i.e. light are nothing but the second-order effects in the classical EM fields.

I don’t know. I was just loud-thinking. But it’s certainly possible to have IAD for the changes in $\Psi$ and $V$, and thus to have instantaneous energy transfers via photons across two distant atoms in a QM-level description, and still end up with a finite limit for the speed of light ($c$) for large collections of atoms.

OK. Enough of setting up the context.

2.2: The domain of dependence for the nonlinearity in QM vs. that in CM:

If QM is not linear, i.e., if there is a nonlinearity in the $\Psi$ field (as I have proposed), then to evaluate the merits of the QM-level and CM-level RNGs, we have to compare the two nonlinearities: those in the QM vs. those in the CM.

The classical RNGs are always based on the nonlinearities in CM. For example:

• the nonlinearities in the atmospheric electricity (the “static”) [^], or
• the fluid-dynamical nonlinearities (as shown in the lottery-draw machines [^], or the lava lamps [^]), or
• some or the other nonlinear electronic circuits (available for less than \$10 in hardware stores)
• etc.

All of them are based on two factors: (i) a large number of components (in the core system generating the random signal, not necessarily in the part that probes its state), and (ii) nonlinear interactions among all such components.
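As a toy illustration of factor (ii)—not a real RNG, just my own sketch of how even a single simple nonlinearity already produces hard-to-predict bits—take the logistic map in its chaotic regime and probe its state for one bit per iteration:

```python
def logistic_bits(seed, n):
    """Extract n bits from the chaotic logistic map x -> 4x(1-x).
    A toy only: actual classical RNGs tap physical nonlinearities
    (atmospheric static, turbulent air, circuit noise), not this map."""
    x = seed
    bits = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)           # the nonlinear interaction
        bits.append(1 if x > 0.5 else 0)  # coarse probe of the state
    return bits

# Tiny seed changes soon give completely different streams:
print(logistic_bits(0.123456, 16))
print(logistic_bits(0.123457, 16))
```

The sensitivity to the seed here is the single-variable analogue of factor (i): in a physical RNG, the “seed” is the joint state of a very large number of components, which no attacker can measure.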

The number of variables in the QM description is anyway always larger: a single classical atom is seen as composed from tens, even hundreds of quantum mechanical charges. Further, due to the IAD present in the QM theory, the domain of dependence (DoD) [^] in QM remains, at all times, literally the entire universe—all charges are included in it, and the entire $\Psi$ field too.

On the other hand, the DoD in the CM description remains limited to only that finite region which is contained in the relevant past light-cone. Even when a classical system is nonlinear, and thus gets crazy very rapidly with even small increases in the number of degrees of freedom (DOFs), its DoD still remains finite and rather very small at all times. In contrast, the DoD of QM is the whole universe—all physical objects in it.

2.3 Implication for the RNGs:

Based on the above-mentioned argument, which in my limited reading and knowledge Aaronson has never presented (and neither has anyone else, basically because they all continue to believe in von Neumann’s characterization of QM as a linear theory), an RNG operating at the QM level does seem to have, “logically” speaking, an upper hand over an RNG operating at the CM level.

Then why do I still say that arguing for the superiority of a QM-level RNG is still pointless?

3. The MVLSN principle, and its epistemological basis:

If you apply a proper epistemology (and I have in my mind here the one by Ayn Rand), then the supposed “logical” difference between the two descriptions becomes completely superfluous. That’s because the quantities whose differences are being examined, themselves begin to lose any epistemological standing.

The reason for that, in turn, is what I call the MVLSN principle: the law of the Meaninglessness of the Very Large or very Small Numbers (or scales).

What the MVLSN principle says is that if your argument crucially depends on the use of very large (or very small) quantities and relationships between them, i.e., if the fulcrum of your argument rests on some great extrapolations alone, then it begins to lose all cognitive merit. “Very large” and “very small” are contextual terms here, to be used judiciously.

Roughly speaking, if this principle is applied to our current situation, what it says is that when in your thought you cross a certain limit of DOFs and hence a certain limit of complexity (which anyway is sufficiently large as to be much, much beyond the limit of any and every available and even conceivable means of predictability), then any differences in the relative complexities (here, of the QM-level RNGs vs. the CM-level RNGs) ought to be regarded as having no bearing at all on knowledge, and therefore, as having no relevance in any practical issue.

Both QM-level and CM-level RNGs would be far too complex for you to devise any algorithm or machine that might be able to predict the sequence of the bits coming out of either. Really. The complexity levels already grow so huge, even with just the classical systems, that it’s pointless trying to predict the bits. Or, to try and compare the complexity of the classical RNGs with the quantum RNGs.

A clarification: I am not saying that there won’t be any systematic errors or patterns in the otherwise random bits that a CM-based RNG produces. Sure enough, due statistical testing and filtering is absolutely necessary. For instance, what the radio-stations or cell-phone towers transmit are, from the viewpoint of an RNG based on radio noise, systematic disturbances that do affect its randomness. See random.org [^] for further details. I am certainly not denying this part.
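To make “due statistical testing” slightly concrete, here is a sketch (in Python) of the simplest such check—a frequency (“monobit”) test in the spirit of the NIST SP 800-22 suite, which is only the first of a whole battery:

```python
import math

def monobit_pvalue(bits):
    """Frequency (monobit) test: are 0s and 1s balanced overall?
    Returns a p-value; very small values flag a biased stream."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # +1 for each 1, -1 for each 0
    return math.erfc(abs(s) / math.sqrt(2.0 * n))

print(monobit_pvalue([i % 2 for i in range(1000)]))  # balanced stream
print(monobit_pvalue([1] * 900 + [0] * 100))         # obviously biased
```

Of course, a perfectly alternating stream passes this particular test while being utterly predictable—which is exactly why a whole battery of tests, not any single one, is applied to an RNG’s output.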

All that I am saying is that the sheer number of DOFs involved is itself so huge that the very randomness of the bits produced even by a classical RNG is beyond every reasonable doubt.

BTW, in this context, do see my previous couple of posts dealing with probability, indeterminism, randomness, and the all-important system vs. the law distinction here [^], and here [^].

4. To conclude my main argument here…:

In short, even “purely” classical RNGs can be way, way too complex for any one to be concerned in any way about their predictability. They are unpredictable. You don’t have to go chase the QM level just in order to ensure unpredictability.

Just take one of those WinTV lottery draw machines [^], start the air flow, get your prediction algorithm running on your computer (whether classical or quantum), and try to predict the next ball that would come out once the switch is pressed. Let me be generous. Assume that the switch gets pressed at exactly predictable intervals.

5. The Height of the Tallest Possible Man (HTPM):

If you still insist on the supposedly “logical” superiority of the QM-level RNGs, make sure to understand the MVLSN principle well.

The issue here is somewhat like asking this question:

What could possibly be the upper limit to the height of man, taken as a species? Not any other species (like the legendary “yeti”), but human beings, specifically. How tall can any man at all get? Where do you draw the line?

People could perhaps go on arguing, with at least some fig-leaf of epistemological legitimacy, over numbers like 12 feet vs. 14 feet as the true limit. (The world record mentioned in the Guinness Book is slightly under 9 feet [^]. The ceiling in a typical room is about 10 feet high.) Why, they could even perhaps go like: “Ummmm… may be 12 feet is more likely a limit than 24 feet? whaddaya say?”

Being very generous of spirit, I might still describe this as a borderline case of madness. The reason is, in the act of undertaking even just a probabilistic comparison like that, the speaker has already agreed to assign non-zero probabilities to all the numbers belonging to that range. Realize, no one would invoke the ideas of likelihood or probability theory if he thought that the probability for an event, however calculated, was always going to be zero. He would exclude certain kinds of ranges from his analysis to begin with—even for a stochastic analysis. … So, madness it is, even if, in my most generous mood, I might regard it as a borderline madness.

But if you assume that a living being has all the other characteristics of only a human being (including being naturally born to human parents), and if you still say that in between the two statements: (A) a man could perhaps grow to be 100 feet tall, and (B) a man could perhaps grow to be 200 feet tall, it is the statement (A) which is relatively and logically more reasonable, then what the principle (MVLSN) says is this: “you basically have lost all your epistemological bearing.”

That’s nothing but a complex (actually, philosophic) way of saying that you have gone mad, full-stop.

The law of the meaninglessness of the very large or very small numbers does have a certain basis in epistemology. It goes something like this:

Abstractions are abstractions from the actually perceived concretes. Hence, even while making just conceptual projections, the range over which a given abstraction (or concept) can remain relevant is determined by the actual ranges in the direct experience from which they were derived (and the nature, scope and purpose of that particular abstraction, the method of reaching it, and its use in applications including projections). Abstractions cannot be used in disregard of the ranges of the measurements over which they were formed.

I think that after having seen the sort of crazy things that even the simplest nonlinear systems with the fewest variables and parameters can do (for instance, which weather agency in the world can make predictions, to the accuracy demanded by newspapers, beyond 5 days? who can predict which way the first vortex is going to be shed even in a single-cylinder experiment?), it’s very easy to conclude that the CM-level vs. QM-level RNG distinction is comparable to the argument about the greater reasonableness of a 100 feet tall man vs. that of a 200 feet tall man. It’s meaningless. And, madness.

6. Aaronson’s further points:

To be fair, much of the above write-up was not meant for Aaronson; he does readily grant the CM-level RNGs validity. What he says, immediately after the quote mentioned at the beginning of this post, is that if you don’t have the requirement of distributing bits over a network,

…then generating random bits is obviously trivial with existing technology.

However, since Aaronson believes that QM is a linear theory, he does not even consider making a comparison of the nonlinearities involved in QM and CM.

I thought that it was important to point out that even the standard (i.e., Schrodinger’s equation-based) QM is nonlinear, and further, that even if this fact leads to some glaring differences between the two technologies (based on the IAD considerations), such differences still do not lead to any advantages whatsoever for the QM-level RNG, as far as the task of generating random bits is concerned.

As to the task of transmitting them over a network, Aaronson then notes:

If you do have the requirement, on the other hand, then you’ll have to do something interesting—and as far as I know, as long as it’s rooted in physics, it will either involve Bell inequality violation or quantum computation.

Sure, it will have to involve QM. But then, why does it have to be only a QC? Why not have just special-purpose devices that are quantum mechanically entangled over wires / EM-waves?

And finally, let me come to yet another issue: why would you have to have that requirement at all—of having to transmit the keys over a network, rather than using any other means?

Why does something as messy as a network have to get involved for a task that is as critical and delicate as distribution of some super-specially important keys? If 99.9999% of your keys-distribution requirements can be met using “trivial” (read: classical) technologies, and if you can also generate random keys using equipment that costs less than \$100 at most, then why do you have to spend billions of dollars in just distributing them to distant locations of your own offices / installations—especially if the need for changing the keys is going to be only on an infrequent basis? … And if bribing or murdering a guy who physically carries a sealed box containing a thumb-drive having secret keys is possible, then what makes the guys manning the entangled stations suddenly go all morally upright and also immortal?

From what I have read, Aaronson does consider such questions even if he seems to do so rather infrequently. The QC enthusiasts, OTOH, never do.

As I said, this QC-as-an-RNG thing does show some marks of trying to figure out a respectable exit from the scalable-QC euphoria—now that they have already managed to wrest millions and billions in research funding.

My two cents.

Speed limits are needed because infinity is a mathematical concept and cannot metaphysically exist. However, the nature of the ontology involved in QM compels us to rethink many issues right from the beginning. In particular, we need to carefully distinguish between all of the following situations:

1. The transportation of a massive classical object (a distinguishable, i.e. finite-sized, bounded piece of physical matter) from one place to another, in literally no time.
2. The transmission of the momentum or changes in it (like forces or changes in them) being carried by one object, to a distant object not in direct physical contact, in literally no time.
3. Two mutually compensating changes in the local values of some physical property (like momentum or energy) suffered at two distant points by the same object, a circumstance which may be viewed from some higher-level or abstract perspective as transmission of the property in question over space but in no time. In reality, it’s just one process of change affecting only one object, but it occurs in a special way: in mutually compensating manner at two different places at the same time.

Only the first really qualifies to be called spooky. The second is curious but not necessarily spooky—not if you begin to regard two planets as just two regions of the same background object, or alternatively, as two clearly different objects which are being pulled in various ways at the same time and in mutually compensating ways via some invisible strings or fields that shorten or extend appropriately. The third one is not spooky at all—the object that effects the necessary compensations is not even a third object (like a field). Both the interacting “objects” and the “intervening medium” are nothing but different parts of one and the same object.

What happens in QM is the third possibility. I have been describing such changes as occurring with an IAD (instantaneous action at a distance), but now I am not too sure if such a usage is really correct or not. I now think that it is not. The term IAD should be reserved only for the second category—it’s an action that gets transported there. As to the first category, a new term should be coined: ITD (instantaneous transportation to distance). As to the third category, the new term could be IMCAD (instantaneous and mutually compensating actions at a distance). However, this all is an afterthought. So, in this post, I only have ended up using the term IAD even for the third category.

Some day I will think more deeply about it and straighten out the terminology, maybe invent some new terms to describe all three situations with adequate directness, and then choose the best… Until then, please excuse me and interpret what I am saying in reference to context. Also, feel free to suggest good alternative terms. Also, let me know if there are any further distinctions to be made, i.e., if the above classification into three categories is not adequate or refined enough. Thanks in advance.

A song I like:

[A wonderful “koLi-geet,” i.e., a fisherman’s song. Written by a poet who hailed not from the coastal “konkaN” region but from the interior “desh.” But it sounds so authentically coastal… Listening to it today instantly transported me back to my high-school days.]

Singing, Music and Lyrics: Shaahir Amar Sheikh

History: Originally published on 2019.07.04 22:53 IST. Extended and streamlined considerably on 2019.07.05 11:04 IST. The songs section added: 2019.07.05 17:13 IST. Further streamlined, and added a new section (no. 6) on 2019.07.05 22:37 IST. … Am giving up on this post now. It grew from about 650 words (in a draft for a comment at Schlafly’s blog) to 3080 words as of now. Time to move on.

Still made further additions and streamlining for a total of ~3500 words, on 2019.07.06 16:24 IST.

# Determinism, Indeterminism, Probability, and the nature of the laws of physics—a second take…

After I wrote the last post [^], several points struck me. Some of the points that were mostly implicit needed to be addressed systematically. So, I began writing a small document containing these after-thoughts, focusing more on the structural side of the argument.

However, I don’t find the time to convert these points and statements into a proper write-up. At the same time, I want to get done with this topic, at least for now, so that I can better focus on some other tasks related to data science. So, let me share the write-up in whatever form it currently is in. Sorry for its uneven tone and all (compared to even my other writing, that is!).

Causality as a concept is very poorly understood by present-day physicists. They typically understand only one sense of the term: evolution in time. But causality is a far broader concept. Here I agree with Ayn Rand / Leonard Peikoff (OPAR). See the Ayn Rand Lexicon entry, here [^]. (However, I wrote the points below without re-reading it, and instead, relying on whatever understanding I have already come to develop starting from my studies of the same material.)

Physical universe consists of objects. Objects have identity. Identity is the sum total of all characteristics, attributes, properties, etc., of an object. Objects act in accordance with their identity; they cannot act otherwise. Interactions are not primary; they do not come into being without there being objects that undergo the interactions. Objects do not change their respective identities when they take actions—not even during interactions with other objects. The law of causality is a higher-level view taken of this fact.

In the cause-effect relationship, the cause refers to the nature (identity) of an object, and the effect refers to an action that the object takes (or undergoes). Both refer to one and the same object. TBD: Trace the example of one moving billiard ball undergoing a perfectly elastic collision with another billiard ball. Bring out how the interaction—here, the pair of the contact forces—is a name for each ball undergoing an action in accordance with its nature. An interaction is a pair of actions.

A physical law can be seen as a mapping (e.g., a function, or even a functional) from inputs to outputs.

The quantitative laws of physics often use the real number system, i.e., quantification with infinite precision. An infinite precision is a mathematical concept, not a physical one. (Expect physicists to eternally keep on confusing the two kinds of concepts.)

Application of a physical law traces the same conceptual linkages as are involved in the formulation of law, but in the reverse direction.

In both formulation of a physical law and in its application, there is always some regime of applicability which is at least implicitly understood for both inputs and outputs. A pertinent idea here is: range of variations. A further idea is the response of the output to small variations in the input.

Example: Prediction, by software, of whether a cricket ball would have hit the stumps or not, in an LBW situation.

The input position being used by the software in a certain LBW decision could be off from reality by millimeters, or at least, by a fraction of a millimeter. Still, the law (the mapping) is such that it produces predictions that are within small limits, so that it can be relied on.

Two input values, each theoretically infinitely precise, but differing by a small magnitude from each other, may be taken to define an interval or zone of input variations. As to the zone of the corresponding output, it may be thought of as an oval produced in the plane of the stumps, using the deterministic method used in making predictions.

The nature of the law governing the motion of the ball (even after factoring in aspects like effects of interaction with air and turbulence, etc.) itself is such that the size of the O/P zone remains small enough. (It does not grow exponentially.) Hence, we can use the software confidently.
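A toy sketch of this point, with all numbers made up purely for illustration (the model is a deliberately simplified straight-line post-bounce flight under gravity alone—no swing, spin, or drag—so it is a stand-in for, not a description of, real ball-tracking software):

```python
def height_at_stumps(y0, vy, vx, dist=10.0, g=9.81):
    """Height (m) of the ball in the plane of the stumps, given its height
    y0 (m) and velocity components (m/s) at a point `dist` metres before them."""
    t = dist / vx                       # time of flight to the stumps
    return y0 + vy * t - 0.5 * g * t * t

# Two inputs at the extreme ends of a 1 mm zone of input variations:
lo = height_at_stumps(0.500, 2.0, 30.0)
hi = height_at_stumps(0.501, 2.0, 30.0)

# The output zone stays about as small as the input zone—no exponential growth:
print(round(abs(hi - lo), 6))  # → 0.001
```

The point being illustrated: the mapping carries a millimetre-sized input zone into a millimetre-sized output zone, which is exactly why such software can be relied on.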

That is to say, the software can be confidently used for predicting—i.e., determining—the zone of possible landing of the ball in the plane of the stumps.

Overall, here are three elements that must be noted: (i) Each of the input positions lying at the extreme ends of the input zone of variations itself does have an infinite precision. (ii) Further, the mapping (the law) has theoretically infinite precision. (iii) Each of the outputs lying at extreme ends of the output zone also itself has theoretically infinite precision.

Existence of such infinite precision is a given. But it is not at all the relevant issue.

What matters in applications is something more than these three. It is the fact that applications always involve zones of variations in the inputs and outputs.

Such zones are then used in error estimates. (Also for engineering control purposes, say as in automation or robotic applications.) But the fact that quantities being fed to the program as inputs themselves may be in error is not the crux of the issue. If you focus too much on errors, you will simply get into an infinite regress of error bounds for error bounds for error bounds…

Focus, instead, on the infinite precision of the three kinds mentioned above, and on the fact that, in addition to those infinitely precise quantities, the application procedure does involve zones of possible variations in the input, and also the problem of estimating how large the corresponding zone of variations in the output is—whether it is sufficiently small for the law and for a particular application procedure or situation.

In physics, such details of application procedures are left merely understood. They are hardly, if ever, mentioned and discussed explicitly. Physicists again show their poor epistemology. They discuss such things in terms not of zones but of “error” bounds. This already inserts the wedge of a dichotomy: infinitely precise laws vs. errors in applications. This dichotomy is entirely uncalled for. But, physicists simply aren’t that smart, that’s all.

An “indeterministic mapping,” for the above example (LBW decisions), would be one in which the ball can be mapped as going anywhere over, and perhaps even beyond, the stadium.

Such a law and the application method (including the software) would be useless as an aid in the LBW decisions.

However, phenomenologically, the very dynamics of the cricket ball’s motion is simple enough that it leads to a causal law whose nature is such that for a small variation in the input conditions (a small zone of input variations), the predicted zone of the O/P also is small enough. It is for this reason that we say that predictions are possible in this situation. That is to say, this is not an indeterministic situation or law.

Not all physical situations are exactly like the example of predicting the motion of the cricket ball. There are physical situations which show a certain common—and confusing—characteristic.

They involve interactions that are deterministic when occurring between two (or a few) bodies. Thus, the laws governing a simple interaction between two or a few bodies are deterministic—in the above sense of the term (i.e., in terms of infinite precision for the mapping, and the existence of zones of variations in the inputs and outputs).

But these physical situations also involve: (i) a nonlinear mapping, (ii) a sufficiently large number of interacting bodies, and further, (iii) coupling of all the interactions.

It is these physical situations that produce an overall system behaviour which can show an exponentially diverging output zone even for a small zone of input variations.

So, a small change in I/P is sufficient to produce a huge change in O/P.

However, note the confusing part. Even if the system behaviour for a large number of bodies does show an exponential increase in the output zone, the mapping itself is such that when it is applied to only one pair of bodies in isolation of all the others, then the output zone does remain non-exponential.
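A stand-in illustration (my choice here, not taken from any of the physical examples below): the logistic map at r = 4, one of the simplest nonlinear iterations, stretches a tiny input zone roughly exponentially until it fills the whole available interval—while a single application of the very same law keeps the zone small:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a minimal nonlinear (chaotic) system."""
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001        # an input zone of width 1e-6
max_sep = 0.0
for _ in range(50):
    a, b = logistic(a), logistic(b)
    max_sep = max(max_sep, abs(a - b))

# Iterated (the "system"): the zone roughly doubles per step, so within a few
# dozen steps the 1e-6 input zone has grown to order 1.
print(max_sep > 0.1)  # → True

# Applied once in isolation (the "law"): the output zone stays tiny.
print(abs(logistic(0.400001) - logistic(0.400000)) < 1e-5)  # → True
```

This is exactly the confusing characteristic: judged from a single application, the law looks deterministic; judged from the iterated (coupled) behaviour, the system looks indeterministic.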

It is this characteristic which tricks people into forming two camps that go on arguing eternally. One side says that it is deterministic (making reference to a single-pair interaction), the other side says it is indeterministic (making reference to a large number of interactions, based on the same law).

The fallacy arises out of confusing a characteristic of the application method or model (variations in input and output zones) with the precision of the law or the mapping.

Example: N-body problem.

Example: NS equations as capturing a continuum description (a nonlinear one) of a very large number of bodies.

Example: Several other physical laws entering the coupled description, apart from the NS equations, in the bubbles collapse problem.

Example: Quantum mechanics

The Law vs. the System distinction: What is indeterministic is not a law governing a simple interaction taken abstractly (in which context the law was formed), but the behaviour of the system. A law (a governing equation) can be deterministic, but still, the system behavior can become indeterministic.

Even indeterministic models or system designs, when described using a different kind of maths (one formulated at a higher level of abstraction, relying on the limiting values of relative frequencies, i.e., probabilities), still do show causality.

Yes, probability is a notion which itself is based on causality—after all, it uses limiting values for the relative frequencies. The ability to use the limiting processes squarely rests on there being some definite features which, by being definite, do help reveal the existence of the identity. If such features (enduring, causal) were not part of the identity of the objects that are abstractly seen to act probabilistically, then no application of a limiting process would be possible, and so not even a definition of probability or randomness would be possible.
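The limiting-frequency idea can be seen in a small simulation of a die: precisely because each throw is the action of a definite (causal) mechanism with six definite outcomes, the relative frequency of any one face settles toward a definite limiting value. A minimal sketch (the seed is fixed only to make the run repeatable):

```python
import random

random.seed(0)  # fixed seed, purely for repeatability
for n in (100, 10_000, 1_000_000):
    sixes = sum(random.randint(1, 6) == 6 for _ in range(n))
    # The relative frequency drifts toward the limiting value 1/6 ≈ 0.1667:
    print(n, sixes / n)
```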

The notion of probability is more fundamental than that of randomness. Randomness is an abstract notion that idealizes the notion of the absence of every form of order. … You can use the axioms of probability even when sequences are known to be not random, can’t you? Also, hierarchically, order comes before randomness. Randomness is defined as the absence of (all applicable forms of) orderliness; orderliness is not defined as the absence of randomness—it is defined via the some-but-any principle, in reference to various more concrete instances that show some or the other definable form of order.

But expect not just physicists but also mathematicians, computer scientists, and philosophers, to eternally keep on confusing the issues involved here, too. They all are dumb.

Summary:

Let me now mention a few important take-aways (though some new points not discussed above also crept in, sorry!):

• Physical laws are always causal.
• Physical laws often use the infinite precision of the real number system, and hence, they do show the mathematical character of infinite precision.
• The solution paradigm used in physics requires specifying some input numbers and calculating the corresponding output numbers. If the physical law is based on the real number system, then all the numbers used are also supposed to have infinite precision.
• Applications always involve a consideration of the zone of variations in the input conditions and the corresponding zone of variations in the output predictions. The relation between the sizes of the two zones is determined by the nature of the physical law itself. If for a small variation in the input zone the law predicts a sufficiently small output zone, people call the law itself deterministic.
• Complex systems are not always composed from parts that are in themselves complex. Complex systems can be built by arranging essentially very simple parts in complex configurations.
• Each of the simpler parts may be governed by a deterministic law. However, when the input-output zones are considered for the complex system taken as a whole, the system behaviour may show an exponential increase in the size of the output zone. In such a case, the system must be described as indeterministic.
• Indeterministic systems still are based on causal laws. Hence, with appropriate methods and abstractions (including mathematical ones), they can be made to reveal the underlying causality. One useful theory is that of probability. The theory turns the supposed disadvantage (a large number of interacting bodies) on its head, and uses limiting values of relative frequencies, i.e., probability. The probability theory itself is based on causality, and so are indeterministic systems.
• Systems may be deterministic or indeterministic, and in the latter case, they may be described using the maths of probability theory. Physical laws are always causal. However, if they have to be described using the terms of determinism or indeterminism, then we will have to say that they are always deterministic. After all, if the physical laws showed an exponentially large output zone even when simpler systems were considered, they could not be formulated or regarded as laws.

In conclusion: Physical laws are always causal. They may also always be regarded as being deterministic. However, if systems are complex, then even if the laws governing their simpler parts were all deterministic, the system behaviour itself may turn out to be indeterministic. Some indeterministic systems can be well described using the theory of probability. The theory of probability itself is based on the idea of causality, albeit with measures defined over a large number of instances, thereby exploiting the fact that there are far too many objects interacting in a complex manner.

A song I like:

(Hindi) “ho re ghungaroo kaa bole…”
Singer: Lata Mangeshkar
Music: R. D. Burman
Lyrics: Anand Bakshi