Determinism, Indeterminism, Probability, and the nature of the laws of physics—a second take…

After I wrote the last post [^], several points struck me. Some of them had remained mostly implicit and needed to be addressed systematically. So, I began writing a small document containing these after-thoughts, focusing more on the structural side of the argument.

However, I haven’t found the time to convert these points and statements into a proper write-up. At the same time, I want to get done with this topic, at least for now, so that I can better focus on some other tasks related to data science. So, let me share the write-up in whatever form it currently is in. Sorry for its uneven tone and all (compared to even my other writing, that is!)


Causality as a concept is very poorly understood by present-day physicists. They typically understand only one sense of the term: evolution in time. But causality is a far broader concept. Here I agree with Ayn Rand / Leonard Peikoff (OPAR). See the Ayn Rand Lexicon entry, here [^]. (However, I wrote the points below without re-reading it, relying instead on whatever understanding I have already developed from my studies of the same material.)

The physical universe consists of objects. Objects have identity. Identity is the sum total of all characteristics, attributes, properties, etc., of an object. Objects act in accordance with their identity; they cannot act otherwise. Interactions are not primary; they do not come into being without there being objects that undergo the interactions. Objects do not change their respective identities when they take actions—not even during interactions with other objects. The law of causality is a higher-level view taken of this fact.

In the cause-effect relationship, the cause refers to the nature (identity) of an object, and the effect refers to an action that the object takes (or undergoes). Both refer to one and the same object. TBD: Trace the example of one moving billiard ball undergoing a perfectly elastic collision with another billiard ball. Bring out how the interaction—here, the pair of the contact forces—is a name for each ball undergoing an action in accordance with its nature. An interaction is a pair of actions.


A physical law can be viewed as a mapping (e.g., a function, or even a functional) from inputs to outputs.

The quantitative laws of physics often use the real number system, i.e., quantification with infinite precision. Infinite precision is a mathematical concept, not a physical one. (Expect physicists to eternally keep on confusing the two kinds of concepts.)

Application of a physical law traces the same conceptual linkages as are involved in the formulation of the law, but in the reverse direction.

In both formulation of a physical law and in its application, there is always some regime of applicability which is at least implicitly understood for both inputs and outputs. A pertinent idea here is: range of variations. A further idea is the response of the output to small variations in the input.


Example: Prediction by software whether a cricket ball would have hit the stumps or not, in an LBW situation.

The input position being used by the software in a certain LBW decision could be off from reality by a few millimeters, or at the least by a fraction of a millimeter. Still, the law (the mapping) is such that it produces predictions that lie within small limits, so that it can be relied on.

Two input values, each theoretically infinitely precise, but differing from each other by a small magnitude, may be taken to define an interval or zone of input variations. The corresponding zone of the output may be thought of as an oval produced in the plane of the stumps, using the deterministic method used in making the predictions.

The nature of the law governing the motion of the ball (even after factoring in aspects like effects of interaction with air and turbulence, etc.) itself is such that the size of the O/P zone remains small enough. (It does not grow exponentially.) Hence, we can use the software confidently.

That is to say, the software can be confidently used for predicting, i.e., determining, the zone of possible landing of the ball in the plane of the stumps.
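To make the idea of the zones concrete, here is a minimal sketch in Python. All the numbers in it are hypothetical (the impact-to-stumps distance, the sizes of the input zones), and the straight-line extrapolation is a gross simplification of what any actual ball-tracking software does; the point is only the structure of the mapping from an input zone to an output zone:

```python
# All numbers here are hypothetical, purely for illustration.
L = 2.0          # assumed distance from the impact point to the stumps (m)
dx = 0.0005      # half-width of the input zone in position: +/- 0.5 mm
dtheta = 0.0005  # half-width of the input zone in direction: +/- 0.5 mrad

def predict(x0, theta):
    """Deterministic mapping: lateral position at the plane of the stumps,
    by straight-line extrapolation (a gross simplification)."""
    return x0 + L * theta

# Evaluate the mapping at the extreme corners of the input zone.
corners = [predict(sx * dx, st * dtheta) for sx in (-1, 1) for st in (-1, 1)]
output_zone = max(corners) - min(corners)

print(f"output zone: {output_zone * 1000:.1f} mm")  # output zone: 3.0 mm
```

Because the mapping here is linear, the output zone is simply proportional to the input zone; it stays a few millimeters wide and does not blow up.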


Overall, here are three elements that must be noted: (i) Each of the input positions lying at the extreme ends of the input zone of variations itself has infinite precision. (ii) Further, the mapping (the law) has theoretically infinite precision. (iii) Each of the outputs lying at the extreme ends of the output zone also has theoretically infinite precision.

The existence of such infinite precision is a given. But it is not at all the relevant issue.

What matters in applications is something more than these three. It is the fact that applications always involve zones of variations in the inputs and outputs.

Such zones are then used in error estimates. (Also for engineering control purposes, say as in automation or robotic applications.) But the fact that quantities being fed to the program as inputs themselves may be in error is not the crux of the issue. If you focus too much on errors, you will simply get into an infinite regress of error bounds for error bounds for error bounds…

Focus, instead, on the infinite precision of the three kinds mentioned above, and on the fact that, in addition to those infinitely precise quantities, the application procedure does involve zones of possible variations in the input. It also involves the problem of estimating how large the corresponding zone of variations in the output is, and whether that zone is sufficiently small for the law and for the particular application procedure or situation.

In physics, such details of the application procedures are left merely implicit. They are hardly, if ever, mentioned and discussed explicitly. Physicists again show their poor epistemology. They discuss such things in terms not of zones but of “error” bounds. This already inserts the wedge of a dichotomy: infinitely precise laws vs. errors in applications. This dichotomy is entirely uncalled for. But, physicists simply aren’t that smart, that’s all.


An “indeterministic mapping,” for the above example (LBW decisions), would be one in which the ball could be mapped as going anywhere over, and perhaps even beyond, the stadium.

Such a law and the application method (including the software) would be useless as an aid in the LBW decisions.

However, phenomenologically, the very dynamics of the cricket ball’s motion is simple enough that it leads to a causal law whose nature is such that, for a small variation in the input conditions (a small zone of input variations), the predicted output zone also remains small enough. It is for this reason that we say that predictions are possible in this situation. That is to say, this is not an indeterministic situation or law.


Not all physical situations are exactly like the example of predicting the motion of the cricket ball. There are physical situations which show a certain common—and confusing—characteristic.

They involve interactions that are deterministic when occurring between two (or a few) bodies. Thus, the laws governing a simple interaction between two (or a few) bodies are deterministic—in the above sense of the term (i.e., in terms of infinite precision for the mapping, and the existence of zones of variations in the inputs and outputs).

But these physical situations also involve: (i) a nonlinear mapping, (ii) a sufficiently large number of interacting bodies, and further, (iii) coupling of all the interactions.

It is these physical situations whose overall system behaviour can show an exponentially diverging output zone even for a small zone of input variations.

So, a small change in I/P is sufficient to produce a huge change in O/P.

However, note the confusing part. Even if the system behaviour for a large number of bodies does show an exponential increase in the output zone, the mapping itself is such that when it is applied to only one pair of bodies, in isolation from all the others, the output zone does remain non-exponential.
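This single-pair vs. coupled-system contrast can be illustrated with a toy computation. In the Python sketch below (my own stand-ins, not actual physics: a contracting linear map plays the role of a simple pairwise interaction, and the logistic map at r = 4 plays the role of a nonlinear, coupled dynamics), two nearby inputs are iterated through each map, and the separations are compared:

```python
def iterate(f, x, n):
    """Return the orbit [x, f(x), f(f(x)), ...] of length n + 1."""
    orbit = [x]
    for _ in range(n):
        x = f(x)
        orbit.append(x)
    return orbit

def linear(x):
    # Stand-in for a simple pairwise interaction: a contracting linear map.
    return 0.5 * x + 0.1

def logistic(x):
    # Stand-in for a nonlinear, coupled dynamics: the logistic map at r = 4.
    return 4.0 * x * (1.0 - x)

eps = 1e-10  # a tiny zone of input variations

# Separations between two nearby inputs, under each map.
lin_sep = abs(iterate(linear, 0.2, 40)[-1] - iterate(linear, 0.2 + eps, 40)[-1])
log_sep = max(abs(u - v)
              for u, v in zip(iterate(logistic, 0.2, 40),
                              iterate(logistic, 0.2 + eps, 40)))

print(lin_sep)  # vanishingly small: the separation shrinks at every step
print(log_sep)  # of order one: the separation grows roughly exponentially
```

The law (the iteration rule) is exactly as deterministic in both cases; what differs is how the zone of variations propagates through it.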

It is this characteristic which tricks people into forming two camps that go on arguing eternally. One side says that it is deterministic (making reference to a single-pair interaction), the other side says it is indeterministic (making reference to a large number of interactions, based on the same law).

The fallacy arises out of confusing a characteristic of the application method or model (variations in input and output zones) with the precision of the law or the mapping.


Example: N-body problem.

Example: The Navier-Stokes (NS) equations as capturing a (nonlinear) continuum description of a very large number of bodies.

Example: Several other physical laws entering the coupled description, apart from the NS equations, in the bubble-collapse problem.

Example: Quantum mechanics


The Law vs. the System distinction: What is indeterministic is not a law governing a simple interaction taken abstractly (in which context the law was formed), but the behaviour of the system. A law (a governing equation) can be deterministic, but still, the system behavior can become indeterministic.


Even indeterministic models or system designs, when they are described using a different kind of maths (one formulated at a higher level of abstraction, and relying on the limiting values of relative frequencies, i.e., probabilities), still do show causality.

Yes, probability is a notion which itself is based on causality—after all, it uses limiting values of relative frequencies. The ability to use the limiting processes squarely rests on there being some definite features which, by being definite, help reveal the existence of the identity. If such (enduring, causal) features were not part of the identity of the objects that are abstractly seen to act probabilistically, then no application of a limiting process would be possible, and so not even a definition of probability or randomness would be possible.
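As a numerical illustration of limiting relative frequencies (a minimal sketch; the bias of 0.3 and the seed are arbitrary choices of mine): the relative frequency settles toward a definite value precisely because the set-up has a definite, enduring characteristic, viz., its fixed bias:

```python
import random

random.seed(42)  # arbitrary seed, chosen only for reproducibility
p = 0.3          # the definite, enduring feature of the set-up: its bias

n_trials = 100_000
hits = sum(1 for _ in range(n_trials) if random.random() < p)

rel_freq = hits / n_trials
print(rel_freq)  # close to 0.3; the deviation shrinks as n_trials grows
```

Take the bias away (i.e., take away the definite feature), and there is no longer anything for the relative frequency to converge to.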

The notion of probability is more fundamental than that of randomness. Randomness is an abstract notion that idealizes the absence of every form of order. … You can use the axioms of probability even when sequences are known to be not random, can’t you? Also, hierarchically, order comes before randomness. Randomness is defined as the absence of (all applicable forms of) orderliness; orderliness is not defined as the absence of randomness—it is defined via the “some but any” principle, in reference to various more concrete instances that show some or the other definable form of order.
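The remark that the machinery of relative frequencies applies even to manifestly non-random sequences can be checked directly. In this small Python sketch (my own toy example), the sequence 0, 1, 2, 0, 1, 2, … is perfectly ordered, and yet its relative frequencies are perfectly well defined and converge to definite limits:

```python
# A fully deterministic, perfectly ordered sequence: 0, 1, 2, 0, 1, 2, ...
N = 30_000
seq = [n % 3 for n in range(N)]

# The relative frequencies are still perfectly well defined, and they
# converge to the definite limiting values of 1/3 each.
freqs = {k: seq.count(k) / N for k in (0, 1, 2)}
print(freqs)
```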

But expect not just physicists but also mathematicians, computer scientists, and philosophers, to eternally keep on confusing the issues involved here, too. They all are dumb.


Summary:

Let me now mention a few important take-aways (though some new points not discussed above also crept in, sorry!):

  • Physical laws are always causal.
  • Physical laws often use the infinite precision of the real number system, and hence, they do show the mathematical character of infinite precision.
  • The solution paradigm used in physics requires specifying some input numbers and calculating the corresponding output numbers. If the physical law is based on the real number system, then all the numbers used are also supposed to have infinite precision.
  • Applications always involve a consideration of the zone of variations in the input conditions and the corresponding zone of variations in the output predictions. The relation between the sizes of the two zones is determined by the nature of the physical law itself. If for a small variation in the input zone the law predicts a sufficiently small output zone, people call the law itself deterministic.
  • Complex systems are not always composed from parts that are themselves complex. Complex systems can be built by arranging essentially simple parts in complex configurations.
  • Each of the simpler parts may be governed by a deterministic law. However, when the input-output zones are considered for the complex system taken as a whole, the system behaviour may show an exponential increase in the size of the output zone. In such a case, the system must be described as indeterministic.
  • Indeterministic systems still are based on causal laws. Hence, with appropriate methods and abstractions (including mathematical ones), they can be made to reveal the underlying causality. One useful theory is that of probability. The theory turns the supposed disadvantage (a large number of interacting bodies) on its head, and uses limiting values of relative frequencies, i.e., probability. The probability theory itself is based on causality, and so are indeterministic systems.
  • Systems may be deterministic or indeterministic, and in the latter case, they may be described using the maths of probability theory. Physical laws are always causal. However, if they have to be described using the terms of determinism or indeterminism, then we will have to say that they are always deterministic. After all, if the physical laws showed an exponentially large output zone even when simpler systems were considered, they could not be formulated or regarded as laws.

In conclusion: Physical laws are always causal. They may also always be regarded as being deterministic. However, if systems are complex, then even if the laws governing their simpler parts were all deterministic, the system behaviour itself may turn out to be indeterministic. Some indeterministic systems can be well described using the theory of probability. The theory of probability itself is based on the idea of causality, albeit with measures defined over a large number of instances, thereby exploiting the fact that there are far too many objects interacting in a complex manner.


A song I like:

(Hindi) “ho re ghungaroo kaa bole…”
Singer: Lata Mangeshkar
Music: R. D. Burman
Lyrics: Anand Bakshi

 

 


Determinism, Indeterminism, and the nature of the laws of physics…

The laws of physics are causal, but this fact does not imply that they can be used to determine each and everything that you feel should be determinable using them, in each and every context in which they apply. What matters is the nature of the laws themselves. The laws of physics are not literally boundless; nothing in the universe is. They are logically bounded by the kind of abstractions they are.


Let’s take a concrete example.

Take a bottle, pour a little water and detergent in it, shake well, and have fun watching the Technicolor wonder which results. Bubbles form; they show resplendent colors. Then, some of them shrink, others grow, one or two of them eventually collapse, and the rest of the network of the similar bubbles adjusts itself. The process continues.

Looking at it in an idle way can be fun: those colorful tendrils of water sliding over those thin little surfaces, those fascinating hues and geometric patterns… That dynamics which unfolds at such a leisurely pace. … Just watching it all can make for a neat time-sink—at least for a while.

But merely having fun watching bubbles collapse is not physics. Physics proper begins with a lawful description of the many different aspects of the visually evident spectacle—be it the explanation as to how those unreal-looking colors come about, or be it an explanation of the mechanisms involved in their shrinkage or growth, and eventual collapse, … Or, a prediction of exactly which bubble is going to collapse next.


For now, consider the problem of predicting, given a configuration of some bubbles at a certain time t_0, exactly which bubble is going to collapse next, and why… To solve this problem, we have to study many different processes involved in the bubbles’ dynamics…


Theories do exist to predict various aspects of the bubble collapse process taken individually. Further, it should also be possible to combine them together. The explanation involves such theories as: the Navier-Stokes equations, which govern the flow of soap water in the thin films, and the motion of the air entrapped within each bubble; the phenomenon of film-breakage, which can involve either particles-based approaches to the modeling of fluids, or, if you insist on a continuum theory, theories of crack initiation and growth in thin lamellae/shells; the propagation of a film-breakage, and the propagation of the stress-strain waves associated with the process; and also, theories concerning how the collapse process gets preferentially localized to only one (or at most a few) bubbles, which again involves nonlinear theories from the mechanics of materials, and materials science.

All these are causal theories. It should also be possible to “throw them together” in a multi-physics simulation.

But even then, they still are not very useful in predicting which bubble in your particular setup is going to collapse next, and when, because not just the combination of these theories but even each individual theory involved is too complex.

The fact of the matter is, we cannot in practice predict precisely which bubble is going to collapse next.


The reason for our inability to predict, in this context, does not have to do just with the precision of the initial conditions. It’s also their vastness.

And, the known, causal, physical laws tell us that a sensitive dependence on the smallest changes in the initial conditions deterministically leads to such huge changes in the outcomes that using these laws to actually make a prediction lies squarely outside our capacity to calculate.

Even simple (first- or second-order) variations to the initial conditions specified over a very small part of the network can have repercussions for the entire evolution, which is ultimately responsible for predicting which bubble is going to collapse next.


I mention this situation because it is amply illustrative of a special kind of problem which we encounter in physics today. The laws governing the system evolution are known. Yet, in practice, they cannot be applied for performing calculations in every given situation which falls under their purview. The reason for this circumstance is that the very paradigm of formulating physical laws falls short. Let me explain what I mean, very briefly, here.


All physical laws are essentially quantitative in nature, and can be thought of as “functions,” i.e., as mappings from a specific set of inputs to a specific set of outputs. Since the universe is lawful, given a certain set of values for the inputs and the specific function (the law) which does the mapping, the output is uniquely determined. Such a nature of the physical laws has come to be known as determinism. (At least that’s what the working physicist understands by the term “determinism.”) The initial conditions together with the governing equation completely determine the final outcome.

However, there are situations in which even if the laws themselves are deterministic, they still cannot practically be put to use in order to determine the outcomes. One such situation is what we discussed above: the problem of predicting the next bubble which will collapse.

Where is the catch? It is in here:

When you say that a physical law performs a mapping from a set of inputs to a set of outputs, this description is actually vastly more general than what appears at first sight.

Consider another example, the law of Newtonian gravity.

If you have only two bodies interacting gravitationally, i.e., if all other bodies in the universe can be ignored (because their influence on the two bodies is negligibly small in the problem as posed), then the set of the required input data is indeed very small. The system itself is simple because there is only one interaction going on—that between two bodies. The simplicity of the problem design lends a certain simplicity to the system behaviour: If you vary the set of input conditions slightly, then the output changes proportionately. In other words, the change in the output is proportionately small. The system configuration itself is simple enough to ensure that such a linear relation exists between the variations in the input, and the variations in the output. Therefore, in practice, even if you specify the input conditions somewhat loosely, your prediction does err, but not too much. Its error too remains bounded well enough that we can say that the description is deterministic. In other words, we can say that the system is deterministic, only because the input–output mapping is robust under minor changes to the input.

However, if you consider the N-body problem in all its generality, then the very size of the input set itself becomes big. Any two bodies from the N-bodies form a simple interacting pair. But the number of pairs is large, and worse, they all are coupled to each other through the positions of the bodies. Further, the nonlinearities involved in such a problem statement work to take away the robustness in the solution procedure. Not only is the size of the input set big, the end-solution too varies wildly with even a small variation in the input set. If you failed to specify even a single part of the input set to an adequate precision, then the predicted end-state can deterministically become very wildly different. The input–output mapping is deterministic—but it is not robust under minor changes to the input. A small change in the initial angle can lead to an object ending up either on this side of the Sun or that. Small changes produce big variations in predictions.
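The full N-body equations would make for a long listing, so the Python sketch below uses a standard stand-in with the same qualitative character: the Lorenz system, a small set of coupled, nonlinear ODEs. (The equations, parameter values, initial conditions, and the crude forward-Euler integrator here are all textbook choices of mine, not anything specific to the N-body problem; the sketch only illustrates the deterministic-but-fragile behaviour being described.)

```python
def lorenz_step(x, y, z, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude forward-Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def trajectory(state, n_steps=4000, dt=0.005):
    """Integrate for n_steps, recording every state along the way."""
    states = [state]
    for _ in range(n_steps):
        state = lorenz_step(*state, dt)
        states.append(state)
    return states

a = trajectory((1.0, 1.0, 1.0))
b = trajectory((1.0 + 1e-6, 1.0, 1.0))  # a tiny variation in the input

seps = [sum((u - v) ** 2 for u, v in zip(p, q)) ** 0.5
        for p, q in zip(a, b)]
print(seps[0])    # 1e-6: the input zone is tiny
print(max(seps))  # order of the attractor size: the output zone is huge
```

Every step of the integration is perfectly deterministic; the fragility lies entirely in how the coupled, nonlinear system propagates the tiny input variation.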

So, even if the mapping is known and is known to work (deterministically), you still cannot use this “knowledge” to actually perform the mapping from the input to the output, because the mapping is not robust to small variations in the input.

Ditto, for the soap-bubble collapse problem. If you change the initial configuration ever so slightly (e.g., if there was just a small air current in one setup and a more perfect stillness in another setup), then that alone can lead to wildly different predictions as to which bubble will collapse next.

What holds for the N-body problem also holds for the bubble collapse process. The similarity is that these are complex systems. Their parts may be simple, and the physical laws governing such simple parts may be completely deterministic. Yet, there are a great many parts, and they all are coupled together such that a small change in one part—one interaction—gets multiplied and felt in all other parts, making the overall system fragile to small changes in the input specifications.

Let me add: What holds for the N-body problem or the bubble-collapse problems also holds for quantum-mechanical measurement processes. The latter too involves a large number of parts that are nonlinearly coupled to each other, and hence, forms a complex system. It is as futile to expect that you would be able to predict the exact time of the next atomic decay as it is to expect that you will be able to predict which bubble collapses next.

But all the above still does not mean that the laws themselves are indeterministic, or that, therefore, physical theories must be regarded as indeterministic. The complex systems may not be robust. But they still are composed from deterministically operating parts. It’s just that the configuration of these parts is far too complex.


It would be far too naive to think that it should be possible to make exact (non-probabilistic) predictions even in the context of systems that are nonlinear, and whose parts are coupled together in complex manner. It smacks of harboring irresponsible attitudes to take this naive expectation as the standard by which to judge physical theories, and since they don’t come up to your expectations, to jump to the conclusion that physical theories are indeterministic in nature. That’s what has happened to QM.

It should have been clear to the critics of science that the truth-hood of an assertion (or a law, or a theory) is not subject to whether every complex manner in which it can be recombined with other theoretical elements leads to robust formulations or not. The truth-hood of an assertion is subject only to whether it, by itself and in its own context, corresponds to reality or not.

The error involved here is similar, in many ways, to expecting that if a substance is good for your health in a certain quantity, then it must be good in every quantity, or that if two medicines are without side-effects when taken individually, they must remain without any harmful effects even when taken in any combination—that there should be no interaction effects. It’s the same error, albeit couched in physicists’ and philosophers’ terms, that’s all.

… Too much emphasis on “math,” and too little an appreciation of the qualitative features, only helps in compounding the error.


A preliminary version of this post appeared as a comment on Roger Schlafly’s blog, here [^]. Schlafly has often wondered about the determinism vs. indeterminism issue on his blog, and often, seems to have taken positions similar to what I expressed here in this post.

The posting of this entry was motivated out of noticing certain remarks in Lee Smolin’s response to The Edge Question, 2013 edition [^], which I recently mentioned at my own blog, here [^].


A song I like:
(Marathi) “kaa re duraavaa, kaa re abolaa…”
Singer: Asha Bhosale
Music: Sudhir Phadke
Lyrics: Ga. Di. Madgulkar


[In the interests of providing better clarity, this post shall undergo further unannounced changes/updates over the due course of time.

Revision history:
2019.04.24 23:05: First published
2019.04.25 14:41: Posted a fully revised and enlarged version.
]


Update on 2019.04.10 18:50 IST: 

Dimitrios Psaltis, University of Arizona in Tucson, EHT project scientist [^]:

“The size and shape of the shadow matches the precise predictions of Einstein’s general theory of relativity, increasing our confidence in this century-old theory. Imaging a black hole is just the beginning of our effort to develop new tools that will enable us to interpret the massively complex data that nature gives us.”

Update over.


Stay tuned to the NSF on the next evening (on 10th April 2019 at 06:30 PM IST) for an announcement of astronomical proportions. Or so it is, I gather. See: “For Media” from NSF [^]. Another media advisory was made by NSF roughly 9 days ago, i.e., on the Fools’ Day, here [^]. Their news “reports” [^].


No, I don’t understand the relativity theory. Not even the “special” one (when it’s taken outside of its context of the so-called “classical” electrodynamics)—let alone the “general” one. It’s not one of my fields of knowledge.

But if I had to bet my money, then, based purely on my grasp of the sociological factors operative these days in science as practised in the Western world, I would bet a good amount (even Indian Rs. 1,000/-) that the announcement would be just a further confirmation of Einstein’s theory of general relativity.

That’s how such things go, in the Western world, today.

In other words, I would be very, very, very surprised—I mean to say, about my grasp of the sociology of science in the Western world—if they found something (anything) going even apparently contrary to any one of the implications of any one of Einstein’s theories. Here, emphatically, his theory of General Relativity.


That’s all for now, folks! Bye for now. Will update this post in a minor way when the facts are on the table.


TBD: The songs section. Will do that too, within the next 24 hours. That’s a promise. For sure. (Or, may be, right tonight, if a song nice enough to listen to, strikes me within the next half an hour or so… Bye, really, for now.)


A song I like:

(Hindi) “ek haseen shaam ko, dil meraa kho_ gayaa…”
Lyrics: Raajaa Mehdi Ali Khaan
Music: Madan Mohan
Singer: Mohammad Rafi [Some beautiful singing here…]

 

 

 

Further on QM, and on changing tracks over to Data Science

OK. As decided, I took a short trip to IIT Bombay, and saw a couple of professors of physics, for very brief face-to-face interactions on the 28th evening.

No chalk-work at the blackboard had to be done, because both of them were very busy—but also quick, really very quick, in getting to the meat of the matter.


As to the first professor I saw, I knew beforehand that he wouldn’t be very enthusiastic with any alternatives to anything in the mainstream QM.

He was already engrossed in a discussion with someone (who looked like a PhD student) when I knocked at the door of his cabin. The prof immediately mentioned that he had to finish (what looked like a few tons of) pending work items before going away on a month-long trip just a couple of days later! But, hey, as I said (in my last post), directly barging into a professor’s cabin has always done wonders for me! So, despite his having some heavy^{heavy} schedule, he still motioned me to sit down for a quick and short interaction.

The three of us (the prof, his student, and me) then immediately had a very highly compressed discussion for some 15-odd minutes. As expected, the discussion turned out to be not only very rapid but also quite uneven, because there were so many abrupt changes to the sub-topics and sub-issues as they were being brought up and dispatched in quick succession. …

It was not an ideal time to introduce my new approach, and so, I didn’t. I did mention, however, that I was trying to develop some such a thing. The professor was of the opinion that if you come up with a way to do faster simulations, it would always be welcome, but if you are going to argue against the well-established laws, then… [he just shook his head].

I told him that I was clear, very clear on one point. Suppose, I said, that I have a complex-valued field that is defined only over the physical 3D, and suppose further that my new approach (which involves such a 3D field) does work out. Then, suppose further that I get essentially the same results as the mainstream QM does.

In such a case, I said, I am going to say that here is a possibility of looking at it as a real physical mechanism underlying the QM theory.

And if people even then say that because it is in some way different from the established laws, therefore it is not to be taken seriously, then I am very clear that I am going to say: “You go your way and I will go mine.”

But of course, I further added that I still don’t know how the calculations are done in the mainstream QM for interacting electrons—that is, without invoking simplifying approximations (such as the fixed nucleus). I wanted to see how these calculations are done using the computational modeling approach (not perturbation theory).

It was at this point that the professor really got the sense of what I was trying to get at. He then remarked that variational formulations are capable enough, and proceeded to outline some of their features. To my query as to what kind of an ansatz they use, and what kind of parameters are involved in inducing the variations, he mentioned Chebyshev polynomials and a few other things. The student mentioned the Slater determinants. Then the professor remarked that the particulars of the ansatz and the particulars of the variational techniques were not so crucial, because all these techniques ultimately boil down to just diagonalizing a matrix. Somehow, I instinctively got the idea that he hadn’t been very much into numerical simulations himself, which turned out to be the case. In fact, he immediately said so himself: “I don’t do wavefunctions. [Someone else from the same department] does it.” I decided to see this other professor the next day, because it was already evening (almost approaching 6 PM or so).

A few wonderful clarifications later, it was time for me to leave, and so I thanked the professor profusely for accommodating me. The poor fellow didn’t even have the time to notice my gratitude; he had already switched back to his interrupted discussion with the student.

But yes, the meeting was fruitful to me, because the prof did get the “nerve” of the issue right, and in fact also gave me two very helpful papers to study, both of them review articles. After coming home, I realized that while one of them is quite relevant to me, the other one is absolutely god-damn relevant!


Anyway, after coming out of the department that evening, I was thinking of calling my friend to let him know that the purpose of the visit to the campus was over, and that I was thus totally free. While thinking about calling him and walking through the parking lot, I abruptly noticed a face that flashed something recognizable to me. It was this same second professor who “does wavefunctions!”

I had planned on seeing him the next day, but here he was, right in front of me, walking towards his car in a leisurely mood. Translated, it meant: he was very much free of all his students, and so was available for a chat with me! Right now!! Of course, I had never made his acquaintance in the past. I had only browsed through his home page once recently, and so could immediately make out the face, that’s all. He was just about to open the door of his car when I approached him and introduced myself. There followed another intense bout of discussions, for another 10-odd minutes.

This second prof has done numerical simulations himself, and so he was even faster in getting a sense of what kind of ideas I was toying with. Once again, I told him that I was trying out some new ideas, but I didn’t get any deeper into my approach, because I myself still don’t know whether my approach will produce the same results as the mainstream QM does or not. In any case, knowing the mainstream method of handling these things was crucial, I said.

I told him how, despite my extensive Internet searches, I had not found suitable material for doing the calculations. He then said that he would give me the details of a book. I should study this book first, and if there were still some difficulties or some discussions to be had, then he would be available, but the discussion would then have to progress in reference to what is already given in that book. A neat idea, this one was; perfectly fine by me. And it turns out that the book he suggested was indeed neat: absolutely, perfectly relevant to my needs, background, as well as preparation.


And with that ends this small story of this short visit to IIT Bombay. I went there with a purpose, and returned with a 50-page-long and very tightly written review paper, a second paper of some 20+ tightly written pages, and a reference to an entire PG-level book (about 500 pages). All of this material was absolutely unknown to me despite my searches, and, as it seems as of today, all of it is of utmost relevance to me and my new ideas.


But I have to get into Data Science first. Else I cannot survive. (I have been borrowing money to fend off the credit card minimum due amounts every month.)

So, I have decided to take a rest for today, and from tomorrow onwards, or maybe a day later—i.e., starting from the “shubh muhurat” (auspicious time) of April Fool’s day—I will begin my full-time pursuit of Data Science, with all that new material on QM to be studied only on a part-time basis. For today, however, I am just going to be doing a bit of time-pass here and there. That’s how this post got written.

Take care, and I wish you the same kind of luck as I had in spotting that second prof just like that in the parking lot. … If my approach works, then I know whom to contact first with my results, for informal comments on them.

Work hard, and bye for now.


A song I like
(Marathi) “dhunda_ madhumati raat re, naath re…”
Music: Master Krishnarao
Singer: Lata Mangeshkar
Lyrics: G. D. Madgulkar

[A Marathi classic. Credits are listed in a purely random order. A version that seems official (released by Rajshri Marathi) is here: [^] . However, somehow, the first stanza is not complete in it.

As to the sets shown in this (and all such) movies, right up to, say, the movie “Bajirao-Mastani,” I have—and always had—an issue. The wide-open spaces for the palaces they show in the movies are completely unrealistic, given the technology of those days (and the actual remains of the palaces, easily recalled by anyone). The ancients (whether here in India or at any other place) simply didn’t have the kind of technology that is needed in order to build such hugely wide internal (covered) spaces. Neither the so-called “Roman arch” (invented millennia earlier in India, I gather), nor the use of monolithic stones for girders, could possibly be enough to generate such huge spans. Idiots. If they can’t get even simple calculations right, that’s only to be expected—from them. But if they can’t even recall the visual details of the spans actually seen in the old palaces, that is simply inexcusable. Absolutely thorough morons, these movie-makers must be.]

 

Wrapping up my research on QM—without having to give up on it

Guess I am more or less ready to wrap up my research on QM. Here is the exact status as of today.


1. The status today:

I have convinced myself that my approach (viz., the idea of singular potentials anchored in the electronic positions, and with a 3D wave-field) is entirely correct, as far as the QM of non-interacting particles is concerned. That is to say, as far as the abstract case of two particles in a 0-potential 1D box, or the less abstract but still hypothetical case of two non-interacting electrons in the helium atom, and similar cases are concerned. (A side note: I have worked exclusively with spinless electrons. I don’t plan to include spin right away in my development—not even in my first paper on it. Other physicists are welcome to include it, if they wish to, any time they like.)

As to the actual case of two interacting particles (i.e., the interaction term in the Hamiltonian for the helium atom), I think that my approach should come to reproduce the same results as those obtained using the perturbation theory or the variational approach. However, I need to verify this part via discussions with physicists.
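For reference, the mainstream perturbation-theory benchmark for this interaction term is a standard textbook calculation (it is not part of my new approach): treat the electron–electron repulsion as a perturbation on two independent hydrogen-like 1s electrons. The numbers below are the standard ones.

```python
# First-order perturbation estimate for the helium ground-state energy.
# Standard textbook result; energies in eV, Z = nuclear charge.
Ry = 13.6                  # Rydberg energy, eV
Z = 2                      # helium

E0 = 2 * (-Z**2 * Ry)      # two non-interacting 1s electrons: -108.8 eV
E1 = (5.0 / 4.0) * Z * Ry  # first-order <e-e repulsion> correction: +34.0 eV

print(E0 + E1)             # about -74.8 eV; experiment gives about -79.0 eV
```

The gap of a few eV between the first-order estimate and experiment is exactly what the variational and higher-order methods close, and it is against these mainstream numbers that any new approach would have to be checked.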

All in all, I do think that the task which I had intended to complete (and to cross-check) before this month-end is already over—and I find that I don’t have to give up on QM (as suspected earlier [^]), because I don’t have to abandon my new approach in the first place.


2. A clarification on what had to be worked out and what had to be left alone:

To me, the crucial part at this stage (i.e., for the second-half of March) was verifying whether working with the two ideas of (i) a 3D wavefield, and (ii) electrons as “particles” having definite positions (or more correctly, as points of singularities in the potential field), still leads to the same mathematical description as in the mainstream (linear) quantum mechanics or not.

I now find that my new approach leads to the same maths—at least for the QM of the non-interacting particles. And further, I also have very definite grounds to believe that my new approach should also work out for two interacting particles (as in the He atom).

The crucial part at this stage (i.e., for the second half of March) didn’t have so much to do with the specific non-linearity which I have proposed earlier, or the details of the measurement process which it implies. Working out the details of these ideas would have been impossible—certainly beyond the capacities of any single physicist, and over such a short period. An entire team of PhD physicists would be needed to tackle the issues arising in pursuing this new approach, and to conduct the simulations to verify it.

BTW, in this context, I do have some definite ideas regarding how to hasten this process of unraveling the many particular aspects of the measurement process. I would share them once physicists show readiness to pursue this new approach. [Just in case I forget about it in future, let me note just a single cue-word for myself: “DFT”.]


3. Regarding revising the Outline document issued earlier:

Of course, the Outline document (which was earlier uploaded at iMechanica, on 11th February 2019) [^] needs to be revised extensively. A good deal of corrections and modifications are in order, and quite a few additions need to be made too—especially in the sections on ontology and entanglement.

However, I will edit this document at my leisure later; I will not allocate a continuous stretch of time exclusively for this task any more.

In fact, a good idea here would be to abandon that Outline document as is, and to issue a fresh document that deals with only the linear aspects of the theory—with just a sketchy conceptual idea of how the measurement process is supposed to progress in a broad background context. Such a document could then be turned into a good contribution to a journal like Nature, Science, or PRL.


4. The initial skepticism of the physicists:

Coming to the skepticism shown by the couple of physicists (with whom I had had some discussions by email), I think that, regardless of their objections (hollers, really speaking!), my main thesis still holds. It’s they who don’t understand the quantum theory—and let me hasten to add that by the words “quantum theory,” here I emphatically mean the mainstream quantum theory.

It is the mainstream QM itself which they don’t understand as well as they should. What my new approach then does is merely to uncover some of these weaknesses, that’s all. … Their weakness pertains to a lack of understanding of the 3D \Leftrightarrow 3ND correspondence in general, for any kind of physics: classical or quantum. … Why, I even doubt whether they understand even classical vibrations—coupled vibrations under variable potentials, that is—to the extent and depth to which they should.
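The classical coupled-vibrations point can be made concrete with the simplest textbook case: two equal masses tied to walls and coupled to each other by springs. The normal-mode frequencies come out of an eigenvalue problem on the stiffness matrix, structurally the same kind of diagonalization as in QM. A minimal sketch, with spring constants chosen purely for illustration:

```python
import numpy as np

# Two unit masses, each tied to a wall by a spring k, and coupled to
# each other by a spring kc. The equations of motion are x'' = -K x;
# the normal-mode angular frequencies are the square roots of the
# eigenvalues of the stiffness matrix K.
k, kc = 1.0, 0.5
K = np.array([[k + kc, -kc],
              [-kc,    k + kc]])

omega = np.sqrt(np.linalg.eigvalsh(K))
print(omega)  # in-phase mode sqrt(k), out-of-phase mode sqrt(k + 2*kc)
```

The two modes (masses swinging together, and against each other) are the classical analogue of working in the “right” basis; the same habit of thought carries over to coupled quantum systems.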

In short, it is now easy for me to leave their skepticism alone, because I can now clearly see where they failed to get the physics right.


5. Next action-item:

In the near future, I would like to make short trips to some Institutes nearby (viz., in no particular order, one or more of the following: IIT Bombay, IISER Pune, IUCAA Pune, and TIFR Mumbai). I would like to have some face-to-face discussions with physicists on this one single topic: the interaction term in the Hamiltonian for the helium atom. The discussions will be held strictly in the context that is common to us, i.e., in reference to the higher-dimensional Hilbert space of the mainstream QM.

In case no one from these Institutes responds to my requests, I plan to go and see the heads of these Institutes (i.e., Deans and Directors)—in person, if necessary. I might also undertake other action items. However, I also sincerely hope and think that such things would not at all be necessary. There is a reason why I think so. Professors may or may not respond to an outsider’s emails, but they do entertain you if you just show up in their cabin—and if you yourself are smart, courteous, direct, and, well, also experienced enough. And if you are capable of holding discussions on the “common” grounds alone, viz., in terms of the linear, mainstream QM as formulated in the higher-dimensional spaces (I gather it’s John von Neumann’s formulation), that is to say, the “Copenhagen interpretation.” (After doing all my studies—and, crucially, after developing what to me is a satisfactory new approach—I now find that I am no longer as against the Copenhagen interpretation as some of the physicists seem to be.) … All in all, I do hope and think that seeing Diro’s and all won’t be necessary.

I also equally sincerely hope that my approach comes out unscathed during / after these discussions. … Though the discussions externally would be held in terms of mainstream QM, I would also be simultaneously running a second movie of my approach, in my mind alone, cross-checking whether it holds or not. (No, they wouldn’t even suspect that I was doing precisely that.)

I will be able to undertake editing of the Outline document (or leaving it as is and issuing a fresh document) only after these discussions.


6. The bottom-line:

The bottom-line is that my main conceptual development regarding QM is more or less over now, though further developments, discussions, simulations, paper-writing and all can always go on forever—there is never an end to it.


7. Data Science!

So, I now declare that I am free to turn my main focus to the other thing that interests me, viz., Data Science.

I already have a few projects in mind, and would like to initiate work on them right away. One of the “projects” I would like to undertake in the near future is: writing very brief notes, written mainly for myself, regarding the mathematical techniques used in data science. Another one is regarding applying ML techniques to NDT (nondestructive testing). Stay tuned.


A song I like:

(Western, instrumental) “Lara’s theme” (Doctor Zhivago)
Composer: Maurice Jarre