# Ontologies in physics—1: Newtonian mechanics

0. Before we begin:

The mechanics described in the last post, namely that of the molecular dynamics (MD) technique, had three salient features: (i) a potential energy determined by the pair-wise separations of neighbouring discrete atomic nuclei (loosely called “atoms”), with its negative gradient forming a force field, (ii) the local force-field accelerating the atoms, thereby modifying their motions (velocities), and (iii) the resulting changes in the atomic positions leading to a change in the potential energy, thereby closing a feedback loop. Hence, an essentially nonlinear dynamics.
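The three-step loop just described can be put into a minimal sketch. The following is a hypothetical two-atom, one-dimensional toy (a Lennard-Jones pair potential with velocity-Verlet stepping), not the actual MD code discussed earlier; all parameter values are made up:

```python
import numpy as np

def lj_force(r, eps=1.0, sigma=1.0):
    """Negative gradient of the Lennard-Jones pair potential,
    V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    return 24.0 * eps * (2.0 * (sigma / r)**13 - (sigma / r)**7) / sigma

x = np.array([0.0, 1.5])   # positions of two "atoms" on a line
v = np.array([0.0, 0.0])   # initial velocities
m, dt = 1.0, 0.002

for _ in range(1000):
    f = lj_force(x[1] - x[0])
    a = np.array([-f, f]) / m          # (i) potential -> force -> acceleration
    x = x + v * dt + 0.5 * a * dt**2   # (iii) positions change ...
    f2 = lj_force(x[1] - x[0])
    a2 = np.array([-f2, f2]) / m       # ... so the force field changes too
    v = v + 0.5 * (a + a2) * dt        # (ii) accelerations modify velocities
```

Every pass through the loop feeds the new positions back into the potential; that feedback is what makes the dynamics nonlinear.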

We also saw the ramifications of such a chaotic dynamics, for instance, the obvious stability of phases over wide ranges of the important parameter, viz. temperature (i.e. average kinetic energy i.e. velocities). We also noted that MD is very close to QM, and that in my approach, the equations of QM and MD show a remarkable similarity.

However, the ontologies of QM and MD differ in that QM is not a classical theory. Further, the ontology of even purely classical concepts like potentials, used even at the MD level, is not always clearly spelt out in the literature.

Therefore, before we can go to my tweets on my new approach to QM, it is now further necessary to clearly understand certain basic facts of physics—pertaining to the various ontologies followed in it over a period of time. We will do that beginning with this post.

1. An ontology as the proper starting point of physics:

The starting point of a physics theory is not a mathematical equation, not even the kinds of configurations a system can assume. The proper starting point is: the kind of objects that are presumed to exist in the real world before the exercise of building a theoretical system involving them can even begin. Thus, the proper starting point of any and every physical theory is an implicit or explicit ontology.

Depending on the ontology followed, we may classify the physics theories (up to nonrelativistic QM) into these types:

• Newton’s original mechanics (here called the Newtonian Mechanics or NM),
• Classical Electrodynamics (EM), including:
  • the ontological analogy it suggested for the Newtonian gravitational field (NG),
• The non-relativistic quantum mechanics, as in Schrödinger’s formalism (QM).

I have blogged about these ontologies before. Go through a previous blog post [^] if you wish, but also note that my overall understanding of physics has undergone substantial revision since then. Indeed, if necessary, I might further split the ontologies as I go on writing about the above three/four.

The reason we must undertake this exercise of identifying a fairly precise description of these ontologies right now is that in the Outline document (on my new approach to QM [^]), in the section on ontology, I speak of some of the QM objects as being “classical.” However, there are certain important nuances to the meaning of even the word “classical,” especially when it comes to the NM vs. EM distinction. Hence the necessity of stating the exact ontological views.

I would have loved to follow the historical order of the development in the ontological views followed in physics. However, I don’t have time for that right now. So, the development will be only very broadly in the historical order.

2. The ontology followed in the original Newtonian mechanics (NM):

2.1 Objects:

The world consists of spatially discrete objects that are spatially separated from each other. They are of finite sizes—neither zero nor infinite. (Ignore all mathematicians and even mathematical physicists who argue otherwise.) Take a piece of paper and draw some blobs for some objects, say for the earth, the sun or the moon. Or, for some neat solid objects like billiard balls. These blobs represent the primary objects of NM.

The objects are perceptually observed to be spatially extended (their opposite ends don’t coincide), and it is perceptually evident that any one object lies in a specific spatial relationship with the other objects, that it has its own location.

2.2 Absolute space:

The objects of NM exist in an absolute space.

Take an imaginary ruler and an imaginary sharp object. Mark some imaginary, straight-line scratches on the empty space, so as to leave an infinite grid of locations on it.

Yes, this is doable. Just make sure to undertake this exercise while being firmly seated in your armchair on the earth, without ever moving. (Don’t worry about some other grid that some other guy sitting in some other arm-chair makes. In the dynamical equations, they don’t conflict with each other.) You just have to realize that in NM, the world is very stable and simple.

The walls of your room, e.g., don’t move or deform. They form a rigid body, and the surfaces of any such rigid body can be marked with a neat system of lines, like your school-time graph paper. You can also imagine strings tied tautly, forming straight lines between opposite walls of the room. A system of such strings, when taken to an infinitely small size and imagined to offer no resistance to the motion of any objects (seen above), easily provides a means to measure locations within the room. A similar system of straight lines, extended infinitely in all directions, yields a system of measurement.

But we need to make a distinction between a system of measurements and the thing that is being measured. (We are into ontology.) Here we suppose that the volume inside an empty room is not completely empty. It is filled with a background object. It is a physical object but of a special kind—it offers no resistance to any motion of anything through it.

The grid marked by you never moves because the background object that is the empty space also does not move. They both remain fixed in all respects at all times and forever.

However, objects of the first kind (solid ones like moon, Sun, etc.) are often seen as moving through the aforementioned, unmovable, undeformable background object—called the absolute space—in a lawful manner.

The concepts of position and distance are abstracted from those of locations and extensions of objects.

The concept of space has two meanings: (i) as the physically existing background object, and (ii) as a mathematically devised system of establishing quantitative measures like positions, distances, and relationships between them.

2.3 Configurations and changes in them:

Objects taken together with their (absolute) positions are said to form a configuration.

It is physically observed that configurations of objects are continuously changing from one state to another. There are an infinite number of states in between any two states, and they come to occur in some specific (observed) order. The order being followed in going through all such states (and all the attributes of the stated orderliness) is lawful—it cannot be changed arbitrarily. The individual states are described in reference to the positions of objects against the absolute space. The orderly progression in them occurs because the configuration of the universe is always changing (whether the one you see around your armchair does or not).

2.4 Absolute time:

The immutability of the order in the universal progression of changes in configurations implies a certain measure called time.

With time, you compare and contrast the perceived speeds with which progressions in the states of a system undergo changes: the faster the perceived changes, the smaller the elapsed time.

Perceiving differences in the speeds of changes of configurations is easiest when the phenomena are of perceptually reproducible speeds and hence durations, which most saliently (though not exclusively) is the case when they are periodic. For instance, a pendulum comes back to a certain position (in a single cycle of oscillation) much faster; the sand in a sand-clock gets exhausted much more slowly; the Sun rises again at a pace that is slower still.

The perception of the speeds in the changes of physical configurations is at the basis of the concept of time.

Time is a high-level concept. It is not at all the most fundamental one. (Both Kant and Einstein were summarily wrong here.) It certainly is not as fundamental as the concept of space is. Let me repeat the logic:

Objects come first. Then come the perceived extensions and locations of objects. Then comes the concept of space as a physical object. Then the concept of the mathematically defined absolute space, and then of configurations. Then the orderly and continuous changes in configurations. Then we arrive at the idea of defining a certain kind of measure for such changes by comparing two continuous changes with each other on the basis of their perceived rapidity. It’s only at this point in the logical development that we can even think of time, or refine this concept by ascribing to it a mathematical quantity that continuously increases. Space and time are not on the same footing—neither in physical terms nor in the complexity of reasoning underlying their mathematical definitions.

This attribute of the perceived speediness of changes (i.e. the attribute of time) is common to all the changes occurring to all the objects in the universe—not just to their motions. Hence, any change whatsoever can be measured using time.

Thus, the physical universe itself has this attribute called time. Time physically exists—via the inverse relation of relative speediness, which is directly observed.

Since time is common to all changes at all points of the absolute space in the universe, it can be put to use when it comes to quantitatively characterizing the changes associated with motions of objects.

In NM, the measures of time also are uniform at all locations in the absolute space.

Many of these considerations remain exactly intact even in the relativity theory. What changes in the relativity theory are only the mathematically defined systems of space and time measurements. Neither the fact that space and time physically exist changes, nor the fact that they are entirely different in their physical origins and sit at uneven levels in the knowledge hierarchy. Anyone who suggests otherwise is stupid—be it a Kant, a Poincaré, an Einstein, or your next rising star on the pop-sci horizon.

Now, given the absolute space and the absolute time, it is “time” to study motions (of objects).

2.5 Mass:

Objects have mass. Mass is a dynamically defined measure that happens to match exceedingly well with the notion of the amount of matter (“stuff”) possessed by objects. In NM, mass is measured (as in practice it still is) by measuring weight—i.e., the strength of an object’s response to the earth’s gravitational field (which is common to all the objects being weighed—in fact, it is quantitatively constant for all of them).

Mass is an attribute of individual objects. Hence, when a given object moves and thereby changes its location, so does its mass. Thus, mass has no location other than that of the object whose attribute it is. Obvious, no? (In the NM ontology, it is.)

2.6 Point-particles:

Objects can be abstractly regarded as point-particles via the idea of the center of mass (CoM). The CoM is the distinguished point which, when entered into dynamical equations, correctly reproduces the observed motions of the actual objects, especially those with spherical symmetry (so that angular momentum etc. are not involved).

The view of objects as point-particles is an abstraction. What metaphysically exist are only spatially finite objects. However, via abstraction, objects can be taken as massive point-particles (i.e., particles having no extension).

Some of the salient features associated with the motions of point-particles are: (i) their trajectories (the continuous and mathematically simple paths that they trace in the absolute space over absolute time), (ii) their displacements, (iii) their speeds and directions (velocities), (iv) the changes in their motions i.e. their accelerations, etc.
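The CoM abstraction mentioned above is easy to compute: it is the mass-weighted average of the positions of an object's mass points. A small sketch, with made-up masses and positions:

```python
import numpy as np

# A rigid object idealized as a small cloud of mass points.
masses = np.array([1.0, 2.0, 3.0])
positions = np.array([[0.0, 0.0],
                      [1.0, 0.0],
                      [0.0, 2.0]])

# Center of mass: R = (sum of m_i * r_i) / (sum of m_i).
com = (masses[:, None] * positions).sum(axis=0) / masses.sum()
print(com)  # the single distinguished point that stands in for the object
```

Entering `com` (and the total mass) into the dynamical equations is what lets a finite object be treated as a point-particle.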

2.7 The direct contact as the only means of interactions between objects:

Objects can be made to change (some or more of the measures of) their motions due to the actions of other objects.

In NM, physical objects cannot be made to change their motions through mental action alone. They change motions only after interaction with other physical objects.

In NM, the only mechanism through which two physical objects can come to change their motions is: via a direct physical contact between them.

The contact may last for very short durations (as happens in the collisions of billiard balls), which can be abstractly described as an instantaneous change. The contact may last, continuously, for a long time (as happens with motions of billiard balls on a table with friction; or the idealized, frictionless motion of a ball through air; or of an ideal bead sliding without friction on an ideal wire, etc.).

2.8 Momentum and force:

The dynamically most relevant measure of motion (in Newton’s words, its “quantity”) is: the momentum of an object. It at once captures the effects of both mass and velocity on an object’s dynamical behavior.

The physical mechanism of how two objects affect each other’s motions is: the direct physical contact. The (mathematically devised) quantitative measure of how much an object’s motion has been affected is the force, defined as time-rate of change of its momentum.

Thus, in NM, forces arise only by direct contact between two bodies, and only for the duration that they are in contact.

Since, in NM, the mass of a given object always remains constant, force and acceleration amount to just two different terms describing essentially the same quantitative measures of the same physical facts. Any acceleration of a point-particle necessarily implies a force acting on it; any (net non-zero) force applied to a point-particle necessarily accelerates it. There also is no delay between the action of a force and the acceleration it produces in reality—or vice versa. (Deceleration of one object while in contact with a second object is a production of a force by the first on the second.)

The universe obeys the law of conservation of momentum.

2.9 An interaction, but without direct contact—gravity:

In the ontology of NM, the only exception to the rule of interaction via direct contact is: gravity.

No one knows how it can be that one object affects—forces—another object at a distance, with literally nothing in between them. Let’s call it an instantaneous action at a distance (IAD).

This issue of the presence of IAD in gravity is a riddle for NM because physical contact is the only mechanism allowed in it by which forces can ever come to arise, i.e., the direct contact is the only mechanism available for one object to affect another object.

[The legal system till date recognizes this principle. To show that a moving knife involved in a murder was not wielded by you, you only have to show that there was no direct physical contact between you and that knife, at that time.]

Coming back to the ontological riddle, no one knows how to resolve it within the context of the NM ontology. Not even Newton. Therefore, the dynamical equation that is Newton’s law of universal gravitation is an incomplete description. Even though it works perfectly in explaining all the observed data concerning the celestial motions (such as those by Kepler).

2.10 The energetics program and the potential energy:

The same physics as is given by Newton’s laws can also be described using a different ontological term: energy.

An object in motion has an attribute called the kinetic energy (whose quantitative measure is defined as $1/2 mv^2$). Objects in a perfectly elastic collision conserve their total kinetic energy. This is a direct parallel to Newton’s original analysis via the conservation of total momentum.
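The parallel between the two conservation analyses can be checked directly. A minimal sketch for a head-on, perfectly elastic collision in one dimension (the masses and velocities are made-up values; the post-collision formulas follow from solving the momentum and KE conservation equations simultaneously):

```python
def elastic_collision_1d(m1, v1, m2, v2):
    # Post-collision velocities obtained from conservation of
    # total momentum and total kinetic energy.
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

m1, v1, m2, v2 = 1.0, 2.0, 3.0, -1.0
v1p, v2p = elastic_collision_1d(m1, v1, m2, v2)

# Both total momentum and total KE are unchanged by the collision.
p_before = m1 * v1 + m2 * v2
p_after = m1 * v1p + m2 * v2p
ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * m1 * v1p**2 + 0.5 * m2 * v2p**2
```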

In the energetics program (pursued by Leibniz, Euler, Lagrange, and others), two objects interacting at a distance with each other via gravity, say a massive ball and the earth, have an additional energy associated with them. This energy is associated not with their motions, but with their common configuration. This energy is called the potential energy.

Consider a ball held in hand at some height, which is about to be released. So long as the ball is not released, the configuration of the ball and the earth stays the same over any lapse of time. Though both the objects have zero kinetic energy, their configuration still is considered to have this second form of energy called potential energy. For an unreleased ball, since the configuration of ball–earth system stays the same in time, the potential energy of this configuration also stays the same.

The potential energy measures the unrealized capacity of a configuration to undergo change, if the physical constraints restricting the possible motion, such as the support for the ball, are removed.

When the support is removed, the ball falls down. It accelerates towards the ground.

In the energetic analysis, the ball acquires a kinetic energy (of motion). If the initial KE is zero, and if the total energy is conserved, then where does this KE of the falling ball come from? It comes about because the ball–earth system is supposed to be simultaneously losing its potential energy. When the ball undergoes free fall, the system configuration is continuously changing. So, the energy associated with the configuration (relative positions) also is continuously changing. For the conservation law to work, the system has to lose PE so that it can gain KE. The gaining of KE is regarded as a process of realization of a potential. The realized potential is subtracted from the initial potential energy.

Just before the ball comes to rest at the ground, its speed is the highest. That’s because almost all of its initial potential energy has been realized; the realization consists of this particular instantaneous state of motion (of the highest speed).

Thus, the potential energy of the ball (its capacity to undergo motion) is higher at a height, and it is zero at the ground. (After all, once it’s on the ground, it can’t move any further down.) Mathematically, the potential energy of a system is given as $mgh$.
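The bookkeeping for the falling ball can be sketched directly from $KE = \frac{1}{2}mv^2$ and $PE = mgh$. The mass and height below are made-up values:

```python
m, g, h0 = 0.5, 9.81, 10.0   # illustrative mass (kg), gravity, drop height (m)

# Sample the free fall at a few instants. At each instant, the PE lost by
# the ball-earth configuration exactly equals the KE gained by the ball.
for t in [0.0, 0.5, 1.0]:
    h = h0 - 0.5 * g * t**2   # height above the ground
    v = g * t                 # speed acquired so far
    pe = m * g * h
    ke = 0.5 * m * v**2
    print(t, pe, ke, pe + ke)  # the sum stays m*g*h0 throughout the fall
```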

When action-at-a-distance forces like gravity are part of a system description, the total energy of a system at any instant is the sum, at that instant, of the kinetic energies of all its separate constituent objects taken individually, and the potential energy associated with all their positions taken at once—i.e. their configuration.

Thus, notice, the potential energy belongs to the configuration—to the entire system—and not to any one object. That’s in contrast to the kinetic energy. Each object has its own kinetic energy (when it’s in motion). But a single isolated object does not have any potential energy, be it stationary or in motion. Only two or more objects taken together (as a system) possess PE.

For this reason, in NM, the KE has a point-position: it is always located where that object is, during motion. In contrast, the PE does not have any spatial position. It is an attribute of the relative positions of two or more objects taken at once. That’s why, in NM, there is no spatially distinguished point where the PE of a falling ball exists—there is no PE of a ball in the first place!

The conservation law for the universe is: KE + PE = constant.

2.11 A recap of the NM ontology:

In short, the ontology of NM is this: The objects that NM studies are massive (like solid balls), and isolated from each other in the absolute space. They can move and affect each other’s motions primarily through direct contact. In an extended description, two objects can also act via gravity, though the mechanism for such action at a distance is not known in the NM ontology. (In a tentative substitute for the ontology, gravity is taken to act as if through an invisible string connecting the two spatially separated objects.) In NM, the motions and interactions of objects can be described with reference to the passage of a common universal time. Point-particles don’t physically exist, but form a useful abstraction.

Notice, specific ideas like Newton’s laws, or the law of conservation of momentum or energy, though mentioned above, are not a part of NM ontology as such—they form a part only of its physics, not of ontology.

2.12 In NM, potentials don’t form fields, and so, are attributes of configurations, not of individual objects:

Notice also that while potential energy has entered the physics analysis using NM, it is still not being regarded as a field. Neither gravity nor the potential is yet regarded as a field. An object like a field is missing from the NM ontology.

In principle, for visualization of what the world is like using Newton’s own approach, you can draw isolated dots in space representing massive point-particles; indicate (or show in animation) their velocities/momenta; and also indicate the forces which arise between them—which can happen only during a direct contact.

Forces arise and act at the point of direct contact but nowhere else. Therefore, forces arise only at the point-positions of particles when they are in direct contact—and it is for this reason that forces are able to affect the particles’ motions. You can use Newton’s laws (or conservation of the sum of PE and KE) and calculate the motions of such particles. If objects of finite sizes have to be dealt with as such, they are to be seen as collections of infinitely many particles each of which is infinitely small. It is the particles that are basic to the NM ontology.

In using the Leibniz/Euler/Lagrange’s energetics program, you still draw only isolated dots for particles. However, you now implicitly suppose that they form a system.

“System” actually is a much later-date concept. Using modern ideas, we can draw an imaginary box around the particles which are being considered for a dynamical description. We can then imagine as if a meter is attached to this imaginary box. This meter displays a number, and calculations involving it enter into analysis. The reading on the meter gives the potential energy for the overall system—for all the particles put together, in the configuration in which they are found together. Thus, this number is not associated with any one particle in the system, but with the overall system taken as a whole (or, the system taken as an abstract object of sorts).
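The “meter on the box” picture can be sketched for gravity: the reading is one number for the whole configuration, computed from all pair-wise separations at once. The masses and positions below are made up, and only serve to show that no single particle owns any share of the result:

```python
import itertools
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def system_pe(masses, positions):
    # The "meter reading": sum of -G*m_i*m_j/r_ij over all pairs.
    # It is an attribute of the configuration, not of any one particle.
    pe = 0.0
    for i, j in itertools.combinations(range(len(masses)), 2):
        r = np.linalg.norm(positions[i] - positions[j])
        pe += -G * masses[i] * masses[j] / r
    return pe

masses = [5.0e3, 2.0e3, 1.0e3]   # hypothetical masses, kg
positions = np.array([[0.0, 0.0],
                      [10.0, 0.0],
                      [0.0, 5.0]])  # hypothetical positions, m

print(system_pe(masses, positions))  # one number for the whole system
```

Move any single particle and the reading changes, but there is still no way to say where, spatially, the PE itself is.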

Thus, to repeat, the potential energy “of a ball” is a rather loose expression, if you follow the NM ontology. The PE is not an attribute of a single object. Hence, PE is not something which moves through space along with the ball. PE remains a global property of a system, with no spatial properties (like position) specified for it.

The idea of a potential as something that is an attribute of an individual object itself (regardless of the system it is in), though so familiar to us today, actually forms a part of a distinct development in ontology. This development is best illustrated with Maxwell’s electrodynamics. I will come to it after a few days.

… In the meanwhile, GaNapati festival greetings, take care, and bye for now…

A song I like:

(Marathi) “too sukhakartaa too du:khahartaa…”
Singer: Ashalata Wabgaonkar
Lyrics and Music: Vijay Sonalkar

History: Originally published (~2,700 words) on 2019/09/02 15:48 IST. Considerably extended (but without changing the sub-paragraphs structure or altering the basic points—~3,900 words) on 2019/09/03 15:04 IST. … Now am leaving it in whatever shape it is in.

---

# Determinism, Indeterminism, Probability, and the nature of the laws of physics—a second take…

After I wrote the last post [^], several points struck me. Some of the points that were mostly implicit needed to be addressed systematically. So, I began writing a small document containing these after-thoughts, focusing more on the structural side of the argument.

However, I don’t find time to convert these points + statements into a proper write-up. At the same time, I want to get done with this topic, at least for now, so that I can better focus on some other tasks related to data science. So, let me share the write-up in whatever form it is in, currently. Sorry for its uneven tone and all (compared to even my other writing, that is!)

Causality as a concept is very poorly understood by present-day physicists. They typically understand only one sense of the term: evolution in time. But causality is a far broader concept. Here I agree with Ayn Rand / Leonard Peikoff (OPAR). See the Ayn Rand Lexicon entry, here [^]. (However, I wrote the points below without re-reading it, and instead, relying on whatever understanding I have already come to develop starting from my studies of the same material.)

Physical universe consists of objects. Objects have identity. Identity is the sum total of all characteristics, attributes, properties, etc., of an object. Objects act in accordance with their identity; they cannot act otherwise. Interactions are not primary; they do not come into being without there being objects that undergo the interactions. Objects do not change their respective identities when they take actions—not even during interactions with other objects. The law of causality is a higher-level view taken of this fact.

In the cause-effect relationship, the cause refers to the nature (identity) of an object, and the effect refers to an action that the object takes (or undergoes). Both refer to one and the same object. TBD: Trace the example of one moving billiard ball undergoing a perfectly elastic collision with another billiard ball. Bring out how the interaction—here, the pair of the contact forces—is a name for each ball undergoing an action in accordance with its nature. An interaction is a pair of actions.

A physical law is a mapping (e.g., a function, or even a functional) from inputs to outputs.

The quantitative laws of physics often use the real number system, i.e., quantification with infinite precision. But infinite precision is a mathematical concept, not a physical one. (Expect physicists to eternally keep on confusing the two kinds of concepts.)

Application of a physical law traces the same conceptual linkages as are involved in the formulation of law, but in the reverse direction.

In both formulation of a physical law and in its application, there is always some regime of applicability which is at least implicitly understood for both inputs and outputs. A pertinent idea here is: range of variations. A further idea is the response of the output to small variations in the input.

Example: Prediction by software whether a cricket ball would have hit the stumps or not, in an LBW situation.

The input position being used by the software in a certain LBW decision could be off from reality by millimeters, or at least, by a fraction of a millimeter. Still, the law (the mapping) is such that it produces predictions that are within small limits, so that it can be relied on.

Two input values, each theoretically infinitely precise, but differing by a small magnitude from each other, may be taken to define an interval or zone of input variations. As to the zone of the corresponding output, it may be thought of as an oval produced in the plane of the stumps, using the deterministic method used in making predictions.

The nature of the law governing the motion of the ball (even after factoring in aspects like effects of interaction with air and turbulence, etc.) itself is such that the size of the O/P zone remains small enough. (It does not grow exponentially.) Hence, we can use the software confidently.

That is to say, the software can be confidently used for predicting—i.e., determining—the zone of possible landing of the ball in the plane of the stumps.

Overall, here are three elements that must be noted: (i) Each of the input positions lying at the extreme ends of the input zone of variations itself does have an infinite precision. (ii) Further, the mapping (the law) has theoretically infinite precision. (iii) Each of the outputs lying at extreme ends of the output zone also itself has theoretically infinite precision.
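These zones can be made concrete with a toy version of the LBW predictor. The straight-line mapping below is a hypothetical stand-in for real ball-tracking software (drag, spin, and bounce are all ignored); the point is only that a millimetre-scale input zone yields a millimetre-scale output zone:

```python
import numpy as np

def predict_impact_y(x0, v=(30.0, 1.2), x_stumps=10.0):
    # Deterministic mapping: straight-line flight from release point
    # x0 = (x, y) to the plane of the stumps at x = x_stumps.
    t = (x_stumps - x0[0]) / v[0]
    return x0[1] + v[1] * t

rng = np.random.default_rng(0)
nominal = np.array([0.0, 0.6])

# Input zone: the release position is known only to within ~1 mm.
inputs = nominal + rng.uniform(-1e-3, 1e-3, size=(1000, 2))
outputs = np.array([predict_impact_y(p) for p in inputs])

spread = outputs.max() - outputs.min()
print(spread)  # the output zone also stays at the millimetre scale
```

Each individual input and output here is still an infinitely precise real number; the zones are a separate feature of the application procedure, exactly as argued above.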

Existence of such infinite precision is a given. But it is not at all the relevant issue.

What matters in applications is something more than these three. It is the fact that applications always involve zones of variations in the inputs and outputs.

Such zones are then used in error estimates. (Also for engineering control purposes, say as in automation or robotic applications.) But the fact that quantities being fed to the program as inputs themselves may be in error is not the crux of the issue. If you focus too much on errors, you will simply get into an infinite regress of error bounds for error bounds for error bounds…

Focus, instead, on the infinite precision of the three kinds mentioned above, and on the fact that, in addition to those infinitely precise quantities, the application procedure does involve a zone of possible variations in the input, and also the problem of estimating how large the corresponding zone of variations in the output is—whether it is sufficiently small for the law and a particular application procedure or situation.

In physics, such details of application procedures are kept merely understood. They are hardly, if ever, mentioned and discussed explicitly. Physicists again show their poor epistemology. They discuss such things in terms not of the zones but of “error” bounds. This already inserts the wedge of dichotomy: infinitely precise laws vs. errors in applications. This dichotomy is entirely uncalled for. But, physicists simply aren’t that smart, that’s all.

An “indeterministic mapping,” for the above example (LBW decisions), would be the one in which the ball can be mapped as going anywhere over, and perhaps even beyond, the stadium.

Such a law and the application method (including the software) would be useless as an aid in the LBW decisions.

However, phenomenologically, the very dynamics of the cricket ball’s motion itself is simple enough that it leads to a causal law whose nature is such that for a small variation in the input conditions (a small input variations zone), the predicted zone of the O/P also is small enough. It is for this reason that we say that predictions are possible in this situation. That is to say, this is not an indeterministic situation or law.

Not all physical situations are exactly like the example of predicting the motion of the cricket ball. There are physical situations which show a certain common—and confusing—characteristic.

They involve interactions that are deterministic when occurring between two (or few) bodies. Thus, the laws governing a simple interaction between one or two bodies are deterministic—in the above sense of the term (i.e., in terms of infinite precision for mapping, and an existence of the zones of variations in the inputs and outputs).

But these physical situations also involve: (i) a nonlinear mapping, (ii) a sufficiently large number of interacting bodies, and further, (iii) coupling of all the interactions.

It is these physical situations which produce such an overall system behaviour that it can produce an exponentially diverging output zone even for a small zone of input variations.

So, a small change in I/P is sufficient to produce a huge change in O/P.

However, note the confusing part. Even if the system behaviour for a large number of bodies does show an exponential increase in the output zone, the mapping itself is such that when it is applied to only one pair of bodies in isolation of all the others, then the output zone does remain non-exponential.
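A standard one-line stand-in for such nonlinear coupled dynamics is the chaotic logistic map; the sketch below shows the output zone growing exponentially from a tiny input zone, even though every step is fully deterministic:

```python
# Two deterministic trajectories of the logistic map x -> 4x(1-x),
# started a distance of 1e-10 apart (a tiny "input zone").
x, y = 0.3, 0.3 + 1e-10
seps = []

for _ in range(60):
    x, y = 4.0 * x * (1.0 - x), 4.0 * y * (1.0 - y)
    seps.append(abs(x - y))

# The separation grows roughly exponentially until it saturates:
# the "output zone" becomes as wide as the whole interval [0, 1].
print(seps[0], max(seps))
```

The mapping applied once (or to a single pair) keeps nearby inputs nearby; it is only the iterated, coupled application that blows the zone up—the very distinction that the two camps keep talking past.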

It is this characteristic which tricks people into forming two camps that go on arguing eternally. One side says that it is deterministic (making reference to a single-pair interaction), the other side says it is indeterministic (making reference to a large number of interactions, based on the same law).

The fallacy arises out of confusing a characteristic of the application method or model (variations in input and output zones) with the precision of the law or the mapping.

Example: N-body problem.

Example: NS equations as capturing a continuum description (a nonlinear one) of a very large number of bodies.

Example: Several other physical laws entering the coupled description, apart from the NS equations, in the bubbles collapse problem.

Example: Quantum mechanics

The Law vs. the System distinction: What is indeterministic is not a law governing a simple interaction taken abstractly (in which context the law was formed), but the behaviour of the system. A law (a governing equation) can be deterministic, but still, the system behavior can become indeterministic.

Even indeterministic models or system designs, when they are described using a different kind of maths (one formulated at a higher level of abstraction, relying on the limiting values of relative frequencies, i.e., probabilities), still do show causality.

Yes, probability is a notion which itself is based on causality—after all, it uses limiting values for the relative frequencies. The ability to use the limiting processes squarely rests on there being some definite features which, by being definite, do help reveal the existence of the identity. If such features (enduring, causal) were not part of the identity of the objects that are abstractly seen to act probabilistically, then no application of a limiting process would be possible, and so not even a definition of probability or randomness would be possible.
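As a minimal numerical sketch of this limiting process, one can watch the relative frequency of heads in a simulated fair-coin experiment settle toward a definite value. The seeded generator and the sample sizes below are illustrative assumptions:

```python
# Sketch: probability as the limiting value of a relative frequency.
# A seeded pseudo-random coin stands in for the definite, enduring
# features of the repeated trials; seed and sample sizes are
# illustrative choices.
import random

rng = random.Random(42)

def relative_frequency_of_heads(n):
    """Fraction of heads in n simulated fair-coin tosses."""
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return heads / n

for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency_of_heads(n))
# The printed fractions approach 0.5 as n grows, illustrating the
# limiting process that the notion of probability rests on.
```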

The notion of probability is more fundamental than that of randomness. Randomness is an abstract notion that idealizes the absence of every form of order. … You can use the axioms of probability even when sequences are known to be not random, can’t you? Also, hierarchically, order comes before randomness. Randomness is defined as the absence of (all applicable forms of) orderliness; orderliness is not defined as the absence of randomness—it is defined via the “some but any” principle, in reference to various more concrete instances that show some or the other definable form of order.

But expect not just physicists but also mathematicians, computer scientists, and philosophers, to eternally keep on confusing the issues involved here, too. They all are dumb.

Summary:

Let me now mention a few important take-aways (though some new points not discussed above also crept in, sorry!):

• Physical laws are always causal.
• Physical laws often use the infinite precision of the real number system, and hence, they do show the mathematical character of infinite precision.
• The solution paradigm used in physics requires specifying some input numbers and calculating the corresponding output numbers. If the physical law is based on the real number system, then all the numbers used too are supposed to have infinite precision.
• Applications always involve a consideration of the zone of variations in the input conditions and the corresponding zone of variations in the output predictions. The relation between the sizes of the two zones is determined by the nature of the physical law itself. If for a small variation in the input zone the law predicts a sufficiently small output zone, people call the law itself deterministic.
• Complex systems are not always composed from parts that are in themselves complex. Complex systems can be built by arranging essentially simple parts in complex configurations.
• Each of the simpler parts may be governed by a deterministic law. However, when the input-output zones are considered for the complex system taken as a whole, the system behaviour may show an exponential increase in the size of the output zone. In such a case, the system must be described as indeterministic.
• Indeterministic systems still are based on causal laws. Hence, with appropriate methods and abstractions (including mathematical ones), they can be made to reveal the underlying causality. One useful theory is that of probability. The theory turns the supposed disadvantage (a large number of interacting bodies) on its head, and uses limiting values of relative frequencies, i.e., probability. The probability theory itself is based on causality, and so are indeterministic systems.
• Systems may be deterministic or indeterministic, and in the latter case, they may be described using the maths of probability theory. Physical laws are always causal. However, if they have to be described using the terms of determinism or indeterminism, then we will have to say that they are always deterministic. After all, if the physical laws showed an exponentially large output zone even when simpler systems were considered, they could not be formulated or regarded as laws.

In conclusion: Physical laws are always causal. They may also always be regarded as being deterministic. However, if systems are complex, then even if the laws governing their simpler parts were all deterministic, the system behavior itself may turn out to be indeterministic. Some indeterministic systems can be well described using the theory of probability. The theory of probability itself is based on the idea of causality albeit measures defined over large number of instances are taken, thereby exploiting the fact that there are far too many objects interacting in a complex manner.

A song I like:

(Hindi) “ho re ghungaroo kaa bole…”
Singer: Lata Mangeshkar
Music: R. D. Burman
Lyrics: Anand Bakshi

/

# Determinism, Indeterminism, and the nature of the laws of physics…

The laws of physics are causal, but this fact does not imply that they can be used to determine each and everything that you feel should be determinable using them, in each and every context in which they apply. What matters is the nature of the laws themselves. The laws of physics are not literally boundless; nothing in the universe is. They are logically bounded by the kind of abstractions they are.

Let’s take a concrete example.

Take a bottle, pour a little water and detergent in it, shake well, and have fun watching the Technicolor wonder which results. Bubbles form; they show resplendent colors. Then, some of them shrink, others grow, one or two of them eventually collapse, and the rest of the network of the similar bubbles adjusts itself. The process continues.

Looking at it in an idle way can be fun: those colorful tendrils of water sliding over those thin little surfaces, those fascinating hues and geometric patterns… That dynamics which unfolds at such a leisurely pace. … Just watching it all can make for a neat time-sink—at least for a while.

But merely having fun watching bubbles collapse is not physics. Physics proper begins with a lawful description of the many different aspects of the visually evident spectacle—be it the explanation as to how those unreal-looking colors come about, or be it an explanation of the mechanisms involved in their shrinkage or growth, and eventual collapse, … Or, a prediction of exactly which bubble is going to collapse next.

For now, consider the problem of predicting, given a configuration of some bubbles at a certain time $t_0$, exactly which bubble is going to collapse next, and why… To solve this problem, we have to study many different processes involved in the bubbles dynamics…

Theories do exist to predict various aspects of the bubble collapse process taken individually. Further, it should also be possible to combine them together. The explanation involves such theories as: the Navier-Stokes equations, which govern the flow of soap water in the thin films, and of the motion of the air entrapped within each bubble; the phenomenon of film-breakage, which can involve either particle-based approaches to the modeling of fluids, or, if you insist on a continuum theory, then theories of crack initiation and growth in thin lamellae/shells; the propagation of a film-breakage, and the propagation of the stress-strain waves associated with the process; and also, theories concerning how the collapse process gets preferentially localized to only one (or at most a few) bubbles, which involves, again, nonlinear theories from mechanics of materials, and materials science.

All these are causal theories. It should also be possible to “throw them together” in a multi-physics simulation.

But even then, they still are not very useful in predicting which bubble in your particular setup is going to collapse next, and when, because not just the combination of these theories, but even each theory involved by itself, is too complex.

The fact of the matter is, we cannot in practice predict precisely which bubble is going to collapse next.

The reason for our inability to predict, in this context, does not have to do just with the precision of the initial conditions. It’s also their vastness.

And, the known, causal, physical laws tell us how a sensitive dependence on the smallest changes in the initial conditions deterministically leads to such huge changes in the outcomes that actually using these laws to make a prediction lies squarely outside our capacity to calculate.

Even simple (first- or second-order) variations to the initial conditions specified over a very small part of the network can have repercussions for the entire evolution, which is ultimately responsible for predicting which bubble is going to collapse next.

I mention this situation because it is amply illustrative of a special kind of problems which we encounter in physics today. The laws governing the system evolution are known. Yet, in practice, they cannot be applied for performing calculations in every given situation which falls under their purview. The reason for this circumstance is that the very paradigm of formulating physical laws falls short. Let me explain what I mean very briefly here.

All physical laws are essentially quantitative in nature, and can be thought of as “functions,” i.e., as mappings from a specific set of inputs to a specific set of outputs. Since the universe is lawful, given a certain set of values for the inputs, and the specific function (the law) which does the mapping, the output is  uniquely determined. Such a nature of the physical laws has come to be known as determinism. (At least that’s what the working physicist understands by the term “determinism.”) The initial conditions together with the governing equation completely determine the final outcome.

However, there are situations in which even if the laws themselves are deterministic, they still cannot practically be put to use in order to determine the outcomes. One such a situation is what we discussed above: the problem of predicting the next bubble which will collapse.

Where is the catch? It is in here:

When you say that a physical law performs a mapping from a set of inputs to a set of outputs, this description is actually vastly more general than it appears at first sight.

Consider another example, the law of Newtonian gravity.

If you have only two bodies interacting gravitationally, i.e., if all other bodies in the universe can be ignored (because their influence on the two bodies is negligibly small in the problem as posed), then the set of the required input data is indeed very small. The system itself is simple because there is only one interaction going on—that between two bodies. The simplicity of the problem design lends a certain simplicity to the system behaviour: If you vary the set of input conditions slightly, then the output changes proportionately. In other words, the change in the output is proportionately small. The system configuration itself is simple enough to ensure that such a linear relation exists between the variations in the input, and the variations in the output. Therefore, in practice, even if you specify the input conditions somewhat loosely, your prediction does err, but not too much. Its error too remains bounded well enough that we can say that the description is deterministic. In other words, we can say that the system is deterministic, only because the input–output mapping is robust under minor changes to the input.
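For the isolated two-body case, this proportionality can even be checked directly. A sketch in illustrative units (the parameter values are assumptions, not from the text), using the Kepler period $T = 2\pi\sqrt{a^3/GM}$ as the "output": perturb the input by a small fraction $\epsilon$, and the output shifts by only about $1.5\,\epsilon$.

```python
# Sketch: for the isolated two-body problem, a small variation in the
# input produces a proportionately small variation in the output.
# The "output" here is the Kepler orbital period T = 2*pi*sqrt(a^3/(G*M));
# units and values are illustrative choices.
import math

G_M = 1.0   # gravitational parameter G*M, in illustrative units

def kepler_period(a):
    """Orbital period for semi-major axis a (two-body problem)."""
    return 2.0 * math.pi * math.sqrt(a**3 / G_M)

a = 1.0
eps = 1e-6   # small variation in the input
dT_rel = (kepler_period(a * (1 + eps)) - kepler_period(a)) / kepler_period(a)

# The relative change in the output is ~1.5 * eps: linear, not exponential.
print(dT_rel / eps)   # ≈ 1.5
```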

However, if you consider the N-body problem in all its generality, then the very size of the input set itself becomes big. Any two bodies from the N-bodies form a simple interacting pair. But the number of pairs is large, and worse, they all are coupled to each other through the positions of the bodies. Further, the nonlinearities involved in such a problem statement work to take away the robustness in the solution procedure. Not only is the size of the input set big, the end-solution too varies wildly with even a small variation in the input set. If you failed to specify even a single part of the input set to an adequate precision, then the predicted end-state can deterministically become very wildly different. The input–output mapping is deterministic—but it is not robust under minor changes to the input. A small change in the initial angle can lead to an object ending up either on this side of the Sun or that. Small changes produce big variations in predictions.

So, even if the mapping is known and is known to work (deterministically), you still cannot use this “knowledge” to actually perform the mapping from the input to the output, because the mapping is not robust to small variations in the input.

Ditto, for the soap bubbles collapse problem. If you change the initial configuration ever so slightly—e.g., if there was just a small air current in one setup and perfect stillness in another—it can lead to wildly different predictions as to which bubble will collapse next.

What holds for the N-body problem also holds for the bubble collapse process. The similarity is that these are complex systems. Their parts may be simple, and the physical laws governing such simple parts may be completely deterministic. Yet, there are a great many parts, and they all are coupled together such that a small change in one part—one interaction—gets multiplied and felt in all other parts, making the overall system fragile to small changes in the input specifications.

Let me add: What holds for the N-body problem or the bubble-collapse problems also holds for quantum-mechanical measurement processes. The latter too involves a large number of parts that are nonlinearly coupled to each other, and hence, forms a complex system. It is as futile to expect that you would be able to predict the exact time of the next atomic decay as it is to expect that you will be able to predict which bubble collapses next.

But all the above still does not mean that the laws themselves are indeterministic, or that, therefore, physical theories must be regarded as indeterministic. The complex systems may not be robust. But they still are composed from deterministically operating parts. It’s just that the configuration of these parts is far too complex.

It would be far too naive to think that it should be possible to make exact (non-probabilistic) predictions even in the context of systems that are nonlinear, and whose parts are coupled together in complex manner. It smacks of harboring irresponsible attitudes to take this naive expectation as the standard by which to judge physical theories, and since they don’t come up to your expectations, to jump to the conclusion that physical theories are indeterministic in nature. That’s what has happened to QM.

It should have been clear to the critics of science that the truth-hood of an assertion (or a law, or a theory) is not subject to whether every complex manner in which it can be recombined with other theoretical elements leads to robust formulations or not. The truth-hood of an assertion is subject only to whether it, by itself and in its own context, corresponds to reality or not.

The error involved here is similar, in many ways, to expecting that if a substance is good for your health in a certain quantity, then it must be good in every quantity, or that if two medicines are without side-effects when taken individually, they must remain without any harmful effects even when taken in any combination—that there should be no interaction effects. It’s the same error, albeit couched in physicists’ and philosophers’ terms, that’s all.

… Too much emphasis on “math,” and too little an appreciation of the qualitative features, only helps in compounding the error.

A preliminary version of this post appeared as a comment on Roger Schlafly’s blog, here [^]. Schlafly has often wondered about the determinism vs. indeterminism issue on his blog, and often, seems to have taken positions similar to what I expressed here in this post.

The posting of this entry was motivated out of noticing certain remarks in Lee Smolin’s response to The Edge Question, 2013 edition [^], which I recently mentioned at my own blog, here [^].

A song I like:
(Marathi) “kaa re duraavaa, kaa re abolaa…”
Singer: Asha Bhosale
Lyrics: Ga. Di. Madgulkar

[In the interests of providing better clarity, this post shall undergo further unannounced changes/updates over the due course of time.

Revision history:
2019.04.24 23:05: First published
2019.04.25 14:41: Posted a fully revised and enlarged version.
]

/

# The rule of omitting the self-field in calculations—and whether potentials have an objective existence or not

There was an issue concerning the strictly classical, non-relativistic electricity which I was (once again) confronted with, during my continuing preoccupation with quantum mechanics.

Actually, a small part of this issue had occurred to me earlier too, and I had worked through it back then.

However, the overall issue had never occurred to me with as much of scope, generality and force as it did last evening. And I could not immediately resolve it. So, for a while, especially last night, I unexpectedly found myself to have become very confused, even discouraged.

Then, this morning, after a good night’s rest, everything became clear right while sipping my morning cup of tea. Things came together literally within a span of just a few minutes. I want to share the issue and its resolution with you.

The question in question (!) is the following.

Consider 2 (or $N$) point-charges, say electrons. Each electron sets up an electrostatic (Coulombic) potential everywhere in space, for the other electrons to “feel”.

As you know, the potential set up by the $i$-th electron is:
$V_i(\vec{r}_i, \vec{r}) = \dfrac{1}{4 \pi \epsilon_0} \dfrac{Q_i}{|\vec{r} - \vec{r}_i|}$
where $\vec{r}_i$ is the position vector of the $i$-th electron, $\vec{r}$ is any arbitrary point in space, and $Q_i$ is the charge of the $i$-th electron.

The potential energy associated with some other ($j$-th) electron being at the position $\vec{r}_j$ (i.e. the energy that the system acquires in bringing the two electrons from $\infty$ to their respective positions some finite distance apart), is then given as:
$U_{ij}(\vec{r}_i, \vec{r}_j) = \dfrac{1}{4 \pi \epsilon_0} \dfrac{Q_i\,Q_j}{|\vec{r}_j - \vec{r}_i|}$

The notation followed here is the following: In $U_{ij}$, the potential field is produced by the $i$-th electron, and the work is done by the $j$-th electron against the $i$-th electron.

Symmetrically, the potential energy for this configuration can also be expressed as:
$U_{ji}(\vec{r}_j, \vec{r}_i) = \dfrac{1}{4 \pi \epsilon_0} \dfrac{Q_j\,Q_i}{|\vec{r}_i - \vec{r}_j|}$

If a system has only two charges, then its total potential energy $U$ can be expressed either as $U_{ji}$ or as $U_{ij}$. Thus,
$U = U_{ji} = U_{ij}$

The same holds for any pair of charges in an $N$-particle system, too. Therefore, the total potential energy of an $N$-particle system is given as:
$U = \sum\limits_{i=1}^{N-1} \sum\limits_{j = i+1}^{N} U_{ij}$
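The double sum over unique pairs can be sketched in code as follows; the charge values and positions used are illustrative assumptions, not from the text:

```python
# Sketch: total electrostatic potential energy of N point charges as a
# sum over unique pairs, U = sum_i sum_{j>i} U_ij, with
# k = 1/(4*pi*eps0). Charges and positions are illustrative values.
import math

K = 8.9875517923e9   # Coulomb constant 1/(4*pi*eps0), in N·m²/C²

def pair_energy(q_i, r_i, q_j, r_j):
    """U_ij for two point charges at positions r_i and r_j."""
    return K * q_i * q_j / math.dist(r_i, r_j)

def total_energy(charges, positions):
    """Sum over unique pairs only, so no pair is double-counted."""
    n = len(charges)
    return sum(pair_energy(charges[i], positions[i], charges[j], positions[j])
               for i in range(n) for j in range(i + 1, n))

# Symmetry check: U_ij == U_ji for any pair, as noted above.
e = 1.602176634e-19
assert pair_energy(e, (0.0, 0.0), e, (1.0, 0.0)) == \
       pair_energy(e, (1.0, 0.0), e, (0.0, 0.0))

print(total_energy([e, e, e], [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]))
```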

The issue now is this: Can we say that the total potential energy $U$ has an objective existence in the physical world? Or is it just a device of calculations that we have invented, just a concept from maths that has no meaningful physical counterpart?

(A side remark: Energy may perhaps exist as an attribute or property of something else, and not necessarily as a separate physical object by itself. However, existence as an attribute still is an objective existence.)

The reason to raise this doubt is the following.

When calculating the motion of the $i$-th charge, we consider only the potentials $V_j$ produced by the other charges, not the potential produced by the given charge $V_i$ itself.

Now, if the potential produced by the given charge ($V_i$) also exists at every point in space, then why does it not enter the calculations? How does its physical efficacy get evaporated away? And, symmetrically: The motion of the $j$-th charge occurs as if $V_j$ had physically evaporated away.

The issue generalizes in a straight-forward manner. If there are $N$ number of charges, then for calculating the motion of a given $i$-th charge, the potential fields of all other charges are considered operative. But not its own field.
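In code, this rule shows up as nothing more than a `j != i` guard in the summation. A sketch, with illustrative charges and positions (assumptions, not from the text):

```python
# Sketch: when computing the net potential felt by charge i, sum the
# fields of all charges j != i; the self-term j == i is omitted.
# Charges and positions are illustrative values.
import math

K = 8.9875517923e9   # Coulomb constant 1/(4*pi*eps0)

def potential_at_charge(i, charges, positions):
    """Net Coulomb potential felt by charge i. The `j != i` guard encodes
    the rule of omitting the self-field; without it, the j == i term
    would divide by zero, i.e., the singular self-potential."""
    x_i = positions[i]
    return sum(K * charges[j] / math.dist(x_i, positions[j])
               for j in range(len(charges)) if j != i)

e = 1.602176634e-19
qs = [e, -e, e]
xs = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
print(potential_at_charge(0, qs, xs))   # only charges 1 and 2 contribute
```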

How can motion become sensitive to only a part of the total potential energy existing at a point even if the other part also exists at the same point? That is the question.

This circumstance seems to indicate as if there is subjectivity built deep into the very fabric of classical mechanics. It is as if the universe just knows what a subject is going to calculate, and accordingly, it just makes the corresponding field mystically go away. The universe—the physical universe—acts as if it were changing in response to what we choose to do in our mind. Mind you, the universe seems to change in response to not just our observations (as in QM), but even as we merely proceed to do calculations. How does that come to happen?… May be the whole physical universe exists only in our imagination?

Got the point?

No, my confusion was not as pathetic as that in the previous paragraph. But I still found myself being confused about how to account for the fact that an electron’s own field does not enter the calculations.

But it was not all. A non-clarity on this issue also meant that there was another confusing issue which also raised its head. This secondary issue arises out of the fact that the Coulombic potential set up by any point-charge is singular in nature (or at least approximately so).

If the electron is a point-particle and if its own potential “is” $\infty$ at its position, then why does it at all get influenced by the finite potential of any other charge? That is the question.

Notice, the second issue is most acute when the potentials in question are singular in nature. But even if you arbitrarily remove the singularity by declaring (say by fiat) a finite size for the electron, thereby making its own field only finitely large (and not infinite), the above-mentioned issue still remains. So long as its own field is finite but much, much larger than the potential of any other charge, the effects due to the other charges should become comparatively less significant, perhaps even negligibly small. Why does this not happen? Why does the rule instead go exactly the other way around, and makes those much smaller effects due to other charges count, but not the self-field of the very electron in question?

While thinking about QM, there was a certain point where this entire gamut of issues became important—whether the potential has an objective existence or not, the rule of omitting the self-field while calculating motions of particles, the singular potential, etc.

The specific issue I was trying to think through was: two interacting particles (e.g. the two electrons in the helium atom). It was while thinking on that problem that this issue occurred to me. And then, it also led me to wonder: what if some intellectual goon in the guise of a physicist comes along, and says that my proposal isn’t valid because there is this element of subjectivity to it? This thought occurred to me with all its force only last night. (Or so I think.) And I could not recall seeing a ready-made answer in a text-book or so. Nor could I figure it out immediately, at night, after a whole day’s work. And as I failed to resolve the anticipated objection, I progressively got more and more confused last night, even discouraged.

However, this morning, it all got resolved in a jiffy.

Would you like to give it a try? Why is it that while calculating the motion of the $i$-th charge, you consider the potentials set up by all the rest of the charges, but not its own potential field? Why this rule? Get this part right, and all the philosophical humbug mentioned earlier just evaporates away too.

I would wait for a couple of days or so before coming back and providing you with the answer I found. May be I will write another post about it.

Update on 2019.03.16 20:14 IST: Corrected the statement concerning the total energy of a two-electron system. Also simplified the further discussion by couching it preferably in terms of potentials rather than energies (as in the first published version), because a Coulombic potential always remains anchored in the given charge—it doesn’t additionally depend on the other charges the way energy does. Modified the notation to reflect the emphasis on the potentials rather than energy.

A song I like:

[What else? [… see the songs section in the last post.]]
(Hindi) “woh dil kahaan se laaoon…”
Singer: Lata Mangeshkar
Music: Ravi
Lyrics: Rajinder Kishen

A bit of a conjecture as to why Ravi’s songs tend to be so hummable and of a certain simplicity, almost always based on a very simple rhythm. My conjecture is that it is because Ravi grew up in an atmosphere of “bhajan”-singing.

Observe that it is in the very nature of music that it puts your mind into an abstract frame of mind. Observe any singer, especially the non-professional ones (or the ones who are not very highly experienced in controlling their body-language while singing, as happens to singers who participate in college events or talent shows).

When they sing, their eyes seem to roll in a very peculiar manner. It seems random but it isn’t. It’s as if the eyes involuntarily get set in the motions of searching for something definite to be found somewhere, as if the thing to be found would be in the concrete physical space outside, but within a split-second, the eyes again move as if the person has realized that nothing corresponding is to be found in the world out there. That’s why the eyes “roll away.” The same thing goes on repeating, as the singer passes over various words, points of pauses, nuances, or musical phrases.

The involuntary motions of the eyes of the singer provide a window into his experience of music. It’s as if his consciousness was again and again going on registering a sequence of two very fleeting experiences: (i) a search for something in the outside world corresponding to an inner experience felt in the present, and immediately later, (ii) a realization (and therefore the turning away of the eyes from an initially picked up tentative direction) that nothing in the outside world would match what was being searched for.

The experience of music necessarily makes you realize the abstractness of itself. It tends to make you realize that the root-referents of your musical experience lie not in a specific object or phenomenon in the physical world, but in the inner realm, that of your own emotions, judgments, self-reflections, etc.

This nature of music makes it ideally suited to let you turn your attention away from the outside world, and has the capacity or potential to induce a kind of a quiet self-reflection in you.

But the switch from the experience of frustrated searches into the outside world to a quiet self-reflection within oneself is not the only option available here. Music can also induce in you a transitioning from those unfulfilled searches to a frantic kind of an activity: screams, frantic shouting, random gyrations, and what not. In evidence, observe any piece of modern American / Western pop-music.

However, when done right, music can also induce a state of self-reflection, and by evoking a certain kind of emotion, it can even lead to a sense of orderliness, peace, serenity. To make this part effective, the music has to be simple enough, and orderly enough. That’s why devotional music in the refined cultural traditions is, as a rule, of a certain kind of simplicity.

The experience of music isn’t the highest possible spiritual experience. But if done right, it can make your transition from the ordinary experience to a deep, profound spiritual experience easy. And doing it right involves certain orderliness, simplicity in all respects: tune, tone, singing style, rhythm, instrumental sections, transitions between phrases, etc.

If you grow up listening to this kind of a music, your own music in your adult years tends to reflect the same qualities. The simplicity of rhythm. The alluringly simple tunes. The “hummability quotient.” (You don’t want to focus on intricate patterns of melody in devotional music; you want it to be so simple that minimal mental exertion is involved in rendering it, so that your mental energy can quietly transition towards your spiritual quest and experiences.) Etc.

I am not saying that the reason Ravi’s music is so great is that he listened to his father sing “bhajan”s. If this were true, there would be tens of thousands of music composers with talents comparable to Ravi’s. But the fact is that Ravi was a genius—a self-taught genius, in fact. (He never received any formal training in music.) But what I am saying is that if you do have the musical ability, having this kind of a family environment would leave its mark. Definitely.

Of course, this all was just a conjecture. Check it out and see if it holds or not.

… May be I should convert this “note” in a separate post by itself. Would be easier to keep track of it. … Some other time. … I have to work on QM; after all, exactly only half the month remains now. … Bye for now. …

/

# An intermediate update regarding my intermediate development regarding my new approach regarding QM

Update on 2019.10.02, 17:00 IST

I have completed writing (more like somehow filling in the contents for) the alpha version of the outline document. However, it is not at all readable. So, I am not in a position to distribute it even as a private communication. (Talking beside the black-board is so much easier to do!)

By now, the outline document alone runs into 18 pages (some of the contents being repetitive). The background document has become another 12 pages. Editing 30 pages should take at least about a week or so, if not a little more.

So, no promises, but chances are good that both these documents could get finalized and distributed within the next 7 to 10 days.

In the meanwhile, feel free to look for the other things on this blog, and bye for now.

Update over; original post, below the fold.

0. As mentioned here earlier, I have been in the process of writing a point-by-point outline document on my new approach to quantum mechanics.

1. A certain preliminary version of the outline document was completed on the afternoon of 4th February 2019. It is about 10 pages long, and roughly at a pre-alpha stage. Separately, there also has been an additional document covering some of the background material for understanding QM. (An earlier version of this background document was posted here at this blog a few days ago—too bad if you never noticed it—bad, for you, that is.) It too has been under expansion and revision; currently it stands at a total of a further 10 pages (i.e. in addition to the outline document).

2. As things usually go at such a stage (i.e., in the stages before the alpha), certain mistakes (including some basic conceptual errors too) were noticed even in the main document, but only after it was “carefully” completed. Currently, these are being addressed.

3. In case you are wondering about the nature of the inadvertent errors or lacunae:

Contrary to what many people might be expecting from me:

3.1: First, the errors or lacunae were found mainly not in my new ideas concerning the measurement postulate, but rather in the more philosophical ideas concerning the quantum-physical ontology!

3.2: Second, and perhaps then not very surprisingly, lacunae were also found on the more applied side of the QM postulates, especially regarding many-particle systems and quantum entanglement.

The nature of the lacunae / errors somehow gives me confidence that the basic ideas of my new approach themselves should be right!

4. Pre-release versions starting from the (upcoming) alpha version could perhaps be made available to select physicists, as a private communication. …

… Of course, it is a different matter altogether that I think that none would be interested in it. (Indian and American physicists and others think that way, anyway!)

… But still, if interested, drop me a line, and I will consider putting you on the distribution list (which is expected to carry no more than 8–10 people at the most, so as to keep my own email communications, and the attendant diversions and confusions, down to a minimum that I, jobless as I am, could at all handle).

5. The Release Candidate should get posted at iMechanica, but only for the purposes of securing an external “time-stamp”, not so much for the purposes of discussion. (The focus of iMechanica is obviously different; it is much more on the classical engineering side, which fact I love.)

6. I will try to finish the alpha by this weekend.

The next milestones leading up to the final release (or even the release candidates) will be decided once the alpha is actually at hand.

7. I will announce the availability of the alpha at this blog via a separate post.

A song I like:

(Hindi) “teraa meraa pyaar amar, phir bhee mujh ko lagataa hai Dar…”
Singer: Lata Mangeshkar
Music: Shankar-Jaikishan
Lyrics: Shailendra

[No specific order is being implied by the order of the credits. … In other words, I can’t decide on it. Not for this song.]

History:

First written on my private machine: Wednesday 06 February 2019 08:35:32 AM IST
First finalized here: Wednesday 06 February 2019 11:31:05 PM IST

/