# The One vs. the Many

This post continues from my last post. There, I presented a series of diagrams depicting the states of the universe over time, and then asked you a simple question about the physics of it: what does the series depict, physically speaking?

I had also given an answer to that question, the one which most people would give. It would run something like this:

There are two blocks/objects/entities which initially move toward each other. Following their motions, they come closer, touch, and then reverse the directions of their motions. Thus, there is a collision of sorts. (We deliberately didn’t go into the maths of it, e.g., narrower or more detailed aspects such as whether the motions were uniform, or whether there were accelerations/decelerations (implying forces), etc.)

I had then told you that the preceding is not the only answer possible. At least one more answer that captures the physics of it is certainly possible, and this other answer in fact leads to an entirely different kind of mathematics! I had asked you to think about such alternative(s).

In this post, let me present the alternative description.

The alternative answer is one that school and early college-level text-books never present to students. Neither do the pop-sci books. However, the alternative approach has been documented, in one form or another, for centuries if not millennia, and the topic is routinely taught in advanced UG and PG courses in physics. However, the university courses always focus on the maths of it, not the physics; the physical ideas are never explicitly discussed. The text-books, too, dive straight into the relevant mathematics. The refusal of physicists (and of mathematicians) to dwell on the physical bases of this alternative description is in part responsible for the endless confusion and debates surrounding such issues as quantum entanglement, action at a distance, etc.

There is also another interesting side to it. Some aspects of this kind of thinking are also evident in philosophical/spiritual/religious/theological thinking. I am sure that you will immediately notice the resonance with such broader ideas as we discuss the alternative approach. However, let me stress that, in this post, we focus only on the physics-related issues. Thus, if I at times just say “universe,” the word is to be understood as pertaining only to the physical universe (i.e. the sum total of the inanimate objects, and also the inanimate aspects of living beings), not to any broader spiritual or philosophical issue.

OK. Now, on to the alternative description itself. It runs something like this:

There is only one physical object which physically exists, and it is the physical universe. The grey blocks that you see in the series of diagrams are not independent objects, really speaking. In this particular depiction, what look like two independent “objects” are, really speaking, only two spatially isolated parts of what actually is one and only one object. In fact, the “empty” or “white” space you see in between the blocks is not, really speaking, empty at all: it does not represent a literal void or nought. The region of space corresponding to the “empty” portions is actually occupied by a physical something. And, since there is only one physical object in existence, it is that same, singleton, physical object which is present also in the apparently empty portions.

This is not to deny that the distinction between the grey and the white/“empty” parts is real. The point is that this physically existing distinction (the supposed qualitative difference between them) arises only because of quantitative differences in some property or properties of the universe-object. In other words, the universe does not exist uniformly across all its parts. There are non-uniformities within it: quantitative differences existing over different parts of itself. Notice that, up to this point, we are talking of parts and variations within the universe. Both the words “parts” and “within” are to be taken in the broadest possible sense, i.e., as “logical parts” and “logically within.”

However, one set of physical attributes that the universe carries pertains to the spatial characteristics such as extension and location. A suitable concept of space can therefore be abstracted from these physically existing characteristics. With the concept of space at hand, the physical universe can then be put into an abstract correspondence with a suitable choice of a space.

Thus, what this approach naturally suggests is that we could use a mathematical field-function (i.e. a function of the coordinates of a chosen space) in order to describe the quantitative variations in the properties of the physical universe. For instance, assuming a 1D universe, it could be a function that looks something like what the following diagram shows.

Here, the function shows that a certain property (like mass density) exists with a zero measure in the regions of the supposedly empty space, whereas it exists with a finite measure, say a density $\rho_{g}$, in the grey regions. Notice that if the formalism of a field-function (i.e. a function over a space) is followed, then the property that captures the variations is necessarily a density. Just as the mass density is the density of mass, you can similarly have a density of any suitable quantity that is spread over space.
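As a concrete sketch of this first, idealized kind of field-function, consider the following few lines of Python. The block locations and the value of $\rho_g$ are purely illustrative assumptions of mine, not anything fixed by the argument:

```python
def rho(x, blocks=((1.0, 2.0), (4.0, 5.0)), rho_g=1.0):
    """Idealized 1D density field: rho_g inside the grey blocks,
    zero in the supposedly 'empty' space in between."""
    return rho_g if any(a <= x <= b for a, b in blocks) else 0.0

print(rho(1.5))  # inside the first grey block -> 1.0
print(rho(3.0))  # in the "empty" region       -> 0.0
```

The function answers, for every point of the chosen 1D space, “how much stuff is here?”, which is exactly what a field description amounts to.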

Now, simply because the density function (shown in blue) goes to zero in certain regions, we cannot therefore claim that nothing exists in those regions. The reason is: we can always construct another function that has some non-zero values everywhere, and yet it shows sufficiently sharp differences between different regions.

For instance, we could say that the function takes some non-zero value $\rho_{0}$ in the “empty” region, whereas it takes the value $\rho_{g}$ in the interior of the grey regions.

Notice that in the above paragraph, we have subtly introduced two new ideas: (i) some non-zero value, say $\rho_{0}$, is assigned even to the “empty” region, thereby assigning a “something,” a matter of positive existence, to the “empty”-ness; and (ii) the interface between the grey and the white regions is now asserted to be only “sufficiently” sharp, which means that the function does not take a totally sharp jump from $\rho_{0}$ to $\rho_{g}$ at the single point $x_i$ which identifies the location of the interface. Notice that if the graph were to contain such a totally sharp jump (i.e. a vertical segment at $x_i$), it would not even depict a proper function, because there would then be an infinity of density values, between and including $\rho_{0}$ and $\rho_{g}$, all existing at the same point $x_i$. Since the density would not have a unique value at $x_i$, it would not be a function.

However, we can always replace the infinitely sharp interface of zero thickness by a sufficiently sharp (though not infinitely sharp) interface of a small but finite thickness.
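One standard way to realize such a finitely thick interface is a smooth sigmoid profile. The tanh form and all the parameter values below are my illustrative assumptions, not anything the argument prescribes:

```python
import math

def rho_smooth(x, x_i=2.0, eps=0.05, rho_0=0.1, rho_g=1.0):
    """Density rising smoothly from rho_0 to rho_g across an interface of
    finite thickness ~eps centred at x_i; infinitely differentiable in x."""
    return rho_0 + (rho_g - rho_0) * 0.5 * (1.0 + math.tanh((x - x_i) / eps))

print(round(rho_smooth(0.0), 6))  # far in the "empty" region  -> 0.1
print(round(rho_smooth(4.0), 6))  # deep inside the grey block -> 1.0
```

Away from $x_i$ the profile is indistinguishable from the two plateau values, and yet nowhere does the density jump.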

Essentially, what this trick does is to introduce three types of spatial regions instead of two: (i) the region of the “empty” space, (ii) the region of the interface, and (iii) the interior, grey, region.

Of course, what we want are only two regions, not three. After all, we need to make a distinction only between the grey and the white regions. Not an issue. We can always club the interface region with either of the remaining two. Here is the mathematical procedure to do it.

Introduce yet another quantitative measure, viz. $\rho_{c}$, called the critical density. Using it, we can divide the interface region into two parts: one which has $\rho < \rho_c$ and another which has $\rho \geq \rho_c$. This procedure does give us a point-thick locus demarcating the grey from the white regions, and yet the actual changes in the density remain fully smooth (i.e. the density can remain an infinitely differentiable function).

All in all, the property-variation at the interface looks like this:

Indeed, our previous option of clubbing the interface region with the grey region is nothing but having $\rho_c = \rho_0$, whereas clubbing it with the “empty” region is tantamount to having $\rho_c = \rho_g$.

In any case, we do have a sharp demarcation of regions, and yet, the density remains a continuous function.
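The whole procedure can be sketched in a few lines (the smooth tanh profile and all numerical values below are, again, illustrative assumptions of mine): the density varies smoothly everywhere, and yet the test $\rho \geq \rho_c$ yields a perfectly sharp, point-thick classification of grey vs. “empty”:

```python
import math

def rho(x, x_i=2.0, eps=0.05, rho_0=0.1, rho_g=1.0):
    """Smooth (infinitely differentiable) density profile across the interface."""
    return rho_0 + (rho_g - rho_0) * 0.5 * (1.0 + math.tanh((x - x_i) / eps))

def is_grey(x, rho_c=0.55):
    """Sharp classification: a point is 'grey' iff its density reaches rho_c."""
    return rho(x) >= rho_c

# The classification is sharp even though rho itself never jumps:
print(is_grey(1.9), is_grey(2.1))  # -> False True
```

The sharpness lives entirely in the classification rule, not in the underlying field.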

We can now claim that such is what the physical reality is actually like; that the depiction presented in the original series of diagrams, consisting of infinitely sharp interfaces, cannot be taken as the reference standard because that depiction itself was just that: a mere depiction, which means: an idealized description. The actual reality never was like that. Our ultimate standard ought to be reality itself. There is no reason why reality should not actually be like what our latter description shows.

This argument does hold. Mankind has never been able to think of a single solid argument against having the latter kind of a description.

Even Euclid had no argument for the infinitely sharp interfaces his geometry implies. Euclid accepted the point, the line and the plane as primitives, given at the outset. He did not bother to locate their meaning in some more fundamental geometrical or mathematical objects or methods.

What can be granted to Euclid can be granted to us. He had some axioms. We don’t believe them. So we will have our own axioms. As part of our axioms, interfaces are only finitely sharp.

Notice that the perceptual evidence remains the same. The difference between the two descriptions pertains to the question of what it is that we primarily regard as object(s). The consideration of the sharpness or the thickness of the interface is only a detail in the overall scheme.

In the first description, the grey regions are treated as objects in their own right. And there are many such objects.

In the second description, the grey regions are treated not as objects in their own right, but merely as distinguishable (and therefore different) parts of a single object that is the universe. Thus, there is only one object.

So, we now have two alternative descriptions. Which one is correct? And what precisely should we regard as an object anyway? … That, indeed, is a big question! 🙂

More on that question, and the consequences of the answers, in the next post in this series…. In it, I will touch upon the implications of the two descriptions for such things as (a) causality, (b) the issue of the aether—whether it exists and if yes, what its meaning is, (c) and the issue of the local vs. non-local descriptions (and the implications thereof, in turn, for such issues as quantum entanglement), etc. Stay tuned.

A Song I Like:

(Hindi) “kitni akeli kitni tanha see lagi…”
Singer: Lata Mangeshkar
Music: Sachin Dev Burman
Lyrics: Majrooh Sultanpuri

[May be one editing pass, later? May be. …]

## 11 thoughts on “The One vs. the Many”

1. Did you find time to learn more about the density matrix formalism? Can you see how it gives you a bit more information and control over subsystems without requiring information and control over the “whole” system (which you typically don’t have)? The uncertainty which arises is now due to unobservable degrees of freedom of the rest of the “whole” system, while classically randomness arises due to the unobservable degrees of freedom internal to the subsystem.

No, unfortunately, I could not find any time to go through any formalism of QM, whether the DM or the usual ones. But yes, I sure do have your helpful comments in the back of my mind.

As to the unobserved DOFs and the (apparent) randomness that they lead to: I don’t think we can say that if a system is classical, then the causes (or the unobserved DOFs) responsible for generating randomness observed in some given part of the system must necessarily come from only that particular part of the overall system. Randomness even in classical systems could easily arise out of some “global” causes (i.e. DOFs that are not restricted to the same, local, subsystem). Think turbulence in fluids, for instance. Or chaos in planetary orbits.

–Ajit

• Thanks for your reply. You are right for the randomness of turbulence in fluids, but I disagree with respect to chaos in planetary orbits. The chaos in planetary orbits is a prime example for the classical randomness due to unobservable degrees of freedom internal to the subsystem. The randomness in this case arises due to our imperfect knowledge of the exact positions and velocity of the individual planets. Those are local degrees of freedom, i.e. exactly what I mean by degrees of freedom “internal to the subsystem”.

Sorry for the delay. Right the next day, I came down with a cough, fever, runny nose, etc. I am still down, but this is the first stretch of 6 straight hours in the past few days in which I am not directly feeling the drowsiness of the anti-histamines. … So, let me be very brief:

Can’t you take a planetary system and divide the 3D volume it occupies into two 3D regions i.e. two sub-systems, say the eastwards region and the westwards region (separated by a vertical infinite plane erected say at the center of the solar system)? Wouldn’t then the apparent randomness observed in one of the sub-systems be caused by DOFs operating also in the other sub-system? Wouldn’t then such DOFs stand to be characterized as global in character?

Thanks, and best,

–Ajit

2. Sorry for the delay. I enjoyed my vacation away from any computer.

I guess the reason why I disagree with respect to chaos in planetary orbits is that I imagine classical randomness as a process in time, where observation and prediction are intermixed similar to the operation of a Kalman filter. Hence the “hidden” planets would still reveal their approximate position and velocity to me (over time), by the gravitational force they exert upon me. Here, the randomness for me is the additional information revealed “continuously” by the observation after the initial transient period (for the Kalman filter) is over. If you now propose to have so many “hidden” planets moving in complicated orbits that I will be unable to deduce their approximate positions and velocities over time, then you approach one of the reasons why I agreed with you regarding randomness of turbulence in fluids.

There is a second reason why I agree that there is a global character to the uncertainty for fluids. For many relevant real world examples like the flow around an airplane or a river near a dike (or dam), the local surfaces will determine most of the phenomena, but the system is irrevocably non-closed and the “minor” uncertainty continuously streams in from outside.

It would be interesting to contrast this with the situation for quantum mechanics, but not now. Also, for the concrete examples I have in mind right now, the randomness (or uncertainty) would appear in the form of expectation values, instead of in the form of random processes in time (as for the classical examples discussed above).

There are, I gather, quite a few deep ideas in what you write above. [Presumably, the vacation away from the computer has served you well. [There is a lesson in it for us all! Namely, we don’t have to turn ourselves into slaves of the INDUSTRY IN THE S. F. BAY AREA!!]]

Let me think over it (I mean your comment), and let me come back to you a little later. …. I mean, the Kalman filter and all… It’s something I remain vague about, though quite some years ago I had thought that I had “got it.” … But apparently not quite. [Else I would know what to write, wouldn’t I?]

Anyway, to conclude: thanks for sharing your comments, and let me come back to you ASAP. … Also, please check out my new post (though I would surely be checking out comments on *all* my posts).

Best,

–Ajit

Ok, it’s only now that I seem to have got your drift.

Chaos can exist even in those planetary systems where there are no hidden planets, i.e., where all the planets are known. Let’s consider such a system. Here, you would be able to estimate the approximate trajectories of the planets (and the region to which the trajectory of any given planet would be confined). However, apparent randomness would still arise, because of the chaos.

Now, observe that this is an n-body problem, and that it involves sensitive dependence on initial conditions. Therefore, the motion in a given sub-system (say the westward half from the center of the system) sensitively depends on the configuration (of the known planets) as it exists in the other sub-system (say the eastward half). Clearly, the DOFs existing outside of a given subsystem do come to influence what happens in that subsystem. Clearly, then, the chaos observed in any one subsystem does have a global character.
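As a small numerical aside (not an n-body integration, just the simplest textbook stand-in for chaos, namely the logistic map at r = 4), the “sensitive dependence on initial conditions” can be made concrete in a few lines; all the numbers are illustrative:

```python
def trajectory(x0, r=4.0, n=60):
    """Iterate the logistic map x -> r*x*(1 - x), a standard toy chaotic system."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.4)
b = trajectory(0.4 + 1e-10)  # perturbation far below any realistic measurement error
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap > 0.1)  # the tiny initial difference has grown to order 1 -> True
```

An imperceptible difference in the initial condition ends up dominating the state after a few dozen steps, which is the essence of the point above.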

The main difference between a chaotic planetary system and a turbulent flow of a fluid is that the former is a discrete system whereas the latter is continuous. Yet, inasmuch as the trajectories in both cases do go outside a given subsystem, and inasmuch as the portions of the trajectories lying outside do produce certain influences that one way or the other propagate to the interiors of the subsystem (whether directly via gravity, or indirectly, via momentum transfers between different parts of fluids), both the systems remain chaotic in nature (thereby producing apparent randomness), and in both the systems, the relevant DOFs remain global in character.

The above remains the case even if you manage to capture the past randomness via a covariance matrix, as in the case of the Kalman filter.

Best,

–Ajit

• The main difference for me is that the idealization of a closed system is nearly always appropriate for a planetary system, but nearly always inappropriate for a turbulent flow. This is independent of being continuous or discrete. The idealization of a closed system is appropriate for many electrostatic (or even electrodynamic) problems, while that idealization would be inappropriate for studying traffic flow on a specific segment of a highway.

As long as I don’t have a better example than a planetary system for explaining what I mean by unobservable (local) degrees of freedom internal to the subsystem, I need to defend that example. Otherwise it seems to me that it would become impossible to understand my words, since there would be no example left which can be used to explain them.

I did learn from our conversation that unobservable (global) degrees of freedom of the rest of the “whole” system are not restricted to quantum mechanics, but naturally also occur for classical systems. The main reason why I associated those with quantum mechanics is that it is a typical mistake in that context to inappropriately assume a closed system (like the “wavefunction of the whole universe”) there, and then talk about the wonders of the resulting paradoxes. And many people are so used to randomness flowing out of the potentially infinite information content of (potentially hidden local) internal degrees of freedom that they try to reduce all randomness to that sort of origin.