The One vs. the Many

This post continues from my last post. In that post, I had presented a series of diagrams depicting the states of the universe over time, and had then asked you a simple question about the physics of it: what, physically speaking, does the series depict?

I had also given an answer to that question, the one which most people would give. It would run something like this:

There are two blocks/objects/entities which are initially moving towards each other. Following their motions, they come closer to each other, touch each other, and then reverse the directions of their motions. Thus, there is a collision of sorts. (We deliberately didn’t go into the maths of it, e.g., into such narrower, more detailed aspects as whether the motions were uniform or whether they involved accelerations/decelerations (implying forces), etc.)

I had then told you that the preceding was not the only answer possible. At least one more answer that captures the physics of it is certainly possible. This other answer in fact leads to an entirely different kind of mathematics! I had asked you to think about such alternative(s).

In this post, let me present the alternative description.


The alternative answer is one which school/early college-level text-books never present to students. Neither do the pop-sci. books. However, the alternative approach has been documented, in one form or another, for centuries at least, if not for millennia. The topic is routinely taught in the advanced undergraduate (UG) and postgraduate (PG) courses in physics. However, the university courses always focus on the maths of it, not the physics. The physical ideas are never explicitly discussed in them. The text-books, too, dive straight into the relevant mathematics. The refusal of physicists (and of mathematicians) to dwell on the physical bases of this alternative description is in part responsible for the endless confusion and debates surrounding such issues as quantum entanglement, action at a distance, etc.

There also is another interesting side to it. Some aspects of this kind of thinking are also evident in philosophical/spiritual/religious/theological thought. I am sure that you will immediately notice the resonance with such broader ideas as we discuss the alternative approach. However, let me stress that, in this post, we focus only on the physics-related issues. Thus, if I at times just say “universe,” the word is to be understood as pertaining only to the physical universe (i.e. the sum total of the inanimate objects, and also the inanimate aspects of living beings), not to any broader, spiritual or philosophical issue.

OK. Now, on to the alternative description itself. It runs something like this:

There is only one physical object which physically exists, and it is the physical universe. The grey blocks that you see in the series of diagrams are not really independent objects. In this particular depiction, what look like two independent “objects” are, really speaking, only two spatially isolated parts of what actually is one and only one object. In fact, the “empty” or “white” space you see in between them is not really empty at all—it does not represent the literal void or the nought, so to speak. The region of space corresponding to the “empty” portions is actually occupied by a physical something. In fact, since there is one and only one physical object in existence, it is that same—singleton—physical object which is also present in the apparently empty portions.

This is not to say that the distinction between the grey and the white/“empty” parts is not real. The physically existing distinction between them—the supposed qualitative difference—arises only because of some quantitative differences in some property/properties of the universe-object. In other words, the universe does not exist uniformly across all its parts. There are non-uniformities within it, i.e., some quantitative differences existing over different parts of itself. Notice, up to this point, we are talking of parts and variations within the universe. Both these words, “parts” and “within,” are to be taken in the broadest possible sense, i.e., in the sense of “logical parts” and “logically within.”

Now, one set of physical attributes that the universe carries pertains to spatial characteristics such as extension and location. A suitable concept of space can therefore be abstracted from these physically existing characteristics. With the concept of space at hand, the physical universe can then be put into an abstract correspondence with a suitable choice of a space.

Thus, what this approach naturally suggests is the idea that we could use a mathematical field-function—i.e. a function of the coordinates of a chosen space—in order to describe the quantitative variations in the properties of the physical universe. For instance, assuming a 1D universe, it could be a function that looks something like what the following diagram shows.

Here, the function shows that a certain property (like mass density) exists with a zero measure in the regions of the supposedly empty space, whereas it exists with a finite measure, say a density \rho_{g}, in the grey regions. Notice that if the formalism of a field-function (i.e. a function of a space) is followed, then the property that captures the variations is necessarily a density. Just as the mass density is the density of mass, you can similarly have a density of any suitable quantity that is spread over space.
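In case you would like to play with this description on a computer, here is a minimal sketch in Python; the 1D domain, the block locations, and the value of \rho_g are all my illustrative assumptions, not anything fixed by the diagrams:

```python
import numpy as np

# A minimal sketch of the field description in its idealized, sharp form:
# a 1D "universe" on [0, 10], with two grey blocks occupying [2, 3] and
# [7, 8]. The density is exactly zero in the "empty" region and rho_g
# inside the grey regions. All the numbers here are illustrative.
rho_g = 1.0

def rho(x):
    """Density field for the idealized, sharp-interface description."""
    in_left_block = (2.0 <= x) & (x <= 3.0)
    in_right_block = (7.0 <= x) & (x <= 8.0)
    return np.where(in_left_block | in_right_block, rho_g, 0.0)

# empty, grey, empty, grey, empty:
print(rho(np.array([1.0, 2.5, 5.0, 7.5, 9.0])))
```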

Now, simply because the density function (shown in blue) goes to zero in certain regions, we cannot therefore claim that nothing exists in those regions. The reason is: we can always construct another function that has some non-zero values everywhere, and yet it shows sufficiently sharp differences between different regions.

For instance, we could say that the graph has the value \rho_{0} \neq 0 in the “empty” region, whereas it has the value \rho_{g} in the interior of the grey regions.

Notice that in the above paragraph, we have subtly introduced two new ideas: (i) some non-zero value, say \rho_{0}, is assigned even to the “empty” region—thereby assigning a “something”, a matter of positive existence, to the “empty”-ness; and (ii) the interface between the grey and the white regions is now asserted to be only “sufficiently” sharp—which means, the function does not take a totally sharp jump from \rho_{0} to \rho_{g} at a single point x_i which identifies the location of the interface. Notice that if the graph were to show such a totally sharp jump at a single point (i.e., as a vertical segment there), it would not in fact even be the graph of a proper function, because there would be an infinity of density values, between and including \rho_{0} and \rho_{g}, existing at the same point x_i. Since the density would not have a unique value at x_i, it would not be a function.

However, we can always replace the infinitely sharp interface of zero thickness by a sufficiently sharp (and not infinitely sharp) interface of a sufficiently small but finite thickness.
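To make the idea concrete, here is one way (among many) of writing down such a sufficiently sharp interface; the tanh profile, and the values of \rho_0, \rho_g, x_i and the thickness w, are all just my illustrative choices:

```python
import numpy as np

# A "sufficiently sharp" interface: instead of jumping from rho_0 to rho_g
# at a single point, the density rises smoothly over a small but finite
# thickness w centred on the interface location x_i. The tanh profile is
# merely one convenient choice; all the numbers are illustrative.
rho_0, rho_g = 0.1, 1.0   # non-zero "empty" density, and the grey density
x_i, w = 2.0, 0.05        # interface location, and its finite thickness

def rho_smooth(x):
    """An infinitely differentiable density with a finite-thickness interface."""
    return rho_0 + 0.5 * (rho_g - rho_0) * (1.0 + np.tanh((x - x_i) / w))

x = np.linspace(1.8, 2.2, 9)
print(rho_smooth(x))  # climbs smoothly from about rho_0 to about rho_g
```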

Essentially, what this trick does is to introduce three types of spatial regions, instead of two: (i) the region of the “empty” space, (ii) the region of the interface, and (iii) the interior, grey, region.

Of course, what we want are only two regions, not three. After all, we need to make a distinction only between the grey and the white regions. Not an issue. We can always club the interface region with either of the remaining two. Here is the mathematical procedure to do it.

Introduce yet another quantitative measure, viz., \rho_{c}, called the critical density. Using it, we can in fact divide the interface region into two further parts: one which has \rho < \rho_c, and another which has \rho \geq \rho_c. This procedure does give us a point-thick locus demarcating the grey from the white regions, and yet the actual changes in the density remain fully smooth throughout (i.e. the density can remain an infinitely differentiable function).
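In code, the procedure might look as follows (continuing with the same illustrative tanh profile as above; the midway value of \rho_c is purely an example):

```python
import numpy as np

# Classifying points using a critical density rho_c: rho >= rho_c means
# "grey", rho < rho_c means "empty". The density field itself stays
# perfectly smooth; only the classification is sharp, so the grey/empty
# boundary is a point-thick locus. All the values are illustrative.
rho_0, rho_g, rho_c = 0.1, 1.0, 0.55
x_i, w = 2.0, 0.05

def rho_smooth(x):
    return rho_0 + 0.5 * (rho_g - rho_0) * (1.0 + np.tanh((x - x_i) / w))

def is_grey(x):
    return rho_smooth(x) >= rho_c

# With rho_c chosen midway, the demarcation falls exactly at x = x_i,
# even though rho itself never jumps there.
print(is_grey(np.array([1.9, 1.99, 2.0, 2.01, 2.1])))  # F F T T T
```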

All in all, the property-variation at the interface looks like this:

Indeed, our previous option of clubbing the interface region into the grey region is nothing but setting \rho_c = \rho_0, whereas clubbing it into the “empty”-space region is tantamount to setting \rho_c = \rho_g.

In any case, we do have a sharp demarcation of regions, and yet, the density remains a continuous function.

We can now claim that such is what the physical reality is actually like; that the depiction presented in the original series of diagrams, consisting of infinitely sharp interfaces, cannot be taken as the reference standard because that depiction itself was just that: a mere depiction, which means: an idealized description. The actual reality never was like that. Our ultimate standard ought to be reality itself. There is no reason why reality should not actually be like what our latter description shows.

This argument does hold. Mankind has never been able to think of a single solid argument against having the latter kind of description.

Even Euclid had no argument for the infinitely sharp interfaces his geometry implies. Euclid accepted the point, the line and the plane as already given entities, as axioms. He did not bother himself with locating their meaning in some more fundamental geometrical or mathematical objects or methods.

What can be granted to Euclid can be granted to us. He had some axioms. We don’t believe them. So we will have our own axioms. As part of our axioms, interfaces are only finitely sharp.

Notice that the perceptual evidence remains the same. The difference between the two descriptions pertains to the question of what it is that we primarily regard as object(s). The consideration of the sharpness or the thickness of the interface is only a detail, in the overall scheme.

In the first description, the grey regions are treated as objects in their own right. And there are many such objects.

In the second description, the grey regions are treated not as objects in their own right, but merely as distinguishable (and therefore different) parts of a single object that is the universe. Thus, there is only one object.

So, we now have two alternative descriptions. Which one is correct? And what precisely should we regard as an object anyway? … That, indeed, is a big question! 🙂

More on that question, and the consequences of the answers, in the next post in this series…. In it, I will touch upon the implications of the two descriptions for such things as (a) causality, (b) the issue of the aether—whether it exists and, if yes, what its meaning is, and (c) the issue of local vs. non-local descriptions (and the implications thereof, in turn, for such issues as quantum entanglement), etc. Stay tuned.


A Song I Like:

(Hindi) “kitni akeli kitni tanha see lagi…”
Singer: Lata Mangeshkar
Music: Sachin Dev Burman
Lyrics: Majrooh Sultanpuri

[May be one editing pass, later? May be. …]

Introducing a Very Foundational Issue of Physics (and of Maths)

OK, so I am finally done with moving my stuff, and so, from now on, should be able to find at least some time for ‘net activities, including browsing and blogging (not to mention also picking up writing my position paper on QM from where I left it).

Alright, so let me resume my blogging right away by touching on a very foundational aspect of physics (and also of maths).


Before you can even think of building a theory of physics, you must first adopt, implicitly or explicitly, a viewpoint concerning what kind of physical objects are assumed to exist in the physical universe.

For instance, Newtonian mechanics assumes that the physical universe is made from massive and charge-less solid bodies that experience and exert the inter-body forces of gravity and those arising out of their direct contact. In contrast, the later development of Maxwellian electrodynamics assumes that there are two types of objects: massive and charged solid bodies, and the electromagnetic and gravitational fields which they set up and with which they interact. Last year, I had written a post spelling out the different kinds of physical objects that are assumed to exist in Newtonian mechanics, in classical electrodynamics, etc.; see here [^].

In this post, I want to highlight yet another consideration which enters physics at the most fundamental level. Let me illustrate the issue involved via a simple example.

Consider a 2D universe. The following series of diagrams depicts this universe as it exists at different instants of time, from t_{1} through t_{9}. Each diagram in the series represents the entire universe.

Assume that the changes in time actually occur continuously; it’s just that while drawing diagrams, we can depict the universe only at isolated (or “discrete”) instants of time.
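If it helps to have something concrete in front of you, here is a crude text-mode sketch of the kind of series I have in mind, done in 1D rather than 2D; the grid size, the block widths, and the speeds are purely my own illustrative choices:

```python
# A crude text rendering of the series: two blocks (each shown as '#')
# approach each other, touch at t_5, and then move apart. The grid size,
# the block widths, and the speeds are all illustrative assumptions.
for t in range(1, 10):      # the instants t_1 through t_9
    d = abs(t - 5)          # separation parameter: zero at the contact
    row = ["."] * 10
    row[4 - d] = "#"        # the left block
    row[5 + d] = "#"        # the right block
    print(f"t_{t}: " + "".join(row))
```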

Now, consider this seemingly very simple question:

What precisely does the above series of diagrams depict, physically speaking?

Can you provide a brief description (say, running into 2–3 lines) as to what is happening here, physics-wise?

At this point, you may perhaps be thinking that the answer is obvious. The answer is so obvious, you could be thinking, that it is very stupid of me to even think of raising such a question.

“Why, of course, what that series of pictures depicts is this: there are two blocks/objects/entities which are initially moving towards each other. Eventually they come so close to each other that they even touch each other. They thus undergo a collision, and as a result, they begin to move apart. … Plain and simple.”

You could be thinking along some lines like that.

But let me warn you, that precisely is your potential pitfall—i.e., thinking that the question is so simple, and the answer so obvious. Actually, as it turns out, there is no unique answer to that question.

That’s why, no matter how dumb the above question may look to you, let me ask you once again to take a moment to think afresh about it. Then, whatever your answer may be, write it down. In your answer, try to be as brief and as precise as possible.

I will continue with this issue in my next post, to be written and posted after a few days. I am deliberately taking a break here because I do want you to give it a shot—writing down a precise answer. Unless you actually try out this exercise for yourself, you won’t come to appreciate either of the following two, separate points:

  1. how difficult it can be to write very precise answers to what appear to be the simplest of questions, and
  2. how unwittingly and subtly some unwarranted assumptions can so easily creep in, in a physical description—and therefore, in mathematics.

You won’t come to appreciate how deceptive this question really is unless you actually give it a try. And it is to ensure this part that I have to take a break here.

Enjoy!

“Measure for Measure”—a pop-sci video on QM

This post is about a video on QM for the layman. The title of the video is: “Measure for Measure: Quantum Physics and Reality” [^]. It is also available on YouTube, here [^].

I don’t recall precisely where on the ‘net I saw the video being mentioned. Anyway, even though its running time is 01:38:43 (i.e. 1 hour, 38 minutes, making it something like a full-length feature film), I still went ahead, downloaded it and watched it in full. (Yes, I am that interested in QM!)

The video was shot live at an event called “World Science Festival.” I didn’t know about it beforehand, but here is the Wiki on the festival [^], and here is the organizer’s site [^].

The event in the video is something like a panel discussion done on stage, in front of a live audience, by four professors of physics/philosophy. … Actually five, including the moderator.

Brian Greene of Columbia [^] is the moderator. (Apparently, he co-founded the World Science Festival.) The discussion panel itself consists of: (i) David Albert of Columbia [^]. He speaks like a philosopher but seems inclined towards a specific speculative theory of QM, viz. the GRW theory. (He has that peculiar, nasal, New York accent… Reminds you of Dr. Harry Binswanger—I mean, by the accent.) (ii) Sheldon Goldstein of Rutgers [^]. He is a Bohmian, out and out. (iii) Sean Carroll of CalTech [^]. At least in the branch of the infinity of the universes in which this video unfolds, he acts 100% deterministically as an Everettian. (iv) Ruediger Schack of Royal Holloway (the spelling is correct) [^]. I perceive him as a QBist; guess you would, too.

Though the video is something like a panel discussion, it does not begin right away with dudes sitting on chairs and talking to each other. Even before the panel itself assembles on the stage, there is a racy introduction to the quantum riddles, mainly on the wave-particle duality, presented by the moderator himself. (Prof. Greene would easily make for a competent TV evangelist.) This part runs for some 20 minutes or so. Then, even once the panel discussion is in progress, it is sometimes interwoven with a few short visualizations/animations that try to convey the essential ideas of each of the above viewpoints.

I of course don’t agree with any one of these approaches—but then, that is an entirely different story.

Coming back to the video, yes, I do want to recommend it to you. The individual presentations as well as the panel discussions (and comments) are done pretty well, in an engaging and informal way. I did enjoy watching it.


The parts which I perhaps appreciated the most were (i) the comment (near the end) by David Albert, between 01:24:19–01:28:02, esp. near 1:27:20 (“small potatoes”) and, (ii) soon later, another question by Brian Greene and another answer by David Albert, between 01:33:26–01:34:30.

In this second comment, David Albert notes that “the serious discussions of [the foundational issues of QM] … only got started 20 years ago,” even though the questions themselves do go back to about 100 years ago.

That is so true.

The video was recorded recently. About 20 years ago means: from about the mid-1990s onwards. Thus, it is only from the mid-1990s, Albert observes, that the research atmosphere concerning the foundational issues of QM has changed—he means, for the better. I think that is true. Very true.

For instance, when I was at UAB (1990–93), the resistance to attempting even a small variation on the entrenched mainstream view (which means, the Copenhagen interpretation (CI for short)) was so enormous and all-pervading, even in the US/Europe, that I was dead sure that a graduate student like me would never be able to get his nascent ideas on QM published, ever. It therefore came as a big (and very joyous) surprise to me when my papers on QM actually got accepted (in 2005). … Yes, the attitudes of physicists have changed. Anyway, my point here is that the mainstream view used to be that entrenched back then—just about 20 years ago. The Copenhagen interpretation still was the ruling dogma in those days. Therefore, that remark by Prof. Albert does carry some definite truth.


Prof. Albert’s observation also prompts me to pose a question to you.

What could be the broad social, cultural, technological, economic, or philosophic reasons behind the fact that people (researchers, graduate students) these days don’t feel the same kind of pressure in pursuing new ideas in the field of Foundations of QM? Is the relatively greater ease of publishing papers in foundations of QM, in your opinion, an indication of some negative trends in the culture? Does it show a lowering of the editorial standards? Or is there something positive about this change? Why has it become easier to discuss foundations of QM? What do you think?

I do have my own guess about it, and I would sure like to share it with you. But before I do that, I would very much like to hear from you.

Any guesses? What could be the reason(s) why serious discussions on the foundations of QM might have begun to occur much more freely only after the mid-1990s—even though the questions themselves had been raised as early as the 1920s (or even earlier)?

Over to you.


Greetings in advance for the Republic Day. I [^] am still jobless.


[E&OE]


The indistinguishability of the indistinguishable particles is the problem

For many of you (and all of you in the Western world), these would be the times of the Christmas vacations.

For us, the Diwali vacations are over, and, in fact, the new term has already begun. To be honest, classes are not yet going on in full swing. (Many students are still visiting home after their examinations for the last term—which occurred after Diwali.) Yet, the buzz is in the air, and in fact, for an upcoming accreditation visit the next month, we are once again back to working also on week-ends.

Therefore, I don’t find (and, for a month or so, won’t be able to find) the time to do any significant blogging.

Yes, I do have a few things lined up for blogging—in my mind. On the physical plane, there simply is no time. Still, rather than going on cribbing about lack of time, let me give you something more substantial to chew on, in the meanwhile. It’s one of the things lined up, anyway.



Check out this piece [^] in Nautilus by Amanda Gefter [^]; H/T to Roger Schlafly [^].

Let me reproduce the paragraph that Roger did, because it really touches on the central argument by Frank Wilczek [^][^]. In the Nautilus piece, Amanda Gefter puts him in a hypothetical court scene:

“Dr. Wilczek,” the defense attorney begins. “You have stated what you believe to be the single most profound result of quantum field theory. Can you repeat for the court what that is?”

The physicist leans in toward the microphone. “That two electrons are indistinguishable,” he says.

Dude, get it right. It’s not the uncertainty principle. It’s not the wave-particle duality. It’s not even the spooky action-at-a-distance and entanglement. It is indistinguishability. Amanda Gefter helps us understand the physics Nobel laureate’s viewpoint:

The smoking gun for indistinguishability, and a direct result of the 1-in-3 statistics, is interference. Interference betrays the secret life of the electron, explains Wilczek. On observation, we will invariably find the electron to be a corpuscular particle, but when we are not looking at it, the electron bears the properties of a wave. When two waves overlap, they interfere—adding and amplifying in the places where their phases align—peaks with peaks, troughs with troughs—and canceling and obliterating where they find themselves out of sync. These interfering waves are not physical waves undulating through a material medium, but mathematical waves called wavefunctions. Where physical waves carry energy in their amplitudes, wavefunctions carry probability. So although we never observe these waves directly, the result of their interference is easily seen in how it affects probability and the statistical outcomes of experiment. All we need to do is count.

The crucial point is that only truly identical, indistinguishable things interfere. The moment we find a way to distinguish between them—be they particles, paths, or processes—the interference vanishes, and the hidden wave suddenly appears in its particle guise. If two particles show interference, we can know with absolute certainty that they are identical. Sure enough, experiment after experiment has proven it beyond a doubt: electrons interfere. Identical they are—not for stupidity or poor eyesight but because they are deeply, profoundly, inherently indistinguishable, every last one.

This is no minor technicality. It is the core difference between the bizarre world of the quantum and the ordinary world of our experience. The indistinguishability of the electron is “what makes chemistry possible,” says Wilczek. “It’s what allows for the reproducible behavior of matter.” If electrons were distinguishable, varying continuously by minute differences, all would be chaos. It is their discrete, definite, digital nature that renders them error-tolerant in an erroneous world.

You have to read the entire article in order to understand what Amanda means when she says the “1-in-3 statistics.” Here are the relevant excerpts:

An electron—any electron—is an elementary particle, which is to say it has no known substructure.

[snip]

What does this mean? That every electron is the precise spitting image of every other electron, lacking, as it does, even the slightest leeway for even the most minuscule deviation. Unlike a composite, macroscopic object [snip] electrons are not merely similar, if uncannily so, but deeply, profoundly identical—interchangeable, fungible, mere placeholders, empty labels that read “electron” and nothing more.

This has some rather curious—and measurable—consequences. Consider the following example: We have two elementary particles, A and B, and two boxes, and we know each particle must be in one of the two boxes at any given time. Assuming that A and B are similar but distinct, the setup allows four possibilities: A is in Box 1 and B is in Box 2, A and B are both in Box 1, A and B are both in Box 2, or A is in Box 2 and B is in Box 1. The rules of probability tell us that there is a 1-in-4 chance of finding the two particles in each of these configurations.

If, on the other hand, particles A and B are truly identical, we must make a rather strange adjustment in our thinking, for in that case there is literally no difference between saying that A is in Box 1 and B in Box 2, or B is in Box 1 and A is in Box 2. Those scenarios, originally considered two distinct possibilities, are in fact precisely the same. In total, now, there are only three possible configurations, and probability assigns a 1-in-3 chance that we will discover the particles in any one of them.
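The counting in the excerpt is easy to verify for yourself; here is a quick check in Python:

```python
from itertools import product

# Distinguishable particles A and B: enumerate the ordered assignments
# (box of A, box of B). Indistinguishable particles: keep only the
# unordered box occupancies, so (1, 2) and (2, 1) collapse into one.
ordered = list(product([1, 2], repeat=2))
print(len(ordered), ordered)                     # 4 -> a 1-in-4 chance each

unordered = {tuple(sorted(c)) for c in ordered}
print(len(unordered), sorted(unordered))         # 3 -> a 1-in-3 chance each
```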

Some time ago, I had mentioned how, during my text-book studies of QM, I had got stuck at the topic of spin and identical particles [^]. … Well, I didn’t have this in mind, but, yes, identical particles is the topic where I had got stuck anyway. (I still am, to some extent. However, since then, this article [^] by Joshua Samani did help in getting things clarified.)

Anyway, coming back to Wilczek and QM, Gefter reports:

Wilczek puts it this way: “Another aspect of quantum mechanics closely related to indistinguishability, and a competitor for its deepest aspect, is that if you want to describe the state of two electrons, it’s not that you have a wavefunction for one and a separate wavefunction for the other, each living in three-dimensional space. You really have a six-dimensional wavefunction that has two positions in it where you can fill in two electrons.” The six-dimensional wavefunction means that the probabilities for finding each electron at a particular location are not independent—that is, they are entangled.

It is no mystery that all electrons look alike, he [i.e. Wilczek] says, because they are all manifestations, temporary excitations, of one and the same underlying electron field, which permeates all space, all time. Others, like the physicist John Archibald Wheeler, go a step further: one particle. Wheeler suggested that perhaps electrons are indistinguishable because there’s only one, but it traces such convoluted paths through space and time that at any given moment it appears to be many.

Ummm. Not quite—this “only one electron” part. Wheeler never got “it” right, IMO. He also influenced Feynman and “won” him over, though in the reverse order: he first got Feynman as a graduate student, and then, of course, influenced him. … BTW, how come Wheeler’s idea hasn’t been used to put forth monotheistic arguments? Any idea? As for me, I guess there are two reasons: (i) the monotheistic people wouldn’t like their God doing this frenzied a running around in the material world, and (ii) the mainstream QM insists on vagueness in the position of the quantum particle, so that its running from “here” to “there” itself is untenable. … Anyway, let’s continue with Amanda Gefter:

So if the elementary particles of which we are made don’t really exist as objects, how do we exist?

Good job, Amanda!

… Her search for the answer involves other renowned physicists, too; in particular, Peter Pesic [^]:

“When you have more and more electrons, the state that they together form starts to be more and more capable of being distinct,” Pesic said.

Only when you have “more and more” electrons?

“So the reason that you and I have some kind of identity is that we’re composed of so enormously many of these indistinguishable components. It’s our state that’s distinguishable, not our materiality.”

IMO, Pesic nearly got it—and then, just as easily, also lost it!

It has to be something to do with the state! After all, in QM, state defines everything. But you don’t really need the many here—there is no need for a “collective” approach like that, IMO. And, as to the state vs materiality distinction: The quantum mechanical state is supposed to describe each and every material aspect of every thing.

So, that’s a physicist thinking about QM(,) for you.

…Anyway, Amanda has a job to do, and she continues doing it as best she can:

Our identity is a state, but if it’s not a state of matter—not a state of individual physical objects, like quarks and electrons—then a state of what?

Enter: Ladyman, a philosopher:

A state, perhaps, of information. Ladyman suggests that we can replace the notion of a “thing” with a “real pattern”—a concept first articulated by the philosopher Daniel Dennett and further developed by Ladyman and philosopher Don Ross. “Another way of articulating what you mean by an object is to talk about compression of information,” Ladyman says. “So you can claim that something’s real if there’s a reduction in the information-theoretic complexity of tracking the world if you include it in your description.”

There is more along this line:

Should such examples give the impression that the real patterns are patterns of particles, beware: Particles, like our electron, are real patterns themselves. “We’re using a particle-like description to keep track of the real patterns,” Ladyman says. “It’s real patterns all the way down.”

Honest, what I experienced when I first read this passage was: a very joyful moment!

We are nothing but fleeting patterns, signals in the noise. Drill down and the appearance of materiality gives way; underneath it, nothing.

Ladyman tutoring Amanda, that was.

Here is a conjecture about the path they trace together; the part in the square brackets [] is optional:

We (i.e. a physical object, in this context) -> Fleeting Patterns -> Fleeting Patterns -> Signals in the Noise -> [We ->] Signals in the Noise -> Appearance of Materiality -> Appearance of Materiality -> Appearance -> Nothing.

Fascinating, these philosophers (really) are. Ladyman proves the point, once again:

“I think in the end,” says Ladyman, “it may well be that the world isn’t made of anything.”

You could tell how rapidly he would go from “may well be” to “is,” couldn’t you?


So, that is what I have picked up for thinking. I mean, the two issues raised by Wilczek.

(1) The first issue was about how the indistinguishability of the indistinguishable particles is a problem. I will come back to it at some later time, but in the meanwhile, here is the answer in brief (and in the vague):

Electrons are identical because: (i) the only means by which we can at all determine whether they are identical is quantum-mechanical observation, and (ii) observables are operators in QM.

That much of an answer is enough, but just in case it doesn’t strike the right chord:

The fact that observables are operators means that they are mathematical processes. These processes operate on wavefunctions. They “bring out” a mathematical aspect of the wavefunction.

Even if electrons were not exactly identical in all respects, as long as the QM postulates remain valid—as long as observables must be represented via Hermitian operators so that only real eigenvalues can be obtained—you would have no way to tell in what micro-way the electrons might actually differ.
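For a quick numerical reminder of the postulate being invoked here, a sketch like the following (with a random matrix merely standing in for an observable) does the job:

```python
import numpy as np

# A Hermitian operator, here a random 3x3 Hermitian matrix standing in
# for an observable, has purely real eigenvalues. That is what lets the
# eigenvalues play the role of measured values in the QM postulates.
rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
h = (a + a.conj().T) / 2                 # Hermitian by construction

print(np.allclose(h, h.conj().T))        # True: h equals its conjugate transpose
print(np.linalg.eigvalsh(h))             # all eigenvalues real
```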

If you must have a (rather bad) analogy, take two particles of sand of roughly the same size, and gently drop both of them into a jar of honey (or some other suitable fluid) at the same time. Both will fall at the same rate (within the experimental margin). Now, if classical mechanics were somehow such that only the rate of falling could at all be measured in an experiment (or, at least, if the rate of descent were your only window onto the size and shape of a sand particle), then you would have to treat both particles as exactly the same in all respects.

The analogy is bad because QM measurements involve eigenvalues, and, practically speaking, their measurement is more robust (involving less variability from one experiment to another) than that of a rate of descent. Why? Simple. Because, no matter what limiting case you take, the equations of fluid dynamics are basically nonlinear, whereas eigenvalue problems are basically linear. That’s why.

I don’t think this much of an explanation is enough. It’s just that I haven’t the time either to think through my newer QM conjectures, or to work out their maths, let alone write blog posts about them. The situation will definitely continue for at least a month or so (till the course and the labs and all settle down), perhaps also for the entire teaching term (about 4 months).

(2) The second issue was about how multi-dimensionality of the wavefunction implies entanglement of particles. As to entanglement, I will be able to come to it even later—i.e., after issue no. (1) here.

Regarding purely the multi-dimensionality part, however, I can already direct you to a recent post (by me), here [^]. (I think it can be improved—the distinction of embedded vs embedding space needs to be made more clear, and the aspect of “projection” needs to be looked into—but, once again: I’ve no time; so some time later!)
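In the meanwhile, here is a toy numerical sketch of the multi-dimensionality point itself; the crude 1D grid and the two made-up “orbitals” are purely my illustrative assumptions:

```python
import numpy as np

# Wilczek's point in miniature: on a crude 1D grid, a two-electron
# wavefunction is a single array psi[i, j] over pairs of positions,
# not two separate one-particle arrays. If psi[i, j] does not factor
# as f[i] * g[j], the two position probabilities are correlated, i.e.,
# the particles are entangled. The "orbitals" below are made up.
n = 50
x = np.linspace(-5.0, 5.0, n)
f = np.exp(-(x + 1.0) ** 2)     # one unnormalized one-particle state
g = np.exp(-(x - 1.0) ** 2)     # another one

product_state = np.outer(f, g)          # factorizable product state
psi = np.outer(f, g) - np.outer(g, f)   # antisymmetrized, as for fermions

print(np.linalg.matrix_rank(product_state))  # 1: no entanglement
print(np.linalg.matrix_rank(psi))            # 2: entangled
```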

Bye for now.



A Song I Like:

(Marathi) “ashee nishaa punhaa kadhee disel kaa?”
Singers: Hridaynath Mangeshkar, Lata Mangeshkar
Music: Yashawant Deo
Lyrics: Yashawant Deo


[May be another pass tomorrow or so. I also am not sure whether I ran this song before or not. In case I did, I would come back and replace it with some other song.]

[E&OE]