The One vs. the Many

This post continues from my last post. In that post, I had presented a series of diagrams depicting the states of the universe over time, and I had then asked you a simple question pertaining to its physics: what, physically speaking, did the series depict?

I had also given an answer to that question, the one which most people would give. It would run something like this:

There are two blocks/objects/entities which are initially moving towards each other. Following their motions, they come closer to each other, touch each other, and then reverse the directions of their motions. Thus, there is a collision of sorts. (We deliberately didn’t go into the maths of it, e.g., such finer details as whether the motions were uniform, or whether they involved accelerations/decelerations (implying forces), etc.)

I had then told you that the preceding was not the only possible answer. At least one more answer that captures the physics of it is certainly possible, and this other answer in fact leads to an entirely different kind of mathematics! I had asked you to think about such alternative(s).

In this post, let me present the alternative description.


The alternative answer is one which school- and early-college-level text-books never present to students. Neither do the pop-sci. books. However, the alternative approach has been documented, in some form or the other, at least for centuries if not for millennia. The topic is routinely taught in the advanced UG and PG courses in physics. However, the university courses always focus on the maths of it, not the physics; the physical ideas are never explicitly discussed in them. The text-books, too, dive straight into the relevant mathematics. The refusal of physicists (and of mathematicians) to dwell on the physical bases of this alternative description is in part responsible for the endless confusion and debates surrounding such issues as quantum entanglement, action at a distance, etc.

There also is another interesting side to it. Some aspects of this kind of thinking are also evident in philosophical/spiritual/religious/theological thought. I am sure that you will immediately notice the resonance with such broader ideas as we subsequently discuss the alternative approach. However, let me stress that, in this post, we focus only on the physics-related issues. Thus, if I at times just say “universe,” the word is to be understood as pertaining only to the physical universe (i.e. the sum total of the inanimate objects, and also the inanimate aspects of living beings), not to any broader, spiritual or philosophical notion.

OK. Now, on to the alternative description itself. It runs something like this:

There is only one physical object which physically exists: the physical universe. The grey blocks that you see in the series of diagrams are not independent objects, really speaking. In this particular depiction, what look like two independent “objects” are, really speaking, only two spatially isolated parts of what actually is one and only one object. In fact, the “empty” or “white” space you see in between them is not, really speaking, empty at all—it does not represent a literal void or nought. The region of space corresponding to the “empty” portions is actually occupied by a physical something. Indeed, since there is only one physical object in existence, it is that same—singleton—physical object which is present also in the apparently empty portions.

This is not to deny that the distinction between the grey and the white/“empty” parts is real. The physically existing distinction between them—the supposed qualitative difference—arises only because of some quantitative differences in some property/properties of the universe-object. In other words, the universe does not exist uniformly across all its parts. There are non-uniformities within it: quantitative differences existing over different parts of itself. Notice, up to this point, we are talking of parts and variations within the universe. Both these words, “parts” and “within,” are to be taken in the broadest possible sense, as in the sense of “logical parts” and “logically within”.

However, one set of the physical attributes that the universe carries pertains to its spatial characteristics, such as extension and location. A suitable concept of space can therefore be abstracted from these physically existing characteristics. With the concept of space at hand, the physical universe can then be put into an abstract correspondence with a suitably chosen space.

Thus, what this approach naturally suggests is the idea that we could use a mathematical field-function—i.e. a function of the coordinates of a chosen space—in order to describe the quantitative variations in the properties of the physical universe. For instance, assuming a 1D universe, it could be a function that looks something like what the following diagram shows.

Here, the function shows that a certain property (like mass density) exists with a zero measure in the regions of the supposedly empty space, whereas it exists with a finite measure, say with a density \rho_{g}, in the grey regions. Notice that if the formalism of a field-function (i.e. a function over a space) is followed, then the property that captures the variations is necessarily a density: just as the mass density is the density of mass, you can have a density of any suitable quantity that is spread over space.
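To make the idea concrete, here is a minimal sketch in Python (entirely my own illustration; the block positions and the value of \rho_g are arbitrary assumptions) of such a 1D field-function:

```python
import numpy as np

# A minimal sketch of a 1D "universe" described by a density field rho(x):
# zero in the "empty" regions, rho_g inside the two grey regions.
# The block positions and values below are arbitrary, for illustration only.

RHO_G = 1.0  # density inside the grey regions (arbitrary units)

def rho(x, blocks=((2.0, 3.0), (7.0, 8.0))):
    """Density at position x; `blocks` lists the (start, end) of each grey part."""
    for a, b in blocks:
        if a <= x <= b:
            return RHO_G
    return 0.0

# Sample the field at a few points across the whole 1D universe:
xs = np.linspace(0.0, 10.0, 11)
print([rho(x) for x in xs])  # [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
```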

Now, simply because the density function (shown in blue) goes to zero in certain regions, we cannot thereby claim that nothing exists in those regions. The reason is: we can always construct another function that has some non-zero value everywhere, and yet shows sufficiently sharp differences between the different regions.

For instance, we could say that the graph has the value \rho_{0} \neq 0 in the “empty” region, whereas it has the value \rho_{g} in the interior of the grey regions.

Notice that in the above paragraph, we have subtly introduced two new ideas: (i) some non-zero value, say \rho_{0}, as being assigned even to the “empty” region—thereby assigning a “something”, a matter of positive existence, to the “empty”-ness; and (ii) the interface between the grey and the white regions as being only “sufficiently” sharp—which means that the function does not take a totally sharp jump from \rho_{0} to \rho_{g} at a single point x_i identifying the location of the interface. Notice that if the function were to take such a totally sharp jump at a single point, it would not in fact even be a proper function, because an infinity of density values (all those between and including \rho_{0} and \rho_{g}) would then exist at the same point x_i. Since the density would not have a unique value at x_i, it would not be a function.

However, we can always replace the infinitely sharp interface of zero thickness by a sufficiently sharp (and not infinitely sharp) interface of a sufficiently small but finite thickness.

Essentially, what this trick does is to introduce three types of spatial regions instead of two: (i) the region of the “empty” space, (ii) the region of the interface, and (iii) the interior, grey, region.

Of course, what we want are only two regions, not three. After all, we need to make a distinction only between the grey and the white regions. Not an issue. We can always club the interface region with either of the remaining two. Here is the mathematical procedure to do it.

Introduce yet another quantitative measure, viz., \rho_{c}, called the critical density. Using it, we can divide the interface region into two further parts: one which has \rho < \rho_c and another which has \rho \geq \rho_c. This procedure does give us a point-thick locus for the distinction between the grey and the white regions, and yet the actual changes in the density remain fully smooth (i.e. the density can remain an infinitely differentiable function).
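Here is a quick sketch of this procedure, once again entirely my own illustration: it assumes, as one arbitrary choice, a tanh profile for the finitely sharp interface, and then uses the critical density \rho_c to recover a point-thick demarcation.

```python
import numpy as np

# A sketch of the critical-density idea. The tanh profile below is an
# arbitrary choice of a smooth (infinitely differentiable) interface;
# any sufficiently sharp smooth profile would serve equally well.

RHO_0, RHO_G = 0.1, 1.0   # densities of the "empty" and the grey regions
DELTA = 0.05              # interface thickness parameter (arbitrary)
X_I = 5.0                 # nominal location of the interface

def rho(x):
    """Smooth density rising from RHO_0 to RHO_G across the interface."""
    return RHO_0 + (RHO_G - RHO_0) * 0.5 * (1.0 + np.tanh((x - X_I) / DELTA))

def is_grey(x, rho_c=0.5 * (RHO_0 + RHO_G)):
    """Classify a point via the critical density: a point-thick demarcation."""
    return rho(x) >= rho_c

for x in np.linspace(4.8, 5.2, 9):
    print(f"x = {x:4.2f}  rho = {rho(x):5.3f}  grey? {is_grey(x)}")
```

The density here stays infinitely differentiable everywhere; only the classification flips, and it flips at exactly one point.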

All in all, the property-variation at the interface looks like this:

Indeed, our previous solution of clubbing the interface region into the grey region is nothing but having \rho_c = \rho_0, whereas clubbing it into the “empty”-space region is tantamount to having \rho_c = \rho_g.

In any case, we do have a sharp demarcation of regions, and yet, the density remains a continuous function.

We can now claim that such is what the physical reality is actually like; that the depiction presented in the original series of diagrams, consisting of infinitely sharp interfaces, cannot be taken as the reference standard because that depiction itself was just that: a mere depiction, which means: an idealized description. The actual reality never was like that. Our ultimate standard ought to be reality itself. There is no reason why reality should not actually be like what our latter description shows.

This argument does hold. Mankind has never been able to think of a single solid argument against having the latter kind of description.

Even Euclid had no argument for the infinitely sharp interfaces his geometry implies. Euclid accepted the point, the line and the plane as the already given entities, as axioms. He did not bother himself with locating their meaning in some more fundamental geometrical or mathematical objects or methods.

What can be granted to Euclid can be granted to us. He had some axioms. We don’t believe them. So we will have our own axioms. As part of our axioms, interfaces are only finitely sharp.

Notice that the perceptual evidence remains the same. The difference between the two descriptions pertains to the question of what it is that we primarily regard as object(s). The consideration of the sharpness or thickness of the interface is only a detail in the overall scheme.

In the first description, the grey regions are treated as objects in their own right. And there are many such objects.

In the second description, the grey regions are treated not as objects in their own right, but merely as distinguishable (and therefore different) parts of a single object that is the universe. Thus, there is only one object.

So, we now have two alternative descriptions. Which one is correct? And what precisely should we regard as an object anyway? … That, indeed, is a big question! 🙂

More on that question, and on the consequences of the answers, in the next post in this series…. In it, I will touch upon the implications of the two descriptions for such things as (a) causality, (b) the issue of the aether (whether it exists and, if yes, what its meaning is), and (c) the issue of local vs. non-local descriptions (and the implications thereof, in turn, for such issues as quantum entanglement), etc. Stay tuned.


A Song I Like:

(Hindi) “kitni akeli kitni tanha see lagi…”
Singer: Lata Mangeshkar
Music: Sachin Dev Burman
Lyrics: Majrooh Sultanpuri

[May be one editing pass, later? May be. …]

Introducing a Very Foundational Issue of Physics (and of Maths)

OK, so I am finally done with moving my stuff, and so, from now on, I should be able to find at least some time for ‘net activities, including browsing and blogging (not to mention picking up the writing of my position paper on QM from where I left it).

Alright, so let me resume my blogging right away by touching on a very foundational aspect of physics (and also of maths).


Before you can even think of building a theory of physics, you must first adopt, implicitly or explicitly, a viewpoint concerning what kind of physical objects are assumed to exist in the physical universe.

For instance, Newtonian mechanics assumes that the physical universe is made of massive, charge-less solid bodies that experience and exert the inter-body forces of gravity, as well as those arising out of their direct contact. In contrast, the later development of Maxwellian electrodynamics assumes that there are two types of objects: massive and charged solid bodies, and the electromagnetic and gravitational fields which they set up and with which they interact. Last year, I had written a post spelling out the different kinds of physical objects that are assumed to exist in Newtonian mechanics, in classical electrodynamics, etc.; see here [^].

In this post, I want to highlight yet another consideration which enters physics at the most fundamental level. Let me illustrate the issue involved via a simple example.

Consider a 2D universe. The following series of diagrams depicts this universe as it exists at different instants of time, from t_{1} through t_{9}. Each diagram in the series represents the entire universe.

Assume that the changes in time actually occur continuously; it’s just that while drawing diagrams, we can depict the universe only at isolated (or “discrete”) instants of time.
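For concreteness, here is a small sketch that generates a series of just this kind. Everything in it (the sizes, and the assumed uniform approach-then-separation rule) is an arbitrary construction of mine, made purely for the drawing; it is not meant to pre-empt the question that follows.

```python
# A sketch that generates a series of snapshots like the diagrams above.
# The sizes and the motion rule are arbitrary assumptions for the drawing.
# Each row is a 1D strip through the universe: '#' marks the grey regions,
# '.' the white ones.

WIDTH, BLOCK = 40, 5  # width of the strip, and of each grey block (in cells)

def snapshot(gap):
    """Render one instant: two blocks separated by `gap` cells, centred."""
    left = (WIDTH - 2 * BLOCK - gap) // 2   # left edge of the first block
    row = ['.'] * WIDTH
    for i in range(left, left + BLOCK):
        row[i] = '#'
    for i in range(left + BLOCK + gap, left + 2 * BLOCK + gap):
        row[i] = '#'
    return ''.join(row)

gaps = [16, 12, 8, 4, 0, 4, 8, 12, 16]  # the gap at t_1 ... t_9
for t, gap in enumerate(gaps, start=1):
    print(f"t_{t}: {snapshot(gap)}")
```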

Now, consider this seemingly very simple question:

What precisely does the above series of diagrams depict, physically speaking?

Can you provide a brief description (say, running into 2–3 lines) as to what is happening here, physics-wise?

At this point, you may perhaps be thinking that the answer is obvious. The answer is so obvious, you could be thinking, that it is very stupid of me to even think of raising such a question.

“Why, of course, what that series of pictures depicts is this: there are two blocks/objects/entities which are initially moving towards each other. Eventually they come so close to each other that they even touch each other. They thus undergo a collision, and as a result, they begin to move apart. … Plain and simple.”

You could be thinking along some lines like that.

But let me warn you: that precisely is your potential pitfall—i.e., thinking that the question is so simple and the answer so obvious. Actually, as it turns out, there is no unique answer to that question.

That’s why, no matter how dumb the above question may look to you, let me ask you once again to take a moment to think afresh about it. And then, whatever be your answer, write it down. In your answer, try to be as brief and as precise as possible.

I will continue with this issue in my next post, to be written and posted after a few days. I am deliberately taking a break here because I do want you to give it a shot—writing down a precise answer. Unless you actually try out this exercise for yourself, you won’t come to appreciate either of the following two separate points:

  1. how difficult it can be to write very precise answers to what appear to be the simplest of questions, and
  2. how unwittingly and subtly some unwarranted assumptions can creep into a physical description—and therefore into the mathematics.

You won’t come to appreciate how deceptive this question really is unless you actually give it a try. And it is to ensure this part that I have to take a break here.

Enjoy!

WEF, Institutions, Media and Credibility

Some time ago, I had run into some Internet coverage of a WEF (World Economic Forum) report about institutions and their credibility rankings. I no longer remember where I had seen it mentioned, but the fact that such an article had appeared had somehow stayed in the mind.

Today, in order to locate the source, I googled using the strings “WEF”, “Credibility” and “Media”. The following are a few of the links these searches returned. In each case, I first give the source organization, then the title of the article, and finally the URL. Please note: all of them cover essentially the same story.

  • Edelman, “2017 Edelman TRUST BAROMETER Reveals Global Implosion of Trust,” [^]
  • Quartz, “The results are in: Nobody trusts anyone anymore,” [^]
  • PostCard, “Must read! World Economic Forum releases survey on Indian media, the results are shameful!,” [^]
  • TrollIndianPolitics, “`INDIAN MEDIA 2ND MOST UNTRUSTED INSTITUTION’ Reports WORLD ECONOMIC FORUM,” [^]
  • Financial Express, “WEF Report: ‘India most trusted nation in terms of institutions’,” [^]
  • Financial Times, “Public trust in media at all time low, research shows,” [^]
  • WEF, “Why credibility is the future of journalism,” [^]

“Same hotel, two different prices…” … [Sorry, just couldn’t resist it!]

Oh, BTW, I gather that the report says that institutions in India are more credible than those in Singapore.

Do click the links if you haven’t done so already. [No, I don’t get paid for clicks on the outgoing links.]


Still getting settled in the new job and the city. Some stuff is still to be moved. But I guess it was time to slip in at least a short post. So there. Take care and bye for now.


Are the recent CS graduates from India that bad?

In the recent couple of weeks, I had not found much time to check out blogs on a regular basis. But today I did find some free time, and so I did a routine round-up of the blogs. In the process, I came across a couple of interesting posts by Prof. Dheeraj Sanghi of IIIT Delhi. (Yes, it’s IIIT Delhi, not IIT Delhi.)

The latest post by Prof. Sanghi is about achieving excellence in Indian universities [^]. He offers valuable insights by taking a specific example, viz., that of the IIIT Delhi. I would like to leave this post for the attention of [who else] the education barons in Pune and the SPPU authorities. [Addendum: Also this post [^] by Prof. Pankaj Jalote, Director of IIIT Delhi.]

Prof. Sanghi’s second (i.e. earlier) post is about the current (dismal) state of CS education in this country [^].

As someone who has direct work-experience both in the IT industry and in teaching in mechanical engineering departments of “private” engineering colleges in India, the general impression I have developed seemed to be a bit at odds with what was being reported in this post by Prof. Sanghi (and by his readers, in its comments section). Of course, Prof. Sanghi was restricting himself only to CS graduates, but still, the comments did hint at the overall trend, too.

So, I began writing a comment at Prof. Sanghi’s blog, but, as usual, my comment soon grew too big. It became big enough that I finally had to convert it into a separate post here. Let me share these thoughts of mine, below.


As compared to the CS graduates in India, and speaking in strictly relative terms, the mechanical engineering students seem to be doing better, much better, as far as the actual learning done over the 4 UG years is concerned. Not just the top 1–2%, but even the top 15–20% of the mechanical engineering students, perhaps even the top quarter, do seem to be doing fairly OK—even if, perhaps, only at a minimally adequate level when compared to international standards.

… No, even for the top quarter of the total student population (in mechanical engineering, in “private” colleges), the fundamental concepts aren’t always as clear as they need to be. More important, excepting the top (maybe) 2–5%, the others within the top quarter don’t seem to be learning the art of conceptual analysis of mathematics as such. They probably would not always be able to figure out the meaning of even the simplest variation on an equation they have already studied.

For instance, even after completing a course (or the relevant one-half of a semester-long course) on vibrations, if they are shown the following equation for the classical transverse waves on a string:

\dfrac{\partial^2 \psi(x,t)}{\partial x^2} + U(x,t) = \dfrac{1}{c^2}\dfrac{\partial^2 \psi(x,t)}{\partial t^2},

most of them wouldn’t be able to tell the physical meaning of the second term on the left-hand side—not even if they are asked to work on it purely at their own convenience, at home, and not on the fly and under pressure, say during a job interview or a viva voce examination.

However, change the notation used for the second term from U(x,t) to S(x,t) or F(x,t), and then, suddenly, the bulb might flash on—but only for some of the top quarter, not all. … This would be the case even if, in their course on heat transfer, they have been taught the detailed derivation of a somewhat analogous equation: the equation of heat conduction in its most general case, including possibly non-uniform and unsteady internal heat generation. … I am talking about the top 25% of the graduating mechanical engineers from private engineering colleges in SPPU and the University of Mumbai. Which means, after leaving aside a lot of other top people who go to the IITs and to other reputed colleges like BITS Pilani, COEP, VJTI, etc.
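For reference, the heat-conduction analogue just mentioned runs, in one standard form (the notation here is mine: T is the temperature, \dot{q} the volumetric rate of internal heat generation, k the thermal conductivity, and \alpha the thermal diffusivity):

\dfrac{\partial^2 T(x,t)}{\partial x^2} + \dfrac{\dot{q}(x,t)}{k} = \dfrac{1}{\alpha}\dfrac{\partial T(x,t)}{\partial t}.

Structurally, the U(x,t) term in the wave equation above plays exactly the same role as the \dot{q}/k term here: a distributed source (or forcing) term, spread over the entire domain.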

IMO, their professors are more responsible for the lack of development of such skills than are the students themselves. (Again, I am talking of the top quarter of the students.)

Yet, I also think that these students (the top quarter) are at least “passable” as engineers, in some sense of the term, if not better. I mean to say: looking at their seminars (i.e. the independent but guided special studies, mostly on student-selected topics, for which they have to produce a small report and make a 10–15 minutes’ presentation), and also looking at how they work during their final-year projects, they do seem to have picked up some definite competencies in mechanical engineering proper. In their projects, most of the time, these students may only be reproducing some already reported results, or trying out minor variations on existing machine designs, which is what is expected at the UG level in our university system anyway. But still, my point is, they often are seen taking good efforts in actually fabricating machines on their own, and sometimes they even come up with some good, creative, or cost-effective ideas in their design- or fabrication-activities.

Once again, let me remind you: I was talking about only the top quarter or so of the total students in private colleges (and from mechanical engineering).

The bottom half is, overall, quite discouraging. The bottom quarter of the degree holders are mostly not even worth a post-X-standard, 3-year diploma certificate. They wouldn’t be able to write even a 5-page report on their own. They wouldn’t even be able to use the routine metrological instruments/gauges right. … Let’s leave them aside for now.

But the top quarter in the mechanical departments certainly seems to be doing relatively better, as compared to those from the CS departments. … I mean to say: if these CS folks are unable to write on their own even just a linked-list program in C (using pointers and memory allocation on the heap), or if their final-year projects don’t exceed 100+ lines of independently written code… well, what then is left on this side for making comparisons anyway? … Contrast: at COEP, my 3rd-year mechanical engineering students were asked to write a total of more than 100 lines of C code, as part of their routine course assignments, during a single semester-long course on FEM.

… Continuing with the mechanical engineering students: why, even in the decidedly average (or below-average) colleges in Mumbai and Pune, some kids (admittedly, maybe only about 10% or 15% of them) can be found taking extra efforts to learn some extra skills outside of our pathetic university system. Learning CAD/CAM/CAE software by attending private training institutes has become a pretty widespread practice by now.

No, with these courses they aren’t expected to become FEM/CFD experts, and they don’t. But at least they do learn to push buttons and put mouse-clicks in, say, ProE/SolidWorks or Ansys. They do learn to deal with conversions between different file formats. They do learn that meshes generated even in the best commercial software can sometimes be of insufficient quality, or that importing mesh data into a different analysis program may render the mesh inconsistent and crash the analysis. Sometimes, they even come to master setting the various boundary-condition options right—even if only in that particular version of that particular software. However, they wouldn’t be able to use a research-level software like OpenFOAM on their own—and, frankly, that is not expected of them at their level anyway.

They are also sometimes seen taking efforts on their own in finding sponsorships for their BE projects (small-scale or big ones), sometimes even at good research institutions (like BARC). In fact, as far as the top quarter of the BE student projects go (in the mechanical departments, in private engineering colleges), I often get the definite sense that any lacunae showing up in these projects are attributable not so much to the students themselves as to the professors who guide those projects. The stories of a professor shooting down a good project idea proposed by a student, simply because the professor himself wouldn’t have any clue of what’s going on, are neither unheard of nor entirely without merit.

So, yes, the overall trend even in the mechanical engineering stream is certainly dipping downwards, that’s for sure. Yet the actual level of the fall does not seem to be as bad as what is being reported about CS.

My two cents.


Today is India’s National Science Day. Greetings!


Will stay busy in moving and getting settled in the new job. … Don’t look for another post for another couple of weeks. … Take care, and bye for now.

[Finished doing minor editing touches on 28 Feb. 2017, 17:15 hrs.]