Links…

Here are a few interesting links I browsed recently, listed in no particular order:


“Mathematicians Tame Turbulence in Flattened Fluids” [^].

The operative word here, of course, is: “flattened.” But even then, it’s an interesting read. Another thing: though the essay is pop-sci, the author gives the Navier-Stokes equations, complete with fairly OK explanatory remarks about each term in them.

(But I don’t understand why every pop-sci write-up gives the NS equations only in the Lagrangian form, never Eulerian.)


“A Twisted Path to Equation-Free Prediction” [^]. …

“Empirical dynamic modeling.” Hmmm….


“Machine Learning’s ‘Amazing’ Ability to Predict Chaos” [^].

Click-bait: They use data science ideas to predict chaos!

8 Lyapunov times is impressive. But ignore the other, usual kind of hype: “…the computer tunes its own formulas in response to data until the formulas replicate the system’s dynamics.” [italics added.]


“Your Simple (Yes, Simple) Guide to Quantum Entanglement” [^].

Click-bait: “Entanglement is often regarded as a uniquely quantum-mechanical phenomenon, but it is not. In fact, it is enlightening, though somewhat unconventional, to consider a simple non-quantum (or “classical”) version of entanglement first. This enables us to pry the subtlety of entanglement itself apart from the general oddity of quantum theory.”

Don’t dismiss the description in the essay as being too simplistic; the author is Frank Wilczek.


“A theoretical physics FAQ” [^].

Click-bait: Check your answers with those given by an expert! … Do spend some time here…


Tensor product versus Cartesian product.

If you are an engineer and you get interested in quantum entanglement, beware of two easily confused terms: the tensor product and the Cartesian product.

The tensor product, you might think, is like the Cartesian product. But it is not. See mathematicians’ explanations. Essentially, the basis sets (and the operations) are different. [^] [^].

But what the mathematicians don’t do is to take some simple but non-trivial examples, and actually work everything out in detail. Instead, they just jump from this definition to that definition. For example, see: “How to conquer tensorphobia” [^] and “Tensorphobia and the outer product” [^]. Read either of these two articles; either one is sufficient to give you tensorphobia even if you never had it!

You will never run into a mathematician who explains the difference between the two concepts by first giving you a rough feel for it: by directly giving you a good, fully worked-out example in the context of finite sets (including an enumeration of all the set elements) that illustrates the key difference, i.e., the addition vs. the multiplication of the unit vectors (aka the members of the basis sets).

A third-class epistemology when it comes to explaining, mathematicians typically have.
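For whatever it is worth, here is one such worked example, sketched in numpy (the spaces and the numbers are my own, purely illustrative). It shows the key difference directly: the basis of the Cartesian (direct) product is the union of the two bases, so dimensions add; the basis of the tensor product is the set of pairwise products of basis vectors, so dimensions multiply.

```python
import numpy as np

# Two spaces: V with basis {e1, e2} (dim 2), W with basis {f1, f2, f3} (dim 3).
eV = np.eye(2)   # rows are e1, e2
eW = np.eye(3)   # rows are f1, f2, f3

# Cartesian (direct) product V x W: an element is a PAIR (v, w).
# Its basis is the union of the two bases, so dimensions ADD: 2 + 3 = 5.
cartesian_basis = [np.concatenate([e, np.zeros(3)]) for e in eV] + \
                  [np.concatenate([np.zeros(2), f]) for f in eW]
assert len(cartesian_basis) == 2 + 3

# Tensor product V (x) W: basis members are the pairwise products e_i (x) f_j,
# so dimensions MULTIPLY: 2 * 3 = 6.
tensor_basis = [np.outer(e, f) for e in eV for f in eW]
assert len(tensor_basis) == 2 * 3

# The operations differ too: in V x W, addition acts component-wise on pairs;
# in V (x) W, a generic element is a SUM of outer products and need not
# factor back into a single v (x) w at all:
v, w = np.array([1.0, 2.0]), np.array([3.0, 0.0, 1.0])
rank_one = np.outer(v, w)   # a "pure" tensor v (x) w
generic = rank_one + np.outer(np.array([0.0, 1.0]), np.array([1.0, 1.0, 0.0]))
print(np.linalg.matrix_rank(generic))   # 2: not one single outer product
```

The enumeration is fully explicit: five basis members for the Cartesian product, six for the tensor product, built from the very same two small sets.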


A Song I Like:

(Marathi) “he gard niLe megha…”
Singers: Shailendra Singh, Anuradha Paudwal
Music: Rushiraj
Lyrics: Muralidhar Gode

[As usual, a little streamlining may occur later on.]


HNY (Marathi). Also, a bit about modern maths.

Happy New (Marathi) Year!

OK.

I will speak in “aaeechee bhaashaa”  (lit.: mother’s language).

“gudhi-paaDawyaachyaa haardik shubhechchhaa.” (lit.: hearty compliments [on the occasion] of “gudhi-paaDawaa” [i.e. the first day of the Marathi new year  [^]].)


I am still writing up my notes on scalars, vectors, tensors, and CFD (cf. my last post). The speed is good. I am making sure that I remain below the RSI [^] detection levels.


BTW, do you know how difficult it can get to explain even the simplest of concepts once mathematicians have had a field day about it? (And especially after Americans have praised them for their efforts?) For instance, even a simple idea like, say, the “dual space”?

Did any one ever give you a hint (or even a hint of a hint) that the idea of “dual space” is nothing but a bloody stupid formalization based on nothing but the idea of taking the transpose of a vector and using it in the dot product? Or the fact that the idea of the transpose of a vector essentially means nothing more than taking the same old three (or n number of) scalar components, but interpreting them to mean a (directed) planar area instead of an arrow (i.e. a directed line segment)? Or the fact that this entire late 19th–early 20th century intellectual enterprise springs from no grounds more complex than the fact that the equation to the line is linear, and so is the equation to the plane?

[Yes, dear American, it’s the equation not an equation, and the equation is not of a line, but to the line. Ditto, for the case of the plane.]
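To put the transpose-and-dot reading of the dual space in the plainest possible terms, here is a minimal numpy sketch (the numbers are mine, purely illustrative):

```python
import numpy as np

# A "primary" vector: a column of components.
v = np.array([[1.0], [2.0], [3.0]])   # 3x1 column

# Its dual partner: the SAME three components, transposed into a row.
v_dual = v.T                          # 1x3 row

# The dual vector IS a linear functional: feed it any vector u and it
# returns a scalar, via nothing more than the familiar dot product.
u = np.array([[4.0], [5.0], [6.0]])
scalar = (v_dual @ u).item()
print(scalar)                         # 1*4 + 2*5 + 3*6 = 32.0
```

That is the whole machinery: the same components, reinterpreted via the transpose, acting through the dot product.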

Oh, but no. You go ask any mathematician worth his salt to explain the idea (say of the dual space), and this modern intellectual idiot would immediately launch himself into blabbering endlessly about “fields” (by which he means something other than what either a farmer or an engineer means; he also knows that he means something else; further, he also knows that not knowing this fact, you are getting confused; but, he doesn’t care to even mention this fact to you let alone explain it (and if you catch him, he ignores you and turns his face towards that other modern intellectual idiot aka the theoretical physicist (who is all ears to the mathematician, BTW))), “space” (ditto), “functionals” (by which term he means two different things even while strictly within the context of his own art: one thing in linear algebra and quite another thing in the calculus of variations), “modules,” (neither a software module nor the lunar one of Apollo 11—and generally speaking, most any modern mathematical idiot would have become far too generally incompetent to be able to design either), “ring” (no, he means neither an engagement nor a bell), “linear forms,” (no, neither Picasso nor sticks), “homomorphism” (no, not a gay in the course of adding on or shedding body-weight), etc. etc. etc.

What is more, the idiot would even express surprise at the fact that the way he speaks about his work makes you feel as if you are far too incompetent to understand his art and always will be. And that’s what he wants, so that his means of livelihood is protected.

(No jokes. Just search for any of the quoted terms on the Wiki/Google. Or, actually talk to an actual mathematician about it. Just ask him this one question: Essentially speaking, is there something more to the idea of a dual space than transposing—going from an arrow to a plane?)

So, it’s not that no one has written about these ideas before. The trouble is precisely that they have: both the extent to which they have, and the way in which they did.

And therefore, writing about the same ideas but in plain(er) language (but sufficiently accurately) gets tough, extraordinarily tough.

But I am trying. … Don’t keep too high a set of hopes… but well, at least, I am trying…


BTW, talking of fields and all, here are a few interesting stories (starting from today’s ToI, and after a bit of a Google search)[^][^] [^][^].


A Song I Like:

(Marathi) “maajhyaa re preeti phulaa”
Music: Sudhir Phadake
Lyrics: Ga. Di. Madgulkar
Singers: Asha Bhosale, Sudhir Phadke


In maths, the boundary is…

In maths, the boundary is a verb, not a noun.

It’s an active something that, through certain agencies (whose influence, in the usual maths, is wholly captured via differential equations), actually goes on to act [directly or indirectly] over the entirety of a [spatial] region.

Mathematicians have come to forget about this simple physical fact, but by the basic rules of knowledge, that’s how it is.

They love to portray the BV (boundary-value) problems in terms of some dead thing sitting at the boundary, esp. for the Dirichlet variety of problems (esp. for the case when the field variable is zero out there), but that’s not what the basic nature of the abstraction is actually like. You couldn’t possibly build the very abstraction of a boundary unless you first presupposed that what it represents in maths is an active [read: physically active] something!

Keep that in mind; keep on reminding yourself at least 10^n times every day, where n is an integer \ge 1.



A Song I Like:

[Unlike most other songs, this was an “average” one in my [self-]esteemed teenage opinion, formed after listening to it on a poor-reception-area radio in an odd town at some odd times. … It changed forever to a “surprisingly wonderful one” the moment I saw the movie in my SE (second year engineering) while at COEP. … And I haven’t gotten out of that impression yet… .]

(Hindi) “main chali main chali, peechhe peeche jahaan…”
Singers: Lata Mangeshkar, Mohammad Rafi
Music: Shankar-Jaikishan
Lyrics: Shailendra


[Maybe an editing pass would be due tomorrow or so?]


Is something like a re-discovery of the same thing by the same person possible?

Yes, we continue to remain very busy.


However, in spite of all that busy-ness, in whatever spare time I have [in the evenings, sometimes at nights, why, even on early mornings [which is quite unlike me, come to think of it!]], I cannot help but “think” in a bit “relaxed” [actually, abstract] manner [and by “thinking,” I mean: musing, surmising, etc.] about… about what else but: QM!

So, I’ve been doing that. Sort of like, relaxed distant wonderings about QM…

Idle musings like that are very helpful. But they also carry a certain danger: it is easy to begin to believe your own story, even if the story itself is not borne out by well-established equations (i.e. by physic-al evidence).

But keeping that part aside, and thus coming to the title question: Is it possible that the same person makes the same discovery twice?

It may be difficult to believe so, but I… I seem to have managed to pull precisely such a trick.

Of course, the “discovery” in question is, relatively speaking, only a part of the whole story, and not the whole story itself. Still, I do think that I had discovered a certain important part of a conclusion about QM a while ago, and then, later on, had completely forgotten about it, and then, in a slow, patient process, I seem now to have worked inch-by-inch to reach precisely the same old conclusion.

In short, I have re-discovered my own (unpublished) conclusion. The original discovery was maybe in the first half of this calendar year. (I might have even made a hand-written note about it; I need to look up my hand-written notes.)


Now, about the conclusion itself. … I don’t know how to put it best, but I seem to have reached the conclusion that the postulates of quantum mechanics [^], say as stated by Dirac and von Neumann [^], have been conceptualized inconsistently.

Please note the issue and the statement I am making, carefully. As you know, more than 9 interpretations of QM [^][^][^] have been acknowledged right in the mainstream studies of QM [read: University courses] themselves. Yet, none of these interpretations, as far as I know, goes on to actually challenge the quantum mechanical formalism itself. They all do accept the postulates just as presented (say by Dirac and von Neumann, the two “mathematicians” among the physicists).

Coming to my own position: I, too, used to say exactly the same thing. I used to say that I agree with the quantum postulates themselves. My position was that the conceptual aspects of the theory (all of them, or at least most) are missing, and so, these need to be supplied, and, if need be, expanded.

But, as far as the postulates themselves go, mine used to be the same position as that in the mainstream.

Until this morning.

Then, this morning, I came to realize that I have “re-discovered” (i.e. independently discovered for the second time) that I actually should not be buying into the quantum postulates just as stated; that I should be saying that there are theoretical/conceptual errors/misconceptions/misrepresentations woven right into the very process of formalization that produced these postulates.

Since I think that I should be saying so, consider that, with this blog post, I have said so.


Just one more thing: the above doesn’t mean that I don’t accept Schrodinger’s equation. I do. In fact, I now seem to embrace Schrodinger’s equation with even more enthusiasm than I have ever done before. I think it’s a very ingenious and a very beautiful equation.


A Song I Like:

(Hindi) “tum jo hue mere humsafar”
Music: O. P. Nayyar
Singers: Geeta Dutt and Mohammad Rafi
Lyrics: Majrooh Sultanpuri


Update on 2017.10.14 23:57 IST: Streamlined a bit, as usual.


Fluxes, scalars, vectors, tensors…. and, running in circles about them!

0. This post is written for those who know something about Thermal Engineering (i.e., fluid dynamics, heat transfer, and transport phenomena), say up to the UG level at least. [A knowledge of Design Engineering, in particular of the tensors as they appear in solid mechanics, would be helpful to have, but is not necessary. After all, contrary to what many UGC and AICTE-approved (Full) Professors of Mechanical Engineering teaching ME (Mech – Design Engineering) courses in SPPU and other Indian universities believe, tensors appear also in fluid mechanics; in fact, the fluids phenomena make it (only so slightly) easier to understand this concept. [But all these cartoon characters, even if they don’t know even this plain and simple a fact, can always be fully relied upon (by anyone) to raise objections about my Metallurgy background, when it comes to my own approval, at any time! [Indians!!]]]

In this post, I write a bit about the following question:

Why is the flux \vec{J} of a scalar \phi a vector quantity, and not a mere number (which is aka a “scalar,” in certain contexts)? Why is it not a tensor—whatever the hell the term means, physically?

And, what is the best way to define a flux vector anyway?


1.

One easy answer is that if the flux is a vector, then we can establish a flux-gradient relationship. Such relationships happen to appear as statements of physical laws in all the disciplines wherever the idea of a continuum was found useful. So the scope of the applicability of the flux-gradient relationships is very vast.

The reason to define the flux as a vector, then, becomes: because the gradient of a scalar field is a vector field, that’s why.

But this answer only tells us about one of the end-purposes of the concept, viz., how it can be used. And then the answer provided is: for the formulation of a physical law. But this answer tells us nothing by way of the very meaning of the concept of flux itself.


2.

Another easy answer is that if it is a vector quantity, then it simplifies the maths involved. Instead of having to remember to take the right \theta and then multiply the relevant scalar quantity by the \cos of this \theta, we can more succinctly write:

q = \vec{J} \cdot \vec{S} (Eq. 1)

where q is the quantity of \phi, an intensive scalar property of the fluid flowing across a given finite surface, \vec{S}, and \vec{J} is the flux of \Phi, the extensive quantity corresponding to the intensive quantity \phi.

However, apart from being a mere convenience of notation—a useful shorthand—this answer once again touches only on the end-purpose, viz., the fact that the idea of flux can be used to calculate the amount q of the transported property \Phi.
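The shorthand claim is easy to check numerically. Here is a small sketch with hypothetical numbers (both vectors lie in the xy-plane and \vec{S} lies along the x-axis, so the angle \theta is easy to write down):

```python
import numpy as np

# Hypothetical flux vector J and area vector S, both in the xy-plane,
# with S along the x-axis.
J = np.array([2.0, 1.0, 0.0])
S = np.array([3.0, 0.0, 0.0])

# The shorthand of Eq. 1:
q_dot = np.dot(J, S)

# The longhand it replaces: find theta, then multiply by cos(theta).
theta = np.arctan2(J[1], J[0])   # angle between J and S (S lies along x)
q_long = np.linalg.norm(J) * np.linalg.norm(S) * np.cos(theta)

assert np.isclose(q_dot, q_long)
print(q_dot)   # 6.0
```

Same number either way; the dot product merely bundles the magnitudes and the \cos\theta into one operation.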

There is also another problem with this second answer.

Notice that in Eq. 1, \vec{J} has not been defined independently of the “dotting” operation.

If you have an equation in which the very quantity to be defined has an operator acting on it on one side, and if a suitable anti- or inverse-operator is available, then you can apply the inverse operator to both sides of the equation, and thereby “free up” the quantity to be defined. This way, the quantity to be defined becomes available all by itself, and so, its definition in terms of certain hierarchically preceding other quantities also becomes straight-forward.

OK, the description looks more complex than it is, so let me illustrate it with a concrete example.

Suppose you want to define some vector \vec{T}, but the only basic equation available to you is:

\vec{R} = \int \text{d} x \vec{T}, (Eq. 2)

assuming that \vec{T} is a function of position x.

In Eq. 2, first, the integral operator must operate on \vec{T}(x) so as to produce some other quantity, here, \vec{R}. Thus, Eq. 2 can be taken as a definition for \vec{R}, but not for \vec{T}.

However, fortunately, a suitable inverse operator is available here; the inverse of integration is differentiation. So, what we do is to apply this inverse operator on both sides. On the right hand-side, it acts to let \vec{T} be free of any operator, to give you:

\dfrac{\text{d}\vec{R}}{\text{d}x} = \vec{T} (Eq. 3)

It is the Eq. 3 which can now be used as a definition of \vec{T}.

In principle, you don’t have to go to Eq. 3. In principle, you could perhaps venture to use a bit of notation abuse (the way the good folks in the calculus of variations and integral transforms always did), and say that the Eq. 2 itself is fully acceptable as a definition of \vec{T}. IMO, despite the appeal to “principles”, it still is an abuse of notation. However, I can see that the argument does have at least some point about it.
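The integrate-then-differentiate argument can be checked numerically. Here is a sketch with a scalar T(x) = \sin x standing in for \vec{T}(x) (my own choice of function; the vector case works component-wise):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 2001)
T = np.sin(x)   # stand-in for the quantity we want to "free up"

# Eq. 2: the given relation defines R only via an operator acting on T
# (here, a cumulative trapezoid-rule integral).
R = np.concatenate([[0.0],
                    np.cumsum(0.5 * (T[1:] + T[:-1]) * np.diff(x))])

# Eq. 3: applying the inverse operator (differentiation) to both sides
# frees T of any operator, so this line can now serve as its definition.
T_recovered = np.gradient(R, x)

# The recovered T agrees with the original to within discretization error.
assert np.allclose(T_recovered[1:-1], T[1:-1], atol=1e-3)
```

The point of the sketch is only that differentiation undoes the integral operator; no such inverse is available for the dot operator of Eq. 1, which is the real trouble discussed next.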

But the real trouble with using Eq. 1 (reproduced below)

q = \vec{J} \cdot \vec{S} (Eq. 1)

as a definition for \vec{J} is that no suitable inverse operator exists when it comes to the dot operator.


3.

Let’s try another way to attempt defining the flux vector, and see what it leads to. This approach goes via the following equation:

\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} (Eq. 4)

where \hat{n} is the unit normal to the surface \vec{S}, defined thus:

\hat{n} \equiv \dfrac{\vec{S}}{|\vec{S}|} (Eq. 5)

Then, as the crucial next step, we introduce one more equation for q, one that is independent of \vec{J}. For phenomena involving fluid flows, this extra equation is quite simple to find:

q = \phi \rho \dfrac{\Omega_{\text{traced}}}{\Delta t} (Eq. 6)

where \phi is the mass-density of \Phi (the scalar field whose flux we want to define), \rho is the volume-density of mass itself, and \Omega_{\text{traced}} is the volume that is imaginarily traced by that specific portion of fluid which has imaginarily flowed across the surface \vec{S} in an arbitrary but small interval of time \Delta t. Notice that \Phi is the extensive scalar property being transported via the fluid flow across the given surface, whereas \phi is the corresponding intensive quantity.

Now express \Omega_{\text{traced}} in terms of the imagined maximum normal distance from the plane \vec{S} up to which the forward moving front is found extended after \Delta t. Thus,

\Omega_{\text{traced}} = \xi |\vec{S}| (Eq. 7)

where \xi is the traced distance (measured in a direction normal to \vec{S}). Now, using the geometric property for the area of parallelograms, we have that:

\xi = \delta \cos\theta (Eq. 8)

where \delta is the traced distance in the direction of the flow, and \theta is the angle between the unit normal to the plane \hat{n} and the flow velocity vector \vec{U}. Using vector notation, Eq. 8 can be expressed as:

\xi = \vec{\delta} \cdot \hat{n} (Eq. 9)

Now, by definition of \vec{U}:

\vec{\delta} = \vec{U} \Delta t, (Eq. 10)

Substituting Eq. 10 into Eq. 9, we get:

\xi = \vec{U} \Delta t \cdot \hat{n} (Eq. 11)

Substituting Eq. 11 into Eq. 7, we get:

\Omega_{\text{traced}} = \vec{U} \Delta t \cdot \hat{n} |\vec{S}| (Eq. 12)

Substituting Eq. 12 into Eq. 6, we get:

q = \phi \rho \dfrac{\vec{U} \Delta t \cdot \hat{n} |\vec{S}|}{\Delta t} (Eq. 13)

Cancelling out the \Delta t, Eq. 13 becomes:

q = \phi \rho \vec{U} \cdot \hat{n} |\vec{S}| (Eq. 14)

Having got an expression for q that is independent of \vec{J}, we can now use it in order to define \vec{J}. Thus, substituting Eq. 14 into Eq. 4:

\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} = \dfrac{\phi \rho \vec{U} \cdot \hat{n} |\vec{S}|}{|\vec{S}|} \hat{n} (Eq. 16)

Cancelling out the two |\vec{S}|s (because it’s a scalar—you can always divide any term by a scalar (or even by a complex number) but not by a vector), we finally get:

\vec{J} \equiv \phi \rho \vec{U} \cdot \hat{n} \hat{n} (Eq. 17)


4. Comments on Eq. 17

In Eq. 17, there is this curious sequence: \hat{n} \hat{n}.

It’s a sequence of two vectors, but the vectors apparently are not connected by any of the operators that are taught in the Engineering Maths courses on vector algebra and calculus—there is neither the dot (\cdot) operator nor the cross (\times) operator appearing in between the two \hat{n}s.

But, for the time being, let’s not get too much perturbed by the weird-looking sequence. For the time being, you can mentally insert parentheses like these:

\vec{J} \equiv \left[ \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \right) \right] \hat{n} (Eq. 18)

and see that each of the two terms within the parentheses is a vector, and that these two vectors are connected by a dot operator so that the terms within the square brackets all evaluate to a scalar. According to Eq. 18, the scalar magnitude of the flux vector is:

|\vec{J}| = \left( \phi \rho \vec{U}\right) \cdot \left( \hat{n} \right) (Eq. 19)

and its direction is given by: \hat{n} (the second one, i.e., the one which appears in Eq. 18 but not in Eq. 19).
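Eqs. 18 and 19 can be verified with a small numerical sketch (the property, density, velocity, and normal below are hypothetical numbers of my own choosing):

```python
import numpy as np

# Hypothetical flow data: intensive property phi, mass density rho,
# flow velocity U, and the unit normal n_hat of the surface.
phi, rho = 0.4, 1000.0
U = np.array([2.0, 1.0, 0.0])
n_hat = np.array([1.0, 0.0, 0.0])

# Eq. 18 grouping: J = [ (phi rho U) . n_hat ] n_hat
J = (phi * rho * U @ n_hat) * n_hat

# Eq. 19: the scalar magnitude of J is the bracketed dot product...
assert np.isclose(np.linalg.norm(J), phi * rho * U @ n_hat)

# ...and its direction is the (second) n_hat:
assert np.allclose(J / np.linalg.norm(J), n_hat)
```

Note that with these numbers the bracketed dot product comes out positive; that is what lets its value be read off directly as the magnitude of \vec{J}.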


5.

We explained away our difficulty about Eq. 17 by inserting parentheses at suitable places. But this procedure of inserting mere parentheses looks, by itself, conceptually very attractive, doesn’t it?

If by not changing any of the quantities or the order in which they appear, and if by just inserting parentheses, an equation somehow begins to make perfect sense (i.e., if it seems to acquire a good physical meaning), then we have to wonder:

Since it is possible to insert parentheses in Eq. 17 in some other way, in some other places—to group the quantities in some other way—what physical meaning would such an alternative grouping have?

That’s a delectable possibility, potentially opening new vistas of physico-mathematical reasonings for us. So, let’s pursue it a bit.

What if the parentheses were to be inserted the following way?:

\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 20)

On the right hand-side, the terms in the second set of parentheses evaluate to a vector, as usual. However, the terms in the first set of parentheses are special.

The fact of the matter is, there is an implicit operator connecting the two vectors, and if it is made explicit, Eq. 20 would rather be written as:

\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 21)

The \otimes operator, as it so happens, is a binary operator that operates on two vectors (which in general need not be one and the same vector, as is the case here, and whose order with respect to the operator does matter). It produces a new mathematical object called a tensor.

The general form of Eq. 21 is like the following:

\vec{V} = \vec{\vec{T}} \cdot \vec{U} (Eq. 22)

where we have put two arrows on the top of the tensor, to bring out the idea that it has something to do with two vectors (in a certain order). Eq. 22 may be read as the following: Begin with an input vector \vec{U}. When it is multiplied by the tensor \vec{\vec{T}}, we get another vector, the output vector: \vec{V}. The tensor quantity \vec{\vec{T}} is thus a mapping between an arbitrary input vector and its uniquely corresponding output vector. It also may be thought of as a unary operator which accepts a vector on its right hand-side as an input, and transforms it into the corresponding output vector.
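Here is Eq. 22 in action for the specific tensor of Eq. 21, \hat{n} \otimes \hat{n}, computed as an outer product in numpy (the vectors below are my own hypothetical choices):

```python
import numpy as np

n_hat = np.array([0.6, 0.8, 0.0])   # a unit vector (hypothetical)
T = np.outer(n_hat, n_hat)          # the tensor n (x) n, as a 3x3 array

# Eq. 22: the tensor maps an input vector U to an output vector V.
U = np.array([5.0, 0.0, 2.0])
V = T @ U

# For this particular tensor, the mapping is just the projection of U
# onto the n_hat direction, as the Eq. 18 grouping suggests:
assert np.allclose(V, (U @ n_hat) * n_hat)
```

So the "weird-looking" \hat{n}\hat{n} sequence is, in the end, a perfectly concrete operator: feed it a vector, and out comes that vector's component along \hat{n}, pointed along \hat{n}.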


6. “Where am I?…”

Now is the time to take a pause and ponder about a few things. Let me begin doing that, by raising a few questions for you:

Q. 6.1:

What kind of a bargain have we ended up with? We wanted to show how the flux of a scalar field \Phi must be a vector. However, in the process, we seem to have adopted an approach which says that the only way the flux—a vector—can at all be defined is in reference to a tensor—a more advanced concept.

Instead of simplifying things, we seem to have ended up complicating the matters. … Have we? Really? … Can we keep the physical essentials of the approach all the same and yet not have to make a reference to the tensor concept in our definition of the flux vector? Exactly how?

(Hint: Look at the above development very carefully once again!)

Q. 6.2:

In Eq. 20, we put the parentheses in this way:

\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 20, reproduced)

What would happen if we were to group the same quantities, but alter the order of the operands for the dot operator?  After all, the dot product is commutative, right? So, we could have easily written Eq. 20 rather as:

\vec{J} \equiv \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \hat{n} \right) (Eq. 26)

What could be the reason why in writing Eq. 20, we might have made the choice we did?

Q. 6.3:

We wanted to define the flux vector for all fluid-mechanical flow phenomena. But in Eq. 21, reproduced below, what we ended up having was the following:

\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 21, reproduced)

Now, from our knowledge of fluid dynamics, we know that Eq. 21 seemingly stands only for one kind of a flux, namely, the convective flux. But what about the diffusive flux? (To know the difference between the two, consult any good book/course-notes on CFD using FVM, e.g. Jayathi Murthy’s notes at Purdue, or Versteeg and Malalasekera’s text.)

Q. 6.4:

Try to pursue this line of thought a bit:

Start with Eq. 1 again:

q = \vec{J} \cdot \vec{S} (Eq. 1, reproduced)

Express \vec{S} as a product of its magnitude and direction:

q = \vec{J} \cdot |\vec{S}| \hat{n} (Eq. 23)

Divide both sides of Eq. 23 by |\vec{S}|:

\dfrac{q}{|\vec{S}|} = \vec{J} \cdot \hat{n} (Eq. 24)

“Multiply” both sides of Eq. 24 by \hat{n}:

\dfrac{q} {|\vec{S}|} \hat{n} = \vec{J} \cdot \hat{n} \hat{n} (Eq. 25)

We seem to have ended up with a tensor once again! (and more rapidly than in the development in section 4. above).

Now, looking at what kind of a change the left hand-side of Eq. 24 undergoes when we “multiply” it by a vector (which is: \hat{n}), can you guess something about what the “multiplication” on the right hand-side by \hat{n} might mean? Here is a hint:

To multiply a scalar by a vector is meaningless, really speaking. First, you need to have a vector space, and then, you are allowed to take any arbitrary vector from that space, and scale it up (without changing its direction) by multiplying it with a number that acts as a scalar. The result at least looks the same as “multiplying” a scalar by a vector.

What then might be happening on the right hand side?

Q.6.5:

Recall your knowledge (i) that vectors can be expressed as single-column or single-row matrices, and (ii) how matrices can be algebraically manipulated, esp. the rules for their multiplications.

Try to put the above developments using an explicit matrix notation.

In particular, pay attention to the matrix-algebraic notation for the dot product between a row- or column-vector and a square matrix, and to the effect it has on your answer to question Q.6.2. above. [Hint: Try to use the transpose operator if you reach what looks like a dead-end.]

Q.6.6.

Suppose I introduce the following definitions: All single-column matrices are “primary” vectors (whatever the hell it may mean), and all single-row matrices are “dual” vectors (once again, whatever the hell it may mean).

Given these definitions, you can see that any primary vector can be turned into its corresponding dual vector simply by applying the transpose operator to it. Taking the logic to full generality, the entirety of a given primary vector-space can then be transformed into a certain corresponding vector space, called the dual space.

Now, using these definitions, and in reference to the definition of the flux vector via a tensor (Eq. 21), but with the equation now re-cast into the language of matrices, try to identify the physical meaning of the concept of the “dual” space. [If you fail to, I will surely provide a hint.]

As a part of this exercise, you will also be able to figure out which of the two \hat{n}s forms the “primary” vector space and which \hat{n} forms the dual space, if the tensor product \hat{n}\otimes\hat{n} itself appears (i) before the dot operator or (ii) after the dot operator, in the definition of the flux vector. Knowing the physical meaning for the concept of the dual space of a given vector space, you can then see what the physical meaning of the tensor product of the unit normal vectors (\hat{n}s) is, here.
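As a sketch of where these matrix-language exercises lead, here is my own illustrative rendering in numpy, with explicit column and row matrices (the numbers are hypothetical):

```python
import numpy as np

n = np.array([[0.0], [1.0], [0.0]])   # n_hat as a 3x1 column ("primary") vector
T = n @ n.T                           # the tensor n (x) n as an explicit 3x3 matrix

v = np.array([[1.0], [2.0], [3.0]])   # phi*rho*U lumped into one column vector

# Tensor BEFORE the dot, as in Eq. 21: the output is a column vector.
out_left = T @ v                      # shape (3, 1)

# Tensor AFTER the dot: v must enter as a ROW vector; the transpose is
# the "dead-end breaker" the hint above points at.
out_right = v.T @ T                   # shape (1, 3)

# Same components either way here (T is symmetric), but one result lives in
# the column space and the other in its dual, the row space.
assert np.allclose(out_left.T, out_right)
assert out_left.shape == (3, 1) and out_right.shape == (1, 3)
```

Which side of the dot the tensor sits on thus decides whether the answer comes out as a column ("primary") vector or as a row ("dual") vector, even when the components are numerically identical.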

Over to you. [And also to the UGC/AICTE-Approved Full Professors of Mechanical Engineering in SPPU and in other similar Indian universities. [Indians!!]]

A Song I Like:

[TBD, after I make sure all LaTeX entries have come out right, which may very well be tomorrow or the day after…]

Mathematics—Historic, Contemporary, and Its Relation to Physics

The title of this post does look very ambitious, but in fact the post itself isn’t. I mean, I am not going to even attempt to integrate these diverse threads at all. Instead, I am going to either just jot down a few links, or copy-paste my replies (with a bit of editing) that I had made at some other blogs.


1. About (not so) ancient mathematics:

1.1 Concerning calculus: It was something of a goose-bumps moment for me to realize that the historic Indians had very definitely gotten to that branch of mathematics which is known as calculus. You have to understand the context behind it.

Some three centuries ago, there were priority battles concerning the invention of calculus (started by Newton, and joined by Leibniz and his supporters). Echoes of these arguments could still be heard in popular science writings as recently as when I was a young man, about three decades ago.

Against this backdrop, it was particularly wonderful that an Indian mathematician as early as some eight centuries ago had gotten to the basic idea of calculus.

The issue was highlighted by Prof. Abinandanan at the blog nanpolitan, here [^]. It was based on an article by Prof. Biman Nath that had appeared in the magazine Frontline [^]. My replies can be found at Abi’s post. I am copy-pasting my replies here. I am also taking the opportunity to rectify a mistake—somehow, I thought that Nath’s article appeared in the Hindu newspaper, and not in the Frontline magazine. My comment (now edited just so slightly):

A few comments:

0. Based on my earlier readings of the subject matter (and I have never been too interested in the topic, and so, it was generally pretty much a casual reading), I used to believe that the Indians had not reached that certain abstract point which would allow us to say that they had got to calculus. They had something of a pre-calculus, I thought.

Based (purely) on Prof. Nath’s article, I have now changed my opinion.

Here are a few points to note:

1. How “jyaa” turned to “sine” makes for a fascinating story. Thanks for its inclusion, Prof. Nath.

2. Aaryabhata didn’t have calculus. Neither did Bramhagupta [my spelling is correct]. But if you wonder why the latter might have laid such an emphasis on the zero about the same time that he tried taking Aaryabhata’s invention further, chances are, there might have been some churning in Bramhagupta’s mind regarding the abstraction of the infinitesimal, though, with the evidence available, he didn’t reach it.

3. Bhaaskara II, if the evidence in the article is correct, clearly did reach calculus. No doubt about it.

He did not only reach a more abstract level, he even finished the concept by giving it a name: “taatkaalik.” Epistemologically speaking, the concept formation was complete.

I wonder why Prof. Nath, writing for the Frontline, didn’t allocate a separate section to Bhaaskara II. The “giant leap” richly deserved it.

And, he even got to the max-min problem by setting the derivative to zero. IMO, this is a second giant leap. Conceptually, it is so distinctive to calculus that even just a fleeting mention of it would be enough to permanently settle the issue.

You can say that Aaryabhata and Bramhagupta had some definite anticipation of calculus. And you can’t possibly say much more about Archimedes’ method of exhaustion either. But, as a sum total, I think, they still missed calculus per se.

But with this double whammy (or, more accurately, the one-two punch), Bhaaskara II clearly had got the calculus.

Yes, it would have been nice if he could have left for the posterity a mention of the limit. But writing down the process of reaching the invention has always been so unlike the ancient Indians. Philosophically, the atmosphere would generally be antithetical to such an idea; the scientist, esp. the mathematician, may then be excused.

But then, if mathematicians had already been playing with infinite series with ease, and were already performing the calculus of finite differences in the context of these infinite series, even explicitly composing verses about their results, then they can be excused for not having conceptualized limits.

After all, even Newton initially worked only with the fluxion, and Leibniz with the infinitesimal. The modern epsilon-delta definition was still some one to two centuries (in the three to four centuries of modern science) in the coming.

But when you explicitly say “instantaneous” (i.e., after spelling out the correct thought process leading to it), there is no way one can say that some distance had yet to be travelled to reach calculus. The destination was already there.

And as if to remove any doubt still lingering, when it comes to the min-max condition, no amount of merely geometric thinking would get you there. Reaching that conclusion means not just that the train had left the first station after entering the calculus territory, but that it had in fact gone past the second or the third station as well. Complete with an application from astronomy—the first branch of physics.

I would like to know if there are any counter-arguments to the new view I now take of this matter, as spelt out above.

4. Maadhava missed it. The 1/4 vs. 1/6 is not hair-splitting. It is a very direct indication of the fact that either Maadhava made a “typo” (not at all possible, considering that these were verses to be learnt by heart through repetition by the student body), or, obviously, he missed the idea of repeated integration (which in turn requires considering a progressively greater domain, even if only infinitesimally). Now, this latter idea is at the very basis of the modern Taylor series. If Maadhava were to perform that repeated integration (and he was a capable enough mathematical technician to do that, should the idea have struck him), then he would surely get 1/6. He would get that number even if he were not to know anything about the factorial idea. And, if he could not get to 1/6, it’s impossible that he would get the idea of the entire infinite series, i.e. the Taylor series, right.
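[An aside of mine, not anything from Prof. Nath’s article: the 1/6 drops out directly from repeated term-wise integration, since integrating the monomial c·x^n gives (c/(n+1))·x^(n+1). A minimal sketch in Python, using exact fractions:]

```python
from fractions import Fraction

# Integrating c * x**n term-wise gives (c / (n + 1)) * x**(n + 1).
def integrate_monomial(coeff, power):
    return coeff / (power + 1), power + 1

# Start from x (the leading term of the sine series) and integrate twice:
c, n = Fraction(1), 1
c, n = integrate_monomial(c, n)  # x**2 / 2
c, n = integrate_monomial(c, n)  # x**3 / 6
print(c, n)  # prints: 1/6 3
```

The denominator 6 = 3! appears without ever invoking the factorial idea as such, which is the point made above.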

5. Going by the content of the article, Prof. Nath’s conclusion in the last paragraph is, as indicated above, in part, a non sequitur.

6. But yes, I, too, very eagerly look forward to what Prof. Nath has to say subsequently on this and related issues.

But as far as issues such as the existence of progress only in fits here and there, and indeed the absence of a generally monotonically increasing build-up of knowledge (observe the partial regression in Bramhagupta from Aaryabhata, or in Maadhava from Bhaaskara II) are concerned, I think that philosophy, as the fundamental factor in the human condition, is relevant.

7. And, oh, BTW, is “Matteo Ricci” a corrupt form of the original “Mahadeva Rishi” [or “Maadhav Rishi”] or some such a thing? … May Internet battles ensue!

1.2 Concerning “vimaan-shaastra” and estimating \pi: Once again, this was a comment that I made at Abi’s blog, in response to his post on the claims concerning “vimaan-shaastra” and all, here [^]. Go through that post to know the context in which I wrote the following comment (reproduced here with a bit of copy-editing):

I tend not to out of hand dismiss claims about the ancient Indian tradition. However, this one about the “Vimaan”s and all does seem to exceed even my limits.

But, still, I do believe that it can also be very easy to dismiss such claims without giving them due consideration. Yes, so many of them are ridiculous. But not all. Indeed, as a less noted fact, some of the defenders themselves do contradict each other, but never do notice this fact.

Let me give you an example. I am unlike some who would accept a claim only if there is direct archaeological evidence for it. IMO, theirs is a materialistic position, and materialism is a false premise; it’s the body of the mind-body dichotomy (in Ayn Rand’s sense of the terms). And, so, I am willing to consider the astronomical references contained in the ancient verses as evidence. So, in that sense, I don’t dismiss a 10,000+ year-old history of India; I don’t mindlessly accept 600 BC or so as the starting point of civilization and culture, a date so convenient to the missionaries of the Abrahamic traditions. IMO, not every influential commentator to come from the folds of the Western culture can be safely assumed to have attained the levels obtained by the best among the Greek or Enlightenment thinkers.

And, so, I am OK if someone shows, based on the astronomical methods, the existence of the Indian culture, say, 5000+ years ago.

Yet, there are two notable facts here. (i) The findings of different proponents of this astronomical method of dating of the past events (say the dates of events mentioned in RaamaayaNa or Mahaabhaarata) don’t always agree with each other. And, more worrisome is the fact that (ii) despite Internet, they never even notice each other, let alone debate the soundness of their own approaches. All that they—and their supporters—do is to pick out Internet (or TED etc.) battles against the materialists.

A far deeper thinking is required to even just approach these (and such) issues. But the proponents don’t show the required maturity.

It is far too easy to jump to conclusions and blindly assert that there were material “Vimaana”s; that “puShpak” etc. were neither a valid description of a spiritual/psychic phenomenon nor a result of a vivid poetic imagination. It is much more difficult, comparatively speaking, to think of a later date insertion into a text. It is most difficult to be judicious in ascertaining which part of which verse of which book, can be reliably taken as of ancient origin, which one is a later-date interpolation or commentary, and which one is a mischievous recent insertion.

Earlier (i.e. decades earlier, while a school-boy or an undergrad in college, etc.), I tended to think of the very last possibility as not at all possible. Enough people couldn’t possibly have had enough mastery of Sanskrit, practically speaking, to fool enough honest Sanskrit-knowing people, I thought.

Over the decades, I guess, I have become wiser. Not only have I understood the possibilities of human nature better on the up side, but also on the down side. For instance, one of my colleagues, an engineer, an IITian who lived abroad, could himself compose poetry in Sanskrit very easily, I learnt. No, he wouldn’t do a forgery, sure. But could one say the same for everyone who had a mastery of Sanskrit, without being too naive?

And, while on this topic, if someone knows the exact reference from which this verse quoted on Ramesh Raskar’s earlier page comes, and drops a line to me, I would be grateful. http://www.cs.unc.edu/~raskar/ . As usual, when I first read it, I was impressed a great deal. Until, of course, other possibilities struck me later. (It took years for me to think of these other possibilities.)

BTW, Abi also had a follow-up post containing further links about this issue of “vimaan-shaastra” [^].

But, in case you missed it, I do want to highlight my question again: Do you know the reference from which this verse quoted by Ramesh Raskar (now a professor at MIT Media Lab) comes? If yes, please do drop me a line.

 

2. An inspiring tale of a contemporary mathematician:

Here is an inspiring story of a Chinese-born mathematician who beat all the odds to achieve absolutely first-rank success.

I can’t resist the temptation to insert my trailer: As a boy, Yitang Zhang could not even attend school because he was forced into manual labor on vegetable-growing farms—he lived in Communist China. As a young PhD graduate, he could not get a proper academic job in the USA, even though he got his PhD there. He then worked as an accountant of sorts, and still went on to solve one of mathematics’ most difficult problems.

Alec Wilkinson writes insightfully, beautifully, and with an authentic kind of admiration for man the heroic, for The New Yorker, here [^]. (H/T to Prof. Phanish Suryanarayana of GeorgiaTech, who highlighted this article at iMechanica [^].)

 

3. FQXi Essay Contest 2015:

(Hindi) “Picture abhi baaki nahin hai, dost! Picture to khatam ho gai” [roughly: “The picture isn’t still left, friend! The picture is over”] … Or, welcome back to the “everyday” reality of the modern day—modern day physics, modern day mathematics, and modern day questions concerning the relation between the two.

In other words, they still don’t get it—the relation between mathematics and physics. That’s why FQXi [^] has got an essay contest about it. They even call it “mysterious.” More details here [^]. (H/T to Roger Schlafly [^].)

Though this last link looks like a Web page of some government lab (American government, not Indian), do check out the second section on that same page: “II Evaluation Criteria.” The main problem description appears in this section. Let me quote the main problem description right in this post:

The theme for this Essay Contest is: “Trick or Truth: the Mysterious Connection Between Physics and Mathematics”.

In many ways, physics has developed hand-in-hand with mathematics. It seems almost impossible to imagine physics without a mathematical framework; at the same time, questions in physics have inspired so many discoveries in mathematics. But does physics simply wear mathematics like a costume, or is math a fundamental part of physical reality?

Why does mathematics seem so “unreasonably” effective in fundamental physics, especially compared to math’s impact in other scientific disciplines? Or does it? How deeply does mathematics inform physics, and physics mathematics? What are the tensions between them — the subtleties, ambiguities, hidden assumptions, or even contradictions and paradoxes at the intersection of formal mathematics and the physics of the real world?

This essay contest will probe the mysterious relationship between physics and mathematics.

Further, this section actually carries a bunch of thought-provoking questions to get you going in your essay writing. … And, yes, the important dates are here [^].

Now, my answers to a few questions about the contest:

Is this issue interesting enough? Yes.

Will I write an essay? No.

Why? Because I haven’t yet put my thoughts in a sufficiently coherent form.

However, I notice that the contest announcement itself includes so many questions that are worth attempting. And so, I will think of jotting down my answers to these questions, even if in a bit of a hurry.

However, I will neither further forge the answers together into a single coherent essay, nor will I participate in the contest.

And even if I were to participate… Well, let me put it this way. Going by Max Tegmark’s and others’ inclinations, I (sort of) “know” that anyone with my kind of answers would stand a very slim chance of actually landing the prize. … That’s another important reason for me not even to try.

But, yes, at least this time round, many of the detailed questions themselves are both valid and interesting. And so, it should be worth your while addressing them (or at least knowing what you think of them for your answers). …

As far as I am concerned, the only issue is time. … Given my habits, writing about such things—the deep and philosophical, and therefore fascinating things, the things that are interesting by themselves—has a way of totally getting out of control. That is, even if you know you aren’t going to interact with anyone else. And mandatory interaction, incidentally, is another FQXi requirement that discourages me from participating.

So, as the bottom-line: no definitive promises, but let me see if I can write a post or a document by just straight-forwardly jotting down my answers to those detailed questions, without bothering to explain myself much, and without bothering to tie my answers together into a coherent whole.

Ok. Enough is enough. Bye for now.

[Maybe I will come back and add the “A Song I Like” section or so. Not sure. Maybe I will; maybe I won’t. Bye.]

[E&OE]

 

Free books on the nature of mathematics

Just passing along a quick tip, in case you didn’t know about it:

Early editions of quite a few wonderful books concerning the history and nature of mathematics have now become available for free downloading at archive.org. (I hope they have checked the copyrights and all):

Books by Prof. Morris Kline:

  1. Mathematics in Western Culture (1954) [^]
  2. Mathematics and the Search for Knowledge (1985) [^]
  3. Mathematics and the Physical World (1959) [^] (I began Kline’s books with this one.)

Of course, Kline’s 3-volume book, “Mathematical Thought from Ancient to Modern Times,” is the most comprehensive and detailed one. However, it is not yet available off archive.org. But that hardly matters, because the book is in print, and a pretty inexpensive (Rs. ~1600) paperback is available at Amazon [^]. The Kindle edition is just Rs. 400.

(No, I don’t have Kindle. Neither do I plan to buy one. I will probably not use it even if someone gives it to me for free. I am sure I will find someone else to pass it on for free, again! … I don’t have any use for Kindle. I am old enough to like my books only the old-fashioned way—the fresh smell of the paper and the ink included. Or, the crispiness of the fading pages of an old one. And, I like my books better in the paperback format, not hard-cover. Easy to hold while comfortably reclining in my chair or while lying over a sofa or a bed.)

Anyway, back to archive.org.

Prof. G. H. Hardy’s “A Mathematician’s Apology,” too, has become available for free downloading [^]. It’s been more than two decades since I first read it. … Would love to find time to go through it again.

Anyway, enjoy! (And let me know if you run into some other interesting books at archive.org.)

* * * * *   * * * * *   * * * * *

A Song I Like:
(Hindi) “chain se hum ko kabhie…”
Music: O. P. Nayyar
Singer: Asha Bhosale
Lyrics: S. H. Bihari

Incidentally, I have often thought that this song was ideally suited for a saxophone, i.e., apart from Asha’s voice. Not just any instrument, but, specifically, only a saxophone. … Today I searched for, and heard for the first time, a sax rendering—the one by Babbu Khan. It’s pretty good, though I had a bit of a feeling that someone could do better, probably, a lot better. Manohari Singh? Did he ever play this song on a sax?

As to the other instruments, though I often do like to listen to a flute (I mean the Indian flute (“baansuri”)), this song simply is not at all suited to one. For instance, just listen to Shridhar Kenkare’s rendering. The entire (Hindi) “dard” [i.e., the pathos] gets lost, and then, worse: that sweetness oozing out in its place is just plain irritating. At least to me. On the other hand, also locate on the ‘net a violin version of this song, and listen to it. It’s pathetic. … Enough for today. I have lost the patience to try out any piano version, though I bet it would sound bad, too.

Sax. This masterpiece is meant for the sax. And, of course, Asha.

[E&OE]

 

My comments at other blogs—part 2: Chalk-Work vs. [?] Slide-Shows

Prof. Richard Lipton (of GeorgiaTech) recently mused aloud on his blog about the (over)use of slides in talks these days. I left a comment on his blog yesterday, and then realized that my reply could make for a separate post all by itself. So, let me note here what I wrote.

Prof. Lipton’s posts, it seems, typically end with a section titled “Open Problems.” These are not some open technical problems of research, but just some questions of general interest that he raises for discussion. My reply was in reference to such a question:

“Do you like PowerPoint type talks or chalk talks?”

In the next section I give my (slightly edited) reply.

* * * * *  * * * * *  * * * * *

[Reply begins]

Both. (I am talking about the Open Problem.) Simultaneously. And, actually, something more.

On the rare occasions that Indians have allowed me to teach (I mean in the recent past of the last decade or so), I have found that the best strategy is:

(i) to use slides on a side-screen (and, actually, the plain wall beside the white-/black-board works great!)

(ii) and to keep the white-/black-board at the center, and extensively use it to explain the points as they systematically appear on the slides.

It’s the same old bones-and-flesh story, really speaking. Both are necessary.

Even if you don’t use the slides for a lecture, preparing them is still advisable, because a lot of forethought then has to go into structuring your presentation, including some thought about the amount of time to be allocated to each sub-unit of a lecture. Why, if you prepare the slides, even if you don’t use them, you will find that your management of the black-board space has also improved a great deal!

I know quite a few professors who seem to take the ability to deliver a lecture without referring to any notes or slides, as the standard by which to judge the lecturer’s own mastery of the subject matter. Quite fallacious, but somehow, at least in India—where there always is a great emphasis on memorization rather than on understanding—this opinion persists widely. … And, that way, the lecturer’s own mastery, even if necessary, is not at all sufficient to generate a great lecture, anyway. Newton in Cambridge would teach mostly to empty benches.

The flow of a spontaneously delivered lecture is fine, as far as it goes. But far too many professors have this tendency to take things—students, actually!—for granted. If you don’t prepare slides, it’s easy—far too easy, in fact—to do that.

I have known the “free-flowing” type of professors who would habitually dwell on the initial and actually simpler part of a lecture for far too long (e.g., spend time in drawing the reference diagrams (think FBD of mechanics) or in discussing the various simple parts of a simple definition, etc.), and then realizing that “portion is to be finished,” hurriedly wind up the really important points in the last five minutes or so. For example, I have seen some idiots spend up to 45 minutes explaining the simplest points such as, say, the household plumbing as an analogy for the electric circuits, or the movement of a worm as an analogy for the edge-dislocation motion, and then wind up in the remaining 15 minutes the really important topics like the Thevenin/Norton network theorem (EE) or mechanisms of dislocation growth (materials science).

Maths types are the worst as far as the so highly prized “flow”—and showing one’s genius by solving problems on the fly on the black-board without ever referring to notes—goes. Well, if you are going to prize your own genius in that manner, some other things are going to get sacrificed. And, indeed, they do! Ask yourself if your UG ODE/PDE teacher had appropriately emphasized the practically important points such as the well-posedness of the DEs, or, for that matter, even the order of a DE and the number of auxiliary (boundary and/or initial) conditions that must be specified, and how—whether you can have both the flux and the field conditions specified at the same point or not, and why. I have known people who got these points only while taking post-graduate engineering courses like CFD or FEM, but not from the UG mathematics professors proper. The latter were busy being geniuses—i.e. calculating, without referring to notes or slides.
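To make the order-vs.-conditions point concrete (a sketch of mine, not anything from anyone’s course notes): a second-order ODE such as y'' = -y cannot even be marched forward numerically until exactly two auxiliary conditions, here y(0) and y'(0), are supplied:

```python
import math

# y'' = -y is second order, so two auxiliary conditions
# (here the initial values y(0) and y'(0)) pin down a unique solution.
def march(y0, v0, dt=0.001, t_end=1.0):
    y, v, t = y0, v0, 0.0
    while t < t_end:
        y, v = y + dt * v, v - dt * y  # explicit Euler step
        t += dt
    return y

# With y(0) = 0 and y'(0) = 1, the solution is sin(t):
print(march(0.0, 1.0), math.sin(1.0))  # the two agree closely
```

Drop either condition and the march cannot start; that a second-order equation needs exactly two such conditions is precisely the point made above.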

All such folks must face the arrogance of their idiocy if the (to them) highly constraining rule that slides must be prepared in advance for every lecture is made compulsory. It should be!

You can never make anything fool-proof, of course. For every “free-flowing” guy of the above kind, there always is that despicable “nerdy” type… You know, one who meekly slides into his class with his slides/notes, hangs in nervously there for the duration of the lecture, puts up the slides and reads them aloud (if you are lucky, that is—I have suffered through some specimens who would merely mumble while vaguely looking somewhere in the direction of the slides), “solves” the problems already solved in the prescribed text-book, and then after a while, leaves the class with somewhat less meekness: carrying on his face some apparent sense of some satisfaction—of what kind, he alone knows. These types could certainly do well with the advice to do some chalk-work.

With slides, diagrams can be far more neat; students can copy definitions/points at their own convenience during the lecture, and you don’t have to wait for every one in the class to finish taking down the material before proceeding further because they know that hand-outs would be available anyway (because these are very easy to generate); and the best part: you don’t have to worry too much about your hand-writing.

Between PowerPoint and LaTeX, I personally like Beamer because: (i) its template makes me feel guilty any time I exceed three main points for an hour-long lecture (though I often do!), and (ii) I always have the assurance that the fonts won’t accidentally change, or that the diagrams won’t begin floating with those dashed rectangles right in the middle of a lecture. And, it is free. To a mostly jobless guy like me, that helps.

Finally, one word about “more.” Apart from chalk-work and slide-shows, there are many other modalities. Even the simplest physical experimentation is often very useful (e.g., tearing a paper in a class on fracture mechanics, or explaining graph algorithms using knotted strings, say as hung down from this knot vs. that knot, etc.). Physical experimentation also kills the monotony and the boredom.

In my class, I also try to use simulations as much as possible. By simulation/animation, I do not mean the irritating things like those letters coming dancing down on a PowerPoint slide as if some reverse kind of a virus had hit the computer. I mean real simulations. For instance, in teaching solid mechanics, FEM simulation of stress fields is greatly useful. I gained some of the most valuable insights into classical EM only by watching animations, e.g., noticing how changing an electric current changes the magnetic field everywhere “simultaneously” (i.e. at each time step, even if the propagation of the disturbance is limited by ‘c’). In CS, you can spend one whole hour throwing slides on the screen, or doing chalk-work, or even stepping through code to explain how, e.g., the quicksort differs from the bubble sort, but a simple graphical visualization/animation showing the sorting process in action delivers a certain key point within 5 minutes flat; it also concretizes the understanding in a way that would be impossible to achieve using any other means.

My two/three cents.

[Reply over]
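[An aside of mine, not part of the reply as posted: even a plain-text trace makes the sorting point; a minimal bubble-sort sketch in Python, printing the array after each full pass:]

```python
def bubble_sort_steps(a):
    """Return the array's state after each full bubble-sort pass."""
    a = list(a)
    steps = [list(a)]
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]  # swap the out-of-order pair
        steps.append(list(a))
    return steps

for step in bubble_sort_steps([5, 1, 4, 2, 3]):
    print(' '.join(map(str, step)))
```

Watching the largest remaining element “bubble” to the end on each pass is exactly the kind of thing that a one-frame-per-state animation conveys in minutes.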

* * * * *  * * * * *  * * * * *

See Prof. Lipton’s post and the other replies he got, here [^]; my original reply is here [^]. BTW, as an aside, when I wrote in my reply at his blog, the two replies immediately above mine (i.e., those by “CrackPot” and Prof. M. Vidyasagar) had not yet appeared, and so, there is no implied reference to them, or, for that matter, to any earlier replies either—I just wrote whatever I did, in reference to the main post and the “Open Problems” question.

* * * * *  * * * * *  * * * * *

A Song I Like:

The “chalk-work” version:
(French) “L’amour est bleu”
Music: Andre Popp
Singer: Vicky Leandros
Lyrics: Pierre Cour

The “slide-show” version:
(Western instrumental): “Love is blue”
Orchestrator and Conductor: Paul Mauriat

[Asides: I had first heard Mauriat’s version on a cassette that I had bought sometime in the late 1980s, and till date was not at all aware that there also was an actual song with actual lyrics, here. I always thought that it was some instrumental composition by someone, all by itself. I saw the video of Vicky Leandros’ version only today, after an Internet search. And, the browsing (mostly Wiki!) also reveals that the singer’s real name was Vassiliki Papathanassiou, not Vicky Leandros. Though in the video recording she sometimes looks Asian, she actually was a Greek singer who sang in French while representing Luxembourg in the 1967 Eurovision Song competition, where she was placed 4th. And, Mauriat’s version, per Wiki, “became the only number-one hit by a French artist to top [sic] the “Billboard Hot 100” in America.” Tch… Are they sure about this last bit? I mean, it serves to give just too much credit to the Americans, don’t you think?]

[E&OE]

An American academic who will moderate out these two questions…

This post will be (relatively) short. It simply is to note an instance of my questions being moderated out, by an American academic.

The academic in question is Dr. Peter Woit of the Mathematics department of Columbia University (an Ivy League university) in New York, USA [^]. His latest blog post is about a new course for undergraduate students of mathematics: Quantum Mechanics for Mathematicians [^]. After going through his tentative syllabus (pdf [^]) available off the course Web page [^] I had asked the following couple of questions by way of a comment at his blog post:

Two questions:
(i) Would the typical student have had a prior course on quantum physics? on modern physics?
(ii) What would be the learning objectives/outcomes?

Ajit
[E&OE]

Though I did not take a printout of it, I did see that my comment made an appearance on the blog post yesterday. Its position was immediately after one Sadiq Ahmed’s comment on September 4, 2012 at 3:50 am.

However, my above-mentioned comment was found deleted (i.e. moderated out) just today.

The deletion was not a total surprise because I had noticed the FAQ page [^] of his blog right yesterday. Especially relevant is his answer to the question # 2: “Why did you delete my comment?” [^]:

I delete a lot of the comments submitted here. For some postings, the majority of submitted comments get deleted. I don’t delete comments because the commenter disagrees with me, actually comments agreeing with me are deleted far more often than ones that disagree with me. The overall goal is to try and maintain a comment section worth reading, so comments should ideally be well-informed and tell us something true that we didn’t already know. The most common reason for deleting a comment is that it’s off-topic. Often people are inspired by something in a posting to start discussing something else that interests them and that they feel is likely to interest others here. Unfortunately I have neither the time nor inclination to take on the thankless job of running a general discussion forum here.

Now, two questions for you, the reader:

(i) Do the questions I raised meet any of the reasons mentioned in the Columbia-paid professor’s publicly stated policy? Any reasonably similar reason?

(ii) In view of the Columbia-paid professor’s answers to others (and I don’t supply link to these answers here; after all, who knows, he might later on delete those answers, too!), it seems that it was not the first one of my two questions which was bothersome to him; he seems to have addressed it, even if indirectly, in replies to others. The bothersome question, it seems, could only be the second, the one concerned with learning objectives/outcomes.

Moral?

To continue forward from what I had mentioned in my last post below, but now stated in somewhat better terms: The non-A, taken by itself and in the absence of identification of A, is not a statement of identity. The non-A requires A for its identity; but the A does not require the non-A. In other words, the “reverse” situation by itself is not completely at par with just the “forward” situation taken in reverse. This is an epistemological instance of the same kind of asymmetry that exists between food and the set consisting of poison + minerals + chemically inert elements, or between life and the set consisting of death + coma + whatever similar.

In a way, in both morality and epistemology, two negatives do not make a positive. (Mathematics is too narrow a science, and so is CS.)

What’s the relevance of that here, you ask? OK. I mean it this way. The Columbia University-paid professor has written a book called “Not Even Wrong.” His 15+ minutes of fame mostly trace back to that book; they mostly do not rest on the mathematics courses he teaches (and gets paid for) at Columbia. The book, in turn, acquired its own 15+ minutes of fame because it criticized string theorists. String theory is not a theory of physics; it’s just a bundle of some arbitrary pronouncements; its “ex post facto” nature is what the better among string theorists themselves concede, but only in private. I have not read his book. (In case you didn’t know, the expression “not even wrong” is not original.)

… Now, having said that much, I will leave the task of making the connection between the above two/three paragraphs as an exercise for the reader.

(OK. A hint: Finding enemies of enemies is a stupid way of making friends, even if Berkeley/MIT/Princeton/CalTech/Michigan/Google/etc. folks follow that policy.)

Coming back to the deletion matter itself.

Was I annoyed? Of course, I was. That’s why I decided to write this post. I have interacted with a lot of professors thus far. Needless to add, from all over the world. Including from the top 20 in whatever latest ranking scheme that is popular today. (Check out my iMechanica blog over the many years by now. That’s just an example.)

However, I can’t (at least off-hand) think of a single professor who would:
(i) consider the two questions I raised as ill-informed or off-topic in the context of a post like that,
(ii) possibly take offense at raising them, esp. the second (and remember, this is not a “seminar” or “special studies” or “extra-mural” course at the graduate level, it’s an undergraduate course), or
(iii) possibly find some ingenious (perhaps even mathematical) way to interpret the two questions, esp. the second one as somehow indicative of my being in agreement with him.

As to finding a way to interpret that I was being in agreement (and not just asking a question): Yes, many, including the mathematical and computer-science bastards, esp. those in the USA, are completely capable of being inventive in coming to agreement in this manner, not just disagreements. When you don’t care to be concerned about reality, being inventive is easy. As Lokmaanya Tilak once put it (words not exact, only heard as a legend, but completely believable given what he was like):

(Marathi) “kalpanaa, kalpanaa, kaay mhaNataa, tumhi? tyaa shaniwarwaaDyaachyaa ithe jaavun chaar aaNyaachaa gaanjaa aaNun khallaa tari waaTTel tevadhyaa kalpanaa suchataat maaNasaannaa.”

Nearest English translation:

“Ideas, ideas, what [more like why] do you talk of [more like mention] ideas? Even having gone to that “shaniwar waadaa” [where the daily bazaar of Pune used to be held in Tilak’s time] and having taken cannabis worth four annas [then exactly equal to 1/4th of a rupee; an amount today worth about, say, Rs. 100–150 or so], as many ideas as desired, occur to men.”

So, being “inventive” is easy—if reality is not your concern.

Anyway, coming back to those three possibilities: I can’t, at least off-hand, think of any professor who would pick any one of them.

(BTW, here, I was mostly thinking of engineering department professors. Even as I was typing this, it occurred to me that there could be professors in the CS/maths/physics/humanities/related departments who could possibly do that, i.e., be so “inventive”—apart from some very, very rare engineering professors like the guy who failed me in my PhD qualifiers. But in the CS/maths/physics/humanities/etc. departments, it seems a far more widespread thing, or a thing very easy to do. After all, Dr. Scott Aaronson, the TIBCO Career Development Associate Professor of Electrical Engineering and Computer Science at MIT [^], has still not answered my question—one involving a counter-example, not just an “esoteric philosophic” point—posed on his blog [^]. … Or, maybe, being at an Ivy League school lends that extra (Marathi) “chaar aaNe” effect. Or, maybe, being in the USA is enough for that (Marathi) “chaar aaNe” effect, though I am not too sure on this last count—I myself once worked with TIBCO, right during its pre-IPO days, when it wasn’t even 100 people strong, and they had tried hard to lure me into working with them permanently, but even back then I was firm on getting a green-card first and moving on to CAE and physics immediately afterwards. Thanks to many Americans’ machinations (including “follow-up”s, psychic attacks and whatnot), the green-card didn’t happen, but, yes, the CAE and physics did happen—in India (to the shame of Americans).)

And to think that this guy is employed and gets paid by Columbia University… Or, maybe, it is precisely because he is employed and gets paid by Columbia University….

Yeah, Americans, pay him. Give him a platform to promote a few other Americans etc. as his favorite commentators. But don’t ever ask him to explicitly identify the learning objectives/outcomes of a course he teaches (before this post of mine appeared, of course). Not even for a course that is ambitious enough to run for two semesters, not just one. Not even for a course sequence that occurs at the undergraduate level. But, yes, pay him. And others like him. In an Ivy League school. In the top 5 schools. In the top 2 schools. Shower VC funding on them. Whatever. Yeah. Do that. Yeah. Keep on doing that.

* * * * *   * * * * *   * * * * *

No “A Song I Like” section, once again. I still remain jobless. Keep that in mind.

[This is an initial draft, published on September 5, 2012, 11:41 AM, IST. Maybe I will make some minor corrections/updates later on.]
[E&OE]