HNY (Marathi). Also, a bit about modern maths.

Happy New (Marathi) Year!

OK.

I will speak in “aaeechee bhaashaa”  (lit.: mother’s language).

“gudhi-paaDawyaachyaa haardik shubhechchhaa.” (lit.: hearty compliments [on the occasion] of “gudhi-paaDawaa” [i.e. the first day of the Marathi new year  [^]].)


I am still writing up my notes on scalars, vectors, tensors, and CFD (cf. my last post). The speed is good. I am making sure that I remain below the RSI [^] detection levels.


BTW, do you know how difficult it can get to explain even the simplest of concepts once mathematicians have had a field day about it? (And especially after Americans have praised them for their efforts?) For instance, even a simple idea like, say, the “dual space”?

Did anyone ever give you a hint (or even a hint of a hint) that the idea of “dual space” is nothing but a bloody stupid formalization based on nothing but the idea of taking the transpose of a vector and using it in the dot product? Or the fact that the idea of the transpose of a vector means essentially nothing more than taking the same old three (or n number of) scalar components, but interpreting them to mean a (directed) planar area instead of an arrow (i.e. a directed line segment)? Or the fact that this entire late 19th–early 20th century intellectual enterprise springs from no grounds more complex than the fact that the equation to the line is linear, and so is the equation to the plane?
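In case you want to see just how simple the thing actually is, here is a minimal NumPy sketch of precisely this claim (all the numbers in it are mine, made up purely for illustration):

```python
import numpy as np

# A "primary" vector: a column vector (an arrow).
v = np.array([[1.0], [2.0], [3.0]])

# Its "dual" counterpart: just the transpose, i.e. a row vector.
v_dual = v.T

# The dual vector "acting on" another vector is nothing but the dot product.
w = np.array([[4.0], [5.0], [6.0]])
action = (v_dual @ w).item()              # row @ column -> 1x1 matrix -> scalar
dot = float(np.dot(v.ravel(), w.ravel())) # the plain old dot product

print(action, dot)   # both 32.0
```

That, essentially, is the entire “profundity” of it: a row vector acting on a column vector.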

[Yes, dear American, it’s the equation not an equation, and the equation is not of a line, but to the line. Ditto, for the case of the plane.]

Oh, but no. You go ask any mathematician worth his salt to explain the idea (say of the dual space), and this modern intellectual idiot would immediately launch himself into blabbering endlessly about “fields” (by which he means something other than what either a farmer or an engineer means; he also knows that he means something else; further, he also knows that not knowing this fact, you are getting confused; but, he doesn’t care to even mention this fact to you let alone explain it (and if you catch him, he ignores you and turns his face towards that other modern intellectual idiot aka the theoretical physicist (who is all ears to the mathematician, BTW))), “space” (ditto), “functionals” (by which term he means two different things even while strictly within the context of his own art: one thing in linear algebra and quite another thing in the calculus of variations), “modules” (neither a software module nor the lunar one of Apollo 11—and generally speaking, most any modern mathematical idiot would have become far too generally incompetent to be able to design either), “ring” (no, he means neither an engagement nor a bell), “linear forms” (no, neither Picasso nor sticks), “homomorphism” (no, not a gay in the course of adding on or shedding body-weight), etc. etc. etc.

What is more, the idiot would even express surprise at the fact that the way he speaks about his work makes you feel as if you are far too incompetent to understand his art, and always will be. And that’s what he wants, so that his means of livelihood is protected.

(No jokes. Just search for any of the quoted terms on the Wiki/Google. Or, actually talk to an actual mathematician about it. Just ask him this one question: Essentially speaking, is there something more to the idea of a dual space than transposing—going from an arrow to a plane?)

So, it’s not that no one has written about these ideas before. The trouble is precisely that they have, including the extent to which they have, and the way they did.

And therefore, writing about the same ideas but in plain(er) language (but sufficiently accurately) gets tough, extraordinarily tough.

But I am trying. … Don’t keep too high a set of hopes… but well, at least, I am trying…


BTW, talking of fields and all, here are a few interesting stories (starting from today’s ToI, and after a bit of a Google search)[^][^] [^][^].


A Song I Like:

(Marathi) “maajhyaa re preeti phulaa”
Music: Sudhir Phadke
Lyrics: Ga. Di. Madgulkar
Singers: Asha Bhosale, Sudhir Phadke

 

 


In maths, the boundary is…

In maths, the boundary is a verb, not a noun.

It’s an active something, that, through certain agencies (whose influence, in the usual maths, is wholly captured via differential equations) actually goes on to act [directly or indirectly] over the entirety of a [spatial] region.

Mathematicians have come to forget about this simple physical fact, but by the basic rules of knowledge, that’s how it is.

They love to portray the BV (boundary-value) problems in terms of some dead thing sitting at the boundary, esp. for the Dirichlet variety of problems (esp. for the case when the field variable is zero out there), but that’s not what the basic nature of the abstraction is actually like. You couldn’t possibly build the very abstraction of a boundary unless you first presupposed that what it represented in maths was an active [read: physically active] something!
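To see the boundary actively determining the entire region, here is a minimal numerical sketch (the numbers are my own, assumed purely for illustration), for the simplest possible 1D Dirichlet problem, u''(x) = 0 on (0, 1):

```python
import numpy as np

# 1D Dirichlet problem: u''(x) = 0 on (0, 1), u(0) = a, u(1) = b.
# The boundary values alone fix u at *every* interior point.
n = 9                       # number of interior grid points
a, b = 10.0, 50.0           # boundary values (hypothetical numbers)
h = 1.0 / (n + 1)

# Standard second-order finite-difference Laplacian (tridiagonal matrix).
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
rhs = np.zeros(n)
rhs[0] -= a                 # the boundary enters through the right-hand side...
rhs[-1] -= b                # ...and thereby acts on the whole interior

u = np.linalg.solve(A, rhs)

# For the 1D Laplace equation, the exact solution is the straight
# line between the two boundary values.
x = np.linspace(h, 1.0 - h, n)
exact = a + (b - a) * x
err = float(np.max(np.abs(u - exact)))
print(err)   # ~ 0: the interior is fully determined by the boundary
```

Change either boundary value and every interior value changes with it; that is the “active something” in action.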

Keep that in mind; keep on reminding yourself at least 10^n times every day, where n is an integer \ge 1.

 


A Song I Like:

[Unlike most other songs, this was an “average” one in my [self-]esteemed teenage opinion, formed after listening to it on a poor-reception-area radio in an odd town at some odd times. … It changed forever to a “surprisingly wonderful one” the moment I saw the movie in my SE (second year engineering) while at COEP. … And, I haven’t gotten out of that impression yet… .]

(Hindi) “main chali main chali, peechhe peeche jahaan…”
Singers: Lata Mangeshkar, Mohammad Rafi
Music: Shankar-Jaikishan
Lyrics: Shailendra


[Maybe an editing pass would be due tomorrow or so?]

 

Is something like a re-discovery of the same thing by the same person possible?

Yes, we continue to remain very busy.


However, in spite of all that busy-ness, in whatever spare time I have [in the evenings, sometimes at nights, why, even on early mornings [which is quite unlike me, come to think of it!]], I cannot help but “think” in a bit “relaxed” [actually, abstract] manner [and by “thinking,” I mean: musing, surmising, etc.] about… about what else but: QM!

So, I’ve been doing that. Sort of like, relaxed distant wonderings about QM…

Idle musings like that are very helpful. But they also carry a certain danger: it is easy to begin to believe your own story, even if the story itself is not borne out by well-established equations (i.e. by physic-al evidence).

But keeping that part aside, and thus coming to the title question: Is it possible that the same person makes the same discovery twice?

It may be difficult to believe so, but I… I seem to have managed to pull precisely such a trick.

Of course, the “discovery” in question is, relatively speaking, only a part of the whole story, and not the whole story itself. Still, I do think that I had discovered a certain important part of a conclusion about QM a while ago, and then, later on, had completely forgotten about it, and then, in a slow, patient process, I seem now to have worked inch-by-inch to reach precisely the same old conclusion.

In short, I have re-discovered my own (unpublished) conclusion. The original discovery was maybe in the first half of this calendar year. (I might even have made a hand-written note about it; I need to look up my hand-written notes.)


Now, about the conclusion itself. … I don’t know how to put it best, but I seem to have reached the conclusion that the postulates of quantum mechanics [^], say as stated by Dirac and von Neumann [^], have been conceptualized inconsistently.

Please note the issue and the statement I am making, carefully. As you know, more than 9 interpretations of QM [^][^][^] have been acknowledged right in the mainstream studies of QM [read: University courses] themselves. Yet, none of these interpretations, as far as I know, goes on to actually challenge the quantum mechanical formalism itself. They all do accept the postulates just as presented (say by Dirac and von Neumann, the two “mathematicians” among the physicists).

Coming to my own position: I, too, used to say exactly the same thing. I used to say that I agree with the quantum postulates themselves. My position was that the conceptual aspects of the theory—at least most of them—are missing, and so, these need to be supplied, and, if need be, these also need to be expanded.

But, as far as the postulates themselves go, mine used to be the same position as that in the mainstream.

Until this morning.

Then, this morning, I came to realize that I have “re-discovered” (i.e. independently discovered, for the second time) the following: that I actually should not be buying into the quantum postulates just as stated; that I should be saying that there are theoretical/conceptual errors/misconceptions/misrepresentations woven right into the very process of formalization which produced these postulates.

Since I think that I should be saying so, consider that, with this blog post, I have said so.


Just one more thing: the above doesn’t mean that I don’t accept Schrödinger’s equation. I do. In fact, I now seem to embrace Schrödinger’s equation with even more enthusiasm than I have ever done before. I think it’s a very ingenious and very beautiful equation.


A Song I Like:

(Hindi) “tum jo hue mere humsafar”
Music: O. P. Nayyar
Singers: Geeta Dutt and Mohammad Rafi
Lyrics: Majrooh Sultanpuri


Update on 2017.10.14 23:57 IST: Streamlined a bit, as usual.

 

Fluxes, scalars, vectors, tensors…. and, running in circles about them!

0. This post is written for those who know something about Thermal Engineering (i.e., fluid dynamics, heat transfer, and transport phenomena), say up to the UG level at least. [A knowledge of Design Engineering, in particular, of the tensors as they appear in solid mechanics, would be helpful to have, but is not necessary. After all, contrary to what many UGC- and AICTE-approved (Full) Professors of Mechanical Engineering teaching ME (Mech – Design Engineering) courses in SPPU and other Indian universities believe, tensors appear not only in solid mechanics but also in fluid mechanics; in fact, the fluids phenomena make it (only so slightly) easier to understand this concept. [But all these cartoon characters, even if they don’t know even this plain and simple a fact, can always be fully relied upon (by anyone) to raise objections about my Metallurgy background, when it comes to my own approval, at any time! [Indians!!]]]

In this post, I write a bit about the following question:

Why is the flux \vec{J} of a scalar \phi a vector quantity, and not a mere number (which is aka a “scalar,” in certain contexts)? Why is it not a tensor—whatever the hell the term means, physically?

And, what is the best way to define a flux vector anyway?


1.

One easy answer is that if the flux is a vector, then we can establish a flux-gradient relationship. Such relationships happen to appear as statements of physical laws in all the disciplines wherever the idea of a continuum was found useful. So the scope of the applicability of the flux-gradient relationships is very vast.

The reason to define the flux as a vector, then, becomes: because the gradient of a scalar field is a vector field, that’s why.

But this answer only tells us about one of the end-purposes of the concept, viz., how it can be used. And then the answer provided is: for the formulation of a physical law. But this answer tells us nothing by way of the very meaning of the concept of flux itself.


2.

Another easy answer is that if it is a vector quantity, then it simplifies the maths involved. Instead of having to remember to take the right \theta and then to multiply the relevant scalar quantity by the \cos of this \theta, we can more succinctly write:

q = \vec{J} \cdot \vec{S} (Eq. 1)

where q is the quantity of \Phi (the extensive scalar property corresponding to the intensive property \phi of the flowing fluid) transported across a given finite surface \vec{S} per unit time, and \vec{J} is the flux of \Phi.
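A quick numerical illustration of this shorthand (with numbers I have made up purely for the purpose):

```python
import numpy as np

# Hypothetical flux vector J and surface vector S.
J = np.array([3.0, 4.0, 0.0])
S = np.array([2.0, 0.0, 0.0])

# The shorthand: q = J . S
q_dot = float(np.dot(J, S))

# The longhand it replaces: |J| |S| cos(theta)
cos_theta = np.dot(J, S) / (np.linalg.norm(J) * np.linalg.norm(S))
q_long = float(np.linalg.norm(J) * np.linalg.norm(S) * cos_theta)

print(q_dot, q_long)   # identical: 6.0
```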

However, apart from being a mere convenience of notation—a useful shorthand—this answer once again touches only on the end-purpose, viz., the fact that the idea of flux can be used to calculate the amount q of the transported property \Phi.

There also is another problem with this, second, answer.

Notice that in Eq. 1, \vec{J} has not been defined independently of the “dotting” operation.

If the very quantity to be defined has an operator acting on it on one side of an equation, and if a suitable anti- or inverse-operator is available, then you can apply the inverse operator to both sides of the equation, and thereby “free up” the quantity to be defined. This way, the quantity to be defined becomes available all by itself, and so, its definition in terms of certain hierarchically preceding other quantities also becomes straightforward.

OK, the description looks more complex than it is, so let me illustrate it with a concrete example.

Suppose you want to define some vector \vec{T}, but the only basic equation available to you is:

\vec{R} = \int \text{d} x \vec{T}, (Eq. 2)

assuming that \vec{T} is a function of position x.

In Eq. 2, first, the integral operator must operate on \vec{T}(x) so as to produce some other quantity, here, \vec{R}. Thus, Eq. 2 can be taken as a definition for \vec{R}, but not for \vec{T}.

However, fortunately, a suitable inverse operator is available here; the inverse of integration is differentiation. So, what we do is to apply this inverse operator on both sides. On the right hand-side, it acts to let \vec{T} be free of any operator, to give you:

\dfrac{\text{d}\vec{R}}{\text{d}x} = \vec{T} (Eq. 3)

It is the Eq. 3 which can now be used as a definition of \vec{T}.

In principle, you don’t have to go to Eq. 3. In principle, you could perhaps venture to use a bit of notation abuse (the way the good folks in the calculus of variations and integral transforms always did), and say that the Eq. 2 itself is fully acceptable as a definition of \vec{T}. IMO, despite the appeal to “principles”, it still is an abuse of notation. However, I can see that the argument does have at least some point about it.

But the real trouble with using Eq. 1 (reproduced below)

q = \vec{J} \cdot \vec{S} (Eq. 1)

as a definition for \vec{J} is that no suitable inverse operator exists when it comes to the dot operator.
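The non-existence of such an inverse is easy to demonstrate numerically (the numbers below are mine, for illustration): any component of \vec{J} lying in the plane of the surface is simply lost in the dot product, so infinitely many different \vec{J}s produce the same q.

```python
import numpy as np

# Why q = J . S cannot be "inverted" to recover J:
# many different vectors J give the same scalar q for the same S.
S = np.array([0.0, 0.0, 2.0])
J1 = np.array([1.0, 1.0, 3.0])

# Add any vector perpendicular to S: the dot product cannot tell the difference.
perp = np.array([5.0, -7.0, 0.0])
J2 = J1 + perp

q1 = float(np.dot(J1, S))
q2 = float(np.dot(J2, S))
print(q1, q2)   # both 6.0 -- the dot operator destroys information
```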


3.

Let’s try another way to attempt defining the flux vector, and see what it leads to. This approach goes via the following equation:

\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} (Eq. 4)

where \hat{n} is the unit normal to the surface \vec{S}, defined thus:

\hat{n} \equiv \dfrac{\vec{S}}{|\vec{S}|} (Eq. 5)

Then, as the crucial next step, we introduce one more equation for q, one that is independent of \vec{J}. For phenomena involving fluid flows, this extra equation is quite simple to find:

q = \phi \rho \dfrac{\Omega_{\text{traced}}}{\Delta t} (Eq. 6)

where \phi is the mass-density of \Phi (the scalar field whose flux we want to define), \rho is the volume-density of mass itself, and \Omega_{\text{traced}} is the volume that is imaginarily traced by that specific portion of fluid which has imaginarily flowed across the surface \vec{S} in an arbitrary but small interval of time \Delta t. Notice that \Phi is the extensive scalar property being transported via the fluid flow across the given surface, whereas \phi is the corresponding intensive quantity.

Now express \Omega_{\text{traced}} in terms of the imagined maximum normal distance from the plane \vec{S} up to which the forward moving front is found extended after \Delta t. Thus,

\Omega_{\text{traced}} = \xi |\vec{S}| (Eq. 7)

where \xi is the traced distance (measured in a direction normal to \vec{S}). Now, resolving the flow-aligned traced distance along the normal direction (the same geometric fact by which a slanted prism has the same volume as the straight one of equal normal height), we have that:

\xi = \delta \cos\theta (Eq. 8)

where \delta is the traced distance in the direction of the flow, and \theta is the angle between the unit normal to the plane \hat{n} and the flow velocity vector \vec{U}. Using vector notation, Eq. 8 can be expressed as:

\xi = \vec{\delta} \cdot \hat{n} (Eq. 9)
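A quick check of Eq. 8 against Eq. 9, with my own illustrative numbers:

```python
import numpy as np

# Check Eq. 8 (scalar form) against Eq. 9 (vector form). Hypothetical numbers.
n_hat = np.array([0.0, 0.0, 1.0])        # unit normal to the surface
delta_vec = np.array([3.0, 0.0, 4.0])    # traced distance along the flow

delta = np.linalg.norm(delta_vec)
cos_theta = np.dot(delta_vec, n_hat) / delta

xi_eq8 = float(delta * cos_theta)        # Eq. 8: delta * cos(theta)
xi_eq9 = float(np.dot(delta_vec, n_hat)) # Eq. 9: delta_vec . n_hat

print(xi_eq8, xi_eq9)   # both 4.0
```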

Now, by definition of \vec{U}:

\vec{\delta} = \vec{U} \Delta t, (Eq. 10)

Substituting Eq. 10 into Eq. 9, we get:

\xi = \vec{U} \Delta t \cdot \hat{n} (Eq. 11)

Substituting Eq. 11 into Eq. 7, we get:

\Omega_{\text{traced}} = \vec{U} \Delta t \cdot \hat{n} |\vec{S}| (Eq. 12)

Substituting Eq. 12 into Eq. 6, we get:

q = \phi \rho \dfrac{\vec{U} \Delta t \cdot \hat{n} |\vec{S}|}{\Delta t} (Eq. 13)

Cancelling out the \Delta t, Eq. 13 becomes:

q = \phi \rho \vec{U} \cdot \hat{n} |\vec{S}| (Eq. 14)

Having got an expression for q that is independent of \vec{J}, we can now use it in order to define \vec{J}. Thus, substituting Eq. 14 into Eq. 4:

\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} = \dfrac{\phi \rho \vec{U} \cdot \hat{n} |\vec{S}|}{|\vec{S}|} \hat{n} (Eq. 16)

Cancelling out the two |\vec{S}|s (because |\vec{S}| is a scalar—you can always divide any term by a scalar (or even by a complex number), but not by a vector), we finally get:

\vec{J} \equiv \phi \rho \vec{U} \cdot \hat{n} \hat{n} (Eq. 17)
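Before proceeding, it is easy to verify numerically that the \vec{J} of Eq. 17 is consistent with Eq. 1 and with Eq. 14 (all the numbers below are mine, assumed only for illustration):

```python
import numpy as np

# Numerical check of Eq. 17, and of its consistency with Eq. 1 and Eq. 14.
phi, rho = 2.0, 1000.0
U = np.array([1.0, 2.0, 2.0])            # flow velocity (hypothetical)
S = np.array([0.0, 0.0, 4.0])            # surface vector (hypothetical)
n_hat = S / np.linalg.norm(S)

# Eq. 17, grouping (phi rho U . n_hat) as a scalar:
J = (phi * rho * np.dot(U, n_hat)) * n_hat

# Eq. 1 with this J:
q_from_J = float(np.dot(J, S))

# Eq. 14, derived independently of J:
q_direct = float(phi * rho * np.dot(U, n_hat) * np.linalg.norm(S))

print(q_from_J, q_direct)   # both 16000.0
```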


4. Comments on Eq. 17

In Eq. 17, there is this curious sequence: \hat{n} \hat{n}.

It’s a sequence of two vectors, but the vectors apparently are not connected by any of the operators that are taught in the Engineering Maths courses on vector algebra and calculus—there is neither the dot (\cdot) operator nor the cross \times operator appearing in between the two \hat{n}s.

But, for the time being, let’s not get too much perturbed by the weird-looking sequence. For the time being, you can mentally insert parentheses like these:

\vec{J} \equiv \left[ \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \right) \right] \hat{n} (Eq. 18)

and see that each of the two terms within the parentheses is a vector, and that these two vectors are connected by a dot operator so that the terms within the square brackets all evaluate to a scalar. According to Eq. 18, the scalar magnitude of the flux vector is:

|\vec{J}| = \left( \phi \rho \vec{U}\right) \cdot \left( \hat{n} \right) (Eq. 19)

and its direction is given by: \hat{n} (the second one, i.e., the one which appears in Eq. 18 but not in Eq. 19).


5.

We explained away our difficulty about Eq. 17 by inserting parentheses at suitable places. But this procedure of inserting mere parentheses looks, by itself, conceptually very attractive, doesn’t it?

If by not changing any of the quantities or the order in which they appear, and if by just inserting parentheses, an equation somehow begins to make perfect sense (i.e., if it seems to acquire a good physical meaning), then we have to wonder:

Since it is possible to insert parentheses in Eq. 17 in some other way, in some other places—to group the quantities in some other way—what physical meaning would such an alternative grouping have?

That’s a delectable possibility, potentially opening new vistas of physico-mathematical reasonings for us. So, let’s pursue it a bit.

What if the parentheses were to be inserted the following way?:

\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 20)

On the right hand-side, the terms in the second set of parentheses evaluate to a vector, as usual. However, the terms in the first set of parentheses are special.

The fact of the matter is, there is an implicit operator connecting the two vectors, and if it is made explicit, Eq. 20 would rather be written as:

\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 21)

The \otimes operator, as it so happens, is a binary operator that operates on two vectors (which, in general, need not necessarily be one and the same vector, as is the case here; and whose order with respect to the operator does matter). It produces a new mathematical object called a tensor (here, a tensor of the second order).

The general form of Eq. 21 is like the following:

\vec{V} = \vec{\vec{T}} \cdot \vec{U} (Eq. 22)

where we have put two arrows on the top of the tensor, to bring out the idea that it has something to do with two vectors (in a certain order). Eq. 22 may be read as the following: Begin with an input vector \vec{U}. When it is multiplied by the tensor \vec{\vec{T}}, we get another vector, the output vector: \vec{V}. The tensor quantity \vec{\vec{T}} is thus a mapping between an arbitrary input vector and its uniquely corresponding output vector. It also may be thought of as a unary operator which accepts a vector on its right hand-side as an input, and transforms it into the corresponding output vector.
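Here is a minimal NumPy sketch of Eq. 22 for our particular tensor \hat{n} \otimes \hat{n} (the input vector is my own, made up for illustration); note how the tensor maps the input vector onto the normal direction:

```python
import numpy as np

# The tensor n_hat (x) n_hat as a mapping from input to output vectors.
n_hat = np.array([0.0, 0.0, 1.0])
T = np.outer(n_hat, n_hat)         # the (x) operator: here, a 3x3 matrix

U_in = np.array([3.0, 5.0, 7.0])   # an arbitrary input vector
V_out = T @ U_in                   # Eq. 22: V = T . U

# n (x) n projects the input vector onto the direction n_hat:
print(V_out)   # [0. 0. 7.]
```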


6. “Where am I?…”

Now is the time to take a pause and ponder about a few things. Let me begin doing that, by raising a few questions for you:

Q. 6.1:

What kind of a bargain have we ended up with? We wanted to show how the flux of a scalar field \Phi must be a vector. However, in the process, we seem to have adopted an approach which says that the only way the flux—a vector—can at all be defined is in reference to a tensor—a more advanced concept.

Instead of simplifying things, we seem to have ended up complicating matters. … Have we? Really? … Can we keep the physical essentials of the approach all the same and yet avoid making a reference to the tensor concept in our definition of the flux vector? Exactly how?

(Hint: Look at the above development very carefully once again!)

Q. 6.2:

In Eq. 20, we put the parentheses in this way:

\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 20, reproduced)

What would happen if we were to group the same quantities, but alter the order of the operands for the dot operator?  After all, the dot product is commutative, right? So, we could have easily written Eq. 20 rather as:

\vec{J} \equiv \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \hat{n} \right) (Eq. 20b)

What could be the reason why in writing Eq. 20, we might have made the choice we did?

Q. 6.3:

We wanted to define the flux vector for all fluid-mechanical flow phenomena. But in Eq. 21, reproduced below, what we ended up having was the following:

\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 21, reproduced)

Now, from our knowledge of fluid dynamics, we know that Eq. 21 seemingly stands only for one kind of a flux, namely, the convective flux. But what about the diffusive flux? (To know the difference between the two, consult any good book/course-notes on CFD using FVM, e.g. Jayathi Murthy’s notes at Purdue, or Versteeg and Malalasekera’s text.)

Q. 6.4:

Try to pursue this line of thought a bit:

Start with Eq. 1 again:

q = \vec{J} \cdot \vec{S} (Eq. 1, reproduced)

Express \vec{S} as a product of its magnitude and direction:

q = \vec{J} \cdot |\vec{S}| \hat{n} (Eq. 23)

Divide both sides of Eq. 23 by |\vec{S}|:

\dfrac{q}{|\vec{S}|} = \vec{J} \cdot \hat{n} (Eq. 24)

“Multiply” both sides of Eq. 24 by \hat{n}:

\dfrac{q} {|\vec{S}|} \hat{n} = \vec{J} \cdot \hat{n} \hat{n} (Eq. 25)

We seem to have ended up with a tensor once again! (and more rapidly than in the development in section 4. above).

Now, looking at what kind of a change the left hand-side of Eq. 24 undergoes when we “multiply” it by a vector (which is: \hat{n}), can you guess something about what the “multiplication” on the right hand-side by \hat{n} might mean? Here is a hint:

To multiply a scalar by a vector is meaningless, really speaking. First, you need to have a vector space, and then, you are allowed to take any arbitrary vector from that space, and scale it up (without changing its direction) by multiplying it with a number that acts as a scalar. The result at least looks the same as “multiplying” a scalar by a vector.

What then might be happening on the right hand side?
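If you want a numerical hint for Q.6.4 (only a sketch, with my own made-up numbers), compare the two sides of Eq. 25 directly:

```python
import numpy as np

# Q.6.4, numerically: compare the two sides of Eq. 25. Illustrative numbers.
n_hat = np.array([0.0, 1.0, 0.0])
J = np.array([2.0, 5.0, 1.0])

# Left hand-side: (q / |S|) n_hat, where q / |S| = J . n_hat (from Eq. 24).
lhs = np.dot(J, n_hat) * n_hat

# Right hand-side: J . (n (x) n) -- the tensor appears.
rhs = J @ np.outer(n_hat, n_hat)

print(lhs, rhs)   # both [0. 5. 0.]: the normal component of J, as a vector
```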

Q.6.5:

Recall your knowledge (i) that vectors can be expressed as single-column or single-row matrices, and (ii) how matrices can be algebraically manipulated, esp. the rules for their multiplications.

Try to put the above developments using an explicit matrix notation.

In particular, pay attention to the matrix-algebraic notation for the dot product between a row- or column-vector and a square matrix, and to the effect it has on your answer to question Q.6.2 above. [Hint: Try to use the transpose operator if you reach what looks like a dead-end.]
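A minimal sketch of the matrix-notation exercise of Q.6.5 (one possible reading; the numbers are mine): note which side of the square matrix the vector sits on, and which shape (row or column) results.

```python
import numpy as np

# Q.6.5 sketch: the same objects in explicit matrix notation.
n_hat = np.array([[0.0], [0.0], [1.0]])   # a single-column matrix
T = n_hat @ n_hat.T                       # n (x) n: a 3x3 square matrix

v = np.array([[2.0], [3.0], [4.0]])       # phi * rho * U, as a column matrix

left_form = T @ v      # (n (x) n) . v : a column vector comes out
right_form = v.T @ T   # v . (n (x) n): the transpose is forced on us; a row comes out

print(left_form.shape, right_form.shape)   # (3, 1) (1, 3)
```

Same components either way (here, because n (x) n is symmetric), but the shapes differ; that difference is exactly what Q.6.6 asks you to interpret.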

Q.6.6.

Suppose I introduce the following definitions: All single-column matrices are “primary” vectors (whatever the hell it may mean), and all single-row matrices are “dual” vectors (once again, whatever the hell it may mean).

Given these definitions, you can see that any primary vector can be turned into its corresponding dual vector simply by applying the transpose operator to it. Taking the logic to full generality, the entirety of a given primary vector-space can then be transformed into a certain corresponding vector space, called the dual space.

Now, using these definitions, and in reference to the definition of the flux vector via a tensor (Eq. 21), but with the equation now re-cast into the language of matrices, try to identify the physical meaning of the concept of “dual” space. [If you fail to, I will surely provide a hint.]

As a part of this exercise, you will also be able to figure out which of the two \hat{n}s forms the “primary” vector space and which \hat{n} forms the dual space, if the tensor product \hat{n}\otimes\hat{n} itself appears (i) before the dot operator or (ii) after the dot operator, in the definition of the flux vector. Knowing the physical meaning for the concept of the dual space of a given vector space, you can then see what the physical meaning of the tensor product of the unit normal vectors (\hat{n}s) is, here.

Over to you. [And also to the UGC/AICTE-Approved Full Professors of Mechanical Engineering in SPPU and in other similar Indian universities. [Indians!!]]

A Song I Like:

[TBD, after I make sure all LaTeX entries have come out right, which may very well be tomorrow or the day after…]