Fluxes, scalars, vectors, tensors… and, running in circles about them!

0. This post is written for those who know something about Thermal Engineering (i.e., fluid dynamics, heat transfer, and transport phenomena), say up to the UG level at least. [A knowledge of Design Engineering, in particular of the tensors as they appear in solid mechanics, would be helpful to have, but it is not necessary. After all, contrary to what many UGC- and AICTE-approved (Full) Professors of Mechanical Engineering teaching ME (Mech – Design Engineering) courses in SPPU and other Indian universities believe, tensors appear not only in solid mechanics but also in fluid mechanics; in fact, the fluids phenomena make it (only so slightly) easier to understand this concept. [But all these cartoon characters, even if they don’t know even this plain and simple a fact, can always be fully relied upon (by anyone) to raise objections about my Metallurgy background, when it comes to my own approval, at any time! [Indians!!]]]

In this post, I write a bit about the following question:

Why is the flux \vec{J} of a scalar \phi a vector quantity, and not a mere number (which is aka a “scalar,” in certain contexts)? Why is it not a tensor—whatever the hell the term means, physically?

And, what is the best way to define a flux vector anyway?


1.

One easy answer is that if the flux is a vector, then we can establish a flux-gradient relationship. Such relationships happen to appear as statements of physical laws in all the disciplines in which the idea of a continuum has been found useful. So the scope of the applicability of the flux-gradient relationships is very vast.

The reason to define the flux as a vector, then, becomes: because the gradient of a scalar field is a vector field, that’s why.

But this answer only tells us about one of the end-purposes of the concept, viz., how it can be used. And then the answer provided is: for the formulation of a physical law. But this answer tells us nothing by way of the very meaning of the concept of flux itself.
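Still, the flux-gradient idea is easy to play with numerically. Here is a minimal throwaway sketch, in which Fourier's law of heat conduction stands in for a generic flux-gradient relationship; the temperature field and the conductivity k are entirely made up:

```python
import numpy as np

# A made-up scalar field: T(x, y) = x^2 + 3y, sampled on a grid.
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
T = X**2 + 3.0 * Y

# The gradient of a scalar field is a vector field (two components here).
dTdx, dTdy = np.gradient(T, x, y)

# A Fourier-type flux-gradient law, J = -k grad(T), with a made-up k:
k = 0.5
Jx, Jy = -k * dTdx, -k * dTdy

# At (x, y) = (0.5, 0.5), analytically grad T = (1.0, 3.0),
# so J should come out as about (-0.5, -1.5).
i = j = 50
print(Jx[i, j], Jy[i, j])
```

The point to note is only this: the law makes sense precisely because both sides are vector fields.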


2.

Another easy answer is that if it is a vector quantity, then it simplifies the maths involved. Instead of having to remember to take the right \theta and then to multiply the relevant scalar quantity by the \cos of this \theta, we can more succinctly write:

q = \vec{J} \cdot \vec{S} (Eq. 1)

where q is the amount of \Phi, the extensive scalar property being transported by the fluid flowing across a given finite surface \vec{S}; \phi is the corresponding intensive property; and \vec{J} is the flux of \Phi.

However, apart from being a mere convenience of notation—a useful shorthand—this answer once again touches only on the end-purpose, viz., the fact that the idea of flux can be used to calculate the amount q of the transported property \Phi.
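The equivalence of the two bookkeeping styles, the "remember the right \theta" one and the dot-product shorthand, can be checked in a couple of lines (all the numbers below are made up):

```python
import numpy as np

# A made-up flux vector J and an area vector S at 60 degrees to it.
theta = np.deg2rad(60.0)
J = np.array([2.0, 0.0, 0.0])                            # |J| = 2
S = 3.0 * np.array([np.cos(theta), np.sin(theta), 0.0])  # |S| = 3

# The "take the right theta and multiply by its cos" way:
q_by_angle = np.linalg.norm(J) * np.linalg.norm(S) * np.cos(theta)

# The shorthand of Eq. 1:
q_by_dot = np.dot(J, S)

print(q_by_angle, q_by_dot)   # both equal 2 * 3 * cos(60 deg) = 3.0
```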

There is also another problem with this second answer.

Notice that in Eq. 1, \vec{J} has not been defined independently of the “dotting” operation.

If the very quantity to be defined appears with an operator acting on it on one side of an equation, and if a suitable anti- or inverse-operator is available, then you can apply the inverse operator to both sides of the equation, and thereby “free up” the quantity to be defined. This way, the quantity to be defined becomes available all by itself, and so, its definition in terms of certain hierarchically preceding other quantities also becomes straightforward.

OK, the description looks more complex than it is, so let me illustrate it with a concrete example.

Suppose you want to define some vector \vec{T}, but the only basic equation available to you is:

\vec{R} = \int \vec{T}(x) \; \text{d}x, (Eq. 2)

assuming that \vec{T} is a function of position x.

In Eq. 2, first, the integral operator must operate on \vec{T}(x) so as to produce some other quantity, here, \vec{R}. Thus, Eq. 2 can be taken as a definition for \vec{R}, but not for \vec{T}.

However, fortunately, a suitable inverse operator is available here; the inverse of integration is differentiation. So, what we do is to apply this inverse operator to both sides. On the right-hand side, it acts to free \vec{T} of any operator, to give you:

\dfrac{\text{d}\vec{R}}{\text{d}x} = \vec{T} (Eq. 3)

It is Eq. 3 which can now be used as a definition of \vec{T}.
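The "apply the inverse operator" step can be demonstrated numerically too. In the throwaway sketch below (the particular function chosen for \vec{T}(x) is made up), R is built by cumulative integration as in Eq. 2, and differentiating it as in Eq. 3 recovers T:

```python
import numpy as np

# A made-up vector-valued T(x), sampled densely on [0, 2].
x = np.linspace(0.0, 2.0, 2001)
T = np.stack([np.sin(x), np.cos(x)], axis=1)     # shape (2001, 2)

# R(x) = integral of T dx (Eq. 2), via the cumulative trapezoidal rule.
dx = x[1] - x[0]
R = np.concatenate([np.zeros((1, 2)),
                    np.cumsum(0.5 * (T[1:] + T[:-1]) * dx, axis=0)])

# Applying the inverse operator, differentiation (Eq. 3), recovers T.
T_recovered = np.gradient(R, x, axis=0)
print(np.max(np.abs(T_recovered - T)))   # small (discretization error only)
```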

In principle, you don’t have to go to Eq. 3. In principle, you could perhaps venture to use a bit of notation abuse (the way the good folks in the calculus of variations and integral transforms always did), and say that the Eq. 2 itself is fully acceptable as a definition of \vec{T}. IMO, despite the appeal to “principles”, it still is an abuse of notation. However, I can see that the argument does have at least some point about it.

But the real trouble with using Eq. 1 (reproduced below)

q = \vec{J} \cdot \vec{S} (Eq. 1)

as a definition for \vec{J} is that no suitable inverse operator exists for the dot operator: infinitely many different vectors \vec{J} have one and the same dot product with a given \vec{S}, and so the “dotting” cannot be uniquely undone.
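That the dot product destroys information is trivial to see numerically; a couple of made-up vectors suffice:

```python
import numpy as np

# Given q and S, Eq. 1 alone cannot pin down J: infinitely many vectors
# share the same dot product with S.
S = np.array([0.0, 0.0, 4.0])
q = 8.0

J1 = np.array([0.0, 0.0, 2.0])          # a "plausible" flux vector
J2 = np.array([17.0, -3.0, 2.0])        # a wildly different tangential part

print(np.dot(J1, S), np.dot(J2, S))     # both equal q = 8.0
```

Only the component of \vec{J} along \vec{S} survives the dotting; the tangential part is lost, and no inverse operation can bring it back.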


3.

Let’s try another way to attempt defining the flux vector, and see what it leads to. This approach goes via the following equation:

\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} (Eq. 4)

where \hat{n} is the unit normal to the surface \vec{S}, defined thus:

\hat{n} \equiv \dfrac{\vec{S}}{|\vec{S}|} (Eq. 5)

Then, as the crucial next step, we introduce one more equation for q, one that is independent of \vec{J}. For phenomena involving fluid flows, this extra equation is quite simple to find:

q = \phi \rho \dfrac{\Omega_{\text{traced}}}{\Delta t} (Eq. 6)

where \phi is the mass-density of \Phi (the scalar field whose flux we want to define), \rho is the volume-density of mass itself, and \Omega_{\text{traced}} is the volume that is imaginarily traced by that specific portion of fluid which has imaginarily flowed across the surface \vec{S} in an arbitrary but small interval of time \Delta t. Notice that \Phi is the extensive scalar property being transported via the fluid flow across the given surface, whereas \phi is the corresponding intensive quantity.

Now express \Omega_{\text{traced}} in terms of the imagined maximum normal distance from the plane \vec{S} up to which the forward moving front is found extended after \Delta t. Thus,

\Omega_{\text{traced}} = \xi |\vec{S}| (Eq. 7)

where \xi is the traced distance (measured in a direction normal to \vec{S}). Now, projecting the flow displacement onto the surface normal (the same geometric fact that underlies the base-times-perpendicular-height formula for the area of a parallelogram), we have that:

\xi = \delta \cos\theta (Eq. 8)

where \delta is the traced distance in the direction of the flow, and \theta is the angle between the unit normal to the plane \hat{n} and the flow velocity vector \vec{U}. Using vector notation, Eq. 8 can be expressed as:

\xi = \vec{\delta} \cdot \hat{n} (Eq. 9)

Now, by definition of \vec{U}:

\vec{\delta} = \vec{U} \Delta t, (Eq. 10)

Substituting Eq. 10 into Eq. 9, we get:

\xi = \vec{U} \Delta t \cdot \hat{n} (Eq. 11)

Substituting Eq. 11 into Eq. 7, we get:

\Omega_{\text{traced}} = \vec{U} \Delta t \cdot \hat{n} |\vec{S}| (Eq. 12)

Substituting Eq. 12 into Eq. 6, we get:

q = \phi \rho \dfrac{\vec{U} \Delta t \cdot \hat{n} |\vec{S}|}{\Delta t} (Eq. 13)

Cancelling out the \Delta t, Eq. 13 becomes:

q = \phi \rho \vec{U} \cdot \hat{n} |\vec{S}| (Eq. 14)

Having got an expression for q that is independent of \vec{J}, we can now use it in order to define \vec{J}. Thus, substituting Eq. 14 into Eq. 4:

\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} = \dfrac{\phi \rho \vec{U} \cdot \hat{n} |\vec{S}|}{|\vec{S}|} \hat{n} (Eq. 15)

Cancelling out the two |\vec{S}|s (legitimate, because |\vec{S}| is a nonzero scalar—you can always divide a term by a nonzero scalar, or even by a nonzero complex number, but not by a vector), we finally get:

\vec{J} \equiv \phi \rho \vec{U} \cdot \hat{n} \hat{n} (Eq. 17)


4. Comments on Eq. 17

In Eq. 17, there is this curious sequence: \hat{n} \hat{n}.

It’s a sequence of two vectors, but the vectors apparently are not connected by any of the operators that are taught in the Engineering Maths courses on vector algebra and calculus—there is neither the dot (\cdot) operator nor the cross (\times) operator appearing in between the two \hat{n}s.

But, for the time being, let’s not get too much perturbed by the weird-looking sequence. For the time being, you can mentally insert parentheses like these:

\vec{J} \equiv \left[ \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \right) \right] \hat{n} (Eq. 18)

and see that each of the two terms within the parentheses is a vector, and that these two vectors are connected by a dot operator so that the terms within the square brackets all evaluate to a scalar. According to Eq. 18, the scalar magnitude of the flux vector is:

|\vec{J}| = \left( \phi \rho \vec{U}\right) \cdot \left( \hat{n} \right) (Eq. 19)

and its direction is given by: \hat{n} (the second one, i.e., the one which appears in Eq. 18 but not in Eq. 19).
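For whatever it is worth, the reading of Eqs. 17–19 can be checked numerically. A throwaway sketch (every number in it is made up):

```python
import numpy as np

# Made-up data, only to check the grouping in Eqs. 17--19.
phi, rho = 0.7, 1.2                        # intensive property; mass density
U = np.array([3.0, 1.0, 0.5])              # flow velocity
S = np.array([0.0, 0.0, 2.5])              # area vector of the surface
n_hat = S / np.linalg.norm(S)              # Eq. 5

# Eq. 18: J = [ (phi rho U) . n_hat ] n_hat
J = np.dot(phi * rho * U, n_hat) * n_hat

# Consistency check: Eq. 1 and Eq. 14 must give one and the same q.
q_from_J = np.dot(J, S)                                      # Eq. 1
q_direct = phi * rho * np.dot(U, n_hat) * np.linalg.norm(S)  # Eq. 14

print(J, q_from_J, q_direct)
```

The two q values agree, which is just a numerical restatement of how Eq. 17 was constructed in the first place.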


5.

We explained away our difficulty about Eq. 17 by inserting parentheses at suitable places. But this procedure of inserting mere parentheses looks, by itself, conceptually very attractive, doesn’t it?

If by not changing any of the quantities or the order in which they appear, and if by just inserting parentheses, an equation somehow begins to make perfect sense (i.e., if it seems to acquire a good physical meaning), then we have to wonder:

Since it is possible to insert parentheses in Eq. 17 in some other way, in some other places—to group the quantities in some other way—what physical meaning would such an alternative grouping have?

That’s a delectable possibility, potentially opening new vistas of physico-mathematical reasonings for us. So, let’s pursue it a bit.

What if the parentheses were to be inserted the following way?:

\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 20)

On the right-hand side, the terms in the second set of parentheses evaluate to a vector, as usual. However, the terms in the first set of parentheses are special.

The fact of the matter is, there is an implicit operator connecting the two vectors, and if it is made explicit, Eq. 20 would rather be written as:

\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 21)

The \otimes operator, as it so happens, is a binary operator that operates on two vectors (which in general need not be one and the same vector, as happens to be the case here, and whose order with respect to the operator does matter). It produces a new mathematical object called a tensor (this particular construction, from two vectors, is also known as the tensor product, or a dyad).

The general form of Eq. 21 is like the following:

\vec{V} = \vec{\vec{T}} \cdot \vec{U} (Eq. 22)

where we have put two arrows on top of the tensor, to bring out the idea that it has something to do with two vectors (in a certain order). Eq. 22 may be read as follows: Begin with an input vector \vec{U}. When it is multiplied by the tensor \vec{\vec{T}}, we get another vector, the output vector \vec{V}. The tensor quantity \vec{\vec{T}} is thus a mapping between an arbitrary input vector and its uniquely corresponding output vector. It may also be thought of as a unary operator which accepts a vector on its right-hand side as an input, and transforms it into the corresponding output vector.
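The mapping idea of Eq. 22 is easy to make concrete with matrices. In the throwaway sketch below (the particular vectors are made up), the dyad \hat{n} \otimes \hat{n} of Eq. 21 is just the outer product of \hat{n} with itself, and its action on an input vector is plain matrix multiplication:

```python
import numpy as np

# The dyad n (x) n as a concrete matrix, and its action as a mapping.
n_hat = np.array([0.0, 0.0, 1.0])         # a made-up unit normal
T = np.outer(n_hat, n_hat)                # the tensor n (x) n, a 3x3 matrix

U_in = np.array([3.0, 1.0, 0.5])          # an arbitrary input vector
V_out = T @ U_in                          # Eq. 22: V = T . U

# This particular dyad picks out the component of U along n_hat:
# V = (U . n_hat) n_hat.
print(V_out)                              # [0. 0. 0.5]
```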


6. “Where am I?…”

Now is the time to take a pause and ponder about a few things. Let me begin doing that, by raising a few questions for you:

Q. 6.1:

What kind of a bargain have we ended up with? We wanted to show how the flux of a scalar field \Phi must be a vector. However, in the process, we seem to have adopted an approach which says that the only way the flux—a vector—can at all be defined is in reference to a tensor—a more advanced concept.

Instead of simplifying things, we seem to have ended up complicating matters. … Have we, really? … Can we keep the physical essentials of the approach all the same and yet avoid making a reference to the tensor concept in our definition of the flux vector? Exactly how?

(Hint: Look at the above development very carefully once again!)

Q. 6.2:

In Eq. 20, we put the parentheses in this way:

\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 20, reproduced)

What would happen if we were to group the same quantities, but alter the order of the operands for the dot operator? After all, the dot product is commutative, right? So, we could have easily written Eq. 20 rather as:

\vec{J} \equiv \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \hat{n} \right) (Eq. 20′)

What could be the reason why in writing Eq. 20, we might have made the choice we did?

Q. 6.3:

We wanted to define the flux vector for all fluid-mechanical flow phenomena. But in Eq. 21, reproduced below, what we ended up having was the following:

\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 21, reproduced)

Now, from our knowledge of fluid dynamics, we know that Eq. 21 seemingly stands only for one kind of a flux, namely, the convective flux. But what about the diffusive flux? (To know the difference between the two, consult any good book/course-notes on CFD using FVM, e.g. Jayathi Murthy’s notes at Purdue, or Versteeg and Malalasekera’s text.)

Q. 6.4:

Try to pursue this line of thought a bit:

Start with Eq. 1 again:

q = \vec{J} \cdot \vec{S} (Eq. 1, reproduced)

Express \vec{S} as a product of its magnitude and direction:

q = \vec{J} \cdot |\vec{S}| \hat{n} (Eq. 23)

Divide both sides of Eq. 23 by |\vec{S}|:

\dfrac{q}{|\vec{S}|} = \vec{J} \cdot \hat{n} (Eq. 24)

“Multiply” both sides of Eq. 24 by \hat{n}:

\dfrac{q} {|\vec{S}|} \hat{n} = \vec{J} \cdot \hat{n} \hat{n} (Eq. 25)

We seem to have ended up with a tensor once again! (and more rapidly than in the development in section 4. above).

Now, looking at what kind of a change the left-hand side of Eq. 24 undergoes when we “multiply” it by a vector (which is: \hat{n}), can you guess something about what the “multiplication” on the right-hand side by \hat{n} might mean? Here is a hint:

To multiply a scalar by a vector is meaningless, really speaking. First, you need to have a vector space, and then, you are allowed to take any arbitrary vector from that space, and scale it up (without changing its direction) by multiplying it with a number that acts as a scalar. The result at least looks the same as “multiplying” a scalar by a vector.

What then might be happening on the right-hand side?

Q.6.5:

Recall your knowledge (i) that vectors can be expressed as single-column or single-row matrices, and (ii) how matrices can be algebraically manipulated, esp. the rules for their multiplications.

Try to put the above developments using an explicit matrix notation.

In particular, pay attention to the matrix-algebraic notation for the dot product between a row- or column-vector and a square matrix, and the effect it has on your answer to question Q.6.2. above. [Hint: Try to use the transpose operator if you reach what looks like a dead-end.]
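So as not to give the game away, here is only the mechanical set-up one might start the exercise from (all the numbers are made up; the physical reading is what the exercise asks you to supply):

```python
import numpy as np

# Set-up for the matrix-notation exercise: n_hat as a column matrix,
# its transpose as a row matrix, and the two possible groupings.
n = np.array([[0.0], [0.0], [1.0]])       # 3x1 "column vector" n_hat
v = np.array([[3.0], [1.0], [0.5]])       # 3x1, stands for phi*rho*U

nnT = n @ n.T                             # 3x3 matrix: the dyad n n^T

# The tensor product appearing before the dot operator: (n n^T) v
left = nnT @ v
# The tensor product appearing after the dot operator: v^T (n n^T)
right = v.T @ nnT

print(left.shape, right.shape)            # (3, 1) vs (1, 3)
```

Note how the same numerical entries come out packaged once as a column and once as a row; that observation is exactly where the hint about the transpose operator points.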

Q.6.6.

Suppose I introduce the following definitions: All single-column matrices are “primary” vectors (whatever the hell it may mean), and all single-row matrices are “dual” vectors (once again, whatever the hell it may mean).

Given these definitions, you can see that any primary vector can be turned into its corresponding dual vector simply by applying the transpose operator to it. Taking the logic to full generality, the entirety of a given primary vector-space can then be transformed into a certain corresponding vector space, called the dual space.

Now, using these definitions, and in reference to the definition of the flux vector via a tensor (Eq. 21), but with the equation now re-cast into the language of matrices, try to identify the physical meaning of the concept of the “dual” space. [If you fail to, I will surely provide a hint.]

As a part of this exercise, you will also be able to figure out which of the two \hat{n}s forms the “primary” vector space and which \hat{n} forms the dual space, if the tensor product \hat{n}\otimes\hat{n} itself appears (i) before the dot operator or (ii) after the dot operator, in the definition of the flux vector. Knowing the physical meaning for the concept of the dual space of a given vector space, you can then see what the physical meaning of the tensor product of the unit normal vectors (\hat{n}s) is, here.

Over to you. [And also to the UGC/AICTE-Approved Full Professors of Mechanical Engineering in SPPU and in other similar Indian universities. [Indians!!]]

A Song I Like:

[TBD, after I make sure all LaTeX entries have come out right, which may very well be tomorrow or the day after…]


Busy, busy, busy… And will be. (Aka: Random Notings in the Passing)

Have been very busy. [What’s new about that? Read on…]


First, there is that [usual] “busy-ness” on the day job.


Then, Mary Hesse (cf. my last post) does not cover tensor fields.

A tensor is a very neat mathematical structure. Essentially, you get it by taking tensor (outer) products of the basis vectors of (a) space(s). A tensor field is a tensor-valued function of position in, say, the physical (“ambient”) space, which itself may be treated as a vector space, and which can also carry vector fields.

Yes, that reads like the beginning paragraph of a Wiki article on a mathematical topic. Yes, you got into circles. Mathematicians always do that—esp. to you. … Well, they also try doing that, on me. But, usually, they don’t succeed. … But, yes, it does keep me busy. [Now you know why I’ve been so busy.]


Now, a few other, mostly random, notings in the passing…


As every year, the noise pollution of the Ganapati festival this year, too, has been nothing short of maddening. But this year, it has not been completely maddening. Not at least to me. The reason is, I am out of Pune. [And what a relief it is!]


OK, time to take some cognizance of the usual noises on the QM front. The only way to do that is to pick up the very best among them. … I will do that for you.

The reference is to Roger Schlafly’s latest post: “Looking for new quantum axioms”, here [^]. He in turn makes a reference to a Quanta Mag article [^] by Philip Ball, who in turn makes a reference to the usual kind of QM noises. For the last, I shall not provide you with references. … Then, in his above-cited post, Schlafly also makes a reference to the Czech physicist Lubos Motl’s blog post, here [^].

Schlafly notes that Motl “…adequately trashes it as an anti-quantum crackpot article,” and that he “will not attempt to outdo his [i.e. Motl’s] rant.” Schlafly even states that he agrees with Motl.

Trailer: I don’t; not completely anyway.

Immediately later, however, Schlafly says quite a remarkable thing, something that is interesting in its own regard:

“Instead, I focus on one fallacy at the heart of modern theoretical physics. Under this fallacy, [1] the ideal theory is one that is logically derived from postulates, and [2] where one can have a metaphysical belief in those postulates independent of messy experiments.” [Numbering of the clauses is mine.]

Hmmm…

Yes, [1] is right on, but not [2]. Both the postulates and the belief in them here are of physics; experiments—i.e. [controlled] observations of physical reality—play not just a crucial part; they play the “game-starting” part. Wish Schlafly had noted the distinction between the two clauses.

All in all, I think that, on this issue of Foundations of QM, we all seem to be not talking to each other—we seem to be just looking past each other, so to say. That’s the major reason why the field has been flourishing so damn well. Yet, all in all, I think, Schlafly and Motl are more right about it all than are Ball or the folks he quotes.

But apart from it all, let me say that Schlafly and Motl have been advocating the view that Dirac–von Neumann axioms [^] provide the best possible theoretical organization for the theory of the quantum mechanical phenomena.

I disagree.

My position is that the Dirac–von Neumann axioms have not been formulated with due care for the scope (and applicability) of all the individual concepts subsuming the different aspects of the quantum physical phenomena. Like all QM physicists of the past century (and continuing with those in this century as well, except for, as far as I know, me!), they are confused about one crucial issue. And that issue lies at the heart and the base of the measurement/collapse postulate. Understand that one critical issue well, and the measurement/collapse postulate itself collapses in no time. I can name it—that one critical issue. In fact, it’s just one concept. Just one concept that is already well known to science, but none thinks of it in the context of Foundations of QM. Not in the right way, anyway. [Meet me in person to learn what it is.]


OK, another thing.

I haven’t yet finished Hesse’s book. [Did you honestly expect me to do that so fast?] That, plus the fact that in my day-job we would be working even harder: extra hours, and maybe weekends as well.

In fact, I have already frozen all my research schedule and put it in the deep freeze section. (Not even on the back-burner, I mean.)

So, allow me to go off the blog once again for yet another 3–4 weeks or so. [And I will do that anyway, even if you don’t allow.]


A Song I Like:

[The value of this song to me is mostly nostalgic; it has some very fond memories of my childhood associated with it. As an added bonus, Shammi Kapoor looks slim(mer than his usual self) in this video, the so-called Part 2 of the song, here [^]—and thereby causes a relatively lesser irritation to the eye. [Yes, sometimes, I do refer to videos too, even in this section.]]

(Hindi) “madahosh hawaa matawaali fizaa”
Lyrics: Farooq Qaisar
Singer: Mohammed Rafi
Music: Shankar-Jaikishan

[BTW, did you guess the RD+Gulzar+Lata song I had alluded to, last time? … May be I will write a post just to note that song. Guess it might make for a good “blog-filler” sometime during the upcoming several weeks, when I will once again be generally off the blog. … OK, take care, and bye for now….]

Off the blog. [“Matter” cannot act “where” it is not.]

I am going to go off the blogging activity in general, and this blog in most particular, for some time. [And, this time round, I will keep my promise.]


The reason is, I’ve just received the shipment of a book which I had ordered about a month ago. Though only about 300 pages in length, it’s going to take me weeks to complete. And, the book is gripping enough, and the issue important enough, that I am not going to let a mere blog or two—or the entire Internet—come in the way.


I had read it once, almost cover-to-cover, some 25 years ago, while I was a student at UAB.

Reading a book cover-to-cover—I mean: in-sequence, and by that I mean: starting from the front-cover and going through the pages in the same sequence as the one in which the book has been written, all the way to the back-cover—was quite odd a thing to have happened with me, at that time. It was quite unlike my usual habits whereby I am more or less always randomly jumping around in a book, even while reading one for the very first time.

But this book was different; it was extraordinarily engaging.

In fact, as I vividly remember, I had just idly picked up this book off a shelf from the Hill library of UAB, for a casual examination, had browsed it a bit, and then had begun sampling some passage from nowhere in the middle of the book while standing in a library aisle. Then, some little time later, I was engrossed in reading it—with a folded elbow resting on the shelf, head turned down and resting against a shelf rack (due to a general weakness from a physical hunger which I was ignoring [I would have had to go home and cook something for myself; there was none to do that for me; and so, it was easy enough to ignore the hunger]). I don’t honestly remember how the pages turned. But I do remember that I must have already finished some 15–20 pages (all “in-the-order”!) before I even realized that I had been reading this book while still awkwardly resting against that shelf-rack. …

… I checked out the book, and once home [student dormitory], began reading it starting from the very first page. … I took time, days, perhaps weeks. But whatever the length of time that I did take, with this book, I didn’t have to jump around the pages.


The issue that the book dealt with was:

[Instantaneous] Action at a Distance.

The book in question was:

Hesse, Mary B. (1961) “Forces and Fields: The concept of Action at a Distance in the history of physics,” Philosophical Library, Edinburgh and New York.


It was the very first book I had found, I even today distinctly remember, in which someone—someone, anyone, other than me—had cared to think about issues like IAD and concepts like fields and point particles—and had tried to trace their physical roots, to understand the physical origins behind these (and such) mathematical concepts. (And, had chosen to say “concepts” while actually meaning concepts, rather than trying to hide behind poor substitute words like “ideas”, “experiences”, “issues”, “models”, etc.)

Twenty-five years later, I still remain hooked on to the topic. Despite having published a paper on IAD and diffusion [and yes, what the hell, I will say it: despite claiming a first in 200+ years in reference to this topic], I even today do find new things to think about, about this “kutty” [Original: IITM lingo; English translation: “small”] topic. And so, I keep returning to it and thinking about it. I still am able to gain new insights once in an odd while. … Indeed, my recent ‘net search on IAD (the one which led to Hesse and my buying the book) precisely was to see if someone had reported the conceptual [and of course, mathematical] observation which I have recently made, or not. [If too curious about it, the answer: looks like, none has.]


But now coming to Hesse’s writing style, let me quote a passage from one of her research papers. I ran into this paper only recently, last month (in July 2017), and it was while going through it that I happened [once again] to remember her book. Since I did have some money in hand, I did immediately decide to order my copy of this book.

Anyway, the paper I have in mind is this:

Hesse, Mary B. (1955) “Action at a Distance in Classical Physics,” Isis, Vol. 46, No. 4 (Dec., 1955), pp. 337–353, University of Chicago Press/The History of Science Society.

The paper (it has no abstract) begins thus:

The scholastic axiom that “matter cannot act where it is not” is one of the very general metaphysical principles found in science before the seventeenth century which retain their relevance for scientific theory even when the metaphysics itself has been discarded. Other such principles have been fruitful in the development of physics: for example, the “conservation of motion” stated by Descartes and Leibniz, which was generalized and given precision in the nineteenth century as the doctrine of the conservation of energy; …

Here is another passage, once again, from the same paper:

Now Faraday uses a terminology in speaking about the lines of force which is derived from the idea of a bundle of elastic strings stretched under tension from point to point of the field. Thus he speaks of “tension” and “the number of lines” cut by a body moving in the field. Remembering his discussion about contiguous particles of a dielectric medium, one must think of the strings as stretching from one particle of the medium to the next in a straight line, the distance between particles being so small that the line appears as a smooth curve. How seriously does he take this model? Certainly the bundle of elastic strings is nothing like those one can buy at the store. The “number of lines” does not refer to a definite number of discrete material entities, but to the amount of force exerted over a given area in the field. It would not make sense to assign points through which a line passes and points which are free from a line. The field of force is continuous.

See the flow of the writing? the authentic respect for the intellectual history, and yet, the overriding concern for having to reach a conclusion, a meaning? the appreciation for the subtle drama? the clarity of thought, of expression?

Well, these passages were from the paper, but the book itself, too, is similarly written.


Obviously, while I remain engaged in [re-]reading the book [after a gap of 25 years], don’t expect me to blog.

After all, even I cannot act “where” I am not.


A Song I Like:

[I thought a bit between this song and another song, one by R.D. Burman, Gulzar and Lata. In the end, it was this song which won out. As usual, in making my decision, the reference was exclusively made to the respective audio tracks. In fact, in the making of this decision, I happened to have also ignored even the excellent guitar pieces in this song, and the orchestration in general in both. The words and the tune were too well “fused” together in this song; that’s why. I do promise you to run the RD song once I return. In the meanwhile, I don’t at all mind keeping you guessing. Happy guessing!]

(Hindi) “bheegi bheegi…” [“bheege bheege lamhon kee bheegee bheegee yaadein…”]
Music and Lyrics: Kaushal S. Inamdar
Singer: Hamsika Iyer

[Minor additions/editing may follow tomorrow or so.]

 

Machine “Learning”—An Entertainment [Industry] Edition

Yes, “Machine ‘Learning’,” too, has been one of my “research” interests for some time by now. … Machine learning, esp. ANN (Artificial Neural Networks), esp. Deep Learning. …

Yesterday, I wrote a comment about it at iMechanica. Though it was made in a certain technical context, today I thought that the comment could, perhaps, make sense to many of my general readers, too, if I supply a bit of context to it. So, let me report it here (after a bit of editing). But before coming to my comment, let me first give you the context in which it was made:


Context for my iMechanica comment:

It all began with a fellow iMechanician, one Mingchuan Wang, writing a post with the title “Is machine learning a research priority now in mechanics?” at iMechanica [^]. Biswajit Banerjee responded by pointing out that

“Machine learning includes a large set of techniques that can be summarized as curve fitting in high dimensional spaces. [snip] The usefulness of the new techniques [in machine learning] should not be underestimated.” [Emphasis mine.]

Then Biswajit had pointed out an arXiv paper [^] in which machine learning was reported as having produced some good DFT-like results for quantum mechanical simulations, too.

A word about DFT for those who (still) don’t know about it:

DFT, i.e. Density Functional Theory, is “formally exact description of a many-body quantum system through the density alone. In practice, approximations are necessary” [^]. DFT thus is a computational technique; it is used for simulating the electronic structure in quantum mechanical systems involving several hundreds of electrons (i.e. hundreds of atoms). Here is the obligatory link to the Wiki [^], though a better introduction perhaps appears here [(.PDF) ^]. Here is a StackExchange on its limitations [^].

Trivia: Walter Kohn received a Nobel for inventing DFT (the 1998 Chemistry Nobel, shared with John Pople; Sham was not included). It was a very, very rare instance of a Nobel being awarded for an invention—not a discovery. But the Nobel committee, once again, turned out to have put old Nobel’s money in the right place. Even if the work itself was only an invention, it directly led to a lot of discoveries in condensed matter physics! That was because DFT was fast—fast enough that it could bring the physics of the larger quantum systems within the scope of (any) study at all!

And now, it seems, Machine Learning has advanced enough to be able to produce results that are similar to DFT, but without using any QM theory at all! The computer does have to “learn” its “art” (i.e. “skill”), but it does so from the results of previous DFT-based simulations, not from the theory at the base of DFT. But once the computer does that “learning”—and the paper shows that it is possible for the computer to do so—it is able to compute very similar-looking simulations much, much faster than even the rather fast technique of DFT itself.
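Biswajit's "curve fitting in high dimensional spaces" remark can be caricatured in a few lines: fit a cheap model to the outputs of an expensive computation, then evaluate only the cheap model. Everything below is made up (a one-dimensional toy function stands in for the expensive DFT runs; real surrogates fit far richer models in far higher dimensions):

```python
import numpy as np

# Toy stand-in for an "expensive" simulation (here, just a function call).
def expensive_simulation(x):
    return np.sin(3.0 * x) + 0.5 * x**2

# "Training data": the stored results of previous expensive runs.
x_train = np.linspace(0.0, 2.0, 40)
y_train = expensive_simulation(x_train)

# The "learned" surrogate: a plain least-squares polynomial fit.
coeffs = np.polyfit(x_train, y_train, deg=9)
surrogate = np.poly1d(coeffs)

# The surrogate now predicts without calling the expensive routine at all.
x_new = np.array([0.37, 1.11, 1.83])
err = np.max(np.abs(surrogate(x_new) - expensive_simulation(x_new)))
print(err)   # small: the fit interpolates well inside the training range
```

The catch, of course, is the same as with all curve fitting: the surrogate is trustworthy only inside (or very near) the region covered by the training data.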

OK. Context over. Now here, in the next section, is yesterday’s comment of mine at iMechanica. (Also note that the previous exchange on this thread at iMechanica had occurred almost a year ago.) Since it has been edited quite a bit, I will not format it using a quotation block.


[An edited version of my comment begins]

A very late comment, but still, just because something struck me only this late… May as well share it….

I think that, as Biswajit points out, it’s a question of matching a technique to an application area where it is likely to be of “good enough” a fit.

I mean to say, consider fluid dynamics, and contrast it to QM.

In (C)FD, the nonlinearity present in the advective term is a major headache. As far as I can gather, this nonlinearity has all but been “proved” to be the basic cause behind the phenomenon of turbulence. If so, then using machine learning in CFD would be, by this simple-minded “analysis”, a basically hopeless endeavour. The very idea of using a potential presupposes differential linearity. Therefore, machine learning may be thought of as viable in computational Quantum Mechanics (viz. DFT), but not in the more mundane, classical-mechanical CFD.
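Just to make the point about the advective nonlinearity concrete: in 1D, the advective term is u·(∂u/∂x), so scaling the field u by a factor c scales the term by c², and superposition fails. A minimal numpy check (my own illustration, not part of the original comment):

```python
import numpy as np

# The advective term of the 1D momentum equation, u * du/dx, is
# nonlinear: doubling u quadruples the term instead of doubling it.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(x)

def advective_term(u, x):
    # Central-difference approximation of u * du/dx.
    return u * np.gradient(u, x)

a1 = advective_term(u, x)
a2 = advective_term(2.0 * u, x)
print(np.allclose(a2, 4.0 * a1))  # quadratic, not linear, scaling
```

A linear term (say, the diffusive one) would have scaled by c; it is exactly this quadratic coupling of the solution to itself that makes the error dynamics so treacherous.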

But then, consider the role of the BCs and the ICs in any simulation. It is true that if you don’t handle nonlinearities right, then as the simulation time progresses, errors are soon enough going to multiply (sort of), and lead to a blowup—or at least a dramatic departure from a realistic simulation.

But then, also notice that there still is some small but nonzero interval of time which has to pass before a really bad amplification of the errors actually begins to occur. Now, what if a new “BC-IC” gets imposed right within that time interval, one which does show “good enough” an accuracy? In this case, you can expect the simulation to remain “sufficiently” realistic-looking for a long, very long time!
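This line of thought can itself be caricatured with a toy error model (entirely my own sketch, with made-up numbers, not anything from the paper): let the error get multiplied by some factor g > 1 every step. Free-running, it blows up; but if a fresh, accurate state is imposed every k steps, the worst-case error stays bounded at eps·g^k forever.

```python
import numpy as np

# Toy model: per-step error amplification by g > 1 blows up a
# free-running simulation, but imposing a fresh, accurate "BC-IC"
# every k steps (error knocked back down to eps) keeps it bounded.
g, eps, k, steps = 1.2, 1e-6, 10, 500

def max_error(reset_every=None):
    e, worst = eps, 0.0
    for n in range(1, steps + 1):
        e *= g                      # modelled error amplification
        worst = max(worst, e)
        if reset_every and n % reset_every == 0:
            e = eps                 # a fresh, accurate state is imposed
    return worst

free_run = max_error()              # grows like g**steps
with_resets = max_error(k)          # never exceeds ~ eps * g**k
print(f"free-running worst error:  {free_run:.2e}")
print(f"with periodic resets:      {with_resets:.2e}")
```

The numbers g, eps, and k here are arbitrary; the qualitative contrast between the two runs is the whole point.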

Something like that seems to have been the line of thought implicit in the results reported by this paper: [(.PDF) ^].

Machine learning seems to work even in CFD because, in an interactive session, a new “modified BC-IC” is manually introduced, every now and then, by none other than the end-user himself! And the location of the modification is precisely the region from where the flow in the rest of the domain would get most dominantly affected during the subsequent, small, time evolution.

It’s somewhat like an electron rushing through a cloud chamber. By the uncertainty principle, the electron “path” sure begins to get hazy immediately after it is “measured” (i.e. absorbed and re-emitted) by a vapor molecule at a definite point in space. The uncertainty in the position grows quite rapidly. However, what actually happens in a cloud chamber is that, before this cone of haziness becomes too big, along comes another vapor molecule, and it “zaps” i.e. “measures” the electron back on to a classical position. … After a rapid succession of such going-hazy-then-getting-zapped events, the end result turns out to be a very, very classical-looking (line-like) path—as if the electron always were only a particle, never a wave.

Conclusion? Be realistic about how smart the “dumb” “curve-fitting” involved in machine learning can at all get. Yet, at the same time, also remain open to all the application areas where it can be made to work—even including those areas where, “intuitively”, you wouldn’t expect it to have any chance of working!

[An edited version of my comment is over. Original here at iMechanica [^]]



“Boy, we seem to have covered a lot of STEM territory here… Mechanics, DFT, QM, CFD, nonlinearity. … But where is either the entertainment or the industry you had promised us in the title?”

You might be saying that….

Well, the CFD paper I cited above was about the entertainment industry. It was, in particular, about the computer games industry. Go check out SoHyeon Jeong’s Web site for more cool videos and graphics [^], all using machine learning.


And, here is another instance connected with entertainment, even though now I am going to make it (mostly) explanation-free.

Check out the following piece of art—a watercolor landscape of a monsoon-time but placid sea-side, in fact. Let me just say that a certain famous artist produced it; in any case, the style is plain unmistakable. … Can you name the artist simply by looking at it? See the picture below:

A sea beach in the monsoons. Watercolor.

If you are unable to name the artist, then check out this story here [^], and a previous story here [^].


A Song I Like:

And finally, to those who have always loved the Beatles’ songs…

Here is one song which, I am sure, most of you have never heard before. In any case, it came to be distributed only recently. When and where was it recorded? For both the song and its recording details, check out this site: [^]. Here is another story about it: [^]. And, if you liked what you read (and heard), here is some more stuff of the same kind [^].


Endgame:

I am of the Opinion that 99% of the “modern” “artists” and “music composers” ought to be replaced by computers/robots/machines. Whaddya think?

[Credits: “Endgame” used to be the way Mukul Sharma would end his weekly Mindsport column in the yesteryears’ Sunday Times of India. (The column perhaps also used to appear in The Illustrated Weekly of India before ToI began running it; at least I have a vague recollection of something of that sort, though I can’t be quite sure. … I would have been a school-boy back then, when the Weekly perhaps ran it.)]