Hushshshsh…

The title word of this post is Marathi for “phew!”—not for “hush hush.” (But, to me, the Marathi word is more expressive. [BTW, the “hu” is to be pronounced as “hoo”.])

The reason for this somewhat accentuated and prolonged exhalation is this: I am done with the version “0.1-beta” of my note on flux (see my last post). I need a break.

As of now, this note is about 27 pages, and with figures and some further (< 5%) additions, the final number of pages for the version 0.1 should easily go into the early 30s. … To be readable, it will have to be brought down to about 15 pages or fewer, including diagrams. Preferably, < 10 pages. Some other day. For now, I find that I have grown plain sick and tired of working on that topic. I need to get away from it all. I am sure that I will return to it later on—maybe after a month or so. But for the time being, I simply need a break—from it. I’ve had enough of this flux and vectors and tensors thing. … Which brings me to the next topic of this post.

There are also other announcements.


I think that I have also had enough of QM.

QM was interesting, very interesting, to me. It remained that way for some four decades. But now that I have cracked it (to my satisfaction), my interest in the topic has begun dwindling very rapidly.

Sure, I will conduct a few simulations, deliver the proposed seminar(s) I had mentioned in the past, and also write a paper or two about it. But I anticipate that I won’t go much farther than what I have already understood. The topic has now simply ceased to be all that interesting.

But, yes, I have addressed all the essential QM riddles. That’s for certain.


And then, I was taking stock of my current situation, and here are a few things that stood out:

  • I am not getting an academic job in Pune (because of the stupid / evil SPPU rules), and frankly, the time when a (full) Professor’s job could have meant something to me is already over. Had I got such a job well in time—which means years ago—I could have done some engineering research (especially in CFD), guided a few students (in general, in computational science and engineering), taught courses, developed notes, etc. But after having lost a decade or so due to that stupid and/or evil Metallurgy-vs-Mechanical Branch Jumping issue, I no longer have the time to pursue that sort of thing.
  • You would know this: All my savings are over; I am already in debt.
  • I do not wish to get into a typical IT job. It could pay well, but it involves absolutely no creativity and originality—I mean creativity involving theoretical aspects. Deep down in my heart, I remain a “theoretician”—and a programmer. But not a manager. There is some scope for creativity in the Indian IT industry, but at my “seniority,” it is mostly limited, in one way or the other, to “herd-management” (to use an expression I have often heard from my friends in the industry). And I am least bothered about that. So, to say that in a typical Indian IT job my best skills (even “gifts”) would go under-utilized would be an understatement.
  • For someone like me, there is no more scope, in Pune, in the CFD field either. Consultants and others are already well established. I could keep my eyes and ears open, but it looks dicey to rely on this option. The best period for launching an industrial career in CFD here was, say, up to the early noughties. … I still could continue with some research in CFD. But I guess it no longer is a viable career option for me. Not in Pune.
  • Etc.

However, all is not gloomy. Not at all. Au contraire.

I am excited that I am now entering a new field.

I will not ask you to take a guess. This career route for people with my background and skills is so well-established by now, that there aren’t any more surprises left in it. Even an ANN would be able to guess it right.

Yes, that’s right. From now on, I am going to pursue Data Science.


This field—Data Science—has a lot of attractive features, as far as I am concerned. The way I see it, the following two stand out:

  1. There is a very wide variety of application contexts; and
  2. There is a fairly wide range of mathematical skills that you have to bring to bear on these problems.

Notice, the emphasis is on the width, not on the depth.

The above-mentioned two features, in turn, lead to or help explain many other features, like:

  1. A certain open-endedness of solutions—pretty much like what you have in engineering research and design. In particular, one size doesn’t fit all.
  2. A relatively higher premium on individual thinking skills—unlike what your run-of-the-mill BE in CS does these days [^].

Yes, Data Science, as a field, will come to mature, too. The first sign that it is reaching the first stage of maturity would be the appearance of a book like “Design Patterns.”

However, even this first stage is, I anticipate, distant in the future. All in all, I anticipate that the field will not come to mature before some 7–10 years pass by. And that’s long enough a time for me to find some steady career option in the meanwhile.


There are also some other plus points that this field holds from my point of view.

I have coded extensively—more than 1 lakh (100,000) lines of C++ code in all, before I came to stop using C++, which happened especially after I entered academia. I am already well familiar with Python and its eco-system, though I am not an expert when it comes to writing the fastest possible numerical code in Python.

I have handled a great variety of maths. The list of equations mentioned in my recent post [^] is not even nearly exhaustive. (For instance, it does not even mention whole topics like probability and statistics, stereology, and many such matters.) When it comes to Data Science, a prior experience with a wide variety of maths is a plus point.

I have not directly worked with topics like artificial neural networks, deep learning, the more advanced regression analysis, etc.

However, realize that for someone like me, i.e., someone who taught FEM, and who had thought of accelerating solvers via stochastic means, the topic of constrained optimization would not be an entirely unknown animal. Some acquaintance has already been made with the conjugate gradient method (though I can’t claim mastery of it). Martingales theory—the basic idea—is not a complete unknown. (I had mentioned a basic comparison of my approach vis-à-vis the simplest, most basic martingales theory in my PhD thesis.)

Other minor points are these. This field also (i) involves visualization abilities, (ii) encourages good model building at the right level of abstraction, and (iii) places a premium on presentation. I am not sure if I am good on the third count, but I sure am confident that I do pretty well on the first two. The roots of all my new research ideas, in fact, can be traced back to having to understand physical principles in 3+1 D settings.


Conclusion 1: I should be very much comfortable with Data Science. (Not sure if Data Science itself (i.e., Data Scientists themselves) would be comfortable with me or not. But that’s something I could deal with later on.)

Conclusion 2: Expect the blogging here to move towards Data Science in the future.


A Song I Like:

(Hindi) “uff, teri adaa…”
Music: Shankar-Ehsaan-Loy
Singer: Shankar Mahadevan
Lyrics: Javed Akhtar

[By any chance, was this tune at least inspired (if not plagiarized) from some Western song? Or is it through and through original? …In any case, I like it a lot. I find it wonderful. It’s upbeat, but not at all banging on the ears. (For contrast, attend any Ganapati procession, whether on the first day, the last day, or any other day in between. You will have ample opportunities to know what having your ears banged out to deafness means. Nay, these opportunities will be thrust upon you, whether you like it or not. It’s our “culture.”)]

 

 


General update: Will be away from blogging for a while

I won’t come back for some 2–3 weeks or more. The reason is this.


As you know, I had started writing some notes on FVM. I would then convert my earlier, simple CFD code snippets from FDM to FVM. Then, I would pursue modeling Schrodinger’s equation using FVM. That was the plan.

But before getting to the nitty-gritty of FVM itself, I thought of jotting down a note, once and for all, putting in writing my thoughts thus far on the concept of flux.


If you remember, it was several years ago that I had mentioned on this blog that I had sort of succeeded in deriving the Navier-Stokes equations in the differential form using the Eulerian approach (d + E, for short).

… Not an achievement by any stretch of imagination—there are tomes written on, say, differentiable manifolds and whatnot. I feel sure that deriving the NS equations in the (d + E) form would be less than peanuts for them.

Yet, the fact of the matter is: They actually don’t do that!

Show me a single textbook or a paper that does that. If not at the UG level, then at least at the PG level, but one that is written using the language of only plain calculus, as used by engineers—not that of advanced analysis.

And as to the UG/PG books from engineering:

What people normally do is to derive these equations in their integral form, whether using the Lagrangian or the Eulerian approach. That is, they adopt either the (i + L) approach or the (i + E) approach.

On the rare occasions when they do begin fluid dynamics with a differential form of the NS equations, they invariably follow the Lagrangian approach, never the Eulerian. That is, they invariably begin with only (d + L)—even in those cases when their objective is to obtain (d + E). Then, after having derived (d + L), they simply invoke some arbitrary-looking vector calculus identities to “transform” those equations from (d + L) to (d + E).
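
For concreteness (this is my own gloss, not a quote from any particular book), the kind of identity I have in mind here is the one that re-expresses the Lagrangian (material) time derivative of any transported quantity \phi in Eulerian terms:

\dfrac{D\phi}{Dt} = \dfrac{\partial \phi}{\partial t} + \vec{U} \cdot \nabla \phi

It gets invoked as if it were just a formula to be looked up.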

And, worse:

They never discuss the context, meaning, or proofs of those identities. No one from the fluid dynamics or the CFD side does that. And neither do the books on maths written for scientists and engineers.

The physical bases of the “transformation” process must remain a mystery.


When I started working through it a few years ago, I realized that the one probable reason why they don’t use the (d + E) form right from the beginning is this: forget the NS equations, no one understands even the much simpler idea of the flux—if it is to be couched entirely in the setting of (d + E). You see, the idea of the flux, too, always remains couched in the integral form, never the differential. For example, see Narasimhan [^]. Or, any other continuum mechanics book that impresses you.
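
Just to note it down here for reference (in a generic notation of my own choosing, not Narasimhan’s): the form in which the flux does get stated in such books is the integral one, something like

q = \iint_{S} \vec{J} \cdot \hat{n} \; \text{d}S

where \vec{J} is the flux, \hat{n} is the unit normal to the surface S, and q is the amount transported across S per unit time—with the differential-form counterpart left for the reader to guess at.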

It’s no accident that the Wiki article on Flux [^] says that it

needs attention from an expert in Physics.

And then, more important for us, the text of the article itself admits that the formula it notes, for a definition of flux in differential terms, is

an abuse of notation

See the section here [^].

Also, ask yourself, why is a formula that is free of the abuse of notation not being made available? In spite of all those tomes having been written on higher mathematics?


Further, there were also other related things I wanted to write about, like an easy pathway to the idea of tensors in general, and to that of the stress tensor in particular.

So, I thought of writing it all down, once and for all, in one note. I could perhaps convert some parts of it into a paper later on. For the time being, though, the note would be more in the nature of a tutorial.


I started writing down the note on, I guess, 17 August 2018. However, it kept on growing, and with growth came reorganization of the material for a better hierarchy of presentation. It has already gone through some 4–5 thorough re-orgs (meaning: discarding the earlier LaTeX file entirely and starting completely afresh), and it has already become more than 10 LaTeX pages. Even then, I am nowhere near finishing it. I may be just about half-way through—even though I have been working on it for some 7–8 hours every day for the past fortnight.

Yes, writing something original is a lot of hard work. I mean “original” not in the sense of discovery, but in the sense of a lack of any directly citable material whatsoever on the topic. Forget copy-pasting. You can’t even just gather a gist of the issue so that you could cite it.

And, the trouble here is, this topic is otherwise so very mature. (It is some 150+ years old.) So, you know that if you go even partly wrong, the whole world is going to pile on you.

And that way, in my experience, when you write originally, there are at least 5–10 pages of material that you typically end up throwing away for every page that makes it to the final, published version. Yes, the garbage thrown out is some 5–10 times the material retained—no matter how “simple” and “straightforward” the published material might look.

Indeed, I could even make a case that the simpler and the more straightforward the published material looks—provided it also happens to be original—the more strenuous the effort it has taken on the part of its author.

Few ever come to grasp so simple an observation in their entire life.


As a case in point, I wish to recall here my conference paper on diffusion. [To be added here soon enough.]

I have many times silently watched people as they were going through this paper for the first time.

Typically, when engineers read it, they invariably come out with a mild expression which suggests that they probably were thinking of something like: “isn’t it all so simple and straight-forward?” Sometimes they even explicitly ask: “And, what do you say was the new contribution here?” [Even after having gone through both the abstract and the conclusion part of it, that is.]

On the other hand, on the four or five rare occasions when I have had the opportunity to watch professional mathematicians go through this paper of mine, in each case, the expression they invariably gave at the end of finishing it was as if they still were very intently absorbed in it. In particular, they never do ask me what was new about it—they just remain deeply engaged in what looks like an exercise in “fault-finding”, i.e., in checking if any proof, theorem or lemma they had ever come across could be used in order to demolish the new idea that has been presented. Invariably, they give the same argument by way of an objection. Invariably, I explain why their argument does not address the issue I have raised in the paper. Invariably they chuckle and then go back to the paper and to their intent thinking mode, to see if there is any other weakness to my basic argument…

Till date (even after more than a decade), they haven’t come back.

But in all cases, they were very ready to admit that they were coming across this argument for the first time. I didn’t have to explain it to them that though the language and the tone of the paper looked simple enough, the argument itself was not easy to derive originally.


No, the notes which I am currently working on are nowhere near as original as that. [But yes, original, these are.]

Yet, let me confess, even as I keep plodding through it for the better part of the day the way I have done over the past fortnight or so, I find myself dealing with a certain doubt: wouldn’t they just dismiss it all as being too obvious? as if all the time and effort I spent on it was, more or less, ill spent? as if it was all meaningless to begin with?


Anyway, I want to finish this task before resuming blogging—simply because I’ve got into a groove about it by now… I am in a complete and pure state of anti-procrastination.

… Well, as they say: Make hay while the sun shines…


A Song I Like:
(Marathi) “dnyaandev baaL maajhaa…”
Singer: Asha Bhosale
Lyrics: P. Savalaram
Music: Vasant Prabhu

 

Fluxes, scalars, vectors, tensors…. and, running in circles about them!

0. This post is written for those who know something about Thermal Engineering (i.e., fluid dynamics, heat transfer, and transport phenomena), say up to the UG level at least. [A knowledge of Design Engineering, in particular of the tensors as they appear in solid mechanics, would be helpful to have, but is not necessary. After all, contrary to what many UGC- and AICTE-approved (Full) Professors of Mechanical Engineering teaching ME (Mech – Design Engineering) courses in SPPU and other Indian universities believe, tensors appear also in fluid mechanics—in fact, the fluids phenomena make the concept (only so slightly) easier to understand. [But all these cartoon characters, even if they don’t know even so plain and simple a fact, can always be fully relied upon (by anyone) to raise objections about my Metallurgy background, when it comes to my own approval, at any time! [Indians!!]]]

In this post, I write a bit about the following question:

Why is the flux \vec{J} of a scalar \phi a vector quantity, and not a mere number (which is aka a “scalar,” in certain contexts)? Why is it not a tensor—whatever the hell the term means, physically?

And, what is the best way to define a flux vector anyway?


1.

One easy answer is that if the flux is a vector, then we can establish a flux-gradient relationship. Such relationships happen to appear as statements of physical laws in all the disciplines where the idea of a continuum has been found useful. So the scope of applicability of the flux-gradient relationships is very vast.

The reason to define the flux as a vector, then, becomes: because the gradient of a scalar field is a vector field, that’s why.
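
As a concrete instance (these are textbook-standard laws, cited here only as examples): Fourier’s law of heat conduction and Fick’s law of diffusion both have exactly this flux-gradient form,

\vec{q} = -k \nabla T, \qquad \vec{J} = -D \nabla c

and in each case the flux on the left is a vector precisely because the gradient on the right is one.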

But this answer only tells us about one of the end-purposes of the concept, viz., how it can be used. And then the answer provided is: for the formulation of a physical law. But this answer tells us nothing by way of the very meaning of the concept of flux itself.


2.

Another easy answer is that if it is a vector quantity, then it simplifies the maths involved. Instead of having to remember to take the right \theta and then multiply the relevant scalar quantity by the \cos of this \theta, we can more succinctly write:

q = \vec{J} \cdot \vec{S} (Eq. 1)

where q is the amount of \Phi (the extensive quantity corresponding to the intensive property \phi of the flowing fluid) that gets transported across the given finite surface \vec{S}, and \vec{J} is the flux of \Phi.

However, apart from being a mere convenience of notation—a useful shorthand—this answer once again touches only on the end-purpose, viz., the fact that the idea of flux can be used to calculate the amount q of the transported property \Phi.
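
If you want to see the convenience of Eq. 1 in very concrete terms, here is a minimal numerical sketch (plain numpy; the numbers are arbitrary ones of my own choosing, not anything carried over from the rest of this post):

import numpy as np

J = np.array([2.0, 1.0, 0.0])       # some flux vector
S = np.array([0.0, 3.0, 0.0])       # area vector of a flat surface

# The shorthand of Eq. 1:
q = np.dot(J, S)

# The "take the right theta and multiply by its cosine" version it replaces:
cos_theta = np.dot(J, S) / (np.linalg.norm(J) * np.linalg.norm(S))
q_long = np.linalg.norm(J) * np.linalg.norm(S) * cos_theta

print(q, q_long)   # both give 3.0; the dot product bundles the cosine for you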

There is also another problem with this second answer.

Notice that in Eq. 1, \vec{J} has not been defined independently of the “dotting” operation.

If the very quantity to be defined has an operator acting on it on one side of an equation, and if a suitable anti- or inverse-operator is available, then you can apply that inverse operator to both sides of the equation and thereby “free up” the quantity to be defined. This way, the quantity to be defined becomes available all by itself, and so its definition in terms of certain hierarchically preceding quantities also becomes straight-forward.

OK, the description looks more complex than it is, so let me illustrate it with a concrete example.

Suppose you want to define some vector \vec{T}, but the only basic equation available to you is:

\vec{R} = \int \text{d} x \vec{T}, (Eq. 2)

assuming that \vec{T} is a function of position x.

In Eq. 2, first, the integral operator must operate on \vec{T}(x) so as to produce some other quantity, here, \vec{R}. Thus, Eq. 2 can be taken as a definition for \vec{R}, but not for \vec{T}.

However, fortunately, a suitable inverse operator is available here; the inverse of integration is differentiation. So, what we do is to apply this inverse operator on both sides. On the right hand-side, it frees \vec{T} of any operator, to give you:

\dfrac{\text{d}\vec{R}}{\text{d}x} = \vec{T} (Eq. 3)

It is Eq. 3 which can now be used as a definition of \vec{T}.

In principle, you don’t have to go to Eq. 3. You could perhaps venture to use a bit of notation abuse (the way the good folks in the calculus of variations and integral transforms always did), and say that Eq. 2 itself is fully acceptable as a definition of \vec{T}. IMO, despite the appeal to “principles,” it still is an abuse of notation. However, I can see that the argument does have at least some point to it.
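
If you want a quick numerical feel for the Eq. 2–Eq. 3 relationship, here is a throwaway sketch (numpy plus scipy; the particular \vec{T}(x) is an arbitrary choice of mine, nothing to do with the flux discussion):

import numpy as np
from scipy.integrate import cumulative_trapezoid

x = np.linspace(0.0, 2.0, 2001)
T = np.stack([np.sin(x), np.cos(x), x**2], axis=1)   # an arbitrary vector field T(x)

# Eq. 2: R(x) as the running integral of T over x
R = cumulative_trapezoid(T, x, axis=0, initial=0)

# Eq. 3: differentiating R recovers T (up to discretization error)
T_back = np.gradient(R, x, axis=0)

print(np.abs(T_back - T).max())   # a small number: the two operators undo each other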

But the real trouble with using Eq. 1 (reproduced below)

q = \vec{J} \cdot \vec{S} (Eq. 1)

as a definition for \vec{J} is that no suitable inverse operator exists when it comes to the dot operator.


3.

Let’s try another way to attempt defining the flux vector, and see what it leads to. This approach goes via the following equation:

\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} (Eq. 4)

where \hat{n} is the unit normal to the surface \vec{S}, defined thus:

\hat{n} \equiv \dfrac{\vec{S}}{|\vec{S}|} (Eq. 5)

Then, as the crucial next step, we introduce one more equation for q, one that is independent of \vec{J}. For phenomena involving fluid flows, this extra equation is quite simple to find:

q = \phi \rho \dfrac{\Omega_{\text{traced}}}{\Delta t} (Eq. 6)

where \phi is the mass-density of \Phi (the scalar field whose flux we want to define), \rho is the volume-density of mass itself, and \Omega_{\text{traced}} is the volume that is imaginarily traced by that specific portion of fluid which has imaginarily flowed across the surface \vec{S} in an arbitrary but small interval of time \Delta t. Notice that \Phi is the extensive scalar property being transported via the fluid flow across the given surface, whereas \phi is the corresponding intensive quantity.
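
To keep the symbols straight with a familiar instance (my own illustrative choice, nothing more): if \Phi is the thermal energy being carried along by the flow, then \phi is the thermal energy per unit mass, so that \phi \rho is energy per unit volume, and q comes out as the energy crossing \vec{S} per unit time (that is what the division by \Delta t in Eq. 6 buys you). A quick dimensional check:

[\phi] = \dfrac{\text{J}}{\text{kg}}, \quad [\rho] = \dfrac{\text{kg}}{\text{m}^3}, \quad \left[ \dfrac{\Omega_{\text{traced}}}{\Delta t} \right] = \dfrac{\text{m}^3}{\text{s}} \quad \Rightarrow \quad [q] = \dfrac{\text{J}}{\text{s}}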

Now express \Omega_{\text{traced}} in terms of the imagined maximum normal distance from the plane \vec{S} up to which the forward moving front is found extended after \Delta t. Thus,

\Omega_{\text{traced}} = \xi |\vec{S}| (Eq. 7)

where \xi is the traced distance (measured in a direction normal to \vec{S}). Now, using the geometric property for the area of parallelograms, we have that:

\xi = \delta \cos\theta (Eq. 8)

where \delta is the traced distance in the direction of the flow, and \theta is the angle between the unit normal to the plane \hat{n} and the flow velocity vector \vec{U}. Using vector notation, Eq. 8 can be expressed as:

\xi = \vec{\delta} \cdot \hat{n} (Eq. 9)

Now, by definition of \vec{U}:

\vec{\delta} = \vec{U} \Delta t, (Eq. 10)

Substituting Eq. 10 into Eq. 9, we get:

\xi = \vec{U} \Delta t \cdot \hat{n} (Eq. 11)

Substituting Eq. 11 into Eq. 7, we get:

\Omega_{\text{traced}} = \vec{U} \Delta t \cdot \hat{n} |\vec{S}| (Eq. 12)

Substituting Eq. 12 into Eq. 6, we get:

q = \phi \rho \dfrac{\vec{U} \Delta t \cdot \hat{n} |\vec{S}|}{\Delta t} (Eq. 13)

Cancelling out the \Delta t, Eq. 13 becomes:

q = \phi \rho \vec{U} \cdot \hat{n} |\vec{S}| (Eq. 14)

Having got an expression for q that is independent of \vec{J}, we can now use it in order to define \vec{J}. Thus, substituting Eq. 14 into Eq. 4:

\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} = \dfrac{\phi \rho \vec{U} \cdot \hat{n} |\vec{S}|}{|\vec{S}|} \hat{n} (Eq. 16)

Cancelling out the two |\vec{S}|s (because |\vec{S}| is a scalar—you can always divide any term by a scalar (or even by a complex number), but not by a vector), we finally get:

\vec{J} \equiv \phi \rho \vec{U} \cdot \hat{n} \hat{n} (Eq. 17)
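
Before moving on to the interpretation of Eq. 17, here is a small numerical sanity check of the chain just derived (a throwaway numpy sketch; all the numbers are arbitrary values of my own choosing):

import numpy as np

phi, rho = 4.0, 1.2                  # arbitrary intensive property and mass density
U = np.array([2.0, 1.0, 0.5])        # flow velocity
n_hat = np.array([0.0, 0.0, 1.0])    # unit normal to the surface
S_mag = 3.0                          # |S|
dt = 0.01

# Eqs. 10, 11, 7, 6: q via the traced volume
xi = np.dot(U * dt, n_hat)
Omega_traced = xi * S_mag
q_geom = phi * rho * Omega_traced / dt

# Eq. 14: q directly
q_direct = phi * rho * np.dot(U, n_hat) * S_mag

# Eq. 17: the flux vector; and Eq. 1 as a check that J . S gives q back
J = phi * rho * np.dot(U, n_hat) * n_hat
q_from_J = np.dot(J, S_mag * n_hat)

print(q_geom, q_direct, q_from_J)    # all three agree (7.2 here)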


4. Comments on Eq. 17

In Eq. 17, there is this curious sequence: \hat{n} \hat{n}.

It’s a sequence of two vectors, but the vectors apparently are not connected by any of the operators that are taught in the Engineering Maths courses on vector algebra and calculus—there is neither the dot (\cdot) operator nor the cross \times operator appearing in between the two \hat{n}s.

But, for the time being, let’s not get too much perturbed by the weird-looking sequence. For the time being, you can mentally insert parentheses like these:

\vec{J} \equiv \left[ \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \right) \right] \hat{n} (Eq. 18)

and see that each of the two terms within the parentheses is a vector, and that these two vectors are connected by a dot operator so that the terms within the square brackets all evaluate to a scalar. According to Eq. 18, the scalar magnitude of the flux vector is:

|\vec{J}| = \left( \phi \rho \vec{U}\right) \cdot \left( \hat{n} \right) (Eq. 19)

and its direction is given by: \hat{n} (the second one, i.e., the one which appears in Eq. 18 but not in Eq. 19).


5.

We explained away our difficulty about Eq. 17 by inserting parentheses at suitable places. But this procedure of inserting mere parentheses looks, by itself, conceptually very attractive, doesn’t it?

If by not changing any of the quantities or the order in which they appear, and if by just inserting parentheses, an equation somehow begins to make perfect sense (i.e., if it seems to acquire a good physical meaning), then we have to wonder:

Since it is possible to insert parentheses in Eq. 17 in some other way, in some other places—to group the quantities in some other way—what physical meaning would such an alternative grouping have?

That’s a delectable possibility, potentially opening new vistas of physico-mathematical reasonings for us. So, let’s pursue it a bit.

What if the parentheses were to be inserted the following way?:

\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 20)

On the right hand-side, the terms in the second set of parentheses evaluate to a vector, as usual. However, the terms in the first set of parentheses are special.

The fact of the matter is, there is an implicit operator connecting the two vectors, and if it is made explicit, Eq. 20 would rather be written as:

\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 21)

The \otimes operator, as it so happens, is a binary operator that operates on two vectors (which in general need not be one and the same vector, even though they happen to be here, and whose order with respect to the operator does matter). It produces a new mathematical object called a tensor.

The general form of Eq. 21 is like the following:

\vec{V} = \vec{\vec{T}} \cdot \vec{U} (Eq. 22)

where we have put two arrows on the top of the tensor, to bring out the idea that it has something to do with two vectors (in a certain order). Eq. 22 may be read as the following: Begin with an input vector \vec{U}. When it is multiplied by the tensor \vec{\vec{T}}, we get another vector, the output vector: \vec{V}. The tensor quantity \vec{\vec{T}} is thus a mapping between an arbitrary input vector and its uniquely corresponding output vector. It also may be thought of as a unary operator which accepts a vector on its right hand-side as an input, and transforms it into the corresponding output vector.
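
If it helps, Eq. 21 and Eq. 22 can also be seen in concrete numbers (a minimal numpy sketch; the \otimes is realized here via numpy’s outer product, and the input vector is an arbitrary stand-in for \phi \rho \vec{U}):

import numpy as np

n_hat = np.array([0.0, 0.0, 1.0])    # unit normal
v = np.array([9.6, 4.8, 2.4])        # an arbitrary stand-in for (phi rho U)

# The tensor n_hat (x) n_hat, written out as a 3x3 matrix:
T = np.outer(n_hat, n_hat)

# Eq. 21 / Eq. 22: the tensor maps the input vector to an output vector
J = T @ v

# The same result via the grouping of Eq. 18: [(v . n_hat)] n_hat
J_check = np.dot(v, n_hat) * n_hat

print(J, J_check)    # identical; here the tensor simply picks out the n_hat-component of v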


6. “Where am I?…”

Now is the time to take a pause and ponder about a few things. Let me begin doing that, by raising a few questions for you:

Q. 6.1:

What kind of a bargain have we ended up with? We wanted to show how the flux of a scalar field \Phi must be a vector. However, in the process, we seem to have adopted an approach which says that the only way the flux—a vector—can at all be defined is in reference to a tensor—a more advanced concept.

Instead of simplifying things, we seem to have ended up complicating the matters. … Have we, really? … Can we keep the physical essentials of the approach all the same and yet avoid making a reference to the tensor concept in our definition of the flux vector? Exactly how?

(Hint: Look at the above development very carefully once again!)

Q. 6.2:

In Eq. 20, we put the parentheses in this way:

\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 20, reproduced)

What would happen if we were to group the same quantities, but alter the order of the operands for the dot operator? After all, the dot product is commutative, right? So, we could have just as easily written Eq. 20 as (call it Eq. 20′):

\vec{J} \equiv \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \hat{n} \right) (Eq. 20′)

What could be the reason why in writing Eq. 20, we might have made the choice we did?

Q. 6.3:

We wanted to define the flux vector for all fluid-mechanical flow phenomena. But in Eq. 21, reproduced below, what we ended up having was the following:

\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 21, reproduced)

Now, from our knowledge of fluid dynamics, we know that Eq. 21 seemingly stands only for one kind of flux, namely, the convective flux. But what about the diffusive flux? (To know the difference between the two, consult any good book/course-notes on CFD using FVM, e.g., Jayathi Murthy’s notes at Purdue, or Versteeg and Malalasekera’s text.)

Q. 6.4:

Try to pursue this line of thought a bit:

Start with Eq. 1 again:

q = \vec{J} \cdot \vec{S} (Eq. 1, reproduced)

Express \vec{S} as a product of its magnitude and direction:

q = \vec{J} \cdot |\vec{S}| \hat{n} (Eq. 23)

Divide both sides of Eq. 23 by |\vec{S}|:

\dfrac{q}{|\vec{S}|} = \vec{J} \cdot \hat{n} (Eq. 24)

“Multiply” both sides of Eq. 24 by \hat{n}:

\dfrac{q}{|\vec{S}|} \hat{n} = \vec{J} \cdot \hat{n} \hat{n} (Eq. 25)

We seem to have ended up with a tensor once again! (and more rapidly than in the development in section 4. above).

Now, looking at what kind of a change the left hand-side of Eq. 24 undergoes when we “multiply” it by a vector (which is: \hat{n}), can you guess something about what the “multiplication” on the right hand-side by \hat{n} might mean? Here is a hint:

To multiply a scalar by a vector is meaningless, really speaking. First, you need to have a vector space, and then, you are allowed to take any arbitrary vector from that space, and scale it up (without changing its direction) by multiplying it with a number that acts as a scalar. The result at least looks the same as “multiplying” a scalar by a vector.

What then might be happening on the right hand side?

Q.6.5:

Recall your knowledge (i) that vectors can be expressed as single-column or single-row matrices, and (ii) how matrices can be algebraically manipulated, esp. the rules for their multiplications.

Try to put the above developments using an explicit matrix notation.

In particular, pay attention to the matrix-algebraic notation for the dot product between a row- or column-vector and a square matrix, and to the effect it has on your answer to question Q.6.2 above. [Hint: Try to use the transpose operator if you reach what looks like a dead-end.]

Q.6.6.

Suppose I introduce the following definitions: All single-column matrices are “primary” vectors (whatever the hell it may mean), and all single-row matrices are “dual” vectors (once again, whatever the hell it may mean).

Given these definitions, you can see that any primary vector can be turned into its corresponding dual vector simply by applying the transpose operator to it. Taking the logic to full generality, the entirety of a given primary vector-space can then be transformed into a certain corresponding vector space, called the dual space.

Now, using these definitions, and in reference to the definition of the flux vector via a tensor (Eq. 21), but with the equation now re-cast into the language of matrices, try to identify the physical meaning of the concept of the “dual” space. [If you fail to, I will surely provide a hint.]

As a part of this exercise, you will also be able to figure out which of the two \hat{n}s forms the “primary” vector space and which forms the dual space, depending on whether the tensor product \hat{n}\otimes\hat{n} itself appears (i) before the dot operator or (ii) after the dot operator, in the definition of the flux vector. Knowing the physical meaning of the concept of the dual space of a given vector space, you can then see what the physical meaning of the tensor product of the unit normal vectors (\hat{n}s) is, here.

Over to you. [And also to the UGC/AICTE-Approved Full Professors of Mechanical Engineering in SPPU and in other similar Indian universities. [Indians!!]]

A Song I Like:

[TBD, after I make sure all LaTeX entries have come out right, which may very well be tomorrow or the day after…]