General update: Will be away from blogging for a while

I won’t be back for some 2–3 weeks or more. The reason is this.


As you know, I had started writing some notes on FVM. I would then convert my earlier, simple CFD code snippets from FDM to FVM. Then, I would pursue modeling Schrödinger’s equation using FVM. That was the plan.

But before getting to the nitty-gritty of FVM itself, I thought of jotting down a note, once and for all, putting in writing my thoughts thus far on the concept of flux.


If you remember, it was several years ago that I had mentioned on this blog that I had sort of succeeded in deriving the Navier-Stokes equation in the Eulerian but differential form (d + E for short).

… Not an achievement by any stretch of the imagination—there are tomes written on, say, differentiable manifolds and whatnot. I feel sure that deriving the NS equations in the (d + E) form would be less than peanuts for them.

Yet, the fact of the matter is: They actually don’t do that!

Show me a single textbook or a paper that does that. If not at the UG level, then at least at the PG level, but one that is written using the language of only plain calculus, as used by engineers—not that of advanced analysis.

And as to the UG/PG books from engineering:

What people normally do is to derive these equations in their integral form, whether using the Lagrangian or the Eulerian approach. That is, they adopt either the (i + L) or the (i + E) approach.

On some rare occasions, if they at all begin fluid dynamics with a differential form of the NS equations, then they invariably follow the Lagrangian approach, never the Eulerian. That is, they invariably begin with only (d + L)—even in those cases when their objective is to obtain (d + E). Then, after having derived (d + L), they simply invoke some arbitrary-looking vector calculus identities to “transform” those equations from (d + L) to (d + E).

And, worse:

They never discuss the context, meaning, or proofs of those identities. No one from the fluid dynamics or CFD side does that. And neither do the books on maths written for scientists and engineers.

The physical bases of the “transformation” process must remain a mystery.


When I started working through it a few years ago, I realized that the one probable reason why they don’t use the (d + E) form right from the beginning is this: forget the NS equations, no one understands even the much simpler idea of the flux—if it is to be couched entirely in the settings of (d + E). You see, the idea of the flux, too, always remains couched in the integral form, never the differential. For example, see Narasimhan [^]. Or, any other continuum mechanics book that impresses you.

It’s no accident that the Wiki article on Flux [^] says that it

needs attention from an expert in Physics.

And then, more important for us, the text of the article itself admits that the formula it notes, for a definition of flux in differential terms, is

an abuse of notation

See the section here [^].

Also, ask yourself: why has a formula that is free of this abuse of notation not been made available? In spite of all those tomes having been written on higher mathematics?


Further, there were also other related things I wanted to write about, like an easy pathway to the idea of tensors in general, and to that of the stress tensor in particular.

So, I thought of writing it all down once and for all, in one note. I could perhaps convert some parts of it into a paper later on. For the time being though, the note would be more in the nature of a tutorial.


I started writing down the note, I guess, on 17 August 2018. However, it kept on growing, and with growth came reorganization of material for a better hierarchy of presentation. It has already gone through some 4–5 thorough re-orgs (meaning: discarding the earlier LaTeX file entirely and starting completely afresh), and it has already become more than 10 LaTeX pages. Even then, I am nowhere near finishing it. I may be just about half-way through—even though I have been working on it for some 7–8 hours every day for the past fortnight.

Yes, writing something original is a lot of hard work. I mean “original” not in the sense of discovery, but in the sense of a lack of any directly citable material whatsoever on the topic. Forget copy-pasting. You can’t even just gather a gist of the issue so that you could cite it.

And, the trouble here is, this topic is otherwise so very mature. (It is some 150+ years old.) So, you know that if you go even partly wrong, the whole world is going to pile on you.

And that way, in my experience, when you write originally, there are at least 5–10 pages of material you typically end up throwing away for every page that makes it to the final, published version. Yes, the garbage thrown out is some 5–10 times the material retained—no matter how “simple” and “straightforward” the published material might look.

Indeed, I could even make a case that the simpler and the more straight-forward the published material looks, if it also happens to be original, then the more strenuous the effort behind it has been, on the part of the author.

Few come to grasp so simple an observation, ever, in their entire life.


As a case in point, I wish to recall here my conference paper on diffusion. [To be added here soon enough.]

I have many times silently watched people as they were going through this paper for the first time.

Typically, when engineers read it, they invariably come out with a mild expression which suggests that they probably were thinking of something like: “isn’t it all so simple and straight-forward?” Sometimes they even explicitly ask: “And, what do you say was the new contribution here?” [Even after having gone through both the abstract and the conclusion part of it, that is.]

On the other hand, on the four or five rare occasions when I have had the opportunity to watch professional mathematicians go through this paper of mine, in each case, the expression they invariably gave on finishing it was as if they still were very intently absorbed in it. In particular, they never do ask me what was new about it—they just remain deeply engaged in what looks like an exercise in “fault-finding,” i.e., in checking if any proof, theorem or lemma they had ever come across could be used in order to demolish the new idea that has been presented. Invariably, they give the same argument by way of an objection. Invariably, I explain why their argument does not address the issue I have raised in the paper. Invariably, they chuckle and then go back to the paper and to their intent thinking mode, to see if there is any other weakness to my basic argument…

Till date (even after more than a decade), they haven’t come back.

But in all cases, they were very ready to admit that they were coming across this argument for the first time. I didn’t have to explain to them that though the language and the tone of the paper looked simple enough, the argument itself was not easy to derive originally.


No, the notes which I am currently working on are nowhere near as original as that. [But yes, original, these are.]

Yet, let me confess, even as I keep plodding through it for the better part of the day, the way I have done over the past fortnight or so, I find myself dealing with a certain doubt: wouldn’t they just dismiss it all as being too obvious? as if all the time and effort I spent on it was, more or less, ill spent? as if it was all meaningless to begin with?


Anyway, I want to finish this task before resuming blogging—simply because I’ve got into a groove about it by now… I am in a complete and pure state of anti-procrastination.

… Well, as they say: Make hay while the Sun shines…


A Song I Like:
(Marathi) “dnyaandev baaL maajhaa…”
Singer: Asha Bhosale
Lyrics: P. Savalaram
Music: Vasant Prabhu

 


And to think…

Many of you must have watched the news headlines on TV this week; many might have gathered it from the ‘net.

Mumbai—and much of Maharashtra—has gone down under. Under water.

And to think that all this water is now going to go purely to waste, completely unused.

… And that, starting some time right from say February next year, we are once again going to yell desperately about water shortage, about how water-tankers have already begun plying on the “roads” near the drought-hit villages. … Maybe we will get generous and send not just 4-wheeler tankers but also an entire train to a drought-hit city or two…

Depressing!


OK. Here’s something less depressing. [H/t Jennifer Ouellette (@JenLucPiquant) ]:

“More than 2,000 years ago, people were able to create ice in the desert even with temperatures above freezing!” [^]

The write-up mentions a TED video by Prof. Aaswath Raman. Watched it out of idle interest, checked out his Web site, and found another TED video by him, here [^]. Raman cites statistics that blew me away!

They spend “only” $24 billion on supermarket refrigeration (and other food-related cooling), but they already spend $42 billion on data-center cooling!!


But, anyway, I did some further “research” and landed at a few links, like the Wiki on Yakhchal [^], on wind-catcher [^], etc. Prof. Raman’s explanation in terms of the radiative cooling was straight-forward, but I am not sure I understand the mechanism behind the use of a qanat [^] in Yakhchal/windcatcher cooling. It would be cool to do some CFD simulations though.


Finally, since I am once again out of a job (and out of all my saved money and in fact also into credit-card loans due to some health issue cropping up once again), I was just idly wondering about all this renewable energy business, when something struck me.


The one big downside of windmills is that the electricity they generate fluctuates too much. You can’t rely on it; the availability is neither 24X7 nor uniform. Studies in fact also show that in accommodating the more or less “random” output of windmills into the conventional grid, the price of electricity actually goes up—even if the cost of generation alone at the windmill tower may be lower. Further, battery technology has not improved to such a point that you could store the randomly generated electricity economically.

So, I thought, why not use that randomly fluctuating windmill electricity in just producing the hydrogen gas?

No, I didn’t let out a Eureka. Instead, I let out a Google search. After all, the hydrogen gas could be used in fuel-cells, right? Would the cost of packaging and transportation of hydrogen gas be too much? … A little searching later, I landed at this link: [^]. Ummm… No, no, no…. Why shoot it into the natural gas grid? Why not compress it into cylinders and transport by trains? How does the cost economics work out in that case? Any idea?


Addendum on the same day, but after about a couple of hours:

Yes, I did run into this link: “Hydrogen: Hope or Hype?” [^] (with all the links therein, and then, also this: [^]).

But before running into those links, even as my googling on “hydrogen fuel energy density” still was in progress, I thought of this idea…

Why at all transport the hydrogen fuel from the windmill farm site to elsewhere? Why not simply install a fuel cell electricity generator right at the windmill farm? That is to say, why not use the hydrogen fuel generated via electrolysis as a flywheel of sorts? Get the idea? You introduce a couple of steps in between the windmill’s electricity and the conventional grid. But you also take out the fluctuations, the bad score on the 24X7 availability. And, you don’t have to worry about the transportation costs either.

What do you think?


Addendum on 12th July 2018, 13:27 hrs IST

Further, I also browsed a few links that explore another solution: using compressed air. See a press report [^], and a technical paper [^]. (The PDF of the paper is available, but the paper would be accessible only to mechanical engineers. Later Update: As to the press report, well, the company it talks about has already merged with another company, and has abandoned the above-ground storage of compressed air [^].)

I think that such a design reduces the number of steps of energy conversions. However, that does not necessarily mean that the solution involving hydrogen fuel generation and utilization (both right at the wind-farm) isn’t going to be economical.

Economics determines (or at least must determine) the choice. Enough on this topic for now. Wish I had a student working with me; I could have then written a paper after studying the solution I have proposed above. (The idea is worth a patent too. Too bad I don’t have the money to file one. Depressing, once again!!)


OK. Enough for the time being. I may later on add the songs section if I feel like it. And, iterative modifications will always be done, but will be mostly limited to small editorial changes. Bye for now.

 

Some suggested time-pass (including ideas for Python scripts involving vectors and tensors)

Actually, I am busy writing down some notes on scalars, vectors and tensors, which I will share once they are complete. No, nothing great or very systematic; these are just a few notes taken down here and there, mainly for myself. More like a formula cheat-sheet; but the topic is complicated enough that it was necessary to have them all in one place. Once ready, I will share them. (They may get distributed as extra material on my upcoming FDP (faculty development program) on CFD, too.)

While I remain busy in this activity, and thus stay away from blogging, you can do a few things:


1.

Think about it: You can always build a unique tensor field from any given vector field, say by taking its gradient. (Or, you can build yet another unique tensor field, by taking the Kronecker product of the vector field variable with itself. Or, yet another one by taking the Kronecker product with some other vector field, even just the position field!). And, of course, as you know, you can always build a unique vector field from any scalar field, say by taking its gradient.

So, you can write a Python script to load a B&W image file (or load a color .PNG/.BMP/even .JPEG, and convert it into a gray-scale image). You can then interpret the gray-scale intensities of the individual pixels as the local scalar field values existing at the centers of cells of a structured (square-cell) mesh, and numerically compute the corresponding gradient vector and tensor fields.

Alternatively, you can also interpret the RGB (or HSL/HSV) values of a color image as the x-, y-, and z-components of a vector field, and then proceed to calculate the corresponding gradient tensor field.

Write the output in XML format.
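
If a starting point helps, here is a minimal Python sketch of this first exercise. NumPy, Pillow, the file name “input.png”, and a unit cell spacing are all my assumptions, not part of the exercise; the XML output, too, is left to you:

import numpy as np
from PIL import Image

# The gray-scale image, read as the scalar field phi (one value per cell).
phi = np.asarray(Image.open("input.png").convert("L"), dtype=float)

# Vector field: the gradient of phi. np.gradient differentiates along each
# axis in turn (axis 0 taken as y, axis 1 as x), assuming unit cell spacing.
dphi_dy, dphi_dx = np.gradient(phi)

# Tensor field: the gradient of the gradient, i.e. the four second
# derivatives of phi, stored as four separate 2D arrays.
T_yy, T_yx = np.gradient(dphi_dy)
T_xy, T_xx = np.gradient(dphi_dx)

print(phi.shape, dphi_dx.shape, T_xx.shape)  # all equal: one value per cell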


2.

Think about it: You can always build a unique vector field from a given tensor field, say by taking its divergence. Similarly, you can always build a unique scalar field from a vector field, say by taking its divergence.

So, you can write a Python script to load a color image, and interpret the RGB (or HSL/HSV) values now as the xx-, xy-, and yy-components of a symmetrical 2D tensor, and go on to write the code to produce the corresponding vector and scalar fields.
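
And a minimal sketch for this second exercise, under the same assumptions as before (NumPy + Pillow, a hypothetical file name “input.png”, unit cell spacing):

import numpy as np
from PIL import Image

# Read the RGB channels as the xx-, xy- and yy-components of a symmetric
# 2D tensor field (so that T_yx = T_xy everywhere).
rgb = np.asarray(Image.open("input.png").convert("RGB"), dtype=float)
T_xx, T_xy, T_yy = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Vector field from the tensor field: v_i = dT_ij/dx_j, i.e. the divergence.
# Axis 0 is taken as y and axis 1 as x, with unit cell spacing.
v_x = np.gradient(T_xx, axis=1) + np.gradient(T_xy, axis=0)
v_y = np.gradient(T_xy, axis=1) + np.gradient(T_yy, axis=0)

# Scalar field from the vector field: again the divergence.
s = np.gradient(v_x, axis=1) + np.gradient(v_y, axis=0)

print(v_x.shape, s.shape)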


Yes, as my resume shows, I was going to write a paper on a simple, interactive, pedagogical software tool called “ToyDNS” (from Toy + Displacements, Strains, Stresses). I had written an extended abstract, and it had even got accepted at a renowned international conference. However, at that time, I was in an industrial job, and didn’t get the time to write the software or the paper. Even later on, the matter kept slipping.

I now plan to surely take this up on priority, as soon as I am done with (i) the notes currently in progress, and immediately thereafter, (ii) my upcoming stress-definition paper (see my last couple of posts here and the related discussion at iMechanica).

Anyway, the ideas in points 1. and 2. above were, originally, a part of my planned “ToyDNS” paper.


3.

You can induce a “zen-like” state in yourself, or if not that, then at least a “TV-watching” state (actually, something better than that), simply by following this URL [^], and pouring all your valuable hours into it. … Or who knows, you might also turn into a closet meteorologist, just like me. [And don’t tell anyone, but what they show here is actually a vector field.]


4.

You can listen to this song in the next section…. It’s one of those flowy things which have come to us from that great old Grand-Master, viz., SD Burman himself! … Other songs falling in this same sub-sub-genre include, “yeh kisine geet chheDaa,” and “ThanDi hawaaein,” both of which I have run before. So, now, you go enjoy yet another one of the same kind—and quality. …


A Song I Like:

[It’s impossible to figure out whose contribution is greater here: SD’s, Sahir’s, or Lata’s. So, this is one of those happy circumstances in which the order of the listing of the credits is purely incidental … Also recommended is the video of this song. Mona Singh (aka Kalpana Kartik (i.e. Dev Anand’s wife, for the new generation)) is sooooo magical here, simply because she is so… natural here…]

(Hindi) “phailee huyi hai sapanon ki baahen”
Music: S. D. Burman
Lyrics: Sahir
Singer: Lata Mangeshkar


But don’t forget to write those Python scripts….

Take care, and bye for now…

 

How time flies…

I plan to conduct a smallish FDP (Faculty Development Program), for junior faculty, covering the basics of CFD sometime soon (maybe starting in the second half of February or early March or so).

During my course, I plan to give out some simple, pedagogical code that even non-programmers could easily run, and hopefully find easy to comprehend.


Don’t raise difficult questions right away!

Don’t ask me why I am doing it at all—especially given the fact that I myself never learnt my CFD in a class-room/university course setting. And especially given the fact that excellent course materials and codes already exist on the ‘net (e.g. Prof. Lorena Barba’s course, Prof. Atul Sharma’s book and Web site, to pick just two of the many resources already available).

But, yes, come to think of it, your question, by itself, is quite valid. It’s just that I am not going to entertain it.

Instead, I am going to ask you to recall that I am both a programmer and a professor.

As a programmer, you write code. You want to write code, and you do it. Whether better code already exists or not is not a consideration. You just write code.

As a professor, you teach. You want to teach, and you just do it. Whether better teachers or course-ware already exist or not is not a consideration. You just teach.

Admittedly, however, teaching is more difficult than coding. The difference here is that coding requires only a computer (plus software-writing software, of course!). But teaching requires other people! People who are willing to sit in front of you, at least faking a rapt sort of attention.

But just the way, as a programmer, you don’t worry whether you know the algorithm or not when you fire up your favorite IDE, similarly, as a professor, you don’t worry whether you will get students or not.

And then, one big advantage of being a senior professor is that you can always “c” your more junior colleagues, where “c” stands for {convince, confuse, cajole, coax, compel, …} to attend. That’s why, I am not worried—not at least for the time being—about whether I will get students for my course or not. Students will come, if you just begin teaching. That’s my working mantra for now…


But of course, right now, we are busy with our accreditation-related work. However, by February/March, I will become free—or at least free enough—to be able to begin conducting this FDP.


As my material for the course progressively gets ready, I will post some parts of it here. Eventually, by the time the FDP gets over, I would have uploaded all the material together at some place or the other. (May be I will create another blog just for that course material.)

This blog post was meant to note something on the coding side. But then, as usual, I ended up having this huge preface at the beginning.


When I was doing my PhD in the mid-noughties, I wanted a good public domain (preferably open source) mesh generator. There were several of them, but mostly on the Unix/Linux platform.

I had nothing basically against Unix/Linux as such. My problem was that I found it tough to remember the line commands. My working memory is relatively poor, very poor. And that’s a fact; I don’t say it out of any (false or true) modesty. So, I found it difficult to remember all those shell and system commands and their options. Especially painful for me was to climb up and down a directory hierarchy, just to locate a damn file and open it already! Given my poor working memory, I had to have the entire structure laid out in front of me, instead of remembering commands or file names from memory. Only then could I work fast enough to be effective enough a programmer. And so, I found it difficult to use Unix/Linux. Ergo, it had to be Windows.

But, most of this Computational Science/Engineering code was not available (or even compilable) on Windows, back then. Often, these codes were buggy. In the end, I ended up using Bojan Niceno’s code, simply because it was in C (which I converted into C++), and because it was compilable on Windows.

Then, a few years later, when I was doing my industrial job in an FEM-software company, once again there was this requirement of an integrable mesh generator. It had to be: on Windows; open source; small enough, with not too many external dependencies (such as the Boost library or others); compilable using “the not really real” C++ compiler (viz. VC++ 6); one that was not very buggy or still was under active maintenance; and one more important point: the choice had to be respectable enough to be acceptable to the team and the management. I ended up using Jonathan Shewchuk’s Triangle.

Of course, all this along, I already knew about Gmsh, CGAL, and others (purely through my ‘net searches; none told me about any of them). But for some or the other reason, they were not “usable” by me.

Then, during the mid-teens (2010s), I went into teaching, and software development naturally took a back-seat.

A lot of things changed in the meanwhile. We all moved to 64-bit. I moved to Ubuntu for several years, and as the Idea NetSetter stopped working on the latest Ubuntu, I had no choice but to migrate back to Windows.

I then found that a lot of the platform wars had already disappeared. Windows (and Microsoft in general) had become not only better but also more accommodating of the open source movement; the Linux movement had become mature enough not to look down upon the GUI users as mere script-kiddies; etc. In general, inter-operability had improved by leaps and bounds. Open Source projects were not only being released but also now being developed on Windows, not just on Unix/Linux. One possible reason why both the camps might suddenly have begun showing so much love to each other perhaps was that the mobile platform had come to replace the PC platform as the avant-garde choice of software development. I don’t know, because I was away from the s/w world, but I am simply guessing that that could also be an important reason. In any case, code could now easily flow back and forth between the two platforms.

Another thing to happen during my absence was: the wonderful development of the Python eco-system. It was always available on Ubuntu, and had made my life easier over there. After all, Python had a less whimsical syntax than many other alternatives (esp. the shell scripts); it carried all the marks of a real language. There were areas of discomfort. The one thing about Python which I found whimsical (and still do) is the lack of the braces for defining scopes. But such areas were relatively easy to overlook.

At least in the area of Computational Science and Engineering, Python had made it enormously easier to write ambitious codes. Just check out a C++ code for MPI for cluster computing, vs. the same code written in Python. Or, think of not having to write ridiculously fast vector classes (or having to compile disparate C++ libraries using their own make systems and compiler options, and then to make them all work together). Or, think of using libraries like LAPACK. No more clumsy wrappers, and no more having to keep on repeating scope-resolution operators and namespaces bundling in ridiculously complex template classes. Just import NumPy or SciPy, and proceed to your work.
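
Just to make that last point concrete: here is the entire ceremony that solving a linear system via LAPACK now takes (the numbers are only illustrative; np.linalg.solve calls LAPACK’s gesv routine underneath):

import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)  # LAPACK doing the work, behind the scenes
print(x)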

So, yes, I had come to register in my mind the great success story being forged by Python, in the meanwhile. (BTW, in case you don’t know, the name of the language comes from a British comedy TV serial, not from the whole-animal swallowing creep.) But as I said, I was now into academia, into core engineering, and there simply wasn’t much occasion to use any language, C++, Python or any other.

One more hindrance went away when I “discovered” that the PyCharm IDE existed! It not only was free, but also had VC++ key-bindings already bundled in. W o n d e r f u l ! (I would have no working memory to relearn yet another set of key-bindings, you see!)

In the meanwhile, VC++ anyway had become very big, very slow and lethargic, taking forever for the IntelliSense ever to get to produce something, anything. The older, lightweight, lightning-fast, and overall so charming IDE, i.e. the VC++ 6, had given way, because of the .NET platform, to this new IDE which behaved as if it was designed to kill the C++ language. My forays into using Eclipse CDT (with VC++ key-bindings) were only partially successful. Eclipse was no longer buggy; it had begun working really well. The major trouble here was: there was no integrated help at the press of the “F1” key. Remember my poor working memory? I had to have that F1 key opening up the .chm help file at just the right place. But that was not happening. And, debug-stepping through the code still was not as seamless as I had gotten used to, in the VC++ 6.

But with PyCharm + the Visual Studio key bindings, most of my concerns evaporated. Being an interpreted language, Python always would have an advantage as far as debug-stepping through the code is concerned. That’s the straight-forward part. But the real game-changer for me was: the maturation of the entire Python eco-system.

Every library you could possibly wish for was there, already available, like Aladdin’s genie standing with folded hands.

OK. Let me give you an example. You think of doing some good visualization. You have MatPlotLib. And a very helpful help file, complete with neat examples. No, you want more impressive graphics, like, say, volume rendering (voxel visualization). You have the entire VTK wrapped in; what more could you possibly want? (Windows vs. Linux didn’t matter.) But you instead want to write some custom-code, say for animation? You have not just one, not just two, but literally tens of libraries covering everything: from OpenGL, to scene-graphs, to computational geometry, to physics engines, to animation, to games-writing, and what not. Windowing? You have the MFC-style wxWidgets, already put into a Python avatar as wxPython. (OK, OpenGL still gives trouble with wxPython for anything ambitious. But such things are rather isolated instances when it comes to the overall Python eco-system.)

And, closer to my immediate concerns, I was delighted to find that, by now, both OpenFOAM and Gmsh had become neatly available on Windows. That is, not just “available,” i.e., not just as sources that can be read, but also working as if the libraries were some shrink-wrapped software!

Availability on Windows was important to me, because, at least in India, it’s the only platform of familiarity (and hence of choice) for almost all of the faculty members from any of the e-school departments other than CS/IT.

Hints: For OpenFOAM, check out blueCFD instead of running it through Docker. It’s clean, and indeed works as advertised. As to Gmsh, ditto. And, it also comes with Python wrappers.

While the availability of OpenFOAM on Windows was only too welcome, the fact is, its code is guaranteed to be completely inaccessible to a typical junior faculty member from, say, a mechanical or a civil or a chemical engineering department. First, OpenFOAM is written in real (“templated”) C++. Second, it is very bulky (millions of lines of code, maybe?). Clearly beyond the comprehension of a guy who has never seen more than 50 lines of C code at a time in his life before. Third, it requires the GNU compiler, a special make environment, and a host of dependencies. You simply cannot open OpenFOAM and show how those FVM algorithms from Patankar’s/Versteeg & Malalasekera’s book do their work under its hood. Neither can you ask your students to change a line here or there, maybe add a line to produce an additional file output, just for bringing out the actual working of an FVM algorithm.

In short, OpenFOAM is out.

So, I have decided to use OpenFOAM only as a “backup.” My primary teaching material will only be Python snippets. The students will also get to learn how to install OpenFOAM and run the simplest tutorials. But the actual illustrations of the CFD ideas will be done using Python. I plan to cover only FVM and only simpler aspects of that. For instance, I plan to use only structured rectangular grids, not non-orthogonal ones.

I will write code that (i) generates mesh, (ii) reads mesh generated by the blockMesh of OpenFOAM, (iii) implements one or two simple BCs, (iv) implements the SIMPLE algorithm, and (v) uses MatPlotLib or ParaView to visualize the output (including any intermediate outputs of the algorithms).

I may then compare the outputs of these Python snippets with a similar output produced by OpenFOAM, for one or two of the simplest cases, like a simple laminar flow over a step. (I don’t think I will be covering VOF or any other multi-phase technique. My course is meant to be covering only the basics.)
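
To indicate the kind of snippet I have in mind, here is a minimal sketch: steady 2D diffusion of a scalar phi, done FVM-style on a uniform square grid, with a crude ghost-cell treatment of the Dirichlet BCs and plain Jacobi iterations. The grid size, the boundary value, and the tolerance are all arbitrary picks of mine:

import numpy as np
import matplotlib.pyplot as plt

N = 40                          # cells per side, uniform square cells
phi = np.zeros((N + 2, N + 2))  # cell-centred field, with ghost cells
phi[0, :] = 100.0               # hot top wall; the other walls stay at 0
                                # (ghost cells pinned to the wall value: a
                                # crude, first-order Dirichlet treatment)
for it in range(20000):
    old = phi.copy()
    # FVM balance over an interior cell: with unit diffusivity and equal
    # face areas, the discrete equation is the average of the neighbours.
    phi[1:-1, 1:-1] = 0.25 * (old[:-2, 1:-1] + old[2:, 1:-1] +
                              old[1:-1, :-2] + old[1:-1, 2:])
    if np.max(np.abs(phi - old)) < 1e-6:
        break

plt.imshow(phi[1:-1, 1:-1]); plt.colorbar(); plt.show()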

But not having checked Gmsh recently, and thus still carrying my old impressions, I was almost sure I would have to write something quick in Python to convert BMP files (showing geometry) into mesh files (with each pixel turning into a finite volume cell). The trouble with this approach was, the ability to impose boundary conditions would be seriously limited. So, I was a bit worried about it.

But then, last week, I just happened to check Gmsh, just to be sure, you know! And, WOW! I now “discovered” that Gmsh is already all Python-ed in. Great! I just tried it, and found that it works, as bundled. Even on Windows. (Yes, even on Win7 (64-bit), SP1.)
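
For instance, something like the following worked for me, as bundled. (A sketch from my trials; the exact call names may differ a little across Gmsh versions: older releases spelled addCurveLoop as addLineLoop.)

import gmsh

gmsh.initialize()
gmsh.model.add("square")
lc = 0.1  # target mesh size near each point
pts = [gmsh.model.geo.addPoint(x, y, 0, lc)
       for (x, y) in [(0, 0), (1, 0), (1, 1), (0, 1)]]
lines = [gmsh.model.geo.addLine(pts[i], pts[(i + 1) % 4]) for i in range(4)]
loop = gmsh.model.geo.addCurveLoop(lines)
gmsh.model.geo.addPlaneSurface([loop])
gmsh.model.geo.synchronize()
gmsh.model.mesh.generate(2)  # generate the 2D triangulation
gmsh.write("square.msh")
gmsh.finalize()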

I was delighted, excited, even thrilled.

And then, I began “reflecting.” (Remember I am a professor?)

I remembered the times when I used to sit in a cyber-cafe, painfully downloading source code libraries over a single 64 kbps connection which would be shared in that cyber-cafe across 6–8 PCs, without any UPS or backups in case the power went out. I would download the sources that way at the cyber-cafe, take them home to a Pentium machine running Win2K, try to open and read the source, only to find that I had forgotten to do the CRLF conversion first! And then, the sources wouldn’t compile because the make environment wouldn’t be available on Windows. Or something or the other of that sort. But still, I fought on. I remember having downloaded not only the OpenFOAM sources (with the hope of finding some way to compile them on Windows), but also MPICH2, PETSc 2.x, CGAL (some early version), and what not. Ultimately, after my valiant tries at the machine for a week or two, “nothing is going to work here,” I would eventually admit to myself.

And here is the contrast. I have a 4G connection, so I can comfortably sit at home and use the Python pip (or PyCharm’s Project Interpreter) to download or automatically update all the required libraries, even the heavy-weights like what they bundle inside SciPy and NumPy, or the VTK. I no longer have to manually sort out version incompatibilities or platform incompatibilities. I know I could develop on Ubuntu if I want to, and the student would be able to run the same thing on Windows.

Gone are those days. And how swiftly, it seems now.

How time flies…


I will be able to come back only next month because our accreditation-related documentation work has now gone into its final, culminating phase, which occupies the rest of this month. So, excuse me until sometime in February, say until 11th or so. I will sure try to post a snippet or two on using Gmsh in the meanwhile, but it doesn’t really look at all feasible. So, there.

Bye for now, and take care…


A Song I Like:

[Tomorrow is (Sanskrit, Marathi) “Ganesh Jayanti,” the birth-day of Lord Ganesha, which also happens to be the auspicious (Sanskrit, Marathi) “tithee” (i.e. lunar day) on which my mother passed away, five years ago. In her fond remembrance, I here run one of those songs which both of us liked. … Music is strange. I mean, a song as mature as this one, but I remember, I still had come to like it even as a school-boy. May be it was her absent-minded humming of this song which had helped? … may be. … Anyway, here’s the song.]

(Hindi) “chhup gayaa koi re, door se pukaarake”
Singer: Lata Mangeshkar
Music: Hemant Kumar
Lyrics: Rajinder Kishan

 

A prediction. Also, a couple of wishes…

The Prediction:

While the week of the Nobel prizes always has a way to generate a sense of suspense, of excitement, and even of wonderment, as far as I am concerned, the one prize that does that in the real sense to me is, of course, the Physics Nobel. … Nothing compares to it. Chemistry can come close, but not always. [And, Mr. Nobel was a good guy; he instituted no prize for maths! [LOL!]]. …

The Physics Nobel is the King of all awards in all fields, as far as I am concerned.

That’s why, this year, I have this feeling of missing something. … The reason is, this year’s Physics Nobel is already “known”; it will go to Kip Thorne and pals.

[I will not eat crow even if they don’t get it. [… Unless, of course, you know a delicious recipe or two for the same, and also demonstrate it to me, complete with you sampling it first.]]

But yes, Kip Thorne richly deserves it, and he will get it. That’s the prediction. I wanted to slip it in even if only a few hours before the announcement arrives.

I will update this post later right today/tonight, after the Physics Nobel is actually announced.


Now let me come to the couple of wishes, as mentioned in the title. I will try to be brief. [Have been too busy these days… OK. Will let you know. We are going in for accreditation, and so, it’s been all heavy documentation-related work for the past few months. Despite all that hard-work, we still have managed to slip a bit on the progress, and so, currently, we are working on all week-ends and on most public holidays, too. [Yes, we came to work yesterday.] So, it’s only somehow that I manage to find some time to slip in this post—which is written absolutely on the fly, with no second thoughts or re-reading before posting. … So excuse me if there is a bit of lack of balance in the presentation, and of course, typos etc.]


Wish # 1:

The first wish is that a Physics Nobel should go, in a combined way, to what actually are two separate, but very intimately related, and most significant advances in the physical understanding of man: (i) chaos theory (including fractals) and (ii) catastrophe theory.

If you don’t like the idea of two ideas being given a single Nobel, then, well, let me put it this way: the Nobel should be given for achieving the most significant advancements in the field of differential nonlinearities, for very substantial progress in the physical understanding of the behaviour of nonlinear physical systems, forging pathways for predictive capacity.

Let me emphasize, this has been one of the most significant advances in physics in the last century. No, saying so is emphatically not a hyperbole.

And, yes, it’s an advance in physics, primarily, and then, also in maths—but only secondarily.

… It’s unfortunate that an advancement which has been this remarkable never did register as such with most of the S&T “manpower”, esp., engineers and practical designers. It’s also unfortunate that the twin advancement arrived on the scene at the time of bad cultural (even epistemological) trends, and so, the advancements got embedded in a fabric of hyperbole, even nonsense.

But regardless of the cultural tones in which the popular presentations of these advancements (esp. of the chaos theory) got couched, taken as a science, the studies of nonlinearity in physical systems have been a very, very original, and a very, very creative, advancement. It needs to be recognized as such.

That way, I don’t much care for what it helped produce on the maths side of it. But yes, even a not very extraordinarily talented undergraduate in CS (one with a special interest in deterministic methods in cryptography) would be able to tell you how much light got shone on their discipline because of the catastrophe and chaos theories.

The catastrophe theory has been simply marvellous in one crucial aspect: it actually pushed the boundaries of what is understood by the term: mathematics. The theory has been daring enough to propose, literally for the first time in the entire history of mankind, a well-refined qualitative approach to an infinity of quantitative processes taken as a group.

The distinction between the qualitative and the quantitative had kept philosophers (and laymen) pre-occupied for millennia. But the nonlinear theory has been the first theoretical approach that tells you how to spot and isolate the objective bases for distinguishing what we consider as the qualitative changes.

Remove the understanding given by the nonlinear theory—by the catastrophe-theoretical approach—and, once in the domain of the linear theory, the differences in kind immediately begin to appear as more or less completely arbitrary. There is no place in the theory for them—the qualitative distinctions are external to the theory, because a linear system always behaves in qualitatively the same way no matter what quantitative changes are made, at any scale, to any of the controlling parameters. Since in the linear theory the qualitative changes are not produced from within the theory itself, such distinctions must be imported into it out of considerations that are, in principle, external to the theory.

People often confuse such imports with “applications.” No, when it comes to the linear theory, it’s not the considerations of applications which can be said to be driving any divisions into qualitative changes. The qualitative distinctions are basically arbitrary in a linear theory. It is important to realize that the usual question, “Now where do we draw the line?”, is simply superfluous once you are within the domain of the linear systems. There are no objective grounds on the basis of which such distinctions can be made.

Studies of the nonlinear phenomena sure do precede the catastrophe and the chaos theories. Even in the times before these two theories came on the scene, applied physicists would think of certain ideas such as differences of regimes, esp. in the areas like fluid dynamics.

But to understand the illuminating power of the nonlinear theory, just catch hold of an industrial CFD guy (or a good professor of fluid dynamics from a good university [not, you know, from SPPU or similar universities]), and ask him whether there can be any deeper theoretical significance to the procedure of the Buckingham Pi Theorem, to the necessity, in his art (or science) of having to use so many dimensionless numbers. (Every mechanical/allied engineering undergraduate has at least once in life cursed the sheer number of them.) The competent CFD guy (or the good professor) would easily be at a loss. Then, toss a good book on the Catastrophe Theory to him, leave him alone for a couple of weeks or may be a month, return, and raise the same question again. He now may or may not have a very good, “flowy” sort of a verbal answer ready for you. But one look at his face would tell you that it has now begun to reflect a qualitatively different depth of physical understanding even as he tries to tackle that question in his own way. That difference arises only because of the Catastrophe Theory.

As to the Chaos Theory (and I club the fractal theory right in with it), more people are likely to know about it, and so, I don’t have to wax a lot (whether eloquently or incompetently). But let me tell you one thing.

Feigenbaum’s discovery of the universal constant remains, to my mind, one of the most ingenious advancements in the entire history of physics, even of science. Especially given the experimental equipment with which he made that discovery—a handheld HP calculator (not a computer), in the mid-seventies! … And yes, getting to that universal constant was, if you ask me, an act of discovery, and not of invention. (Invention was very intimately involved in the process; but the overall act and the end-product was one of discovery.)
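
If you want a quick numerical taste of that territory, here is a minimal sketch (the parameter range and the iteration counts are arbitrary picks of mine) which plots the period-doubling cascade of the logistic map, i.e., the very cascade whose successive parameter spacings shrink by Feigenbaum’s universal ratio of 4.669…:

import numpy as np
import matplotlib.pyplot as plt

rs = np.linspace(2.8, 4.0, 2000)  # the map parameter, swept over a range
x = 0.5 * np.ones_like(rs)
for _ in range(500):              # let the transients die out first
    x = rs * x * (1.0 - x)
for _ in range(100):              # then record the attractor itself
    x = rs * x * (1.0 - x)
    plt.plot(rs, x, ',k', alpha=0.25)
plt.xlabel('r'); plt.ylabel('x'); plt.show()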

So, here is a wish that these fundamental studies of the nonlinear systems get their due—the recognition they so well deserve—in the form of a Physics Nobel.

…And, as always, the sooner the better!


Wish # 2:

The second wish I want to put up here is this: I wish there was some commercial/applied artist, well-conversant with the “art” of supplying illustrations for a physics book, who also was available for a long-term project I have in mind.

To share a bit: Years ago (actually, almost two decades ago, in 1998 to be precise), I had made a suggestion that novels by Ayn Rand be put in the form of comics. As far as I was concerned, the idea was novel (i.e. new). I didn’t know at that time that a comics-book version of The Fountainhead had already been conceived of by none other than Ayn Rand herself, and it, in fact, had also been executed. In short, there was a comics-book version of The Fountainhead. … These days, I gather, they are doing something similar for Atlas Shrugged.

If you think about it, my idea was not at all a leap of imagination. Newspapers (even those in India) have been carrying comic strips for decades (right since before my own childhood), and Amar Chitra Katha was coming of age just when I was. (It was founded in 1967 by Mr. Pai.)

Similarly, conceiving of a comics-like book for physics is not at all a very creative act of imagination. In fact, it is not even original. Everyone knows those books by that Japanese linguistics group, the books on topics like the Fourier theory.

So, no claim of originality here.

It’s just that for my new theory of QM, I find that the format of a comics-book would be most suitable. (And what the hell if physicists don’t take me seriously because I put it in this form first. Who cares what they think anyway!)

Indeed, I would even like to write/produce some comics books on maths topics, too. Topics like grads, divs, curls, tensors, etc., eventually. … Guess I will save that part for keeping me preoccupied during my retirement. BTW, my retirement is not all that far away; it’s going to be here pretty soon, right within just five years from now. (Do one thing: Check out what I was writing, say in 2012 on this blog.)

But the one thing I would like to write/produce right in the more immediate future is: the comics book on QM, putting forth my new approach.

So, in the closing, here is a request. If you know some artist (or an engineer/physicist with fairly good sketching/computer-drawing skills), one who has time at hand, has the capacity to stay with a sizeable project, and won’t ask for money for it (a fair share in the royalty is a given—provided we manage to find a publisher first, that is), then please do bring this post to his notice.

 


A Song I Like:

And, finally, here is the Marathi song I had promised you the last time round. It’s a fusion of what to my mind is one of the best tunes Shrinivas Khale ever produced, with the best justice done to the words and the tune by the singer. Imagine anyone else in her place, and you will immediately come to know what I mean. … Pushpa Pagdhare easily takes this song to the levels of the very best by the best, including Lata Mangeshkar. [Oh yes, BTW, congrats are due to the selection committee of this year’s Lata Mangeshkar award, for selecting Pushpa Pagdhare.]

(Marathi) “yeuni swapnaat maajhyaa…”
Singer: Pushpa Pagdhare
Music: Shrinivas Khale
Lyrics: Devakinandan Saraswat

[PS: Note: I am going to come back and add an update once this year’s Physics Nobel is announced. At that time (or tonight) I will also try to streamline this post.

Then, I will be gone off the blogging for yet another couple of weeks or so—unless it’s a small little “kutty” post of the “Blog-Filler” kind or two.]

 

Fluxes, scalars, vectors, tensors…. and, running in circles about them!

0. This post is written for those who know something about Thermal Engineering (i.e., fluid dynamics, heat transfer, and transport phenomena), say up to the UG level at least. [A knowledge of Design Engineering, in particular of the tensors as they appear in solid mechanics, would be helpful to have, but is not necessary. After all, contrary to what many UGC- and AICTE-approved (Full) Professors of Mechanical Engineering teaching ME (Mech – Design Engineering) courses in SPPU and other Indian universities believe, tensors appear not only in solid mechanics but also in fluid mechanics—and, in fact, the fluids phenomena make it (if only ever so slightly) easier to understand this concept. [But all these cartoon characters, even if they don’t know even so plain and simple a fact, can always be fully relied upon (by anyone) to raise objections about my Metallurgy background, when it comes to my own approval, at any time! [Indians!!]]]

In this post, I write a bit about the following question:

Why is the flux \vec{J} of a scalar \phi a vector quantity, and not a mere number (which is aka a “scalar,” in certain contexts)? Why is it not a tensor—whatever the hell the term means, physically?

And, what is the best way to define a flux vector anyway?


1.

One easy answer is that if the flux is a vector, then we can establish a flux-gradient relationship. Such relationships happen to appear as statements of physical laws in all the disciplines wherever the idea of a continuum was found useful. So the scope of the applicability of the flux-gradient relationships is very vast.
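
(For instance, Fourier’s law of heat conduction, \vec{J} = -k \nabla T, and Fick’s first law of diffusion, \vec{J} = -D \nabla c, are both statements of precisely this flux-gradient form.)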

The reason to define the flux as a vector, then, becomes: because the gradient of a scalar field is a vector field, that’s why.

But this answer only tells us about one of the end-purposes of the concept, viz., how it can be used. And then the answer provided is: for the formulation of a physical law. But this answer tells us nothing by way of the very meaning of the concept of flux itself.


2.

Another easy answer is that if it is a vector quantity, then it simplifies the maths involved. Instead of having to remember to take the right \theta and then multiply the relevant scalar quantity by the \cos of this \theta, we can more succinctly write:

q = \vec{J} \cdot \vec{S} (Eq. 1)

where q is the amount of \Phi, the extensive scalar property transported by the fluid flowing across a given finite surface \vec{S}; \phi is the intensive quantity corresponding to \Phi; and \vec{J} is the flux of \Phi.

However, apart from being a mere convenience of notation—a useful shorthand—this answer once again touches only on the end-purpose, viz., the fact that the idea of flux can be used to calculate the amount q of the transported property \Phi.

There also is another problem with this second answer.

Notice that in Eq. 1, \vec{J} has not been defined independently of the “dotting” operation.

If you have an equation in which the very quantity to be defined itself has an operator acting on it on one side of an equation, and then, if a suitable anti- or inverse-operator is available, then you can apply the inverse operator on both sides of the equation, and thereby “free-up” the quantity to be defined itself. This way, the quantity to be defined becomes available all by itself, and so, its definition in terms of certain hierarchically preceding other quantities also becomes straight-forward.

OK, the description looks more complex than it is, so let me illustrate it with a concrete example.

Suppose you want to define some vector \vec{T}, but the only basic equation available to you is:

\vec{R} = \int \text{d} x \vec{T}, (Eq. 2)

assuming that \vec{T} is a function of position x.

In Eq. 2, first, the integral operator must operate on \vec{T}(x) so as to produce some other quantity, here, \vec{R}. Thus, Eq. 2 can be taken as a definition for \vec{R}, but not for \vec{T}.

However, fortunately, a suitable inverse operator is available here; the inverse of integration is differentiation. So, what we do is to apply this inverse operator on both sides. On the right hand-side, it acts to let \vec{T} be free of any operator, to give you:

\dfrac{\text{d}\vec{R}}{\text{d}x} = \vec{T} (Eq. 3)

It is the Eq. 3 which can now be used as a definition of \vec{T}.

In principle, you don’t have to go to Eq. 3. In principle, you could perhaps venture to use a bit of notation abuse (the way the good folks in the calculus of variations and integral transforms always did), and say that the Eq. 2 itself is fully acceptable as a definition of \vec{T}. IMO, despite the appeal to “principles”, it still is an abuse of notation. However, I can see that the argument does have at least some point about it.

But the real trouble with using Eq. 1 (reproduced below)

q = \vec{J} \cdot \vec{S} (Eq. 1)

as a definition for \vec{J} is that no suitable inverse operator exists when it comes to the dot operator.


3.

Let’s try another way to attempt defining the flux vector, and see what it leads to. This approach goes via the following equation:

\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} (Eq. 4)

where \hat{n} is the unit normal to the surface \vec{S}, defined thus:

\hat{n} \equiv \dfrac{\vec{S}}{|\vec{S}|} (Eq. 5)

Then, as the crucial next step, we introduce one more equation for q, one that is independent of \vec{J}. For phenomena involving fluid flows, this extra equation is quite simple to find:

q = \phi \rho \dfrac{\Omega_{\text{traced}}}{\Delta t} (Eq. 6)

where \phi is the mass-density of \Phi (the scalar field whose flux we want to define), \rho is the volume-density of mass itself, and \Omega_{\text{traced}} is the volume that is imaginarily traced by that specific portion of fluid which has imaginarily flowed across the surface \vec{S} in an arbitrary but small interval of time \Delta t. Notice that \Phi is the extensive scalar property being transported via the fluid flow across the given surface, whereas \phi is the corresponding intensive quantity.

Now express \Omega_{\text{traced}} in terms of the imagined maximum normal distance from the plane \vec{S} up to which the forward moving front is found extended after \Delta t. Thus,

\Omega_{\text{traced}} = \xi |\vec{S}| (Eq. 7)

where \xi is the traced distance (measured in a direction normal to \vec{S}). Now, using the geometric property for the area of parallelograms, we have that:

\xi = \delta \cos\theta (Eq. 8)

where \delta is the traced distance in the direction of the flow, and \theta is the angle between the unit normal to the plane \hat{n} and the flow velocity vector \vec{U}. Using vector notation, Eq. 8 can be expressed as:

\xi = \vec{\delta} \cdot \hat{n} (Eq. 9)

Now, by definition of \vec{U}:

\vec{\delta} = \vec{U} \Delta t, (Eq. 10)

Substituting Eq. 10 into Eq. 9, we get:

\xi = \vec{U} \Delta t \cdot \hat{n} (Eq. 11)

Substituting Eq. 11 into Eq. 7, we get:

\Omega_{\text{traced}} = \vec{U} \Delta t \cdot \hat{n} |\vec{S}| (Eq. 12)

Substituting Eq. 12 into Eq. 6, we get:

q = \phi \rho \dfrac{\vec{U} \Delta t \cdot \hat{n} |\vec{S}|}{\Delta t} (Eq. 13)

Cancelling out the \Delta t, Eq. 13 becomes:

q = \phi \rho \vec{U} \cdot \hat{n} |\vec{S}| (Eq. 14)

Having got an expression for q that is independent of \vec{J}, we can now use it in order to define \vec{J}. Thus, substituting Eq. 14 into Eq. 4:

\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} = \dfrac{\phi \rho \vec{U} \cdot \hat{n} |\vec{S}|}{|\vec{S}|} \hat{n} (Eq. 16)

Cancelling out the two |\vec{S}|s (because it’s a scalar—you can always divide any term by a scalar (or even by a complex number) but not by a vector), we finally get:

\vec{J} \equiv \phi \rho \vec{U} \cdot \hat{n} \hat{n} (Eq. 17)
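
Before proceeding further, a quick numerical sanity check on Eq. 17 may be worth our while. A minimal sketch (all the values are arbitrary picks of mine): the \vec{J} defined by Eq. 17, when dotted with \vec{S} as in Eq. 1, does reproduce the same q as Eq. 14 gives directly.

import numpy as np

phi, rho = 2.0, 1.2
U = np.array([3.0, 1.0, 0.0])  # flow velocity vector
S = np.array([0.0, 2.0, 0.0])  # area vector of the surface
n = S / np.linalg.norm(S)      # the unit normal, as in Eq. 5

J = phi * rho * np.dot(U, n) * n                     # Eq. 17
print(np.dot(J, S))                                  # q, via Eq. 1
print(phi * rho * np.dot(U, n) * np.linalg.norm(S))  # q, via Eq. 14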


4. Comments on Eq. 17

In Eq. 17, there is this curious sequence: \hat{n} \hat{n}.

It’s a sequence of two vectors, but the vectors apparently are not connected by any of the operators that are taught in the Engineering Maths courses on vector algebra and calculus—there is neither the dot (\cdot) operator nor the cross (\times) operator appearing in between the two \hat{n}s.

But, for the time being, let’s not get too much perturbed by the weird-looking sequence. For the time being, you can mentally insert parentheses like these:

\vec{J} \equiv \left[ \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \right) \right] \hat{n} (Eq. 18)

and see that each of the two terms within the parentheses is a vector, and that these two vectors are connected by a dot operator so that the terms within the square brackets all evaluate to a scalar. According to Eq. 18, the scalar magnitude of the flux vector is:

|\vec{J}| = \left( \phi \rho \vec{U}\right) \cdot \left( \hat{n} \right) (Eq. 19)

and its direction is given by: \hat{n} (the second one, i.e., the one which appears in Eq. 18 but not in Eq. 19).


5.

We explained away our difficulty about Eq. 17 by inserting parentheses at suitable places. But this procedure of inserting mere parentheses looks, by itself, conceptually very attractive, doesn’t it?

If by not changing any of the quantities or the order in which they appear, and if by just inserting parentheses, an equation somehow begins to make perfect sense (i.e., if it seems to acquire a good physical meaning), then we have to wonder:

Since it is possible to insert parentheses in Eq. 17 in some other way, in some other places—to group the quantities in some other way—what physical meaning would such an alternative grouping have?

That’s a delectable possibility, potentially opening new vistas of physico-mathematical reasonings for us. So, let’s pursue it a bit.

What if the parentheses were to be inserted the following way?:

\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 20)

On the right hand-side, the terms in the second set of parentheses evaluate to a vector, as usual. However, the terms in the first set of parentheses are special.

The fact of the matter is, there is an implicit operator connecting the two vectors, and if it is made explicit, Eq. 20 would rather be written as:

\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 21)

The \otimes operator, as it so happens, is a binary operator that operates on two vectors (which, in general, need not be one and the same vector, as happens to be the case here; and whose order with respect to the operator does matter). It produces a new mathematical object, called a tensor.

The general form of Eq. 21 is like the following:

\vec{V} = \vec{\vec{T}} \cdot \vec{U} (Eq. 22)

where we have put two arrows on the top of the tensor, to bring out the idea that it has something to do with two vectors (in a certain order). Eq. 22 may be read as the following: Begin with an input vector \vec{U}. When it is multiplied by the tensor \vec{\vec{T}}, we get another vector, the output vector: \vec{V}. The tensor quantity \vec{\vec{T}} is thus a mapping between an arbitrary input vector and its uniquely corresponding output vector. It also may be thought of as a unary operator which accepts a vector on its right hand-side as an input, and transforms it into the corresponding output vector.
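
If a concrete picture helps here: in matrix terms (more on this in Q. 6.5 below), the tensor \hat{n} \otimes \hat{n} is simply a square matrix, and its action on a vector is the plain matrix-vector multiplication. A minimal sketch, with arbitrary values:

import numpy as np

n = np.array([0.0, 1.0, 0.0])  # the unit normal
T = np.outer(n, n)             # the tensor n (x) n, here a 3x3 matrix
U = np.array([3.0, 1.0, 0.0])  # an arbitrary input vector
print(T @ U)                   # the output vector: (U . n) n = [0, 1, 0]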


6. “Where am I?…”

Now is the time to take a pause and ponder about a few things. Let me begin doing that, by raising a few questions for you:

Q. 6.1:

What kind of a bargain have we ended up with? We wanted to show how the flux of a scalar field \Phi must be a vector. However, in the process, we seem to have adopted an approach which says that the only way the flux—a vector—can at all be defined is in reference to a tensor—a more advanced concept.

Instead of simplifying things, we seem to have ended up complicating the matters. … Have we? really? …Can we keep the physical essentials of the approach all the same and yet, in our definition of the flux vector, don’t have to make a reference to the tensor concept? exactly how?

(Hint: Look at the above development very carefully once again!)

Q. 6.2:

In Eq. 20, we put the parentheses in this way:

\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 20, reproduced)

What would happen if we were to group the same quantities, but alter the order of the operands for the dot operator?  After all, the dot product is commutative, right? So, we could have easily written Eq. 20 rather as:

\vec{J} \equiv \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \hat{n} \right) (Eq. 20′)

What could be the reason why in writing Eq. 20, we might have made the choice we did?

Q. 6.3:

We wanted to define the flux vector for all fluid-mechanical flow phenomena. But in Eq. 21, reproduced below, what we ended up having was the following:

\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 21, reproduced)

Now, from our knowledge of fluid dynamics, we know that Eq. 21 seemingly stands for only one kind of a flux, namely, the convective flux. But what about the diffusive flux? (To know the difference between the two, consult any good book/course-notes on CFD using FVM, e.g. Jayathi Murthy’s notes at Purdue, or Versteeg and Malalasekera’s text.)

Q. 6.4:

Try to pursue this line of thought a bit:

Start with Eq. 1 again:

q = \vec{J} \cdot \vec{S} (Eq. 1, reproduced)

Express \vec{S} as a product of its magnitude and direction:

q = \vec{J} \cdot |\vec{S}| \hat{n} (Eq. 23)

Divide both sides of Eq. 23 by |\vec{S}|:

\dfrac{q}{|\vec{S}|} = \vec{J} \cdot \hat{n} (Eq. 24)

“Multiply” both sides of Eq. 24 by \hat{n}:

\dfrac{q} {|\vec{S}|} \hat{n} = \vec{J} \cdot \hat{n} \hat{n} (Eq. 25)

We seem to have ended up with a tensor once again! (and more rapidly than in the development in sections 4. and 5. above).

Now, looking at what kind of a change the left hand-side of Eq. 24 undergoes when we “multiply” it by a vector (which is: \hat{n}), can you guess something about what the “multiplication” on the right hand-side by \hat{n} might mean? Here is a hint:

To multiply a scalar by a vector is, strictly speaking, meaningless. First, you need to have a vector space; then, you are allowed to take any arbitrary vector from that space and scale it (without changing its direction) by multiplying it with a number that acts as a scalar. The result at least looks the same as “multiplying” a scalar by a vector.

What then might be happening on the right hand side?

Q.6.5:

Recall your knowledge (i) that vectors can be expressed as single-column or single-row matrices, and (ii) how matrices can be algebraically manipulated, esp. the rules for their multiplications.

Try to put the above developments using an explicit matrix notation.

In particular, pay attention to the matrix-algebraic notation for the dot product between a row- or column-vector and a square matrix, and to the effect this has on your answer to the question Q.6.2 above. [Hint: Try to use the transpose operator if you reach what looks like a dead-end.]
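
If you need a leg-up on just the mechanics here (not on the interpretation—that part remains your exercise!), the following NumPy sketch shows one way of setting up the matrices; all the specific numbers are mine, made up for illustration:

```python
import numpy as np

n = np.array([[0.6], [0.8]])      # n-hat written as a single-column matrix
v = np.array([[30.0], [10.0]])    # phi*rho*U written as a single-column matrix

T = n @ n.T                       # the tensor n (x) n, as a 2 x 2 square matrix

# Matrix multiplication is not commutative, so the two groupings from
# Q. 6.2 demand operands of different shapes:
J1 = T @ v                        # (2 x 2)(2 x 1): needs a column vector
J2 = v.T @ T                      # (1 x 2)(2 x 2): needs a row vector

print(J1.ravel(), J2.ravel())     # the same numbers here—ask yourself why!
```

The transpose operator is quietly doing essential work in both n.T and v.T—which is precisely the hint given above.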

Q.6.6.

Suppose I introduce the following definitions: All single-column matrices are “primary” vectors (whatever the hell it may mean), and all single-row matrices are “dual” vectors (once again, whatever the hell it may mean).

Given these definitions, you can see that any primary vector can be turned into its corresponding dual vector simply by applying the transpose operator to it. Taking the logic to full generality, the entirety of a given primary vector-space can then be transformed into a certain corresponding vector space, called the dual space.

Now, using these definitions, and in reference to the definition of the flux vector via a tensor (Eq. 21), but with the equation now re-cast into the language of matrices, try to identify the physical meaning of the concept of the “dual” space. [If you fail to, I will surely provide a hint.]

As a part of this exercise, you will also be able to figure out which of the two \hat{n}s forms the “primary” vector space and which one forms the dual space, depending on whether the tensor product \hat{n}\otimes\hat{n} appears (i) before the dot operator or (ii) after it, in the definition of the flux vector. Knowing the physical meaning of the concept of the dual space of a given vector space, you can then see what the physical meaning of the tensor product of the unit normal vectors (\hat{n}s) is, here.

Over to you. [And also to the UGC/AICTE-Approved Full Professors of Mechanical Engineering in SPPU and in other similar Indian universities. [Indians!!]]

A Song I Like:

[TBD, after I make sure all LaTeX entries have come out right, which may very well be tomorrow or the day after…]

Machine “Learning”—An Entertainment [Industry] Edition

Yes, “Machine ‘Learning’,” too, has been one of my “research” interests for some time by now. … Machine learning, esp. ANN (Artificial Neural Networks), esp. Deep Learning. …

Yesterday, I wrote a comment about it at iMechanica. Though it was made in a certain technical context, today I thought that the comment could, perhaps, make sense to many of my general readers, too, if I supply a bit of context to it. So, let me report it here (after a bit of editing). But before coming to my comment, let me first give you the context in which it was made:


Context for my iMechanica comment:

It all began with a fellow iMechanician, one Mingchuan Wang, writing a post titled “Is machine learning a research priority now in mechanics?” at iMechanica [^]. Biswajit Banerjee responded by pointing out that

“Machine learning includes a large set of techniques that can be summarized as curve fitting in high dimensional spaces. [snip] The usefulness of the new techniques [in machine learning] should not be underestimated.” [Emphasis mine.]

Biswajit then pointed out an arXiv paper [^] in which machine learning was reported as having produced some good DFT-like results for quantum mechanical simulations, too.

A word about DFT for those who (still) don’t know about it:

DFT, i.e. Density Functional Theory, is a “formally exact description of a many-body quantum system through the density alone. In practice, approximations are necessary” [^]. DFT thus is a computational technique; it is used for simulating the electronic structure of quantum mechanical systems involving several hundreds of electrons (i.e. hundreds of atoms). Here is the obligatory link to the Wiki [^], though a better introduction perhaps appears here [(.PDF) ^]. Here is a StackExchange on its limitations [^].

Trivia: Walter Kohn received a Nobel for developing DFT (the 1998 Chemistry Nobel, shared with John Pople). It was a very, very rare instance of a Nobel being awarded for an invention—not a discovery. But the Nobel committee, once again, turned out to have put old Nobel’s money in the right place. Even if the work itself was only an invention, it directly led to a lot of discoveries in condensed matter physics! That was because DFT was fast—fast enough that it could bring the physics of larger quantum systems within the scope of (any) study at all!

And now, it seems, Machine Learning has advanced enough to be able to produce results that are similar to DFT’s, but without using any QM theory at all! The computer does have to “learn” its “art” (i.e. “skill”), but it does so from the results of previous DFT-based simulations, not from the theory at the base of DFT. But once the computer does that “learning”—and the paper shows that it is possible for a computer to do that—it is able to compute very similar-looking simulations much, much faster than even the rather fast technique of DFT itself.
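
To make the “learning from previous simulations” part concrete, here is a toy sketch of the workflow. (Everything in it is made up: the “DFT energies” below come from a Lennard-Jones-like formula standing in for the output of some pretend DFT runs, and kernel ridge regression is just one of the several regressors that such papers use.)

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# A fake training set: descriptors x, and "energies" E from pretend DFT runs
x_train = rng.uniform(0.8, 3.0, size=(200, 1))
E_train = 4.0 * ((1.0 / x_train)**12 - (1.0 / x_train)**6).ravel()

# "Learn" the descriptor -> energy mapping; no QM theory enters anywhere
model = KernelRidge(kernel="rbf", gamma=5.0, alpha=1e-6)
model.fit(x_train, E_train)

# Prediction is now a mere curve evaluation—far cheaper than a DFT run
print(model.predict(np.array([[1.2], [2.5]])))
```

The point of the sketch is only this much: once trained, the model answers by interpolation—Biswajit’s “curve fitting in high dimensional spaces”—at a tiny fraction of the cost of the original simulations.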

OK. Context over. Now here in the next section is my yesterday’s comment at iMechanica. (Also note that the previous exchange on this thread at iMechanica had occurred almost a year ago.) Since it has been edited quite a bit, I will not format it using a quotation block.


[An edited version of my comment begins]

A very late comment, but still, just because something struck me only this late… May as well share it….

I think that, as Biswajit points out, it’s a question of matching a technique to an application area where it is likely to be of “good enough” a fit.

I mean to say, consider fluid dynamics, and contrast it to QM.

In (C)FD, the nonlinearity present in the advective term is a major headache. As far as I can gather, this nonlinearity has all but been “proved” as the basic cause behind the phenomenon of turbulence. If so, using machine learning in CFD would be, by this simple-minded “analysis”, a basically hopeless endeavour. The very idea of using a potential presupposes differential linearity. Therefore, machine learning may be thought of as viable in computational Quantum Mechanics (viz. DFT), but not in the more mundane, classical mechanical, CFD.

But then, consider the role of the BCs and the ICs in any simulation. It is true that if you don’t handle nonlinearities right, then as the simulation time progresses, errors are soon enough going to multiply (sort of), and lead to a blowup—or at least a dramatic departure from a realistic simulation.

But then, also notice that there still is some small but nonzero interval of time which has to pass before a really bad amplification of the errors actually begins to occur. Now what if a new “BC-IC” gets imposed right within that time-interval—one which does show “good enough” accuracy? In this case, you can expect the simulation to remain “sufficiently” realistic-looking for a long, very long time!

Something like that seems to have been the line of thought implicit in the results reported by this paper: [(.PDF) ^].

Machine learning seems to work even in CFD because, in an interactive session, a new “modified BC-IC” is, every now and then, manually introduced by none other than the end-user himself! And, the location of the modification is precisely the region from where the flow in the rest of the domain would get most dominantly affected during the subsequent, small, time evolution.

It’s somewhat like an electron rushing through a cloud chamber. By the uncertainty principle, the electron “path” sure begins to get hazy immediately after it is “measured” (i.e. absorbed and re-emitted) by a vapor molecule at a definite point in space. The uncertainty in the position grows quite rapidly. However, what actually happens in a cloud chamber is that, before this cone of haziness becomes too big, comes along another vapor molecule, and “zaps” i.e. “measures” the electron back on to a classical position. … After a rapid succession of such going-hazy-getting-zapped process, the end result turns out to be a very, very classical-looking (line-like) path—as if the electron always were only a particle, never a wave.

Conclusion? Be realistic about how smart the “dumb” “curve-fitting” involved in machine learning can at all get. Yet, at the same time, also remain open to all the application areas where it can be made to work—even including those areas where, “intuitively”, you wouldn’t expect it to have any chance of working!

[An edited version of my comment is over. Original here at iMechanica [^]]



“Boy, we seem to have covered a lot of STEM territory here… Mechanics, DFT, QM, CFD, nonlinearity. … But where is either the entertainment or the industry you had promised us in the title?”

You might be saying that….

Well, the CFD paper I cited above was about the entertainment industry. It was, in particular, about the computer games industry. Go check out SoHyeon Jeong’s Web site for more cool videos and graphics [^], all using machine learning.


And, here is another instance connected with entertainment, even though now I am going to make it (mostly) explanation-free.

Check out the following piece of art—a watercolor landscape of a monsoon-time but placid sea-side, in fact. Let me just say that a certain famous artist produced it; in any case, the style is plain unmistakable. … Can you name the artist simply by looking at it? See the picture below:

A sea beach in the monsoons. Watercolor.

If you are unable to name the artist, then check out this story here [^], and a previous story here [^].


A Song I Like:

And finally, to those who have always loved Beatles’ songs…

Here is one song which, I am sure, most of you had never heard before. In any case, it came to be distributed only recently. When and where was it recorded? For both the song and its recording details, check out this site: [^]. Here is another story about it: [^]. And, if you liked what you read (and heard), here is some more stuff of the same kind [^].


Endgame:

I am of the Opinion that 99% of the “modern” “artists” and “music composers” ought to be replaced by computers/robots/machines. Whaddya think?

[Credits: “Endgame” used to be the way Mukul Sharma would end his weekly Mindsport column in the yesteryears’ Sunday Times of India. (The column perhaps also used to appear in The Illustrated Weekly of India before ToI began running it; at least I have a vague recollection of something of that sort, though I can’t be quite sure. … I would have been a school-boy back then, when the Weekly perhaps ran it.)]


A bit about my trade…

Even while enjoying my writer’s block, I still won’t disappoint you. … My browsing has yielded some material, and I am going to share it with you.

It all began with googling for some notes on CFD. One thing led to another, and soon enough, I was at this page [^] maintained by Prof. Praveen Chandrashekhar of TIFR Bangalore.

Do go through the aforementioned link; highly recommended. It tells you about the nature of my trade [CFD]…

As that page notes, this article had first appeared in the AIAA Student Journal. Looking at the particulars of the anachronisms, I wanted to know the precise date of the writing. Googling on the title of the article led me to a PDF document which was hidden under a “webpage-old” sub-directory, for the web pages for the ME608 course offered by Prof. Jayathi Murthy at Purdue [^]. At the bottom of this PDF document is a note that the AIAA article had appeared in the Summer of 1985. … Hmm…. Sounds right.

If you enjoy your writer’s block [the way I do], one sure way to keep it intact is to continue googling. You are guaranteed never to come out of it. I mean to say, at least as far as I know, there is no equivalent of Godwin’s law [^] on the browsing side.

Anyway, so, what I next googled on was: “wind tunnels.” I was expecting to see the Wright brothers as the inventors of the idea. Well, I was proved wrong. The history section on the Wiki page [^] mentions Benjamin Robins and his “whirling arm” apparatus to determine drag. The reference for this fact goes to a book bearing the title “Mathematical Tracts of the late Benjamin Robins, Esq,” published, I gathered, in 1761. The description of the reference adds the sub-title (or the chapter title): “An account of the experiments, relating to the resistance of the air, exhibited at different times before the Royal Society, in the year 1746.” [The emphasis in the italics is mine, of course! [Couldn’t you have just guessed it?]]

Since I didn’t know anything about the “whirling arm,” and since the Wiki article didn’t explain it either, a continuation of googling was entirely in order. [The other reason was what I’ve told you already: I was enjoying my writer’s block, and didn’t want it to go away—not so soon, anyway.] The fallout of the search was one k-12 level page maintained by NASA [^]. Typical of the government-run NASA, there was no diagram to illustrate the text. … So I quickly closed the tab, came back to the next entries in the search results, and landed on this blog post [^] by “Gina.” The name of the blog was “Fluids in motion.”

… Interesting…. You know, I knew about, you know, “Fuck Yeah Fluid Dynamics” [^] (which is a major time- and bandwidth-sink) but not about “Fluids in motion.” So I had to browse the new blog, too. [As to the FYFD, I only today discovered the origin of the peculiar name; it is given in the Science mag story here [^].]

Anyway, coming back to Gina’s blog, I then clicked on the “fluids” category, and landed here [^]… Turns out that Gina’s blog is less demanding on the bandwidth, as compared to FYFD. [… I happen to have nearly exhausted my monthly data limit of 10 GB, and the monthly renewal is on the 5th June. …. Sigh!…]

Anyway, so here I was, at Gina’s blog, and the first post in the “fluids” category was on “murmuration of starlings,” [^]. There was a link to a video… Video… Video? … Intermediate Conclusion: Writer’s blocks are costly. … Soon after, a quiet temptation thought: I must get to know what the phrase “murmuration of starlings” means. … A weighing in of the options, and the final conclusion: what the hell! [what else], I will buy an extra 1 or 2 GB add-on pack, but I gotta see that video. [Writer’s block, I told you, is enjoyable.] … Anyway, go, watch that video. It’s awesome. Also, Gina’s book “Modeling Ships and Space Craft.” It too seems to be awesome: [^] and [^].

The only way to avoid further spending on the bandwidth was to get out of my writer’s block. Somehow.

So, I browsed a bit on the term [^], and took the links on the first page of this search. To my dismay, I found that not even a single piece was helpful to me, because none was relevant to my situation: every piece of advice there was obviously written only after assuming that you are not enjoying your writer’s block. But what if you do? …

Anyway, I had to avoid any further expenditure on the bandwidth—my expenditure—and so, I had to get out of my writer’s block.

So, I wrote something—this post!


[Blogging will continue to remain sparse. … Humor apart, I am in the middle of writing some C++ code, and it is enjoyable but demanding on my time. I will remain busy with this code until at least the middle of June. So, expect the next post only around that time.]

[May be one more editing pass tomorrow… Done.]

[E&OE]


Papers must fall out…

Over the past couple of weeks or so, I’ve been going over SPH (smoothed particle hydrodynamics).

I once again went through the beginning references noted in my earlier post, here [^]. However, instead of rewriting my notes (which I lost in the last HDD crash), this time round, I went straight to programming. … In this post, let me recall what all I did.


First, I went through the great post “Why my fluids don’t flow” [^] by Tom Madams. … His blog has the title: “I am doing it wrong,” with the sub-text: “most of the time.” [Me too, Tom, me too!] This post gave a listing of what looked like a fully working C++ code. Seeing this code listing (even if the videos are no longer accessible), I had to try it.

So, I installed the Eclipse CDT. [Remember my HDD crash?—the OS on the new HDD had no IDEs/C++ compilers installed till now; I had thus far installed only Python and PyCharm]. I also installed MinGW, freeglut, Java JRE, but not all of them in the correct order. [Remember, I too do it wrong, most of the time.] I then created a “Hello World” project, and copy-pasted Tom’s code.

The program compiled well. [If you are going to try Tom’s code in Eclipse CDT + MinGW on Windows, the only issue you would now (in 2016) run into would be in the Project Settings, both in the compiler and linker settings parts, and both for OpenGL and freeglut. The latest versions of Eclipse & MinGW have undergone changes and don’t work precisely as explained even in the most helpful Web resources about this combination. … It’s not a big deal, but it’s not exactly what the slightly out-of-date online resources on this topic continue telling you either. … The options for the linker are a bit trickier to get than those for the compiler; the options for freeglut certainly are trickier to get than those for OpenGL. … If you have problems with this combination (Eclipse + MinGW on Windows 7 64-bit, with OpenGL and freeglut), then drop me a line and I will help you out.]

Tom’s program not only compiled well, but it also worked beautifully. Quite naturally, I had to change something about it.

So I removed his call to glDrawArrays(), and replaced the related code with the even older glBegin(GL_POINTS), glVertex2d(), glEnd() sort of a code. As I had anticipated,  there indeed was no noticeable performance difference. If the fluid in the original code required something like a minute (of computer’s physical time) to settle down to a certain quiescent state, then so did the one with the oldest-style usage of OpenGL. The FPS in the two cases were identical in almost all of the release runs, and they differed by less than 5–7% for the debug runs as well, when the trials were conducted on absolutely fresh cold-starts (i.e. with no ready-to-access memory pages in either physical or virtual memory).

Happy with my change, I then came to study Tom’s SPH code proper. I didn’t like the emitters. True to my core engineering background, what I wanted to simulate was the dam break. That means, all the 3000 particles would be present in the system right from the word go, thereby also having a slower performance throughout, including in the beginning. But Tom’s code was too tied up with the emitters. True to my software engineering background, rather than search and remove the emitters-related portion and thus waste my time fixing the resulting compiler errors, I felt like writing my own code. [Which true programmer doesn’t?]

So I did that, writing only stubs for the functions involving the calculations of the kernels and the accelerations. … I, however, did implement the grid-based nearest-neighbor search. Due to laziness, I simply reused the STL lists, rather than implementing the more basic (and perhaps slightly more efficient) “p->next” idiom.
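
For whatever it is worth, here is a bare-bones sketch, in Python, of that grid-based nearest-neighbor search idea (plain Python lists here stand in for the STL lists; all the names and numbers are mine, just for illustration):

```python
from collections import defaultdict

def build_grid(positions, h):
    """Hash each particle index into a square cell of side h
    (h being the SPH support radius)."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // h), int(y // h))].append(i)
    return grid

def neighbors(i, positions, grid, h):
    """Collect candidates from the 3 x 3 block of cells around particle i,
    and keep only those actually lying within a distance h."""
    xi, yi = positions[i]
    cx, cy = int(xi // h), int(yi // h)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                if j == i:
                    continue
                rx, ry = positions[j][0] - xi, positions[j][1] - yi
                if rx * rx + ry * ry < h * h:
                    found.append(j)
    return found

# Usage: rebuild the grid once per time-step, then query per particle
pts = [(0.1, 0.1), (0.15, 0.12), (0.9, 0.9)]
g = build_grid(pts, h=0.1)
print(neighbors(0, pts, g, h=0.1))   # -> [1]
```

Rebuilding the grid costs O(N) per time-step, and each query then touches only the 3 x 3 block of cells around a particle, instead of all the N particles.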

Then I once again came back to Tom’s code, and began looking more carefully at his SPH-specific computations.

What I now didn’t like were the variables defined for the near-density and the near-pressure. These quantities didn’t fit very well into my preconceived notions of how a decent SPH code ought to look.

So, I decided to deprove [which word is defined as an antonym of “improve”] this part, by taking this 2010 code from its 2007 (Becker et al.) theoretical basis to a 2003 basis (Müller et al., Eurographics).

Further following my preconceived notions, I also decided to keep the values of the physical constants (density, gas stiffness, viscosity, surface tension) the same as those for the actual water.

The code, of course, wouldn’t work. The fluid would explode as if it were a gas, not water.

I then turned my learner’s attention to David Bindel’s code (see the “Resources” section at the bottom of his page here [^]).

Visiting Bindel’s pages once again, this time round, I noticed that he had apparently written this code only as a background material for a (mere) course-assignment! It was not even an MS thesis! And here I was, still struggling with SPH, even after having spent something like two weeks of full-time effort on it! [The difference was caused by the use of the realistic physical constants, of course. But I didn’t want to simply copy-paste Tom’s or Bindel’s parameter values; I wanted to understand where they came from—what kind of physical and computational contexts made those specific values give reasonable results.]

I of course liked some of the aspects of Bindel’s code better—e.g. kernels—and so, I happily changed my code here and there to incorporate them.

But I didn’t want to follow Bindel’s normalize_mass routine. Two reasons: (i) Once again according to my preconceived notions, I wanted to first set aside a sub-region of the overall domain for the fluid; then decide with how many particles to populate it, and what lattice arrangement to follow (square? body centered-cubic? hexagonal close-packed?); based on that, calculate each particle’s radius; then compute the volume of each particle; and only then set its mass using the gross physical density of the material from which it is composed (using the volume the particle would occupy if it were to be isolated from all others, as an intermediate step). The mass of a particle, thus computed (and not assumed) would remain fixed for all the time-steps in the program. (ii) I eventually wanted a multi-phase dam-break, and so wasn’t going to assume a global constant for the mass. Naturally, my code wouldn’t be able to blindly follow Bindel on all counts.
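
In code, the procedure I have in mind for fixing the particle masses would look something like the following (a 2D, square-lattice sketch, with placeholder numbers; this is my own reading of the procedure, not Bindel’s normalize_mass routine):

```python
import numpy as np

rho0 = 1000.0    # gross physical density of the material (water), kg/m^3
dx = 0.01        # lattice spacing for a square arrangement, m

# Populate a sub-region of the overall domain (the "dam") with particles
xs = np.arange(0.0, 0.5, dx)
ys = np.arange(0.0, 1.0, dx)
positions = np.array([(x, y) for x in xs for y in ys])

# Each isolated particle "occupies" one lattice cell (an area, in 2D)
volume_per_particle = dx * dx        # would be dx**3 on a 3D cubic lattice
mass = rho0 * volume_per_particle    # computed once; fixed for all time-steps

print(len(positions), "particles, each of mass", mass, "kg (per unit depth)")
```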

I also didn’t like the version of the leapfrog he has implemented. His version requires you to maintain additional quantities of the velocities at the half time-steps (I didn’t mind that), and also to implement a separate leapfrog_start() function (which I did mind—an additional sequence of very similar-looking function calls becomes tricky to modify and maintain). So, I implemented the other version of the leapfrog, viz., the “velocity Verlet.” It has exactly the same computational properties (of being symplectic and time-reversible), the same error/convergence properties (it too is second-order accurate), but it comes with the advantage that the quantities are defined only at the integer time-steps—no half-time business, and no tricky initialization sequence to be maintained.
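
Schematically, the velocity-Verlet update goes as below (a minimal sketch of my own; accel() stands for whichever routine sums up all the SPH accelerations):

```python
def velocity_verlet_step(x, v, a, dt, accel):
    """One velocity-Verlet step; x, v, a hold the positions, velocities and
    accelerations, all defined at the same integer time-step."""
    x_new = x + v * dt + 0.5 * a * dt * dt   # update the positions first
    a_new = accel(x_new)                     # accelerations at the new positions
    v_new = v + 0.5 * (a + a_new) * dt       # average old and new accelerations
    return x_new, v_new, a_new
```

Note that accel() gets called only once per step—its result is simply carried over to the next step—so the cost per step is the same as that of the leapfrog.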

My code, of course, still didn’t work. The fluid would still explode. The reason, still, was: the parameter values. But the rest of the code now was satisfactory. How do I know this last part? Simple. Because, I commented out the calls to all the functions computing all the other accelerations, and retained only the acceleration due to gravity. I could then see the balls experiencing the correct free-fall under gravity, with the correct bouncing-back from the floor of the domain. Both the time for the ball to hit the floor as well as the height reached after bouncing were in line with what physics predicts. Thus I knew that my time integration routines were bug-free. Using some debug tracings, I also checked that the nearest-neighbour routines were working correctly.

I then wrote a couple of Python scripts to understand the different kernels better; I even plotted them using MatPlotLib. I felt better. A program I wrote was finally producing some output that I could in principle show someone else (rather than having just randomly exploding fluid particles). Even if it was doing only kernel calculations and not the actual SPH simulation. I had to feel [slightly] better, and I did.
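
Those scripts were nothing fancy—roughly of the following kind, shown here with the poly6 kernel of Müller et al. (2003) as just one example:

```python
import numpy as np
import matplotlib.pyplot as plt

def poly6(r, h):
    """The poly6 kernel (3D normalization): W = 315/(64 pi h^9) (h^2 - r^2)^3
    for r < h, and zero outside the support."""
    w = np.where(r < h, (h * h - r * r)**3, 0.0)
    return (315.0 / (64.0 * np.pi * h**9)) * w

h = 0.1
r = np.linspace(0.0, 1.5 * h, 300)
plt.plot(r, poly6(r, h), label="poly6, h = 0.1")
plt.xlabel("r")
plt.ylabel("W(r, h)")
plt.legend()
plt.show()
```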

At this stage, I stopped writing programs. I began thinking. [Yes, I do that, too.]


To cut a long story short, I ended up formulating two main research ideas concerning SPH. Both these ideas are unlike my usual ones.

Usually, when I formulate some new research idea, it is way too conceptual—at least as compared to the typical research reported in the engineering journals. Typically, at that stage (of my formulation of a new research idea), I am totally unable to see even an outline of what kind of a sequence of journal papers could possibly follow from it.

For instance, in the case of my diffusion equation-related result, it took me years before an outline for a good conference paper—one, really speaking, at par with a journal paper—could at all evolve. I did have the essential argument ready. But I didn’t know what all context—the specifically mathematical context—would be expected in a paper based on that idea. I (and all the mathematicians I contacted) also had no idea as to how (or where) to go hunting for that context. And I certainly didn’t have any concrete idea as to how I would pull it all together to build a concrete and sufficiently rigorous argument. I knew nothing of that; I only knew that the instantaneous action-at-a-distance (IAD) was now dead; summarily dead. Similarly, in the case of QM, I do have some new ideas, but I am still light-years away from deciding on a specific sequence of what kind of papers could be written about it, let alone having a good, detailed idea for the outline of the next journal paper to write on the topic.

However, in this case—this research on SPH—my ideas happen to be more like the ones [other] people typically use when they write papers for [even very high-impact] journals—the kind of ideas which lie behind the routine journal papers. So, papers should follow easily, once I work on these ideas.


Indeed, papers must follow those ideas. … There is another reason for it, too.

… Recently, I’ve come to develop an appreciation, a very deep kind of an appreciation, of the idea of having one’s own Google Scholar page, complete with a [fairly] recent photo, a verified email account at an educational institution (preferably with a .edu or a .ac.in (.in for India) domain, rather than a .org or a .com domain), and a listing of one’s own h-index. [Yes, my own Google Scholar page, even if the h-index be zero, initially. [Time heals all wounds.]] … One could provide a link to it from one’s personal Web site, one could even cite the page in one’s CV, it could impress UGC/NBA/funding folks…. There are many uses to having a Google Scholar page.

…That is another reason why [journal] papers must come out, at least now.

And I expect that the couple of ideas regarding SPH should lead to at least a couple of journal papers.

Since these ideas are more like the usual/routine research, it would be possible to even plan for their execution. Accordingly, let me say (as of today) that I should be able to finish both these papers within the next 4–5 months. [That would be the time-frame even if I have no student assistant. [Having a student assistant—even a very brilliant student studying at an IIT, say at IIT Bombay—would still not shorten the time to submission, nor would it reduce my own work-load by any more than about 10–20% or so. That’s the reason I am not planning on a student assistant for these ideas.]]

But, yes, with all this activity in the recent past, and with all the planned activity, it is inevitable that papers would fall out. Papers must, in fact, fall out. …. Journal papers! [Remember Google Scholar?]


Of course, when it comes to execution, it’s a different story that even before I begin any serious work on them, I still have to first complete writing my CFD notes, and also have to write a few FDM, FVM and VoF/LevelSet scripts or OpenFOAM cases. Whatever I had written in the past, most of it was lost in my last HDD crash. I thus have a lot of territory to recover first.

Of course, rewriting code is fast. I could progress so rapidly on SPH this year—a full working C++ code in barely 2–3 weeks flat—only because I had implemented some MD (molecular dynamics) code in 2014, no matter how simple an MD it was. The algorithms for collision detection and reflections at boundaries remain the same for all particle approaches: MD with hard disks, MD with the LJ potential, and SPH. Even if I don’t have the previously written code, the algorithms are no longer completely new to me. As I begin to write code, the detailed considerations all come back very easily, making the progress very swift, as far as programming is concerned.
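
As one example of that carry-over, the reflection at a domain boundary is the same few lines whether the particles are MD hard disks or SPH particles. A sketch (the damping factor here is an arbitrary choice of mine):

```python
def reflect_at_walls(x, v, lo=0.0, hi=1.0, damping=0.75):
    """Mirror a particle's coordinate back into [lo, hi] along one axis,
    flipping (and damping) the corresponding velocity component."""
    if x < lo:
        x = lo + (lo - x)     # mirror the overshoot back inside the domain
        v = -damping * v      # reverse the velocity, losing some energy
    elif x > hi:
        x = hi - (x - hi)
        v = -damping * v
    return x, v

# Per time-step, apply per axis, for each particle p (hypothetical names):
# p.x, p.vx = reflect_at_walls(p.x, p.vx)
# p.y, p.vy = reflect_at_walls(p.y, p.vy)
```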

When it comes to notes, I somehow find that writing them down once again takes almost the same length of time—just because you had taken down your notes earlier, it doesn’t mean that writing them afresh takes any less time.

Thus, overall, recovering the lost territory would still take quite some effort and time.

My blogging would therefore continue to remain sparse even in the near future; expect at the most one more post this month (May 2016).

The work on the journal papers itself should begin in late June/early July, and it should end by mid-December. It could. Nay, it must. … Yes, it must!

Papers must come out of all these activities, else it’s no research at all—it’s nothing. It’s a zero, a naught, a nothing, if there are no papers to show that you did research.

Papers must fall out! … Journal papers!!


A Song I Like:

(Western, Instrumental) “The rain must fall”
Composer: Yanni


[May be one quick editing pass later today, and I will be done with this post. Done on 12th May 2016.]

[E&OE]

A bit about the Dirac delta (and the SPH)

I have been thinking about (and also reading on!) SPH recently.

“SPH” here means: Smoothed Particle Hydrodynamics. Here is the Wiki article on SPH [^] if all you want is to gain some preliminary idea (or better still, if that’s your purpose, just check out some nice YouTube videos after googling on the full form of the term).


If you wish to know the internals of SPH in a better way: The SPH literature is fairly large, but a lot of it also happens to be in the public domain. Here are a few references:

  • A neat presentation by Maneti [^]
  • Micky Kelager’s project report listed here [^]. The PDF file is here [(5.4 MB) ^]
  • Also check out Cossins for a more in-depth working out of the maths [^].
  • The 1992 review by Monaghan himself is also easily traceable on the ‘net
  • The draft of a published book [(large .PDF file, 107 MB) ^] by William Hoover; this link is listed right on his home page [^]. Also check out another book on molecular dynamics which he has written and also put in the public domain.

For gentler introductions to SPH that come with pseudo-code, check out:

  • Browne and Lewinder [(.PDF, 5.2 MB) ^], and
  • David Bindel’s notes [(.PDF) ^].

I have left out several excellent introductory articles/slides by others, e.g. by Matthias Müller (and I may expand this list a day or two later).


The SPH theory begins with the identity:

f(x) = \int\limits_{\Omega} \text{d}\Omega_{x'}\,f(x')\,\delta(x - x')

where \delta(x - x') is Dirac’s delta, and x' is not a derivative of x but a dummy variable mimicking x; for a diagrammatic illustration, see Maneti’s slides mentioned above.
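
Incidentally, you can watch this identity emerge numerically by replacing \delta with a “nascent” delta—say, a narrow Gaussian—and letting its width shrink. A quick sketch (the choices of f, of the Gaussian, and of all the numbers are mine):

```python
import numpy as np

def nascent_delta(x, eps):
    """A narrow, unit-area Gaussian: tends to Dirac's delta as eps -> 0."""
    return np.exp(-x * x / (2.0 * eps * eps)) / (eps * np.sqrt(2.0 * np.pi))

f = lambda x: np.sin(x) + 2.0            # any smooth test function will do
x0 = 0.7                                 # the sampling point x
xp = np.linspace(-10.0, 10.0, 200001)    # the dummy variable x'
dx = xp[1] - xp[0]

for eps in (0.5, 0.1, 0.01):
    integral = np.sum(f(xp) * nascent_delta(x0 - xp, eps)) * dx
    print(eps, integral)                 # -> f(0.7) = 2.6442... as eps shrinks
```

Note that it is only the integral which converges; the integrand itself blows up at x' = x as eps shrinks—which, of course, is exactly the peculiarity (the “caged life” of the \delta) that this post is circling around.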

It is thus in connection with SPH (and not QM) that I thought of going a little deeper into Dirac’s delta.

After some searches, I found an article by Balki on this topic [^], and knowing the author, immediately sat reading it. [Explanations and clarifications: 1. “Balki” means: Professor Balakrishnan of the Physics department of IIT Madras. 2. I know the author; the author does not know me. 3. Everyone on the campus calls him Balki (though I don’t know if they do that in his presence, too).] The link given here is to a draft version; the final print version is available for free from the Web site of the journal: [^].

A couple of days later, I was trying to arrange in my mind the material for an introductory presentation on SPH. (I was doing that even if no one has invited me yet to deliver it.) It was in this connection that I did some more searches on Dirac’s delta. (I began by going one step “up” the directory tree of the first result and thus landed at this directory [^] maintained by Dr. Pande of IIT Hyderabad [^]. … There is something to be said for keeping your directories browsable if you are going to share the entire content one way or the other; it just makes searching related contents easier!)

Anyway, thus, starting there, my further Google searches yielded the following articles/essays/notes: [^], [^], [^], [^], [^], [^], [^], and [^] . And, of course, the Wiki [^].

As any one would expect, some common points were of course repeated in each of these references. However, going through the articles/notes, though quite repetitive, didn’t get all that boring to me: each individual brings his own unique way of explaining a certain material, and Dirac’s delta being a concept that is both so subtle and so abstract, any person who [dares to] attempt explaining it cannot help but bring his own individuality to that explanation. (Yes, the concept is subtle. The gifted Hungarian-American mathematician John von Neumann had spent some time showing how Dirac’s notions were mathematically faulty/untenable/not rigorous/something similar. … Happens.)

Anyway, as I expected, Balki’s article turned out to be the easiest and the most understanding-inducing read among them all! [No, my attending IIT M had nothing to do with this expectation.]

Yet, there remained one minor point which was not addressed very directly in the above-mentioned references—not even by Balki. (Though his treatment is quite clear about the point, he seems to have skipped one small step that I think is necessary.) The point I was looking for is concerned with a more complete answer to this question:

Why is it that the \delta is condemned to live only under an integral sign? Why can’t it have any life of its own, i.e., outside the integral sign?

The question, of course is intimately related to the other peculiar aspects of Dirac’s delta as well. For instance, as the tutorial at Pande’s site points out [^]:

The delta functions should not be considered to be an infinitely high spike of zero width since it scales as: \int_{-\infty}^{\infty} a\,\delta(x)\,\text{d}x = a .

Coming back to the caged life of the poor \delta, all authors give hints, but none jots down all the details of the physical (“intuitive”) reasoning lying behind this peculiar nature of the delta.

Then, imagining as if I am lecturing to an audience of engineering UG students led me to a clue which answers that question—to the detail I wanted to see. I of course don’t know if this clue of mine is mathematically valid or not. … It’s just that I “day-dreamt” one form of a presentation, found that it wouldn’t strike the right chord with the audience, and so altered it a bit, tried “day-dreaming” again, and repeated the process some 3–4 times over the past week. Finally, this morning, I got to the point where I thought I now have the right clue, one which can make the idea clearer to the undergraduates of engineering.

I am going to cover that point (the clue which I have) in my next post, which I expect to write, may be, next week-end or so. (If I thought I could write that post without drawing figures, I would have written the answer right away.) Anyway, in the meanwhile, I would like to share all these references on SPH and on Dirac’s delta, and bring the issue (i.e., the question) to your attention.

… No, the point I have in mind isn’t at all a major one. It’s just that it leads to a presentation of the concept that is more direct than what the above references cover. (I can’t better Balki, but I can fill in the gaps in his explanations—at least once in a while.)

Anyway, if you know of any other direct and mathematically valid answers to that question, please point them out to me. Thanks in advance.



A Song I Like:

(Marathi) “mana chimba paavasaaLi, jhaaDaat rang ole…”
Music: Kaushal Inamdar
Lyrics: N. D. Mahanor
Singer: Hamsika Iyer


[E&OE]