How time flies…

I plan to conduct a smallish FDP (Faculty Development Program) for junior faculty, covering the basics of CFD, sometime soon (maybe starting in the second half of February or early March).

During my course, I plan to give out some simple, pedagogical code that even non-programmers could easily run, and hopefully find easy to comprehend.


Don’t raise difficult questions right away!

Don’t ask me why I am doing it at all—especially given the fact that I myself never learnt my CFD in a class-room/university course setting. And especially given the fact that excellent course materials and codes already exist on the ‘net (e.g. Prof. Lorena Barba’s course, Prof. Atul Sharma’s book and Web site, to pick just two of the many resources already available).

But, yes, come to think of it, your question, by itself, is quite valid. It’s just that I am not going to entertain it.

Instead, I am going to ask you to recall that I am both a programmer and a professor.

As a programmer, you write code. You want to write code, and you do it. Whether better code already exists or not is not a consideration. You just write code.

As a professor, you teach. You want to teach, and you just do it. Whether better teachers or course-ware already exist or not is not a consideration. You just teach.

Admittedly, however, teaching is more difficult than coding. The difference here is that coding requires only a computer (plus software-writing software, of course!). But teaching requires other people! People who are willing to sit in front of you, at least faking a rapt sort of attention while you speak.

But just the way, as a programmer, you don’t worry whether you know the algorithm or not when you fire up your favorite IDE, similarly, as a professor, you don’t worry whether you will get students or not.

And then, one big advantage of being a senior professor is that you can always “c” your more junior colleagues into attending, where “c” stands for {convince, confuse, cajole, coax, compel, …}. That’s why I am not worried—at least not for the time being—about whether I will get students for my course or not. Students will come, if you just begin teaching. That’s my working mantra for now…


But of course, right now, we are busy with our accreditation-related work. However, by February/March, I will become free—or at least free enough—to be able to begin conducting this FDP.


As my material for the course progressively gets ready, I will post some parts of it here. Eventually, by the time the FDP gets over, I will have uploaded all the material together at some place or the other. (Maybe I will create another blog just for that course material.)

This blog post was meant to note something on the coding side. But then, as usual, I ended up having this huge preface at the beginning.


When I was doing my PhD in the mid-noughties, I wanted a good public-domain (preferably open-source) mesh generator. There were several of them, but mostly on the Unix/Linux platform.

I had nothing basically against Unix/Linux as such. My problem was that I found it tough to remember the command-line commands. My working memory is relatively poor—very poor. And that’s a fact; I don’t say it out of any (false or true) modesty. So, I found it difficult to remember all those shell and system commands and their options. Especially painful for me was climbing up and down a directory hierarchy, just to locate a damn file and open it already! Given my poor working memory, I had to have the entire structure laid out in front of me, instead of recalling commands or file names from memory. Only then could I work fast enough to be effective enough a programmer. And so, I found it difficult to use Unix/Linux. Ergo, it had to be Windows.

But most of this Computational Science/Engineering code was not available (or even compilable) on Windows, back then. Often, such codes were buggy, too. In the end, I ended up using Bojan Niceno’s code, simply because it was in C (which I converted into C++), and because it was compilable on Windows.

Then, a few years later, when I was doing my industrial job in an FEM-software company, once again there was this requirement of an integrable mesh generator. It had to be: on Windows; open source; small enough, with not too many external dependencies (such as the Boost library or others); compilable using “the not really real” C++ compiler (viz. VC++ 6); not very buggy, or at least still under active maintenance; and, one more important point: the choice had to be respectable enough to be acceptable to the team and the management. I ended up using Jonathan Shewchuk’s Triangle.

Of course, all along, I already knew about Gmsh, CGAL, and others (purely through my ‘net searches; no one had told me about any of them). But for one reason or another, they were not “usable” by me.

Then, during the mid-teens (2010s), I went into teaching, and software development naturally took a back-seat.

A lot of things changed in the meanwhile. We all moved to 64-bit. I moved to Ubuntu for several years, and when the Idea NetSetter stopped working on the latest Ubuntu, I had no choice but to migrate back to Windows.

I then found that a lot of the platform wars had already disappeared. Windows (and Microsoft in general) had become not only better but also more accommodating of the open-source movement; the Linux movement had become mature enough to not look down upon GUI users as mere script-kiddies; etc. In general, inter-operability had improved by leaps and bounds. Open-source projects were not only being released but also now being developed on Windows, not just on Unix/Linux. One possible reason why the two camps had suddenly begun showing so much love to each other was that the mobile platform had come to replace the PC platform as the avant-garde choice for software development. I don’t know, because I was away from the s/w world, but I am simply guessing that that could also be an important reason. In any case, code could now easily flow back and forth between the two platforms.

Another thing to happen during my absence was the wonderful development of the Python eco-system. It had always been available on Ubuntu, and had made my life easier over there. After all, Python had a less whimsical syntax than many of the alternatives (esp. the shell scripts); it carried all the marks of a real language. There were areas of discomfort. The one thing about Python which I found whimsical (and still do) is the lack of braces for defining scopes. But such areas were relatively easy to overlook.

At least in the area of Computational Science and Engineering, Python had made it enormously easier to write ambitious codes. Just check out a C++ MPI code for cluster computing vs. the same code written in Python. Or, think of not having to write your own ridiculously fast vector classes (or having to compile disparate C++ libraries using their own make systems and compiler options, and then making them all work together). Or, think of using libraries like LAPACK. No more clumsy wrappers, and no more having to keep on repeating scope-resolution operators and namespaces bundling in ridiculously complex template classes. Just import NumPy or SciPy, and proceed to your work.
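
To make the point concrete, here is a minimal sketch of what I mean (the matrix and the vector below are just made-up illustrative values): solving a dense linear system through LAPACK takes nothing more than an import and a single call.

```python
import numpy as np
from scipy.linalg import solve  # LAPACK does the work underneath; no wrappers needed

# A made-up 3x3 system A x = b, purely for illustration
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

x = solve(A, b)  # one call; no namespaces, no template classes
print(x)
```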

So, yes, I had come to register in my mind the great success story that Python had forged in the meanwhile. (BTW, in case you don’t know, the name of the language comes from a British comedy TV serial, not from the whole-animal-swallowing creep.) But as I said, I was now in academia, in core engineering, and there simply wasn’t much occasion to use any language—C++, Python, or any other.

One more hindrance went away when I “discovered” that the PyCharm IDE existed! It not only was free, but also had VC++ key-bindings already bundled in. W o n d e r f u l ! (I would have no working memory to relearn yet another set of key-bindings, you see!)

In the meanwhile, VC++ had anyway become very big, very slow and lethargic, taking forever for IntelliSense to produce something, anything. The older, lightweight, lightning-fast, and overall so charming IDE, i.e. VC++ 6, had given way, because of the .NET platform, to this new IDE which behaved as if it were designed to kill the C++ language. My forays into using Eclipse CDT (with VC++ key-bindings) were only partially successful. Eclipse was no longer buggy; it had begun working really well. The major trouble here was: there was no integrated help at the press of the “F1” key. Remember my poor working memory? I had to have that F1 key opening up the .chm help file at just the right place. But that was not happening. And, debug-stepping through the code still was not as seamless as I had gotten used to in VC++ 6.

But with PyCharm + Visual Studio key-bindings, most of my concerns evaporated. Python, being an interpreted language, would always have an advantage as far as debug-stepping through the code is concerned. That’s the straight-forward part. But the real game-changer for me was: the maturation of the entire Python eco-system.

Every library you could possibly wish for was there, already available, like Aladdin’s genie standing with folded hands.

OK. Let me give you an example. You think of doing some good visualization. You have MatPlotLib. And a very helpful help file, complete with neat examples. No, you want more impressive graphics, like, say, volume rendering (voxel visualization)? You have the entire VTK wrapped in; what more could you possibly want? (Windows vs. Linux didn’t matter.) But you instead want to write some custom code, say for animation? You have not just one, not just two, but literally tens of libraries covering everything: from OpenGL, to scene-graphs, to computational geometry, to physics engines, to animation, to game-writing, and what not. Windowing? You had the MFC-style wxWidgets, already put into a Python avatar as wxPython. (OK, OpenGL still gives trouble with wxPython for anything ambitious. But such things are rather isolated instances when it comes to the overall Python eco-system.)
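
Again, just to give a small, self-contained taste (an illustrative sketch only; the scalar field plotted below is made up): a filled contour plot of a 2D field, of the kind one keeps producing in CFD, takes only a handful of MatPlotLib lines.

```python
import numpy as np
import matplotlib.pyplot as plt

# A made-up scalar field on a structured grid, purely for illustration
x = np.linspace(0.0, 2.0, 101)
y = np.linspace(0.0, 2.0, 101)
X, Y = np.meshgrid(x, y)
phi = np.sin(np.pi * X) * np.cos(np.pi * Y)

plt.contourf(X, Y, phi, levels=20)
plt.colorbar(label='phi')
plt.xlabel('x')
plt.ylabel('y')
plt.title('A made-up scalar field')
plt.show()
```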

And, closer to my immediate concerns, I was delighted to find that, by now, both OpenFOAM and Gmsh had become neatly available on Windows. That is, not just “available,” i.e., not just as sources that can be read, but also working as if the libraries were some shrink-wrapped software!

Availability on Windows was important to me because, at least in India, it’s the only platform of familiarity (and hence of choice) for almost all of the faculty members from the engineering-school departments other than CS/IT.

Hints: For OpenFOAM, check out blueCFD instead of running it through Docker. It’s clean, and indeed works as advertised. As to Gmsh, ditto. And, it also comes with Python wrappers.

While the availability of OpenFOAM on Windows was only too welcome, the fact is, its code is guaranteed to be completely inaccessible to a typical junior faculty member from, say, a mechanical or a civil or a chemical engineering department. First, OpenFOAM is written in real (“templated”) C++. Second, it is very bulky (millions of lines of code, maybe?). Clearly beyond the comprehension of a guy who has never before seen more than 50 lines of C code at a time in his life. Third, it requires the GNU compiler, a special make environment, and a host of dependencies. You simply cannot open OpenFOAM and show how those FVM algorithms from Patankar’s or Versteeg & Malalasekera’s books do the work under its hood. Neither can you ask your students to change a line here or there, maybe add a line to produce an additional file output, just to bring out the actual working of an FVM algorithm.

In short, OpenFOAM is out.

So, I have decided to use OpenFOAM only as a “backup.” My primary teaching material will consist only of Python snippets. The students will also get to learn how to install OpenFOAM and run the simplest tutorials. But the actual illustrations of the CFD ideas will be done using Python. I plan to cover only FVM, and only the simpler aspects of it. For instance, I plan to use only structured rectangular grids, not non-orthogonal ones.

I will write code that (i) generates a mesh, (ii) reads a mesh generated by OpenFOAM’s blockMesh, (iii) implements one or two simple BCs, (iv) implements the SIMPLE algorithm, and (v) uses MatPlotLib or ParaView to visualize the output (including any intermediate outputs of the algorithms).
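
To indicate the level I am targeting, item (i) need be nothing more elaborate than the following kind of snippet (a bare-bones sketch with a made-up function name; the actual course code will also carry the cell-connectivity and BC book-keeping):

```python
import numpy as np

def make_structured_mesh(lx, ly, nx, ny):
    """Cell-centre coordinates for a structured rectangular FVM grid:
    nx-by-ny cells on a domain of size lx-by-ly."""
    dx, dy = lx / nx, ly / ny
    # Cell centres sit half a cell-width away from the boundaries
    xc = (np.arange(nx) + 0.5) * dx
    yc = (np.arange(ny) + 0.5) * dy
    X, Y = np.meshgrid(xc, yc, indexing='ij')
    return X, Y, dx, dy

X, Y, dx, dy = make_structured_mesh(1.0, 1.0, 10, 10)
print(X.shape, dx, dy)  # (10, 10) 0.1 0.1
```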

I may then compare the outputs of these Python snippets with the output produced by OpenFOAM, for one or two of the simplest cases, like a simple laminar flow over a step. (I don’t think I will be covering VOF or any other multi-phase technique. My course is meant to cover only the basics.)

But not having checked Gmsh recently, and thus still carrying my old impressions, I was almost sure I would have to write something quick in Python to convert BMP files (showing the geometry) into mesh files (with each pixel turning into a finite-volume cell). The trouble with this approach was that the ability to impose boundary conditions would be seriously limited. So, I was a bit worried about it.

But then, last week, I just happened to check Gmsh, just to be sure, you know! And, WOW! I now “discovered” that Gmsh is already all Python-ed in. Great! I just tried it, and found that it works, as bundled. Even on Windows. (Yes, even on Win7 (64-bit), SP1.)
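
For instance, the following minimal sketch (which follows the pattern of Gmsh’s own first tutorial; the model and file names here are my made-up illustrations) builds and meshes a unit square, and writes out a .msh file, all from Python:

```python
import gmsh

gmsh.initialize()
gmsh.model.add("unit_square")  # a made-up model name

lc = 0.1  # target element size
# Four corner points of a unit square
p1 = gmsh.model.geo.addPoint(0, 0, 0, lc)
p2 = gmsh.model.geo.addPoint(1, 0, 0, lc)
p3 = gmsh.model.geo.addPoint(1, 1, 0, lc)
p4 = gmsh.model.geo.addPoint(0, 1, 0, lc)
# Boundary lines, a closed loop, and the surface to be meshed
lines = [gmsh.model.geo.addLine(a, b)
         for a, b in [(p1, p2), (p2, p3), (p3, p4), (p4, p1)]]
loop = gmsh.model.geo.addCurveLoop(lines)
gmsh.model.geo.addPlaneSurface([loop])

gmsh.model.geo.synchronize()
gmsh.model.mesh.generate(2)    # generate the 2D mesh
gmsh.write("unit_square.msh")  # made-up output file name
gmsh.finalize()
```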

I was delighted, excited, even thrilled.

And then, I began “reflecting.” (Remember I am a professor?)

I remembered the times when I used to sit in a cyber-cafe, painfully downloading source-code libraries over a single 64 kbps connection which would be shared across 6–8 PCs in that cyber-cafe, without any UPS or backup in case the power went out. I would download the sources that way at the cyber-cafe, take them home to a Pentium machine running Win2K, try to open and read the source, only to find that I had forgotten to do the CRLF conversion first! And then, the sources wouldn’t compile because the make environment wouldn’t be available on Windows. Or something or the other of that sort. But still, I fought on. I remember having downloaded not only the OpenFOAM sources (with the hope of finding some way to compile them on Windows), but also MPICH2, PETSc 2.x, CGAL (some early version), and what not. Ultimately, after my valiant tries at the machine for a week or two, “nothing is going to work here,” I would eventually admit to myself.

And here is the contrast. I have a 4G connection, so I can comfortably sit at home, and use the Python pip (or PyCharm’s Project Interpreter) to download or automatically update all the required libraries, even the heavy-weights like what they bundle inside SciPy and NumPy, or the VTK. I no longer have to manually sort out version incompatibilities or platform incompatibilities. I know I could develop on Ubuntu if I want to, and the students would be able to run the same thing on Windows.

Gone are those days. And how swiftly, it seems now.

How time flies…


I will be able to come back only next month, because our accreditation-related documentation work has now gone into its final, culminating phase, which occupies the rest of this month. So, excuse me until sometime in February, say until the 11th or so. I will surely try to post a snippet or two on using Gmsh in the meanwhile, but it doesn’t really look feasible at all. So, there.

Bye for now, and take care…


A Song I Like:

[Tomorrow is (Sanskrit, Marathi) “Ganesh Jayanti,” the birth-day of Lord Ganesha, which also happens to be the auspicious (Sanskrit, Marathi) “tithee” (i.e. lunar day) on which my mother passed away, five years ago. In her fond remembrance, I here run one of those songs which both of us liked. … Music is strange. I mean, a song as mature as this one, and yet, I remember, I had come to like it even as a school-boy. Maybe it was her absent-minded humming of this song which had helped? … Maybe. … Anyway, here’s the song.]

(Hindi) “chhup gayaa koi re, door se pukaarake”
Singer: Lata Mangeshkar
Music: Hemant Kumar
Lyrics: Rajinder Kishan

 


“Math rules”?

My work (and working) specialization today is computational science and engineering. I have taught FEM, and am currently teaching both FEM and CFD.

However, as it so happens, all my learning of FEM and CFD has been through self-studies. I have never sat in a class-room and learnt these topics that way. Naturally, there were, and are, gaps in my knowledge.


The most efficient way of learning any subject matter is through traditional formal learning—I mean our usual university system. The reason is not just that a teacher is available to teach you the material; even books can do that (and oftentimes, books are actually better than teachers). The real advantage of the usual university education is the existence of those class-mates of yours.

Your class-mates indirectly help you in many ways.

Come the week of those in-semester unit tests, at least in the hostels of Indian engineering schools, everyone suddenly goes into study mode. In the hostel hallways, you casually pass someone by, and he puts a simple question to you. It is, perhaps, his genuine difficulty. You try to explain it to him, and find that there are some gaps in your own knowledge, too. After a bit of a discussion, someone else joins in, and then you all have to sheepishly go back to the notes or books, or solve a problem together. It helps all of you.

Sometimes, the friend could even be just showing off to you—he wouldn’t ask you a question if he knew you could answer it. You begin answering in your usual magnificently nonchalant manner, and soon reach the end of your wits. (An XI-standard example: If the gravitational potential inside the earth is constant, how come a ball dropped in a well falls down? [That is your friend’s question, put just to tempt you in the wrong direction all the way through.] … And what would happen if there were a bore through the earth’s entire volume, assuming that the earth’s core is solid all the way through?) His showing off helps you.

No, not everyone works in as friendly a mode as this. But enough of them do, so that one gets [too] used to this way of studying/learning.

And it is this way of studying which is absent not only in learning through pure self-studies, but also in those online/MOOC courses. That is the reason why NPTEL videos, even if downloaded and available on the local college LAN, never get referred to by individual students working in complete isolation. Students more or less always browse them in groups, even if sitting at different terminals (and they watch those videos only during the examination weeks!).

Personally, I had got [perhaps excessively] used to this mode of learning. [Since my Objectivist learning has begun interfering here, let me spell the matter out completely: It’s a mix of two modes: your own studies done in isolation, and also, as an essential second ingredient, your interaction with your class-mates (which, once again, does require the exercise of your individual mind, sure, but the point is: there are others, and the interaction is exposing the holes in your individual understanding).]

It is this mix that I have got too used to. That’s why I have acutely felt the absence of the second ingredient during my studies of FEM and CFD. Of course, blogging fora like iMechanica did help me a lot when it came to FEM, but for CFD, I was more or less purely on my own.

That’s the reason why, even if I am a professor today, and am teaching CFD not just to UG but also to PG students, I still don’t expect my knowledge to be as seamlessly integrated as it is for the other things that I know.

In particular, one such gap came to light recently, and I am going to share my fall—and rise—with you. In all its gloriously stupid detail. (Feel absolutely free to leave reading this post—and indeed this blog—at any time.)


In CFD, the way I happened to learn it, I first went through the initial parts (the derivations part) in John Anderson, Jr.’s text. Then, skipping the application of FDM in Anderson’s text more or less in its entirety, I went straight to Versteeg and Malalasekera. Also Jayathi Murthy’s notes at Purdue. As is my habit, I was also flipping through Ferziger and Peric, and some other books in the process, but only to a minor extent. The reason for skipping the rest of the portion in Anderson was that I had gathered that FVM is the in-thing these days. OpenFOAM was already available, and its literature was all couched in terms of FVM, and so it was important to know FVM. Further, I could also see the limitations of FDM (like the requirement of a structured Cartesian mesh, or special mesh mappings, etc.).

Overall, then, I had never read through the FDM modeling of Navier-Stokes until very recent times. The Pune University syllabus still requires you to do FDM, and I thus began reading through the FDM parts of Anderson’s text only a couple of months ago.

It was when I had to run the FDM Python code for the convection-diffusion equation that a certain lacuna in my understanding became clear to me.


Consider the convection-diffusion equation, as given in Prof. Lorena Barba’s Step No. 8, here [^]:

\dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + v \dfrac{\partial u}{\partial y} = \nu \; \left(\dfrac{\partial ^2 u}{\partial x^2} + \dfrac{\partial ^2 u}{\partial y^2}\right)  \\  \dfrac{\partial v}{\partial t} + u \dfrac{\partial v}{\partial x} + v \dfrac{\partial v}{\partial y} = \nu \; \left(\dfrac{\partial ^2 v}{\partial x^2} + \dfrac{\partial ^2 v}{\partial y^2}\right)

I had never before actually gone through these equations until last week. Really.

That’s because I had always approached the convection-diffusion system via FVM, where the equation would be put using the Eulerian frame, and it therefore would read something like the following (using the compact vector/tensor notation):

\dfrac{\partial}{\partial t}(\rho \phi) +  \nabla \cdot (\rho \vec{u} \phi)  = \nabla \cdot (\Gamma \nabla \phi) + S
for the generic quantity \phi.

For the momentum equations, we substitute \vec{u} in place of \phi, \mu in place of \Gamma, and S_u - \nabla P in place of S, and the equation begins to read:
\dfrac{\partial}{\partial t}(\rho \vec{u}) +  \nabla \cdot (\rho \vec{u} \otimes \vec{u})  = \nabla \cdot (\mu \nabla \vec{u}) - \nabla P + S_u

For an incompressible flow of a Newtonian fluid, the equation reduces to:

\dfrac{\partial}{\partial t}(\vec{u}) +  \nabla \cdot (\vec{u} \otimes \vec{u})  = \nu \nabla^2 \vec{u} - \dfrac{1}{\rho} \nabla P + \dfrac{1}{\rho} S_u

This was the framework—the Eulerian framework—which I had worked with.

Whenever I went through literature mentioning FDM for the NS equations (e.g. the computer-graphics papers on fluids), I more or less used to skip looking at the maths sections, simply because there is such a variety of ways of writing down the NS equations, and those initial sections of the papers all cover the same background material. (Ferziger and Peric, I recall off-hand, mention some 72 ways of writing down the NS equations.) The meat of the paper comes only later.


The trouble occurred when, last week, I began really reading through (as in contrast to rapidly glancing over) Barba’s Step No. 8 as mentioned above. Let me copy-paste the convection-diffusion equations once again here, for ease of reference.

\dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + v \dfrac{\partial u}{\partial y} = \nu \; \left(\dfrac{\partial ^2 u}{\partial x^2} + \dfrac{\partial ^2 u}{\partial y^2}\right)  \\  \dfrac{\partial v}{\partial t} + u \dfrac{\partial v}{\partial x} + v \dfrac{\partial v}{\partial y} = \nu \; \left(\dfrac{\partial ^2 v}{\partial x^2} + \dfrac{\partial ^2 v}{\partial y^2}\right)

Look at the left-hand side (LHS for short). What do you see?

What I saw was an application of the following operator—an operator that appears only in the Lagrangian framework:

\dfrac{\partial}{\partial t} + (\vec{u} \cdot \nabla)

Clearly, according to what I saw, the left-hand side of the convection-diffusion equation, as written above, is nothing but this operator, as applied to \vec{u}.

And with that “vision,” began my fall.

“How can she use the Lagrangian expression if she is going to use a fixed Cartesian—i.e. Eulerian—grid? After all, she is doing FDM here, isn’t she?” I wondered.

If it were a computer-graphics paper using FDM, I would have skipped over it, presuming that they would surely transform this equation to the Eulerian form some time later on. But here, I was dealing with a resource for the core engineering branches (like mech/aero/met./chem./etc.), and I also had a lab right this week to cover this topic. I couldn’t skip over it; I had to read it in detail. I knew that Prof. Barba couldn’t possibly make a mistake like that. But in this lesson, even right up to the Python code (which I read for the first time only last week), there wasn’t even a hint of a transformation to the Eulerian frame. (Initially, I even did a search on the string “Euler” on that page; no beans.)

There must be some reason to it, I thought. Still stuck with reading a Lagrangian frame for the equation, I then tried to imagine a reasonable interpretation:

Suppose there is one material particle at each of the FDM grid nodes. What would happen with time? Simplify the problem all the way down. Suppose the velocity field is already specified at each node as the initial condition, and we are concerned only with its time-evolution. What would happen with time? The particles would leave their initial nodal positions, and get advected/diffused away. In a single time-step, they would reach their new spatial positions. If the problem data are arbitrary, their positions at the end of the first time-step wouldn’t necessarily coincide with the grid points. If so, how can she begin her next time-iteration starting from the same grid points?

I had got stuck.

I thought through it twice, but with the same result. I searched through her other steps. (Idly browsing, I even looked up her CV: PhD from Caltech. “No, she couldn’t possibly be skipping over the transformation,” I distinctly remember telling myself for the nth time.)

Faced with a seemingly unyielding obstacle, I had to fall back on my default mode of learning—viz., the “mix.” In other words, I had to talk about it with someone—anyone—anyone who would have enough context. But no one was available. The past couple of days being holidays at our college, I was at home, and thus couldn’t even catch hold of my poor UG students.

But talking, I had to do. Finally, I decided to ask someone about it by email, and so, looked up the email ID of a CFD expert, and asked him if he could help me with a [and I quote] “seemingly very, very simple (conceptual) matter” which “stumps me. It is concerned with the application of Lagrangian vs. Eulerian frameworks. It seems that the answer must be very simple, but somehow the issue is not clicking-in together or falling together in place in the right way, for me.” That was yesterday morning.

It being a week-end, his reply came fairly rapidly, by yesterday afternoon (I re-checked emails at around 1:30 PM); he had graciously agreed to help me. And so, I rapidly wrote up a LaTeX document (for the equations) and sent it to him as soon as I could. That was yesterday, around 3:00 PM. Satisfied that I was finally talking to someone, I had a late lunch, and then crashed for a nice siesta. … Holidays are niiiiiiiceeeee….

Waking up at around 5:00 PM, the first thing I did, while sipping a cup of tea, was to check up on the emails: no reply from him. Not expected this soon anyway.

Still lingering in the daze of that late lunch and the siesta, I idly had a second look at the attached document which I had sent. In that problem-document, I had tried to make the comparison as easy for the receiver to see as possible, and so, I had taken care to write down the particular form of the equation that I was looking for:

\dfrac{\partial u}{\partial t} + \dfrac{\partial u^2}{\partial x} + \dfrac{\partial uv}{\partial y} = \nu \; \left(\dfrac{\partial ^2 u}{\partial x^2} + \dfrac{\partial ^2 u}{\partial y^2}\right)  \\  \dfrac{\partial v}{\partial t} + \dfrac{\partial uv}{\partial x} + \dfrac{\partial v^2}{\partial y} = \nu \; \left(\dfrac{\partial ^2 v}{\partial x^2} + \dfrac{\partial ^2 v}{\partial y^2}\right)

“Uh… But why would I keep the product terms u^2 inside the finite-difference operator?” I now asked myself, still in the lingering haze of the siesta. “Wouldn’t it complicate, say, specifying boundary conditions and all?” I was trying to pick up my thinking speed. Still yawning, I idly took a piece of paper, and began jotting down the equations.

And suddenly, even before I had finished writing down the very brief working-out by hand, the issue became clear to me.

Immediately, I made myself another cup of tea, and while still sipping it, launched TexMaker, wrote another document explaining the nature of my mistake, and attached it to a new email to the expert. “I got it” was the subject line of the new email. Hitting the “Send” button, I noticed what time it was: around 7 PM.

Here is the “development” I had noted in that document:

Start with the equation for momentum along the x-axis, expressed in the Eulerian (conservation) form:

\dfrac{\partial u}{\partial t} + \dfrac{\partial u^2}{\partial x} + \dfrac{\partial uv}{\partial y} = \nu \; \left(\dfrac{\partial ^2 u}{\partial x^2} + \dfrac{\partial ^2 u}{\partial y^2}\right)

Consider only the left-hand side (LHS for short). Instead of treating the product terms u^2 and uv as final variables to be discretized immediately, use the product rule of calculus in the same Eulerian frame, rearrange, and apply the zero-divergence property of the incompressible flow:

\text{LHS} = \dfrac{\partial u}{\partial t} + \dfrac{\partial u^2}{\partial x} + \dfrac{\partial uv}{\partial y}  \\  = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + u\dfrac{\partial u}{\partial x} + u \dfrac{\partial v}{\partial y} + v \dfrac{\partial u}{\partial y}  \\  = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + u \left[\dfrac{\partial u}{\partial x} + \dfrac{\partial v}{\partial y} \right] + v \dfrac{\partial u}{\partial y}  \\  = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + u \left[ 0 \right] + v \dfrac{\partial u}{\partial y}; \qquad\qquad \because \nabla \cdot \vec{u} = 0 \text{~if~} \rho = \text{~constant}  \\  = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + v \dfrac{\partial u}{\partial y}

We have remained in the Eulerian frame throughout these steps, but the final equation we got in the end happens to be identical, term by term, to the one for the Lagrangian frame—when the flow is incompressible.
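
(If you wish to check this numerically, here is a small illustrative sketch of the kind of thing I have in mind: take a made-up divergence-free velocity field, and compare central-difference evaluations of the conservative and the non-conservative forms of the convection term; they agree to within the truncation error.)

```python
import numpy as np

# A made-up, divergence-free velocity field (a Taylor-Green-like pattern)
N = 201
x = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(x, x, indexing='ij')
u = np.sin(np.pi * X) * np.cos(np.pi * Y)
v = -np.cos(np.pi * X) * np.sin(np.pi * Y)  # so that du/dx + dv/dy = 0
dx = x[1] - x[0]

# Conservative form of the convection term: d(u^2)/dx + d(uv)/dy
cons = np.gradient(u * u, dx, axis=0) + np.gradient(u * v, dx, axis=1)
# Non-conservative form: u du/dx + v du/dy
noncons = u * np.gradient(u, dx, axis=0) + v * np.gradient(u, dx, axis=1)

# The two agree up to truncation error, because div(u) = 0
print(np.max(np.abs(cons - noncons)))  # a small number; it shrinks as the grid is refined
```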

For a compressible flow, the equations would continue to look different, because \rho would be a variable, and so would have to be accounted for with a further application of the product rule, in evaluating \frac{\partial}{\partial t}(\rho u), \frac{\partial}{\partial x}(\rho u^2), \frac{\partial}{\partial y}(\rho uv), etc.

But as it so happens, for the current case, even if the final equations look exactly the same, we should not supply the same physical imagination. We don’t imagine Lagrangian particles at the nodes. Our imagination remains Eulerian throughout the development, with our focus not on the advected particles’ positions but on the flow variables u and v at definite (fixed) points in space.


Sometimes, just expressing your problem to someone else itself pulls you out of your previous mental frame, and that by itself makes the problem disappear—in other words, the problem gets solved without your “solving” it. But to do that, you need someone else to talk to!


But how could I make so stupid and simple a mistake, you ask? This is something even a UG student at an IIT would be expected to know! [Whether they always do, or not, is a separate issue.]

Two reasons:

First: As I said, there are gaps in my knowledge of computational mechanics. More gaps than you would otherwise expect, simply because I never had class-mates with whom to discuss my learning of computational mechanics, esp. CFD.

Second: I had been getting deeper into SPH in recent weeks, and thus was primed to read that expression only in terms of the Lagrangian framework.

And a third, more minor reason: one tends to be casual with online resources. “Hey, it is available online already. I could reuse it in a jiffy, if I want.” One keeps saying that, and keeps indefinitely postponing actually reading through it. That’s the third reason.


And if I could make so stupid a mistake, and hold it for such a long time (a day or so), how could I then see through it, even if only eventually?

One reason: Crucial to that development is the observation that the divergence of velocity is zero for an incompressible flow. My mind was trained to look for it because, even though the Pune University syllabus explicitly states that derivations will not be asked in the examinations, I had, just for the sake of solidity in the students’ understanding, worked through all the details of all the derivations in my class. During those routine derivations, you do use this crucial property in simplifying the NS equations, but on the right-hand side, i.e., in the surface-forces term, while simplifying for the Newtonian fluid. Anderson does not work it out fully [see his p. 66], nor do Versteeg and Malalasekera, but I anyway had, in my class… It was easy enough to spot the same pattern—even before jotting it down on paper—once it began appearing on the left-hand side of the same equation.

Hard-work pays off—if not today, tomorrow.


CFD books always emphasize the idea that the four combinations produced by (i) the differential-vs-integral forms and (ii) the Lagrangian-vs-Eulerian forms all look different, and yet are still the same. Books like Anderson’s take special pains to emphasize this point. Yes, in a way, all the equations are the same: all four mathematical forms express the same physical principle.

But seen from another perspective, here is an example of two equations which look exactly the same in every respect, but in fact aren’t to be viewed as such. One way of reading this equation is to imagine inter-connected material particles getting advected according to that equation in their local framework. Another way of reading exactly the same equation is to imagine a fluid flowing past those fixed FDM nodes, with only the nodal flow properties changing according to that equation.

Exactly the same maths (i.e. the same equation), and of course, also the same physical principle, but a different physical imagination.

And you want to tell me “math [sic] rules?”


A Song I Like:

(Hindi) “jaag dil-e-deewaanaa, rut jaagee…”
Singer: Mohammed Rafi
Music: Chitragupt
Lyrics: Majrooh Sultanpuri

[As usual, maybe another editing pass…]

[E&OE]