Stress is defined as the quantity equal to … what?

In this post, I am going to note a bit from my personal learning history: what happened when a clueless young engineering student—the one that was me—tried hard to understand the idea of tensors, during my UG years, and then for quite some time even after my UG days. Maybe for a decade or even more….

There certainly were, and are likely to be even today, many students like [the past] me. So, in the further description, I will use the term “we.” Obviously, the “we” here is the collegial “we,” perhaps even the pedagogical “we,” but certainly neither the pedestrian nor the royal “we.”


What we would like to understand is the idea of tensors; the question of what these beasts are really, really like.

As with developing an understanding of any new concept, we first go over some usage examples involving that idea, some instances of that concept.

Here, there is not much of a problem; our mind easily picks up the stress as a “simple” and familiar example of a tensor. So, we try to understand the idea of tensors via the example of the stress tensor. [Turns out that it becomes far more difficult this way… But read on, anyway!]

Not a bad decision, we think.

After all, even if the tensor algebra (and tensor calculus) was an achievement wrought only in the closing decade(s) of the 19th century, Cauchy was already up and running with the essential idea of the stress tensor by 1822—i.e., more than half a century earlier. We come to know of this fact, say via James Rice’s article on the history of solid mechanics. Given this bit of history, we become confident that we are on the right track. After all, if the stress tensor could not only be conceived of, but a divergence theorem for it could even be spelt out, and the theorem even used in applications of engineering importance, all some half a century before any other tensors were so much as conceived of, then developing a good understanding of the stress tensor ought to provide a sound pathway to understanding tensors in general.

So, we begin with the stress tensor, and try [very hard] to understand it.


We recall what we have already been taught: stress is defined as force per unit area. In symbolic terms, read for the very first time in our XI standard physics texts, the equation reads:

\sigma \equiv \dfrac{F}{A}               … Eq. (1)

Admittedly, we had been made aware that Eq. (1) holds only for the 1D case.

But given this way of putting things as the starting point, the only direction we could possibly pursue would be the following:

The 3D representation ought to be just a simple generalization of Eq. (1), i.e., it must look something like this:

\overline{\overline{\sigma}} = \dfrac{\vec{F}}{\vec{A}}                … Eq. (2)

where the two overlines over \sigma represent the idea that it is to be taken as a tensor quantity.

But obviously, there is some trouble with Eq. (2). This way of putting things can only be wrong, we suspect.

The reason behind our suspicion, well-founded in our knowledge, is this: The operation of a division by a vector is not well-defined, at least, it is not at all noted in the UG vector-algebra texts. [And, our UG maths teachers would happily fail us in examinations if we tried an expression of that sort in our answer-books.]

For that matter, from what we already know, even the idea of “multiplication” of two vectors is not uniquely defined: We have at least two “products”: the dot product [or the inner product], and the cross product [a close relative of the outer, i.e., the tensor product]. The absence of division and of a unique multiplication is what distinguishes vectors from complex numbers (including phasors, which are often noted as “vectors” in the EE texts).

Now, even if you attempt to “generalize” the idea of divisions, just the way you have “generalized” the idea of multiplications, it still doesn’t help a lot.

[To speak of a tensor object as representing the result of a division is nothing but to make an indirect reference to the very operation [viz. that of taking a tensor product], and the very mathematical structure [viz. the tensor structure] which itself is the object we are trying to understand. … “Circles in the sand, round and round… .” In any case, the student is just as clueless about divisions by vectors, as he is about tensor products.]

But, still under the spell of what had been taught to us during our XI–XII physics courses, and later on also in the UG engineering courses—their line and method of developing these concepts—we then make the following valiant attempt. We courageously rearrange the same equation, obtain the following, and try to base our “thinking” on the rearrangement it represents:

\overline{\overline{\sigma}} \vec{A} = \vec{F}                  … Eq (3)
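[An aside, in today’s hindsight: just to see what Eq. (3) does operationally, here is a minimal numerical sketch, with made-up numbers, treating \overline{\overline{\sigma}} as a 3×3 matrix that maps an area vector to a force vector:

    # A minimal numerical sketch of Eq. (3), with made-up numbers:
    # the stress tensor as a 3x3 matrix mapping an area vector to a force vector.
    import numpy as np

    sigma = np.array([[2.0, 0.5, 0.0],    # a symmetric stress tensor [Pa]
                      [0.5, 1.0, 0.0],
                      [0.0, 0.0, 3.0]])
    A = np.array([0.0, 0.0, 1.5e-4])      # area vector of a small flat patch [m^2]

    F = sigma @ A                         # Eq. (3): the tensor maps A onto F
    print(F)                              # -> [0.      0.      0.00045]

Note that this only shows Eq. (3) acting as a linear map; by itself, it does not define the tensor.]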

It takes a bit of time and energy, but then, very soon, we come to suspect that this too could be a wrong way of understanding the stress tensor. How can a mere rearrangement lead from an invalid equation to a valid equation? That’s for starters.

But a more important consideration is this one: any quantity must be definable via an equation of the following format:

the quantity being defined, and nothing else but that quantity, appearing on the left-hand side
=
some expression involving some other quantities, appearing on the right-hand side.

Let’s call this format Eq. (4).

Clearly, Eq. (3) does not follow the format of Eq. (4).

So, despite the rearrangement from Eq. (2) to Eq. (3), the question remains:

How can we define the stress tensor (or for that matter, any tensors of similar kind, say the second-order tensors of strain, conductivity, etc.) such that its defining expression follows the format given in Eq. (4)?


Can you answer the above question?

If yes, I would love to hear from you… If not, I will post the answer by way of an update/reply/another blog post, after some time. …

Happy thinking…


A Song I Like:
(Hindi) “ye bholaa bhaalaa man meraa kahin re…”
Singers: Kishore Kumar, Asha Bhosale
Music: Kishore Kumar
Lyrics: Majrooh Sultanpuri


[I should also be posting this question at iMechanica, though I don’t expect that they would be too much interested in it… Who knows, someone, say some student somewhere, may be interested in knowing more about it, just may be…

Anyway, take care, and bye for now…]


Physics is more fundamental than maths.


And, boring blog-posts [still] have to be written about this sheerly mundane topic.

But permit me to do so.

[To those who yawn already: I promise to write a post more interesting [for you] within just about a month or two, or so.]


But, yes, physics is more fundamental than maths.

“Elementary,” Sherlock [of the Holmes family] would have said, upon being probed to explain the reasons behind the title assertion.


Actually, the operative word here is not “elementary,” but a bit more philosophical in nature. [In particular, it is metaphysical and epistemological in nature, including logical.]

In short, the reason behind the title-assertion itself is, how to put it, “fundamental.”

The fundamental truth that proper metaphysics teaches is, to cut a long story short, that the what precedes the how.

A higher-level but still damn fundamental truth that proper epistemology teaches is, to cut a long story short, that abstractions are abstractions, formed from [the perceptually evident] concretes.

A slightly higher-level but closely associated truth—which, you guessed it right, also is still very fundamental—is that one looks for completeness: of propositions, of assertions, of proofs, etc.


With that being said, if you are still with me, let me illustrate the core of my argument. (Note, it’s merely an illustration, not the actual argument—the actual one will have to be grounded in philosophy, and couched in terms of philosophy of physics. Both are corrupt today, and at least for today, I don’t want to get into either. But I can, and will, illustrate what I have in mind, via a couple of examples.)


First, observe that mathematics cannot describe, not even in terms of principles, the entirety of the physical world.

There are any number of ways to verify the assertion. (To verify is not to prove, but the said activity can be helpful for proofs, i.e., for the process of proving.) For instance, consider the following facts: Discoveries (and not just inventions) are possible. No theory of mathematical physics has ever been without some empirically determined constants. People look for a suitable mathematical model, e.g., whether it should be a linear or nonlinear theory, accurate to what order—the first, second or higher—etc. (The idea of the differential order itself is a dead give-away that a completed work of maths cannot hope to describe the entirety of the physical world; the power series is infinite in the number of its terms.)

Second (and this part might seem to ensure logical completeness) is this: Mathematics is not only capable of describing a non-physical, non-real, purely imaginary world; it is exceedingly easy for it to do so.

As evidence (i.e., as verification), refer to my gravatar icon (which appears in the title-bar of your browser when you browse this blog). It is actually the result of a simulation. The problem was that of ideal fluid flow (i.e. a potential field) in 2D. Now, observe that no physical object exists in 2D. No object which exists has even an infinitely small thickness, let alone a zero thickness. (And there is a difference between the two.) Yet, it is so easy in maths to conceive of such an object. So easy, in fact, that test-cases (or analyses) like that (I mean those in 1D and 2D) are routinely used as “sand-boxes” of sorts, in engineering. Everyone knows (except for physicists and philosophers, of course) that such things are, taken by themselves, unreal. The 1D and 2D models (and often, for that matter, even 3D models) just show certain similarities to the actual physical behavior, that’s all. But regard them as completely real things in your actual engineering job, and you will soon turn loony. (Many respected physicists, mathematicians, and philosophers in fact are, in terms of their professed convictions, indistinguishable from mere lunatics.)
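(And in case you want to see just how easy “easy” is, here is a minimal sketch of such a purely 2D object—the generic textbook case of uniform ideal flow past a circular cylinder, via the complex potential; I am not claiming this is the exact simulation behind the gravatar:

    # A minimal sketch of a 2D ideal (potential) flow: uniform flow past a
    # circular cylinder, via the complex potential w(z) = U * (z + R**2 / z).
    import numpy as np

    U, R = 1.0, 1.0
    x, y = np.meshgrid(np.linspace(-3, 3, 201), np.linspace(-3, 3, 201))
    z = x + 1j * y
    z[np.abs(z) < R] = np.nan             # mask the interior of the cylinder

    w = U * (z + R**2 / z)                # the complex potential
    phi, psi = w.real, w.imag             # velocity potential and stream function
    # contour-plot psi with MatPlotLib to see the streamlines, if you wish

A dozen lines, and a zero-thickness “fluid” that can never physically exist is fully specified.)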


So there. I presented two complementary aspects. And these two seem to complete the argument—or at least the illustration.

The illustration of the argument seems to be, logically speaking, almost complete.


But how to verify the completeness? … Let me help you out.

Just in case you missed it, here is the summary of the two points I made (but did not prove) above:

(i) Maths by itself (i.e. divorced from or not based on physics) is incapable of describing physical reality in its entirety. In fact, to describe anything concretely real in terms of maths very rapidly gets extraordinarily difficult, and very soon collapses into the impossible.

(ii) Left to its own devices (i.e. as divorced from or not based on physics), the methods of maths can very easily describe purely imaginary things—things which in principle can have no physical existence.


Elementary, wasn’t it?

Yes, it was.


But then, even while talking about a mere illustration of the real argument, why might I have added the word “almost”—almost as if it were an after-thought?

Completeness requires that I address this part too.

But being too busy for now [affiliations- and accreditations-related work], I would like to leave the answer to that question—and the point of proving the completeness in the “real,” complete sense—as an exercise for the reader. [Don’t worry, I will cover it in a short post, sometime in future. [Just remind me, that’s all!]]


A Song I Like:

(Hindi) “woh chaand khilaa, woh taare hanse…”
Singer: Lata Mangeshkar
Music: Shankar-Jaikishan
Lyrics: Hasrat Jaipuri


[PS: Really short of time to add categories and all… But you take care, and bye for now…]

 

There is maths beyond calculus

Update on 2018.01.29; 23.13 IST: No one said I must note an update when I add one here. But I will make an exception, for now. See at the end.


When you say “maths,” what most engineers immediately come to think of is, first and foremost, calculus. There are several reasons for that.

First, admissions to good engineering colleges are competitive (think JEE!), and most students find maths to be the most difficult subject to master. And then, the most difficult portion of the XII standard maths involves calculus. Calculus also is remarkably unlike the maths they already know from their school-time studies (e.g. geometry and algebra). Physics is the next most difficult subject, and at XII standard level, it is calculus-based.

Second, the courses on engineering maths also heavily involve ideas first encountered in calculus, such as differential equations. OK, there is some statistics and linear algebra, too, both in XII standard and later in engineering. But their real usage (distribution functions, moments, coupled linear systems) would be inaccessible without knowing differential equations.

By way of percentages, a majority of engineers do not in fact pursue any master’s. Even among those who do, a great many end up pursuing programs or specializations that don’t actually require much expansion of their mathematical repertoire—for instance, manufacturing engineering, environmental engineering, digital electronics, computer science, etc. Others simply pursue an MBA, which actually has the effect of dumbing down the maths side of their skills.

Thus, a large body of trained S&T people never come across some really wonderful mathematical ideas, ever in their lives, even while continuing to believe that they have a fairly good idea of what further maths might involve. Wrong. These further mathematical ideas are not actually more difficult than the maths already encountered during UG education. But their conceptual character is remarkably different. Perhaps popular science writers could help balance the situation.

Anyway, let me reserve this post for just a listing of certain important ones among such “further” mathematical ideas that I had to learn, mostly on my own, during my studies of Computational Science and Engineering (in the main: FEM and CFD), and quantum physics. Many postgraduate engineers and physicists would of course know them well. But the fact is, most engineers with only a bachelor’s in engineering wouldn’t. If they are inclined to learn maths (with no examinations to be taken!), they may consider these ideas (whether through history-of-maths books, or pop-science, or MOOC courses, blog posts, or whatever). I will try to order the topics from simpler to more complex, but haven’t given the ordering much thought. In any case, such an order is difficult to achieve; many topics have rather big overlaps with each other. In advanced maths, that often happens. Thus, the topics I list here are often just different aspects of some more general techniques/approaches.


Integral Equations: The differential equations paradigm is used throughout the UG engg; think of any place where you invoked the Taylor series. Here the idea is that you capture the physics of some phenomenon over an infinitesimally small region of space, and express a simple algebraic combination of its factors (e.g. balance or conservation of quantities, or their evolution over time) via a differential equation. Then you apply this governing equation to the models of various situations arising in applications. Thus, the idea implicitly reinforced is this: the _problem_ formulation proceeds through differential equations, and the _solution_ techniques involve boundary/initial values and techniques of integration.

However, once in more advanced settings, you find it routine to express the problem itself in terms of an integral equation. For instance, the RTT (the Reynolds Transport Theorem) in fluid mechanics, or the path-integral approach in QM. The switch is from integrals as expressing the final solution, to integral terms as expressing various aspects of the problem itself.
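For reference, here is the RTT for a scalar field \phi carried over a material volume V(t) with boundary S(t) and local velocity \vec{u}—written from memory, so treat it as a sketch of the standard textbook form rather than a quotation:

\dfrac{d}{dt} \int_{V(t)} \phi \, dV = \int_{V(t)} \dfrac{\partial \phi}{\partial t} \, dV + \oint_{S(t)} \phi \left( \vec{u} \cdot \hat{n} \right) \, dA

Notice that the integrals here sit in the very statement of the problem; they are not mere solution devices.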


Variational Calculus: For simpler problems (rather, for problems to which simpler solution techniques are well-suited), such as rigid-body mechanics in simpler fields (say uniform and time-invariant gravity), the differential equations approach is well-suited. But once you come to studying fields—the spatially distributed objects or attributes—it’s the variational approach which makes things simpler. In differential equations, you are comparing two neighbouring points or instants lying on the curve of a function. In integral equations, you begin considering the entire function at a time, else you couldn’t calculate its definite integral. But still, in a way, the idea of taking an entire function in one go remains rather implicit. In the variational calculus, it becomes a full-blown thing. A variation itself is a function—a function obtained by taking the difference between the entirety of two functions in one go. Further, it’s an abstract function, because the two functions whose difference it represents themselves aren’t concretely specified. This is a big leap, and unfortunately, even the best and most helpful among books don’t point it out. The huge difference in thinking, represented by the Lagrangian approach, is simply poured onto an unsuspecting student. (Reddy’s or Lanczos’s books are no exception.)

There are several new ideas here. One of the most basic and important ones is: the idea of the delta operator.
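For concreteness, here is a sketch of the standard textbook notation (nothing here is specific to any one book): for a functional J[y] = \int_a^b L(x, y, y') \, dx, a variation \delta y = \epsilon \, \eta(x) is itself a function, and the first variation is

\delta J \equiv \left. \dfrac{d}{d\epsilon} J[y + \epsilon \eta] \right|_{\epsilon = 0}

Setting \delta J = 0 for all admissible \eta(x), and integrating by parts, yields the Euler–Lagrange equation.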


Expansion of Functions: Some idea about this is already given during UG engg. But not in the way professional/working physicists, or numerical modelers, or CSE engineers routinely use this idea. Let me illustrate with a concrete example.

To a UG engineer, say an electrical engineer, “expansion of function” means: taking an FFT, or taking a Fourier transform. But to a quantum physicist, what it means is: a linear combination of basis functions in a vector space (of functions). To both the UG electrical/electronics engineer and the quantum physicist, the basis functions are complex exponentials. They are wont to list the advantages of the complex-Fourier expansion over the real-polynomial expansion. But to a mechanical engineer doing FEM via the method of weighted residuals, the expansion mostly means only a real-valued polynomial. If he is sufficiently smart, he might even retort back to the EE/QM folks: and how do you prove Euler’s identity, if not in reference to the Taylor series expansion? (Yes, his point is valid. Yes, the EE folks’ point also is valid. The thing is: the power series expansion _is_ more fundamental, but given the algebraic completeness of the complex numbers, when you do the power series expansion using complex numbers, it naturally becomes more powerful.)
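To make the contrast concrete, here is a minimal sketch (the function and the basis sizes are arbitrary choices of mine) expanding the same function in the two kinds of bases:

    # Expanding the same function two ways: a real-polynomial fit vs. a
    # complex-exponential (Fourier) expansion. Both are "expansions of a
    # function," i.e., linear combinations over a chosen basis.
    import numpy as np

    x = np.linspace(0.0, 2.0 * np.pi, 400)
    f = np.sign(np.sin(x))                       # a square wave

    # (a) Real-polynomial basis: least-squares fit, degree 9.
    pc = np.polynomial.polynomial.polyfit(x, f, deg=9)
    f_poly = np.polynomial.polynomial.polyval(x, pc)

    # (b) Complex-exponential basis: project onto exp(i k x), k = -9..9,
    # via the inner product c_k = (1/2 pi) * integral of f * exp(-i k x) dx.
    ks = np.arange(-9, 10)
    dx = x[1] - x[0]
    c = np.array([(f * np.exp(-1j * k * x)).sum() * dx / (2.0 * np.pi) for k in ks])
    f_four = np.real(sum(ck * np.exp(1j * k * x) for ck, k in zip(c, ks)))

    print(np.sqrt(np.mean((f - f_poly) ** 2)),   # RMS error, polynomial basis
          np.sqrt(np.mean((f - f_four) ** 2)))   # RMS error, Fourier basis

One function, two very different “arrows” in two very different vector spaces.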

But the most remarkable difference in the grasp of what “expansion of function” means, drilled down to the level of an intuitive absolute, is this: An engineer/physicist with advanced training (in QM/CSE/FEM), over a period of time, becomes _unable_ to think of a field as a spatially spread entity. His natural proclivity has become to think of it as an arrow in an abstract vector space of basis functions—in some arbitrary basis set!

He also instinctively keeps the connection to eigenbases ready in his mind.


Ansatz: I won’t write anything new on it. Instead, I will direct you to my past writing here [^] and here [^] and Gershenfeld’s essay here [^].


Operator: I got tired of writing today, so I will expand this point later on. In any case, as I told you, this post is going to grow over a period of time. I will come back and add to it, and also edit it a lot, all unannounced. When I feel that a sufficient amount of material sufficiently well arranged has gathered here, I will then publish a separate post based on the material here.


Eigenbases of Operators: Ditto.


Tensors: The UG engineer understands (if he at all does) tensors as a 3×3 array of some differential terms, most often in a symmetrical arrangement. He may or may not understand tensors as objects that remain invariant under rotation. He certainly does not understand tensors as linear maps between vector spaces, nor does his mind immediately throw up the intimately connected contrast between the inner and the outer products. Nor does he understand a tensor containing differential terms as the first-order approximation in a power-series expansion. Nor does he realize the tensor product over higher-dimensional spaces, let alone over infinite-dimensional function spaces in some arbitrary eigenbasis. And more (which I myself don’t understand, but the QM guys do).
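A minimal sketch of just two of these viewpoints (the numbers are arbitrary): the inner vs. the outer product, and the resulting tensor acting as a linear map:

    # Inner vs. outer products, and a tensor as a linear map between vectors.
    import numpy as np

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([4.0, 5.0, 6.0])

    s = np.inner(u, v)        # inner product: two vectors -> a scalar
    T = np.outer(u, v)        # outer (tensor) product: two vectors -> a 3x3 tensor

    e1 = np.array([1.0, 0.0, 0.0])
    w = T @ e1                # the tensor maps a vector to another vector
    print(s, T, w, sep="\n")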


A group of many intimately related ideas, here:

8.1. Catastrophe Theory: Many UG engineers might never even have heard of the term! Here is my post covering a bit on it. The UG engineering maths syllabi typically don’t cover the idea that properties such as existence, regularity, and uniqueness have to be proved! Even if the syllabus (or the text) cursorily mentions these ideas (e.g. Kreyszig does!), you can safely bet that the student never bothered to read through them, because he “knew” that no exam-question would test him on that part. The idea that some neat initial condition may eventually evolve into multiple branches of solutions (i.e. non-uniqueness of solutions arising simply out of evolution) is a complete unknown to him. So is the non-uniqueness arising due to differing physical contexts having the same governing differential equation.

8.2. Deterministic Chaos: Most UG engineers by now have come to hear of this term. But they don’t understand what it means. (A minimal sketch follows right after this list.)

8.3. Well- vs. Ill-Posed Problems: Some UG engineers might have occasionally run into this term.
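As promised, here is the sketch for 8.2 (the logistic map at r = 4.0, a standard chaotic case; the particular numbers are just my illustration). The update rule is completely deterministic, yet two initial conditions differing by 10^{-10} separate to order one within a few dozen iterations:

    # Deterministic chaos in one line of algebra: the logistic map
    # x -> r * x * (1 - x) at r = 4.0. Two trajectories starting 1e-10
    # apart diverge to an order-one separation quite fast.
    r = 4.0
    x, y = 0.3, 0.3 + 1e-10
    for n in range(1, 61):
        x, y = r * x * (1.0 - x), r * y * (1.0 - y)
        if n % 10 == 0:
            print(n, abs(x - y))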


TBD: A laundry list of things to expand on, or to insert into the right places in the above list:

Differentiation under the integral sign/operator. Integration by parts and orders of continuity. Infinite sequences of functions (via a limiting process) under the integral operator (i.e., Dirac’s delta). Operators that make sense only under an integral sign. Functionals.

Infinite matrices. Vectors and matrices that have functions as elements. Projections of vectors, esp. in function spaces.

Tensors as fluxes of vectors (more accurately: tensor fields as flux-fields of vector fields).


No, I am not an expert on any of the above-mentioned ideas. It’s just that I have run into all of them, have tried to think about them, and have succeeded in understanding the essence of many of them. That’s all. I claim no good mastery. So, don’t come to me with your difficulties on these topics; ask the real experts. (In fact, I can only hope that the above description has come out more right than wrong, that’s all.)

But there are other things which I seem to know better. For instance, the physical meaning of the delta operator of the calculus of variations.

Alright, bye for now.


Update (it’s not necessary that I note updates here, but I will make an exception for now) on 2018.01.29, 23:00 HRS IST: Added the Song section.

A Song I Like:

(Hindi) “dil me jaagee dhaDakan aise…”
Singer: Sunidhi Chauhan
Music: M. M. Kareem (i.e. M. M. Keeravani )
Lyrics: Nida Fazli

 

How time flies…

I plan to conduct a smallish FDP (Faculty Development Program), for junior faculty, covering the basics of CFD sometime soon (may be starting in the second-half of February or early March or so).

During my course, I plan to give out some simple, pedagogical code that even non-programmers could easily run, and hopefully find easy to comprehend.


Don’t raise difficult questions right away!

Don’t ask me why I am doing it at all—especially given the fact that I myself never learnt my CFD in a class-room/university-course setting. And especially given the fact that excellent course materials and codes already exist on the ‘net (e.g. Prof. Lorena Barba’s course, Prof. Atul Sharma’s book and Web site, to pick just two of the so many resources already available).

But, yes, come to think of it, your question, by itself, is quite valid. It’s just that I am not going to entertain it.

Instead, I am going to ask you to recall that I am both a programmer and a professor.

As a programmer, you write code. You want to write code, and you do it. Whether better code already exists or not is not a consideration. You just write code.

As a professor, you teach. You want to teach, and you just do it. Whether better teachers or course-ware already exist or not is not a consideration. You just teach.

Admittedly, however, teaching is more difficult than coding. The difference here is that coding requires only a computer (plus software-writing software, of course!). But teaching requires other people! People who are willing to sit in front of you, at least faking listening to you with a rapt sort of attention.

But just the way, as a programmer, you don’t worry whether you know the algorithm or not when you fire up your favorite IDE, similarly, as a professor, you don’t worry whether you will get students or not.

And then, one big advantage of being a senior professor is that you can always “c” your more junior colleagues into attending, where “c” stands for {convince, confuse, cajole, coax, compel, …}. That’s why I am not worried—not at least for the time being—about whether I will get students for my course or not. Students will come, if you just begin teaching. That’s my working mantra for now…


But of course, right now, we are busy with our accreditation-related work. However, by February/March, I will become free—or at least free enough—to be able to begin conducting this FDP.


As my material for the course progressively gets ready, I will post some parts of it here. Eventually, by the time the FDP gets over, I would have uploaded all the material together at some place or the other. (May be I will create another blog just for that course material.)

This blog post was meant to note something on the coding side. But then, as usual, I ended up having this huge preface at the beginning.


When I was doing my PhD in the mid-noughties, I wanted a good public-domain (preferably open source) mesh generator. There were several of them, but mostly on the Unix/Linux platform.

I had nothing basically against Unix/Linux as such. My problem was that I found it tough to remember the line commands. My working memory is relatively poor, very poor. And that’s a fact; I don’t say it out of any (false or true) modesty. So, I found it difficult to remember all those shell and system commands and their options. Especially painful for me was to climb up and down a directory hierarchy, just to locate a damn file and open it already! Given my poor working memory, I had to have the entire structure laid out in front of me, instead of remembering commands or file names from memory. Only then could I work fast enough to be effective enough a programmer. And so, I found it difficult to use Unix/Linux. Ergo, it had to be Windows.

But, most of this Computational Science/Engineering code was not available (or even compilable) on Windows, back then. Often, the codes were buggy, too. In the end, I ended up using Bojan Niceno’s code, simply because it was in C (which I converted into C++), and because it was compilable on Windows.

Then, a few years later, when I was doing my industrial job in an FEM-software company, once again there was this requirement of an integrable mesh generator. It had to be: on Windows; open source; small enough, with not too many external dependencies (such as the Boost library or others); compilable using “the not really real” C++ compiler (viz. VC++ 6); one that was not very buggy or still was under active maintenance; and one more important point: the choice had to be respectable enough to be acceptable to the team and the management. I ended up using Jonathan Shewchuk’s Triangle.

Of course, all this along, I already knew about Gmsh, CGAL, and others (purely through my ‘net searches; none told me about any of them). But for some or the other reason, they were not “usable” by me.

Then, during the mid-teens (2010s), I went into teaching, and software development naturally took a back-seat.

A lot of things changed in the meanwhile. We all moved to 64-bit. I moved to Ubuntu for several years, and as the Idea NetSetter stopped working on the latest Ubuntu, I had no choice but to migrate back to Windows.

I then found that a lot of the platform wars had already disappeared. Windows (and Microsoft in general) had become not only better but also more accommodating of the open source movement; the Linux movement had become mature enough not to look down upon GUI users as mere script-kiddies; etc. In general, inter-operability had improved by leaps and bounds. Open Source projects were being not only released but also now being developed on Windows, not just on Unix/Linux. One possible reason why both the camps suddenly might have begun showing so much love to each other perhaps was that the mobile platform had come to replace the PC platform as the avant-garde choice of software development. I don’t know, because I was away from the s/w world, but I am simply guessing that that could also be an important reason. In any case, code could now easily flow back and forth between the two platforms.

Another thing to happen during my absence was: the wonderful development of the Python eco-system. It was always available on Ubuntu, and had made my life easier over there. After all, Python had a less whimsical syntax than many other alternatives (esp. the shell scripts); it carried all the marks of a real language. There were areas of discomfort. The one thing about Python which I found whimsical (and still do) is the lack of the braces for defining scopes. But such areas were relatively easy to overlook.

At least in the area of Computational Science and Engineering, Python had made it enormously easier to write ambitious codes. Just check out a C++ code using MPI for cluster computing, vs. the same code written in Python. Or, think of not having to write ridiculously fast vector classes (or having to compile disparate C++ libraries using their own make systems and compiler options, and then to make them all work together). Or, think of using libraries like LAPACK. No more clumsy wrappers, and no more having to keep on repeating any number of scope-resolution operators and namespaces bundling in ridiculously complex template classes. Just import NumPy or SciPy, and proceed to your work.

So, yes, I had come to register in my mind the great success story being forged by Python, in the meanwhile. (BTW, in case you don’t know, the name of the language comes from a British comedy TV serial, not from the whole-animal swallowing creep.) But as I said, I was now into academia, into core engineering, and there simply wasn’t much occasion to use any language, C++, Python or any other.

One more hindrance went away when I “discovered” that the PyCharm IDE existed! It not only was free, but also had VC++ key-bindings already bundled in. W o n d e r f u l ! (I would have no working memory to relearn yet another set of key-bindings, you see!)

In the meanwhile, VC++ anyway had become very big, very slow and lethargic, taking forever for the intelli-sense ever to produce something, anything. The older, lightweight, lightning-fast, and overall so charming IDE, i.e. the VC++ 6, had given way, because of the .NET platform, to this new IDE which behaved as if it was designed to kill the C++ language. My forays into using Eclipse CDT (with VC++ key-bindings) were only partially successful. Eclipse was no longer buggy; it had begun working really well. The major trouble here was: there was no integrated help at the press of the “F1” key. Remember my poor working memory? I had to have that F1 key opening up the .chm help file at just the right place. But that was not happening. And, debug-stepping through the code still was not as seamless as I had gotten used to in VC++ 6.

But with PyCharm + Visual Studio key-bindings, most of my concerns evaporated. Being an interpreted language, Python always would have an advantage as far as debug-stepping through the code is concerned. That’s the straight-forward part. But the real game-changer for me was: the maturation of the entire Python eco-system.

Every library you could possibly wish for was there, already available, like Aladdin’s genie standing with folded hands.

OK. Let me give you an example. You think of doing some good visualization. You have MatPlotLib. And a very helpful help file, complete with neat examples. No, you want more impressive graphics, like, say, volume rendering (voxel visualization). You have the entire VTK wrapped in; what more could you possibly want? (Windows vs. Linux didn’t matter.) But you instead want to write some custom code, say for animation? You have not just one, not just two, but literally tens of libraries covering everything: from OpenGL, to scene-graphs, to computational geometry, to physics engines, to animation, to games-writing, and what not. Windowing? You had the MFC-style wxWidgets, already put into a Python avatar as wxPython. (OK, OpenGL still gives trouble with wxPython for anything ambitious. But such things are rather isolated instances when it comes to the overall Python eco-system.)

And, closer to my immediate concerns, I was delighted to find that, by now, both OpenFOAM and Gmsh had become neatly available on Windows. That is, not just “available,” i.e., not just as sources that can be read, but also working as if the libraries were some shrink-wrapped software!

Availability on Windows was important to me, because, at least in India, it’s the only platform of familiarity (and hence of choice) for almost all of the faculty members from any of the e-school departments other than CS/IT.

Hints: For OpenFOAM, check out blueCFD instead of running it through Docker. It’s clean, and indeed works as advertised. As to Gmsh, ditto. And, it also comes with Python wrappers.

While the availability of OpenFOAM on Windows was only too welcome, the fact is, its code is guaranteed to be completely inaccessible to a typical junior faculty member from, say, a mechanical or a civil or a chemical engineering department. First, OpenFOAM is written in real (“templated”) C++. Second, it is very bulky (millions of lines of code, may be?). Clearly beyond the comprehension of a guy who has never seen more than 50 lines of C code at a time in his life before. Third, it requires the GNU compiler, a special make environment, and a host of dependencies. You simply cannot open OpenFOAM and show how those FVM algorithms from Patankar’s or Versteeg & Malalasekera’s books do the work under its hood. Neither can you ask your students to change a line here or there, may be add a line to produce an additional file output, just for bringing out the actual working of an FVM algorithm.

In short, OpenFOAM is out.

So, I have decided to use OpenFOAM only as a “backup.” My primary teaching material will only be Python snippets. The students will also get to learn how to install OpenFOAM and run the simplest tutorials. But the actual illustrations of the CFD ideas will be done using Python. I plan to cover only FVM and only simpler aspects of that. For instance, I plan to use only structured rectangular grids, not non-orthogonal ones.

I will write code that (i) generates mesh, (ii) reads mesh generated by the blockMesh of OpenFOAM, (iii) implements one or two simple BCs, (iv) implements the SIMPLE algorithm, and (v) uses MatPlotLib or ParaView to visualize the output (including any intermediate outputs of the algorithms).
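By way of a teaser for item (i), here is a minimal sketch, assuming a uniform structured rectangular grid (the function name and the interface are my own placeholders, not OpenFOAM’s):

    # A minimal structured-mesh generator: node coordinates plus cell-centre
    # coordinates for an nx-by-ny grid of finite-volume cells. (Placeholder
    # names; not OpenFOAM's API.)
    import numpy as np

    def make_structured_mesh(lx, ly, nx, ny):
        xn = np.linspace(0.0, lx, nx + 1)      # node positions along x
        yn = np.linspace(0.0, ly, ny + 1)      # node positions along y
        xc = 0.5 * (xn[:-1] + xn[1:])          # cell-centre positions along x
        yc = 0.5 * (yn[:-1] + yn[1:])          # cell-centre positions along y
        return (xn, yn), np.meshgrid(xc, yc, indexing="ij")

    (xn, yn), (XC, YC) = make_structured_mesh(1.0, 0.5, 10, 5)
    print(XC.shape)                            # -> (10, 5): one centre per cell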

I may then compare the outputs of these Python snippets with a similar output produced by OpenFOAM, for one or two of the simplest cases, like a simple laminar flow over a step. (I don’t think I will be covering VOF or any other multi-phase technique. My course is meant to cover only the basics.)

But not having checked Gmsh recently, and thus still carrying my old impressions, I was almost sure I would have to write something quick in Python to convert BMP files (showing geometry) into mesh files (with each pixel turning into a finite volume cell). The trouble with this approach was, the ability to impose boundary conditions would be seriously limited. So, I was a bit worried about it.

But then, last week, I just happened to check Gmsh, just to be sure, you know! And, WOW! I now “discovered” that the Gmsh is already all Python-ed in. Great! I just tried it, and found that it works, as bundled. Even on Windows. (Yes, even on Win7 (64-bit), SP1).

I was delighted, excited, even thrilled.

And then, I began “reflecting.” (Remember I am a professor?)

I remembered the times when I used to sit in a cyber-cafe, painfully downloading source code libraries over a single 64 kbps connection shared, in that cyber-cafe, across 6–8 PCs, without any UPS or backups in case the power went out. I would download the sources that way at the cyber-cafe, take them home to a Pentium machine running Win2K, try to open and read the source, only to find that I had forgotten to do the CRLF conversion first! And then, the sources wouldn’t compile because the make environment wouldn’t be available on Windows. Or something or the other of that sort. But still, I fought on. I remember having downloaded not only the OpenFOAM sources (with the hope of finding some way to compile them on Windows), but also MPICH2, PETSc 2.x, CGAL (some early version), and what not. Ultimately, after my valiant tries at the machine for a week or two, “nothing is going to work here,” I would eventually admit to myself.

And here is the contrast. I have a 4G connection, so I can comfortably sit at home and use the Python pip (or PyCharm’s Project Interpreter) to download or automatically update all the required libraries, even the heavy-weights like what they bundle inside SciPy and NumPy, or the VTK. I no longer have to manually sort out version incompatibilities or platform incompatibilities. I know I could develop on Ubuntu if I want to, and the student would be able to run the same thing on Windows.

Gone are those days. And how swiftly, it seems now.

How time flies…


I will be able to come back only next month because our accreditation-related documentation work has now gone into its final, culminating phase, which occupies the rest of this month. So, excuse me until sometime in February, say until 11th or so. I will sure try to post a snippet or two on using Gmsh in the meanwhile, but it doesn’t really look at all feasible. So, there.

Bye for now, and take care…


A Song I Like:

[Tomorrow is (Sanskrit, Marathi) “Ganesh Jayanti,” the birth-day of Lord Ganesha, which also happens to be the auspicious (Sanskrit, Marathi) “tithee” (i.e. lunar day) on which my mother passed away, five years ago. In her fond remembrance, I here run one of those songs which both of us liked. … Music is strange. I mean, a song as mature as this one, but I remember, I still had come to like it even as a school-boy. May be it was her absent-minded humming of this song which had helped? … may be. … Anyway, here’s the song.]

(Hindi) “chhup gayaa koi re, door se pukaarake”
Singer: Lata Mangeshkar
Music: Hemant Kumar
Lyrics: Rajinder Kishan