Work Is Punishment

Work is not worship—they said.

It’s a punishment, full stop!—they said.

One that is to be smilingly borne.

Else you lose your job.

And so lose everything else too. …


Hmmm… I said. … I was confused.

Work is enjoyment, actually. … I then discovered.

I told them.


They didn’t believe.

Not when I said it.

Not because they ceased believing in me.

It’s just that. They. Simply. Didn’t. Believe. In. It.

And they professed to believe in

a lot of things that never did make

any sense to themselves.

They said so.

And it was so.


A long many years have passed by, since then.


Now, whether they believe in it or not,

I have come to believe in this gem:

Work is punishment—full stop.


That’s the principle on the basis of which I am henceforth going to operate.

And yes! This still is a poem alright?

[What do you think most poems written these days are like?]

It remains a poem.


And I am going to make money. A handsome amount of money.

For once in my life-time.

After all, one can make money and still also write poems.

That’s what they say.

Or do science. Real science. Physics. Even physics for that matter.

Or, work. Real work, too.


It’s better than having no money and…

.




Python scripts for simulating QM, part 0: A general update

My proposed paper on my new approach to QM was not accepted at the international conference to which I had sent my abstract. (For context, see the post before the last, here [^].)

“Thank God,” that’s what I actually felt when I received this piece of news, “I can now immediately proceed to procrastinate on writing the full-length paper, and also, simultaneously, un-procrastinate on writing some programs in Python.”

So far, I have written several small and simple code-snippets. All of these were for the usual (text-book) cases; all in only 1D. Here in this post, I will mention specifically which ones…


Time-independent Schrodinger equation (TISE):

Here, I’ve implemented a couple of scripts, one for finding the eigen-vectors and -values for a particle in a box (with both zero and arbitrarily specified potentials) and another one for the quantum simple harmonic oscillator.

These were written not with the shooting method (the method used in Rhett Allain's article for the Wired magazine [^]) but with the matrix method. … Yes, I have gone past the stage of writing all the numerical analysis algorithms from scratch, all by myself. These days, I directly use Python libraries wherever available, e.g., NumPy's LinAlg methods; that is why I preferred the matrix method. … My code was not written from scratch either; it was based on Ian Cooper's code "qp_se_matrix" [PDF ^].
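To make the matrix method concrete, here is a minimal sketch of the standard textbook construction (this is not Cooper's code; the grid size, box length, and the units \hbar = m = 1 are my assumptions):

```python
import numpy as np

# Minimal sketch: 1D TISE by the matrix method (hbar = m = 1).
# Discretize the Laplacian as a tridiagonal matrix and hand the
# resulting Hamiltonian to NumPy's symmetric eigensolver.
N = 1000                                  # interior grid points (assumed)
L = 1.0                                   # box length (assumed)
x = np.linspace(0.0, L, N + 2)[1:-1]      # interior nodes; psi = 0 at walls
dx = x[1] - x[0]

V = np.zeros(N)                           # particle in a box; put any V(x) here

D2 = (np.diag(np.ones(N - 1), -1)         # second-difference operator
      - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), +1)) / dx**2

H = -0.5 * D2 + np.diag(V)                # Hamiltonian: -(1/2) d^2/dx^2 + V

E, psi = np.linalg.eigh(H)                # eigenvalues (ascending) and modes
print(E[:3] * 2.0 / np.pi**2)             # for the box: ~ n^2, i.e., 1, 4, 9
```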


Time-dependent Schrodinger equation (TDSE):

Here, I tried out a couple of scripts.

The first one was more or less a straightforward porting of Ian Cooper’s program “se_fdtd” [PDF ^] from the original MatLab to Python. The second one was James Nagel’s Python program (written in 2007 (!) and hosted as a SciPy CookBook tutorial, here [^]). Both follow essentially the same scheme.

Initially, I found this scheme to be a bit odd to follow. Here is what it does.

It starts out by replacing the complex-valued Schrodinger equation with a pair of real-valued (time-dependent) equations. That was perfectly OK by me. It was their discretization which I found to be a bit peculiar. The discretization scheme here is second-order in both space and time, and yet it involves explicit time-stepping. That’s peculiar, so let me write a detailed note below (in part, for my own reference later on).

Also note: though both Cooper and Nagel implement essentially the same method, Nagel's program is written in Python and is therefore easier to discuss (the array-indexing is 0-based). For this reason, I will refer directly only to Nagel's program, with the understanding that the same scheme is also implemented in Cooper's code.


A note on the method implemented by Nagel (and also by Cooper):

What happens here is that like the usual Crank-Nicolson (CN) algorithm for the diffusion equation, this scheme too puts the half-integer time-steps to use (so as to have a second-order approximation for the first-order derivative, that of time). However, in the present scheme, the half-integer time-steps turn out to be not entirely fictitious (the way they are, in the usual CN method for the single real-valued diffusion equation). Instead, all of the half-integer instants are fully real here in the sense that they do enter the final discretized equations for the time-stepping.

The way that comes to happen is this: There are two real-valued equations to solve here, coupled to each other—one each for the real and imaginary parts. Since both the equations have to be solved at each time-step, what this method does is to take advantage of that already existing splitting of the time-step, and implements a scheme that is staggered in time. (Note, this scheme is not staggered in space, as in the usual CFD codes; it is staggered only in time.) Thus, since it is staggered and explicit, even the finite-difference quantities that are defined only at the half-integer time-steps, also get directly involved in the calculations. How precisely does that happen?

The scheme defines, allocates memory storage for, and computationally evaluates the equation for the real part, but this computation occurs only at the full-integer instants (n = 0, 1, 2, \dots). Similarly, this scheme also defines, allocates memory for, and computationally evaluates the equation for the imaginary part; however, this computation occurs only at the half-integer instants (n = 1/2, 1+1/2, 2+1/2, \dots). The particulars are as follows:

The initial condition (IC) being specified is, in general, complex-valued. The real part of this IC is set into a space-wide array defined for the instant n; here, n = 0. Then, the imaginary part of the same IC is set into a separate array which is defined nominally for a different instant: n+1/2. Thus, even if both parts of the IC are specified at t = 0, the numerical procedure treats the imaginary part as if it was set into the system only at the instant n = 1/2.

Given this initial set-up, the actual time-evolution proceeds as follows:

  • The real part already available at n is used in evaluating the "future" imaginary part—the one at n+1/2.
  • The imaginary part thus found at n+1/2 is used, in turn, for evaluating the “future” real part—the one at n+1.

At this point, you are allowed to say: lather, wash, repeat… Figure out exactly how. In particular, notice how the simulation must proceed in an integer number of pairs of computational steps, and how the imaginary part is only nominally (i.e., only computationally) distant in time from its corresponding real part.
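In code, the two update equations of this staggered, explicit scheme reduce to just a few lines. Here is a minimal sketch (with \hbar = m = 1; the pulse parameters and the time-step are my assumptions, and this is not Nagel's actual program):

```python
import numpy as np

# Minimal sketch of the staggered explicit TDSE update (hbar = m = 1).
# R holds Re(psi) at integer steps n; I holds Im(psi) at half-integer
# steps. Each array simply stores "the current value"; the staggering
# shows up only in the order of the updates.
N, dx = 1000, 0.1
dt = 0.1 * dx**2                  # well inside the stability limit (assumed)
x = np.arange(N) * dx
V = np.zeros(N)                   # free particle

# Gaussian-modulated plane-wave pulse (parameters assumed).
k0, x0, s = 2.0, 0.5 * N * dx, 2.5
g = np.exp(-((x - x0) / s) ** 2)
R = g * np.cos(k0 * x)            # Re(psi), treated as being at n = 0
I = g * np.sin(k0 * x)            # Im(psi), treated as being at n = 1/2

def lap(f):
    """Second difference on interior nodes; end nodes stay pinned at 0."""
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return out

for n in range(5000):
    I += dt * (0.5 * lap(R) - V * R)   # I: n - 1/2  ->  n + 1/2
    R -= dt * (0.5 * lap(I) - V * I)   # R: n        ->  n + 1
```

Note how one pass through the loop advances the pair (R, I) by exactly one full \Delta t, which is the "integer number of pairs of computational steps" mentioned above.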

Thus, overall, the discretization of the space part is pretty straight-forward here: the second-order derivative (the Laplacian) is replaced by the usual second-order finite difference approximation. However, for the time-part, what this scheme does is both similar to, and different from, the usual Crank-Nicolson scheme.

Like the CN scheme, the present scheme also uses the half-integer time-levels, and thus manages to become a second-order scheme for the time-axis too (not just space), even if the actual time interval for each time-step remains, exactly as in the CN, only \Delta t, not 2\Delta t.

However, unlike the CN scheme, this scheme still remains explicit. That’s right. No matrix equation is being solved at any time-step. You just zip through the update equations.

Naturally, the zipping through comes with a "cost": the scheme comes equipped with a stability criterion; it is not unconditionally stable (the way CN is). In fact, the stability criterion now refers to half of the time-interval, not the full one, and is thus even a bit more restrictive as to how big the time-step (\Delta t) can be for a given granularity of the space-discretization (\Delta x). … I don't know, but I guess that this is how the first-order time derivatives are handled in the FDTD method (finite difference time domain). Maybe the physics of their problems is itself such that they can get away with coarser grids without becoming physically too inaccurate, who knows…


Other aspects of the codes by Nagel and Cooper:

For the initial condition, both Cooper and Nagel begin with a "pulse": a cosine function modulated to have the envelope of a Gaussian. In both their codes, the pulse is placed in the middle of the domain, and the simulation is terminated when the pulse reaches one end of the finite domain. I didn't like this aspect of an arbitrary termination of the simulation.

However, I am still learning the ropes for numerically handling the complex-valued Schrodinger equation. In any case, I am not sure if I've got a good enough handle on the FDTD-like aspects of it. In particular, as of now, I am left wondering:

What if I have a second-order scheme for the first-order derivative of time, but if it comes with only fictitious half-integer time-steps (the way it does, in the usual Crank-Nicolson method for the real-valued diffusion equation)? In other words: What if I continue to have a second-order scheme for time, and yet, my scheme does not use leap-frogging? In still other words: What if I define both the real and imaginary parts at the same integer time-steps n = 0, 1, 2, 3, \dots so that, in both cases, their values at the instant n are directly fed into both their values at n+1?

In a way, this scheme seems simpler, in that no leap-frogging is involved. However, notice that it would also be an implicit scheme. I would have to solve two matrix-equations at each time-step. But then, I could perhaps get away with a larger time-step than what Nagel or Cooper use. What do you think? Is checker-board patterning (the main reason why we at all use staggered grids in CFD) an issue here—in time evolution? But isn’t the unconditional stability too good to leave aside without even trying? And isn’t the time-axis just one-way (unlike the space-axis that has BCs at both ends)? … I don’t know…
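For what it is worth, here is a minimal sketch of that non-staggered, implicit idea in its most compact form: the standard complex-valued Crank-Nicolson step. In NumPy, the complex algebra comes for free, so one complex system stands in for the two coupled real-valued matrix equations mentioned above; the grid and pulse parameters are my assumptions, and this is only a sketch, not how I will necessarily implement it:

```python
import numpy as np

# Minimal sketch: complex-valued Crank-Nicolson for the 1D TDSE
# (hbar = m = 1). Both parts of psi live at the same integer steps;
# the price is one matrix solve per step, the reward is unconditional
# stability.
N, dx = 500, 0.1
dt = 0.5 * dx                        # far beyond the explicit limit
x = np.arange(N) * dx
V = np.zeros(N)

D2 = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), +1)) / dx**2
H = -0.5 * D2 + np.diag(V)

A = np.eye(N) + 0.5j * dt * H        # acts on psi at n + 1
B = np.eye(N) - 0.5j * dt * H        # acts on psi at n

# Gaussian-modulated plane-wave pulse (parameters assumed). The
# truncated tridiagonal D2 already encodes psi = 0 beyond the ends,
# i.e., the "infinite box" case.
psi = np.exp(-((x - 0.5 * N * dx) / 2.5) ** 2) * np.exp(2.0j * x)

for n in range(1000):
    psi = np.linalg.solve(A, B @ psi)
print(np.sum(np.abs(psi) ** 2) * dx)  # the norm stays put (the CN step is unitary)
```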


PBCs and ABCs:

Even as I was (and am) still grappling with the above-mentioned issue, I also wanted to make some immediate progress on the front of not having to terminate the simulation (once the pulse reached one of the ends of the domain).

So, instead of working right from the beginning with a (literally) complex Schrodinger equation, I decided to first model the simple (real-valued) diffusion equation, and to implement the PBCs (periodic boundary conditions) for it. I did.

My code seems to work: the integral of the dependent variable (i.e., the total amount of the diffusing quantity present in the entire domain—one with the topology of a ring) does stay constant, as promised by the Crank-Nicolson scheme. The integral stays "numerically the same" (within a small tolerance) even though, obviously, there now are fluxes at both ends. (An initial condition of a symmetric saw-tooth profile defined between y = 0.0 and y = 1.0 does come to asymptotically approach the horizontal straight line at y = 0.5. That is what happens at run-time, so the scheme does seem to handle the fluxes right.)
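For reference, a minimal sketch of what such a diffusion-with-PBCs code does (the parameter choices are mine, not those of my actual script; the wrap-around entries in the matrix are what turn the line segment into a ring):

```python
import numpy as np

# Minimal sketch: Crank-Nicolson for u_t = D u_xx on a ring (PBCs),
# plus a conservation check on the total diffusing quantity.
N, dx, D = 200, 1.0 / 200, 1.0
dt = dx                                   # CN tolerates large steps

D2 = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), +1))
D2[0, -1] = D2[-1, 0] = 1.0               # wrap-around: the ring topology
D2 /= dx**2

r = 0.5 * D * dt
A = np.eye(N) - r * D2
B = np.eye(N) + r * D2

x = np.arange(N) * dx
u = 4.0 * np.abs((x % 0.5) - 0.25)        # symmetric saw-tooth in [0, 1]

total0 = u.sum() * dx
for step in range(2000):
    u = np.linalg.solve(A, B @ u)
print(u.sum() * dx - total0)              # ~ 0: the total stays constant
print(u.min(), u.max())                   # both approach 0.5
```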

Anyway, I don't always write everything from scratch; I am a great believer in lifting code already written by others (with attribution, of course :)). Thus, while searching the 'net for existing resources on numerically modeling the Schrodinger equation (preferably with code!), I also ran into some papers on the simulation of the SE using ABCs (i.e., absorbing boundary conditions). I was not sure, however, if I should implement the ABCs immediately…

As of today, I think that I am going to try and graduate from the transient diffusion equation (with the CN scheme and PBCs) to a trial of the implicit TDSE without leap-frogging, as outlined above. The only question is whether I should throw in the PBCs or the ABCs to go with it. Or maybe neither: just keep pinning the \Psi values for the end- and ghost-nodes down to 0, thereby effectively putting the entire simulation inside an infinite box?

At this point of time, I am tempted to try out the last. Thus, I think that I would rather first explore the staggering vs. non-staggering issue for a pulse in an infinite box, and understand it better, before proceeding to implement either the PBCs or the ABCs. Of course, I still have to think more about it… But hey, as I said, I am now in a mood of implementing, not of contemplating.


Why not upload the programs right away?

BTW, all these programs (TISE with matrix method, TDSE on the lines of Nagel/Cooper’s codes, transient DE with PBCs, etc.) are still in a fluid state, and so, I am not going to post them immediately here (though over a period of time, I sure would).

The reason for not posting the code runs something like this: sometimes I use Python range objects for indexing. (I saw this goodie in Nagel's code.) At other times, I don't. But even when I don't use range objects, I am anyway tempted to revise the code to use them (for better run-time efficiency).

Similarly, for the CN method, when it comes to solving the matrix equation at each time-step, I am still not using the TDMA (the Thomas algorithm), or even just sparse matrices. Instead, right now, I am allocating entire N \times N matrices and directly using NumPy's LinAlg solve() function on these biggies. No, the computational load doesn't show up; after all, I anyway have to use a 0.1 second pause between the rendering passes, and the biggest matrices I tried were only 1001 \times 1001 in size. (Remember, this is just a 1D simulation.) Even then, I am a bit tempted to improve the efficiency. For these and similar reasons, some tweaking or the other is still going on in all the programs. That's why I won't be uploading them right away.
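When I do get to it, the switch will look something like the following. This is a minimal sketch of the Thomas algorithm (TDMA) for a tridiagonal system; note that with PBCs the matrix picks up the two wrap-around corner entries and stops being strictly tridiagonal, so a plain TDMA would then need the Sherman-Morrison correction (or one could just use scipy.sparse instead):

```python
import numpy as np

def tdma(a, b, c, d):
    """Solve a tridiagonal system by the Thomas algorithm:
    a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i], a[0] and c[-1] unused.
    O(N) work instead of the O(N^3) of a dense solve."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Tiny self-check against the dense solver.
n = 5
a = np.full(n, 1.0); b = np.full(n, -2.0); c = np.full(n, 1.0)
d = np.arange(n, dtype=float)
A = np.diag(a[1:], -1) + np.diag(b) + np.diag(c[:-1], +1)
print(np.allclose(tdma(a, b, c, d), np.linalg.solve(A, d)))  # True
```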


Anything else about my new approach, like delivering a seminar or so? Any news from the Indian physicists?

I had already contacted a couple of physics professors from India, both from Pune: one about 1.5 years ago, and another within the last 6 months. Both times, I offered to become a co-guide for some computational physics projects to be done by their PG/UG students. Both times (what else?) there was absolutely no reply to my emails. … Had they responded, we could have together progressed further on simulating my approach. … I have always been "open" about it.

The above-mentioned experience precisely mirrors how there were no replies when I wrote to some other professors of physics, offering to conduct a seminar (covering my new approach) in their departments; in particular, from the young IISER Pune professor to whom I had written. … Oh yes, BTW, there has been one more physicist whom I contacted recently (within the last month) for a seminar. Once again, there has been no reply. (This professor is known to enjoy hospitality abroad as an Indian, and also to use my taxpayer's money for research while in India.)

No, the issue is not whether the emails I write using my Yahoo! account go into their spam folder—or something like that. That would be too innocuous a cause, and too easy to deal with—everyone has a mobile phone these days. But I know these (Indian) physicists. Their behaviour remains exactly the same even if I write my emails using a respectable academic email ID (my employers', complete with a .edu domain). This was my experience in 2016, and it repeated in 2017.

The bottom-line is this: if you are an engineer and you write to these Indian physicists, there is almost a guarantee that your emails will go into a black hole. They will not reply to you even if you yourself have a PhD, are a Full Professor of engineering (even if only on an ad-hoc basis), have studied and worked abroad, and have a blog that is followed internationally. So long as you are an engineer and mention QM, the Indian physicists simply shut themselves off.

However, there is a trick to get them to reply to you. Their behavior does temporarily change when you put some impressive guy in your cc-field (e.g., some professor friend of yours from some IIT). In this case, they sometimes do reply to your first email. However, soon after that initial shaking of hands, they somehow go back to their true core; they shut themselves off.

And this is what invariably happens with all of them—no matter what other Indian bloggers might have led you to believe.

There must be some systemic reasons for such behavior, you say? Here, I will offer a couple of relevant observations.

Systemically speaking, Indian physicists, taken as a group (and leaving any possible rarest-of-the-rare exceptions aside), all fall into one band: (i) the first commonality is that they all are government employees; (ii) the second is that they all tend to be leftists (or heavily so); and (iii) the third is that, by and large, they had lower (or far lower) competitive scores in the entrance examinations at gateway points like XII, GATE/JAM, etc.

The first factor typically means that they know no one is going to ask them why they didn't reply (even to people with my background). The second factor typically means that they don't want to give you any mileage, not even plain academic respect, if you are not already one of "them". The third factor typically means that they simply don't have the intellectual means to understand or judge anything you say if it is original—i.e., if it is not based on some work of someone from abroad. In plain words: they are incompetent. (That is partly why, whenever I run into a competent Indian physicist, it is both a surprise and a pleasure. To drop a couple of names: Prof. Kanhere (now retired) from UoP (now SPPU), and Prof. Waghmare of JNCASR. … But leaving aside this minuscule minority and coming to the rest of the herd: the less said, the better.)

In short, Indian physicists all fall into a band. And they all are very classical—no tunneling is possible. Not with these Indian physicists. (The trends, I guess, are similar all over the world. Yet, I definitely can say that Indians are worse, far worse, than people from the advanced, Western, countries.)

Anyway, as far as the path through the simulations goes, since no help is going to come from these government servants (regarded as physicists by foreigners), I have now realized that I have to get going about it—simulations for my new approach—entirely on my own. If necessary, from the most basic of basics. … And that's how I got going with these programs.


Are these programs going to provide a peek into my new approach?

No, none of the programs I talked about in this post is going to be directly helpful for simulations related to my new approach. The programs I have written thus far are all very, very standard (simplest UG text-book level) stuff. If resolving the QM riddles were that easy, any number of people would have done it already.

… So, the programs I wrote over the last couple of weeks are nothing but just a beginning. I have to cover a lot of distance. It may take months, perhaps even a year or so. But I intend to keep working at it. At least in an off and on manner. I have no choice.

And, at least currently, I am going about it at a fairly good speed.

For the same reason, expect no further blogging for another 2–3 weeks or so.


But one thing is for certain. As soon as my paper on my new approach (to be written after running the simulations) gets written, I am going to quit QM. The field does not hold any further interest for me.

Coming to you: If you still wish to know more about my new approach before the paper gets written, then you convince these Indian professors of physics to arrange for my seminar. Or, else…

… What else? Simple! You. Just. Wait.

[Or see me in person if you would be visiting India. As I said, I have always been “open” from my side, and I continue to remain so.]


A song I like:
(Hindi) “bheegee bheegee fizaa…”
Music: Hemant Kumar
Singer: Asha Bhosale
Lyrics: Kaifi Aazmi


History:
Originally published: 2018.11.26 18:12 IST
Extension and revision: 2018.11.27 19:29 IST

Blog-Filling—Part 3

Note: A long Update was added on 23 November 2017, at the end of the post.


Today I got just a little bit of respite from what has been a very tight schedule, which has been running into my weekends, too.

But at least for today, I do have a bit of a respite. So, I could at least think of posting something.

But for precisely the same reason, I don’t have any blogging material ready in the mind. So, I will just note something interesting that passed by me recently:

  1. Catastrophe Theory: Check out Prof. Zhigang Suo's recent blog post at iMechanica on catastrophe theory, here [^]; it's marked by Suo's trademark simplicity. He also helpfully provides a copy of Zeeman's 1976 SciAm article. Regular readers of this blog will know that I am a big fan of catastrophe theory; see, for instance, my last post mentioning the topic, here [^].
  2. Computational Science and Engineering, and Python: If you are into computational science and engineering (which is The Proper And The Only Proper long-form of “CSE”), and wish to have fun with Python, then check out Prof. Hans Petter Langtangen’s excellent books, all under Open Source. Especially recommended is his “Finite Difference Computing with PDEs—A Modern Software Approach” [^]. What impressed me immediately was the way the author begins this book with the wave equation, and not with the diffusion or potential equation as is the routine practice in the FDM (or CSE) books. He also provides the detailed mathematical reason for his unusual choice of ordering the material, but apart from his reason(s), let me add in a comment here: wave \Rightarrow diffusion \Rightarrow potential (Poisson-Laplace) precisely was the historical order in which the maths of PDEs (by which I mean both the formulations of the equations and the techniques for their solutions) got developed—even though the modern trend is to reverse this order in the name of “simplicity.” The book comes with Python scripts; you don’t have to copy-paste code from the PDF (and then keep correcting the errors of characters or indentations). And, the book covers nonlinearity too.
  3. Good Notes/Teachings/Explanations of UG Quantum Physics: I ran across Dan Schroeder's "Entanglement isn't just for spin." Very true. And it needed to be said [^]. BTW, if you want a gentler introduction to UG-level QM than is presented in Allan Adams (et al.)'s MIT OCW 8.04–8.06 [^], then make sure to check out Schroeder's course at Weber [^] too. … Personally, though, I keep fantasizing about going through all the videos of Adams' course, taking notes, and posting them at my Web site. [… sigh]
  4. The Supposed Spirituality of the “Quantum Information” Stored in the “Protein-Based Micro-Tubules”: OTOH, if you are more into philosophy of quantum mechanics, then do check out Roger Schlafly’s latest post, not to mention my comment on it, here [^].

Point no. 4 above was added in lieu of the usual "A Song I Like" section. The reason is, though I could squeeze in the time to write this post, I still remain far too rushed to think of a song—and to think/check whether I have already run it here or not. But I will try to add one later on, either to this post or, if there is a big delay, as the next "blog filler" post, the next time round.

[Update on 23 Nov. 2017 09:25 AM IST: Added the Song I Like section; see below]

OK, that’s it! … Will catch you at some indefinite time in future here, bye for now and take care…


A Song I Like:

(Western, Instrumental) “Theme from ‘Come September'”
Credits: Bobby Darin (?) [+ Billy Vaughn (?)]

[I grew up in what were absolutely rural areas of Maharashtra, India. All my initial years, till my 9th standard, were limited, at their upper end in the continuum of urbanity, to Shirpur, which still is only a taluka place. And, back then, it was decidedly far more of a backward + adivasi region. The population of the main town itself hadn't reached more than 15,000 or so by the time I left it in my X standard; the town didn't have a single traffic light; most of the houses (including the one we lived in) were load-bearing structures, not RCC; all the roads in the town were single-lane; etc.

Even that being the case, I happened to listen to this song—a Western song—right when I was in Shirpur, in my 2nd/3rd standard. I first heard the song at my Mama’s place (an engineer, he was back then posted in the “big city” of the nearby Jalgaon, a district place).

As to this song, as soon as I listened to it, I was "into it." I remained so for all the days of that vacation at Mama's place. Yes, it was a 45 RPM record, and the permission to put the record on the player, and even to play it entirely on my own, was hard won, after a determined and tedious effort to show all the elders that I was able to put the pin onto the record very carefully. And everyone in the house was an elder to me: my siblings, cousins, uncle, his wife, not to mention my parents (who were the last ones to be satisfied). But once the recognition arrived, I used it to the hilt; I must have ended up playing this record at least 5 times on every remaining day of the vacation back then.

As far as I am concerned, I am entirely positive that appreciation for a certain style or kind of music isn’t determined by your environment or the specific culture in which you grow up.

As far as songs like these are concerned, today I am able to discern that what I had immediately though indirectly grasped, even as a 6–7 year old child, was what I today would describe as a certain kind of an "epistemological cleanliness." There was a clear adherence to certain definite, delimited kinds of specifics, whether in terms of tones or rhythm. Now, it sure did help that this tune was happy. But frankly, I am certain I would've liked a "clean" song like this one—one with very definite "separations"/"delineations" in its phrases, in its parts—even if the song itself weren't so directly evocative of such a frankly happy mood. Indian music, in contrast, tends to keep "continuity" for its own sake, even when it's not called for, and a certain downside of that style is that it leads to a badly mixed-up "curry" of indefinitely stretched-out wailings, even noise, very proudly passing as "music". (In evidence: pick up any traditional "royal palace"/"kothaa" music.) … Yes, of course, there is a symmetrical downside to the specific "separated" style of Western music too; the specific style of noise it can easily slip into is a disjointed kind of noise. (In evidence, I offer 90% of Western classical music, and 99.99% of Western popular "music". As to which 90%, well, we have to meet in person and listen to select pieces of music on the fly.)

Anyway, coming back to the present song, today I searched for the original soundtrack of “Come September”, and got, say, this one [^]. However, I am not too sure that the version I heard back then was this one. Chances are much brighter that the version I first listened to was Billy Vaughn’s, as in here [^].

… A wonderful tune, and, as an added bonus, it never does fail to take me back to my “salad days.” …

… Oh yes, another fond memory: that vacation also was the very first time I came to wear a T-shirt; my Mama had gifted it to me during that vacation. The actual choice of a T-shirt rather than a shirt (+shorts, of course) was that of my cousin sister (who unfortunately is no more). But I distinctly remember her being surprised to learn that I was in no mood to have a T-shirt when I didn't know what the word meant… I also distinctly remember her assuring me, in sweet tones, that a T-shirt would look good on me! … You see, in rural India, at least back then, T-shirts weren't heard of; for years later on, maybe until I went to Nasik in my 10th standard, it would remain the only T-shirt I had ever worn. … But, anyway, as far as T-shirts go… well, as you know, I was into software engineering, and so….

Bye [really] for now and take care…]


The Infosys Prizes, 2015

I realized that it was the end of November the other day, and it somehow struck me that I should check out if there has been any news on the Infosys prizes for this year. I vaguely recalled that they make the yearly announcements sometime in the last quarter of a year.

Turns out that, although academic bloggers whose blogs I usually check out had not highlighted this news, the prizes had already been announced right in mid-November [^].

It also turns out that, yes, I "know"—i.e., have in-person chatted (exactly once) with—one of the recipients: Professor Dr. Umesh Waghmare, who received this year's award for Engineering Sciences [^]. I had run into him at an informal conference once, and have written about it in a recent post, here [^].

Dr. Waghmare is a very good choice, if you ask me. His work is very neat—I mean both the ideas which he picks out to work on, and the execution on them.

I still remember his presentation at that informal conference (where I chatted with him). He had talked about a (seemingly) very simple idea, related to graphene [^]—its buckling.

Here is my highly dumbed-down version of that work by Waghmare and co-authors. (It's dumbed down a lot—Waghmare et al.'s work was on buckling, not bending. But that's OK; this is just a blog, and I guess I have a pretty general sort of a "general readership" here.)

Bending, in general, sets up a combination of tensile and compressive stresses, which results in the setting up of a bending moment within a beam or a plate. All engineers (except possibly for the “soft” branches like CS and IT) study bending quite early in their undergraduate program, typically in the second year. So, I need not explain its analysis in detail. In fact, in this post, I will write only a common-sense level description of the issue. For technical details, look up the Wiki articles on bending [^] and buckling [^] or Prof. Bower’s book [^].

Assuming you are not an engineer, you can always take a longish rubber eraser, hold it so that its longest edge is horizontal, and then bend it with a twist of your fingers. If the bent shape is like an inverted 'U', then the inner (bottom) surface has got compressed, and the outer (top) surface has got stretched. Since compression and tension are opposite in nature, and since the eraser is a continuous body of a finite height, it is easy to see that there has to be a continuous surface within the volume of the eraser, somewhere half-way through its height, where there are no stresses. That's because the stresses change sign in going from the compressive stress at the bottom surface to the tensile stress at the top surface. For simplicity of mathematics, this problem is modeled as a 1D (line) element, and therefore, in elasticity theory, this actual 2D surface is referred to as the neutral axis (i.e., a line).

The deformation of the eraser is elastic, which means that it remains in the bent state only so long as you are applying a bending “force” to it (actually, it’s a moment of a force).

The classical theory of bending allows you to relate the curvature of the beam and the bending moment applied to it. Thus, knowing the bending moment (or the applied forces), you can tell how much the eraser should bend. Or, knowing how much the eraser has curved, you can tell how big a pair of forces would have to be applied at its ends. The theory works pretty well; it forms the basis of how most buildings are designed, anyway.
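For reference, the relation being used here is the standard flexure formula of the Euler-Bernoulli beam theory (textbook material, not anything specific to Waghmare's work):

\frac{M}{I} = \frac{\sigma}{y} = \frac{E}{R}

where M is the bending moment, I the second moment of area of the cross-section, \sigma the bending stress at a distance y from the neutral axis, E Young's modulus, and R the radius of curvature of the bent beam.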

So far, so good. What happens if you bend, not an eraser, but a graphene sheet?

The peculiarity of graphene is that it is a single-atom-thick sheet of carbon atoms. Your usual eraser contains billions and billions of layers of atoms through its thickness. In contrast, the thickness of a graphene sheet is entirely accounted for by the finite size of a single layer of atoms. And, it is found that, unlike thin paper, the graphene sheet, even though it is the most extreme case of a thin sheet, actually offers good resistance to bending. How do you explain that?

The naive expectation is that something related to the interatomic bonding within this single layer must, somehow, produce both the compressive and tensile stresses—and the systematic variation from the locally tensile to the locally compressive state as we go through this thickness.

Now, at the scale of single atoms, quantum mechanical effects obviously are dominant. Thus, you have to consider those electronic orbitals setting up the bond. A shift in the density of the single layer of orbitals should correspond to the stresses and strains in the classical mechanics of beams and plates.

What Waghmare related at that conference was a very interesting bit.

He calculated the stresses as predicted by (in my words) the changed local density of the orbitals, and found that the forces predicted this way are way smaller than the experimentally reported values for graphene sheets. In other words, the actual graphene is much stiffer than what the naive quantum mechanics-based model shows—even if the model considers those electronic orbitals. What is the source of this additional stiffness?

He then showed a more detailed calculation (i.e. a simulation), and found that the additional stiffness comes from a quantum-mechanical interaction between the portions of the atomic orbitals that go off transverse to the plane of the graphene sheet.

Thus, suppose a graphene sheet is initially held horizontally, and is then bent to form an inverted-U-like curvature. According to Waghmare and co-authors, you now have to consider not just the orbital cloud between the atoms (i.e., the cloud lying in the same plane as the graphene sheet) but also the orbital "petals" that shoot vertically off the plane of the graphene. Such petals are attached to the nucleus of each C atom; they are a part of the electronic (or orbital) structure of the carbon atoms in the graphene sheet.

In other words, the simplest engineering sketch for the graphene sheet, as drawn in the front view, wouldn’t look like a thin horizontal line; it would also have these small vertical “pins” at the site of each carbon atom, overall giving it an appearance rather like a fish-bone.

What happens when you bend the graphene sheet is that on the compression side, the orbital clouds for these vertical petals run into each other. Now, you know that an orbital cloud can be loosely taken as the electronic charge density, and that the like charges (e.g. the negatively charged electrons) repel each other. This inter-electronic repulsive force tends to oppose the bending action. Thus, it is the petals’ contribution which accounts for the additional stiffness of the graphene sheet.

I don’t know whether this result was already known to the scientific community back then in 2010 or not, but in any case, it was a very early analysis of bending of graphene. Further, as far as I could tell, the quality of Waghmare’s calculations and simulations was very definitely superlative. … You work in a field (say computational modeling) for some time, and you just develop a “nose” of sorts, that allows you to “smell” a superlative calculation from an average one. Particularly so, if your own skills on the calculations side are rather on the average, as happens to be the case with me. (My strengths are in conceptual and computational sides, but not on the mathematical side.) …

So, all in all, it’s a very well deserved prize. Congratulations, Dr. Waghmare!



A Song I Like:

(The so-called “fusion” music) “Jaisalmer”
Artists: Rahul Sharma (Santoor) and Richard Clayderman (Piano)
Album: Confluence

[As usual, maybe one more editing pass…]

[E&OE]

Blogging some crap…

I had taken a vow not to blog very frequently any more—certainly not any more at least right this month, in April.

But then, I am known to break my own rules.

Still, guess I really am coming to a point where quite a few threads on which I wanted to blog are, somehow, sort of coming to an end, and fresh topics are still too fresh to write anything about.

So, the only things to blog about would be crap. Thus the title of this post.

Anyway, here is an update on my interests, and the reason why it actually is, and will continue to be, difficult for me to blog very regularly over the coming months, maybe even for a year or so. [I am being serious.]

1. About micro-level water resources engineering:

Recently, I blogged a lot about it. Now, I think I have more or less completed my preliminary studies, and pursuing anything further would take a definitely targeted and detailed research—something that only can be pursued once I have a master’s or PhD student to guide. Which will only happen once I have a job. Which will only happen in July (when the next academic term of the University of Mumbai begins).

There is only one idea that I might mention for now.

I have installed QGIS, and worked through the relevant exercises to familiarize myself with it. Ujaval Gandhi’s tutorials are absolutely great in this respect.

The idea I can blog about right away is this. As I mentioned earlier, DEM maps with 5 m resolution are impossible to find. I asked my father to see if he had any detailed map at sub-talukaa level. He gave me an old official map from GSI; it is on a 1:50000 scale, with contours at 20 m. Pretty detailed, but still, since we are looking for check-dams of heights up to 10 m, not so helpful. So, I thought of interpolating contours, and the best way to do it would be through some automatic algorithms. The map anyway has to be digitized first.

That means, scan it at a high enough resolution, and then perform a raster to vector conversion so that DEM heightfields could be viewed in QGIS.

The trouble is, the contour lines are too faint. That means automatic image processing to extract the existing contours would be of limited help. So, I thought of an idea: why not lay a tracing paper on top, trace out only the contours with a black pen, and then scan that separately? It turns out that this very idea had already been mentioned in an official Marathi document by the irrigation department.

Of course, they didn’t mean to go further and do the raster-to-vector conversion and all.  I would want to adapt/create algorithms that could simulate rainfall run-offs after high intensity sporadic rains, possibly leading also to flooding. I also wanted to build algorithms that would allow estimates of volumes of water in a check dam before and after evaporation and seepage. (Seepage calculations would be done, as a first step, after homogenizing the local geology; the local geology could enter the computations at a more advanced stage of the research.) A PhD student at IIT Bombay has done some work in this direction, and I wanted to independently probe these issues. I could always use raster algorithms, but since the size of the map would be huge, I thought that the vector format would be more efficient for some of these algorithms. Thus, I had to pursue the raster-to-vector conversion.

So I did some search in this respect, and found some papers and even open source software. For instance, Peter Selinger’s POTrace, and the further off-shoots from it.

I then realized that since the contour lines in the scanned image (whether original or traced) wouldn’t be just one-pixel wide, I would have to run some kind of a line thinning algorithm.

Suitable ready-made solutions are absent, and building one from scratch would be too time-consuming; it could possibly be a good topic for a master's project in the CS/Mech departments, in the computer graphics field. Here is one idea I saw implemented somewhere. To fix our imagination, launch MS Paint (or GIMP on Ubuntu), manually draw a curve with a thick brush or type a letter in a huge font like 48 points, and save the BMP file. Our objective is to make a single pixel-thick line drawing out of this thick diagram. The CS folks apparently call this the centerlining algorithm. The idea I saw implemented was something like this:

  • (i) Do edge detection to get single pixel-wide boundaries. The "filled" letter in the BMP file would now become "hollow"; it would have only outlines that are a single pixel wide.
  • (ii) Do a raster-to-vector conversion, say using POTrace, on this hollow letter. You would thus have a polygon representation of the letter.
  • (iii) Run a meshing software (e.g., Jonathan Shewchuk's Triangle, or something from the CGAL library) to fill the interior parts of this hollow polygon with a single layer of triangles.
  • (iv) Find the centroids of all these triangles and connect them together. This gets us the line running through the central portion of each arm of the letter diagram. Keep this line and delete the triangles.

What you now have is a single pixel-wide vector representation of what once was a thick letter (or a contour line in the scanned image).

Since this algorithm seemed too complicated, I wondered whether it wouldn't be possible to apply a suitable diffusion algorithm and simply erode away the thickness of the line. For instance, think of the thick-walled letter as initially made uniformly cold, and then placed in uniformly heated surroundings. Since the heat enters from the boundaries, the outer portions become hotter than the interior. As the temperature goes on increasing, imagine the thick line beginning to melt. As soon as a pixel melts, check whether there is any solid pixel still left in its neighbourhood. If yes, remove the molten pixel from the thick line. In the end, you get a raster representation one pixel thick, which you can easily convert to a vector representation. This is a simplified version of the algorithm I had implemented for my paper on the melting snowman, with that check for neighbouring solid pixels now thrown in.
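In the meanwhile, if someone just wants the end result, the stock morphological route already exists in library form. A minimal sketch, assuming scikit-image is available (this is the standard skeletonization, not my melting-based scheme; the input below is a stand-in rectangle, and the commented file name is a made-up placeholder):

```python
import numpy as np
from skimage.morphology import skeletonize

# For the real job, load a binarized scan (True = line pixel), e.g.:
# from skimage.io import imread
# img = imread("traced_contours.png", as_gray=True) > 0.5

# A stand-in "thick line" for demonstration: a filled rectangle.
img = np.zeros((50, 200), dtype=bool)
img[20:30, 10:190] = True

thin = skeletonize(img)        # one-pixel-wide centerline
print(img.sum(), thin.sum())   # pixel counts: thick vs. thinned
```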

Pursuing either would be too much work for the time being; I could either offload it to a student for his project, or work on it at a later date.

Thus ended my present thinking line on the micro-level water-resources engineering.

2. Quantum mechanics:

You knew that I was fooling you when I had noted in my post dated the first of April this year, that:

“in the course of attempting to build a computer simulation, I have now come to notice a certain set of factors which indicate that there is a scope to formulate a rigorous theorem to the effect that it will always be logically impossible to remove all the mysteries of quantum mechanics.”

Guess people know me too well—none fell for it.

Well, though I haven't quite built a simulation, I have been toying with certain ideas about simulating quantum phenomena using what seems to be a new fluid-dynamical model. (I think I had mentioned using CFD to do QM on this blog a little while ago.)

I pursued this idea, and found that it indeed should reproduce all the supposed weirdities of QM. But then I also found that this model looks a bit too contrived for my own liking. It's just not simple enough. So, I have to think more about it before committing to any specific or concrete research activity on it.

That is another dead-end, as far as blogging is concerned.

However, in the meanwhile, if you must have something interesting related to QM, check out David Hestenes’ work. Pretty good, if you ask me.

OK. Physicists, go away.

3. Homeopathy:

I had ideas about computational modelling for the homeopathic effect. By homeopathy, I mean: the hypothesis that water is capable of storing an “imprint” or “memory” of a foreign substance via structuring of its dipole molecules.

I have blogged about this topic before. I had ideas of doing some molecular dynamics kind of modelling. However, I now realize that given the current computational power, any MD modelling would be for far too short time periods. I am not sure how useful that would be, if some good scheme (say a variational scheme) for coarse-graining or coupling coarse-grained simulation with the fine-grained MD simulation isn’t available.

Anyway, I didn’t have much time available to look into these aspects. And so, there goes another line of research; I don’t have much to do blogging about it.

4. CFD:

This is one more line of research/work for me. Indeed, as far as my professional (academic research) activities go, this one is probably the most important line.

Here, too, there isn’t much left to blog about, even if I have been pursuing some definite work about it.

I would like to model some rheological flows as they occur in ceramics processing, starting with ceramic injection moulding. A friend of mine at IIT Bombay has been working in this area, and I should have easy access to the available experimental data. The phenomenon, of course, is much too complex; I doubt whether an institute with relatively modest means, like an IIT, could possibly conduct experimentation at the required levels of accuracy or sophistication. Accurate instrumentation means money. In India, money is always much more limited than, say, in the USA—the place where neither money nor dumbness is ever in short supply.

But the problem is very interesting to a computational engineer like me. Here goes a brief description, suitably simplified (but hopefully not too dumbed down (even if I do have American readers on this blog)).

Take a little bit of wax in a small pot, melt it, and mix some fine sand into it. The paste should have the consistency of a toothpaste (the limestone version, not the gel version). Just as you pinch the toothpaste tube and the paste pops out—technically, this is called an extrusion process—you here have a cylinder-and-ram arrangement that holds this (molten wax + sand) paste and injects it into a mould cavity. The mould is metallic; aluminium alloys are often used in research because making a precision die in aluminium is less expensive. The hot molten wax + ceramic paste is pushed into the mould cavity under pressure, and fills it. Since the mould is cold, it draws the heat out of the paste, and so the paste solidifies. You then open the mould, take out the part, and sinter it. During sintering, the wax melts and evaporates, and the sand (ceramic) then gets bound together by various sintering mechanisms. Materials engineers focus on the entire process from a processing viewpoint. As a computational engineer, my focus is only up to the point where the paste solidifies. So many interesting things happen up to that point that it already makes my plate too full. Here is an indication.

The paste is a rheological material. Its flow is non-Newtonian. (There sinks in his chair your friendly computational fluid dynamicist—his typical software cannot handle non-Newtonian fluids.) If you want to know, this wax + sand paste shows shear-thinning behaviour (in contrast to the shear-thickening behaviour shown by, say, a cornstarch-water paste).
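To make "shear-thinning" concrete, here is a minimal sketch of a power-law (Ostwald-de Waele) viscosity model, one common textbook way of representing such behaviour; the K and n values are made-up illustrative numbers, not measured wax + sand data:

```python
import numpy as np

# Power-law (Ostwald-de Waele) model: mu_app = K * gamma_dot**(n - 1).
# n < 1 gives shear-thinning (viscosity falls as the shear rate rises);
# n > 1 would give shear-thickening, as with a cornstarch paste.
K, n = 10.0, 0.4                  # consistency and flow indices (assumed)

def apparent_viscosity(gamma_dot):
    """Apparent viscosity at the given shear rate(s), in Pa.s."""
    return K * np.asarray(gamma_dot, dtype=float) ** (n - 1.0)

print(apparent_viscosity([0.1, 1.0, 10.0, 100.0]))  # monotonically falling
```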

Further, the flow of the paste involves moving boundaries, with pronounced surface effects, as well as coalescence or merging of boundaries when streams progressing on different arms of the cavity eventually come together during the filling process. (Imagine the simplest mould cavity in the shape of an O-ring. The paste is introduced from one side, say from the dash placed on the left hand side of the cavity, as shown here: “-O”. First, after entering the cavity, the paste has to diverge into the upper and lower arms, and as the cavity filling progresses, the two arms then come together on the rightmost parts of the “O” cavity.)

Modelling moving boundaries is a challenge. No textbook on CFD would even hint at how to handle them right, because all of them are based on rocket science (i.e., the aerodynamics research that NASA and others did from the fifties onwards). It's a curious fact that aeroplanes always fly in air; they never fly at the boundary of air and vacuum. So, an aeronautical engineer never has to worry about a moving fluid boundary. Naval engineers have a completely different approach: they model a fluid flow that exists only near a surface, and can afford to ignore what happens to the fluid any deeper than a few characteristic lengths of their ships. Handling both moving boundaries and the interiors of fluids at the same time, with sufficient accuracy, is therefore a pretty good challenge. Ask anyone doing CFD research in casting simulation.

By comparison, simulating the flow of molten iron in gravity sand-casting is a less complex problem. Do a dimensional analysis, and you can verify that molten iron has nearly the same fluid-dynamical characteristics as plain water. In other words, you can always look at how water flows inside a cavity, and the flow pattern would remain essentially the same for molten iron, even though the metal is so heavy. Implication: surface-tension effects are OK to handle for the flow of molten iron. Also, pressures are negligibly small in gravity casting.
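A quick back-of-the-envelope version of that dimensional analysis, with ballpark handbook values quoted from memory (treat the property numbers and the flow scales as assumptions to be checked, not as data):

```python
# The water/molten-iron analogy rests on the kinematic viscosities
# (nu = mu / rho) being comparable. Property values below are rough
# ballpark figures (assumptions), not measurements.
rho_water, mu_water = 1000.0, 1.0e-3    # kg/m^3, Pa.s (room temperature)
rho_iron,  mu_iron  = 7000.0, 6.0e-3    # kg/m^3, Pa.s (near melting, approx.)

nu_water = mu_water / rho_water         # ~1.0e-6 m^2/s
nu_iron  = mu_iron / rho_iron           # ~0.9e-6 m^2/s

# Same cavity, same filling speed => nearly the same Reynolds number,
# hence, to first order, the same flow pattern.
U, L = 0.5, 0.05                        # assumed filling speed and cavity size
print(U * L / nu_water, U * L / nu_iron)
```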

But the rheological paste is very thick, and it flows under pressure; so handling the surface-tension effects right is an even bigger challenge, especially at those points where multiple streams join together under pressure.

Then, there is also heat transfer. You can’t get away doing only momentum equations; you have to couple in the energy equations too. And, the heat transfer obviously isn’t steady-state; it’s necessarily transient—the whole process of cavity filling and paste solidification gets over within a few seconds, sometimes within even a fraction of a second.

And then, there is this phase change from the liquid state to the solid state too. Yet another complication for the computational engineer.

Why should he address the problem in the first place?

Good question. Answer is: Economics.

If the die design isn't right, the two arms of the fluid paste lose heat, become sluggish, and may even partly solidify at the boundary before joining together. The whole idea behind computational modelling is to help the die designer improve his design, by allowing him to try out many different die designs and their variations on a computer before throwing money into making an actual die. Trying out die designs on a computer takes time and money too, but the expense is relatively much, much smaller than actually making a die and trying it out. Precision machining is too expensive, and taking a manufacturing trial takes too much time; it ties up an entire engineering team and a production machine in nothing but trials.

So, the idea is that the computational engineer could help by telling in advance whether, given a die design and process parameters, defects like cold-joins are likely to occur.

The trouble is, the computational modelling techniques happen to be at their weakest exactly at those spots where important defects like cold-joins are most likely. These are the places where all the armies of the devil come together: non-Newtonian fluid with temperature dependent properties, moving and coalescing boundaries, transient heat transfer, phase change, variable surface tension and wall friction, pressure and rapidity (transience would be too mild a word) of the overall process.

So, that’s what the problem to model itself looks like.

Obviously, ready-made software isn't yet sophisticated enough. The best available packages are those that do some ad-hoc tweaking of the existing software for plastic injection moulding. But the material and process parameters differ, and it shows in the results. And so, validation of these tweaks is still an ongoing activity in the research community.

Obviously, more research is needed! [I told you the reason: Economics!]

Given the granular nature of the material, and the rapidity of the process, some people thought that SPH (smoothed particle hydrodynamics) should be suitable. They have tried, but I don’t know the extent of the sophistication thus far.

Some people have also tried finite-differences based approaches, with some success. But FDM has its limitations—fluxes aren’t conserved, and in a complex process like this, it would be next to impossible to tell whether a predicted result is a feature of the physical process or an artefact of the numerical modelling.

FVM should do better because it conserves fluxes better. But the existing FVM software is too complex to try out the required material and process specific variations. Try introducing just one change to a material model in OpenFOAM, and simulating the entire filling process with it. Forget it. First, try just mould filling with coupled heat transfer. Forget it. First, try just mould filling with OpenFOAM. Forget it. First, try just debug-stepping through a steady-state simulation. Forget it. First, try just compiling it from the sources, successfully.

I did!

Hence, the natural thing to do is to first write some simple FVM code, initially only in 2D, and then go on adding the process-specific complications to it.
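Even before the 2D version, a 1D toy captures the key selling point of FVM, viz., that fluxes are conserved by construction. A minimal sketch (all parameters are my assumptions):

```python
import numpy as np

# Toy 1D FVM for transient conduction u_t = alpha * u_xx, explicit in
# time. Every interior face flux is subtracted from one cell and added
# to its neighbour, so the total is conserved to machine precision.
N, dx, alpha = 100, 0.01, 1.0
dt = 0.25 * dx**2 / alpha            # explicit stability margin

u = np.zeros(N)
u[N // 2] = 1.0 / dx                 # a unit spike of the conserved quantity

for step in range(1000):
    F = -alpha * (u[1:] - u[:-1]) / dx   # flux through interior faces
    u[:-1] -= dt * F / dx                # leaves cell i through its right face
    u[1:]  += dt * F / dx                # enters cell i+1 through its left face
    # the end faces carry zero flux: insulated ends

print(u.sum() * dx)                  # stays at 1.0 (to round-off)
```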

Now, this is something I have got going; but, by its nature, it also is something you can't blog a lot about. It will be at least a few months before even a preliminary version 0.1 of the code becomes available, at which point some blogging could be done about it—and, hopefully, also some bragging.

Thus, in the meanwhile, that line of thought, too, comes to an end, as far as blogging is concerned.

Thus, I don’t (and won’t) have much to blog about, even if I remain (and plan to remain) busy (to very busy).

So allow me to blog only sparsely in the coming weeks and months. Guess I could bring in the comments I made at other blogs once in a while to keep this blog somehow going, but that’s about it.

In short, nothing new. And so, it all is (and is going to be) crap.

More of it later—much later, maybe a few weeks later or so. I will blog, but much more infrequently; that's the takeaway point.

* * * * *   * * * * *   * * * * *

(Marathi) “madhu maagashee maajhyaa sakhyaa pari…”
Lyrics: B. R. Tambe
Singer: Lata Mangeshkar
Music: Vasant Prabhu

[I just finished writing the first cut; an editing pass or two is still due.]

[E&OE]