Blogging some crap…

I had taken a vow not to blog very frequently any more—certainly not this month, in April.

But then, I am known to break my own rules.

Still, guess I really am coming to a point where quite a few threads on which I wanted to blog are, somehow, sort of coming to an end, and fresh topics are still too fresh to write anything about.

So, the only things to blog about would be crap. Thus the title of this post.

Anyway, here is an update on my interests, and the reason why it actually is, and will continue to be, difficult for me to blog very regularly over the next few months, maybe even a year or so. [I am being serious.]

1. About micro-level water resources engineering:

Recently, I blogged a lot about it. Now, I think I have more or less completed my preliminary studies, and pursuing anything further would take definitely targeted and detailed research—something that can be pursued only once I have a master's or PhD student to guide. Which will happen only once I have a job. Which will happen only in July (when the next academic term of the University of Mumbai begins).

There is only one idea that I might mention for now.

I have installed QGIS, and worked through the relevant exercises to familiarize myself with it. Ujaval Gandhi’s tutorials are absolutely great in this respect.

The idea I can blog about right away is this. As I mentioned earlier, DEM maps with 5 m resolution are impossible to find. I asked my father to see if he had any detailed map at the sub-talukaa level. He gave me an old official map from GSI; it is on a 1:50,000 scale, with contours at 20 m intervals. Pretty detailed, but still, since we are looking for check-dams of heights up to 10 m, not so helpful. So, I thought of interpolating between the contours, and the best way to do that would be through some automatic algorithms. The map anyway has to be digitized first.
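
Just to make the "interpolating between the contours" bit concrete, here is a minimal sketch of one simple-minded way of doing it: inverse-distance weighting over the digitized contour vertices. The points, grid spacing, and power parameter below are made-up placeholders; a GIS package would offer better (e.g. TIN-based) interpolators, and this is not meant to be the method I would finally settle on.

```cpp
// Minimal inverse-distance-weighted (IDW) interpolation of scattered
// contour points onto a regular grid. A rough sketch only; the sample
// points and grid extents below are hypothetical placeholders.
#include <cmath>
#include <cstdio>
#include <vector>

struct ContourPoint { double x, y, z; };  // digitized vertex with its contour elevation

// IDW estimate of elevation at (x, y) from all digitized contour points.
double idw(const std::vector<ContourPoint>& pts, double x, double y, double power = 2.0)
{
    double num = 0.0, den = 0.0;
    for (const auto& p : pts) {
        double d2 = (p.x - x) * (p.x - x) + (p.y - y) * (p.y - y);
        if (d2 < 1e-12) return p.z;              // query point coincides with a sample
        double w = 1.0 / std::pow(d2, power / 2.0);
        num += w * p.z;
        den += w;
    }
    return num / den;
}

int main()
{
    // A few made-up vertices from two neighbouring 20 m contours.
    std::vector<ContourPoint> pts = {
        {0.0, 0.0, 560.0}, {50.0, 10.0, 560.0}, {100.0, 5.0, 560.0},
        {0.0, 80.0, 580.0}, {50.0, 90.0, 580.0}, {100.0, 85.0, 580.0},
    };
    // Fill a small grid between the two contours.
    for (double y = 0.0; y <= 90.0; y += 30.0) {
        for (double x = 0.0; x <= 100.0; x += 25.0)
            std::printf("%7.2f ", idw(pts, x, y));
        std::printf("\n");
    }
    return 0;
}
```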

That means: scan it at a high enough resolution, and then perform a raster-to-vector conversion, so that a DEM heightfield could eventually be built and viewed in QGIS.

The trouble is, the contour lines are too faint. That means automatic image processing to extract the existing contours would be of limited help. So, I thought of an idea: why not lay a tracing paper on top, trace out only the contours using a black pen, and then scan the tracing separately? It turned out that this very idea had already been mentioned in an official Marathi document by the irrigation department.

Of course, they didn't mean to go further and do the raster-to-vector conversion and all. I would want to adapt/create algorithms that could simulate rainfall run-offs after high-intensity sporadic rains, possibly also leading to flooding. I also wanted to build algorithms that would allow estimates of the volume of water in a check dam before and after evaporation and seepage. (Seepage calculations would be done, as a first step, after homogenizing the local geology; the local geology could enter the computations at a more advanced stage of the research.) A PhD student at IIT Bombay has done some work in this direction, and I wanted to probe these issues independently. I could always use raster algorithms, but since the size of the map would be huge, I thought that the vector format would be more efficient for some of these algorithms. Thus, I had to pursue the raster-to-vector conversion.
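
As a first-cut illustration of the volume-estimate part mentioned above (and only that part), here is a sketch that sums water depths over the submerged cells of a DEM grid for a trial water-surface elevation behind a check-dam. The elevations, cell size, and water level are made up; evaporation and seepage would then appear as time-dependent drawdowns of this stored volume.

```cpp
// Rough sketch: pondage volume behind a check dam from a DEM grid.
// Volume = sum over submerged cells of (water level - ground elevation) * cell area.
// All numbers below are made-up placeholders.
#include <cstdio>
#include <vector>

double pondedVolume(const std::vector<std::vector<double>>& dem,
                    double waterLevel, double cellSize)
{
    double volume = 0.0;
    for (const auto& row : dem)
        for (double ground : row)
            if (ground < waterLevel)
                volume += (waterLevel - ground) * cellSize * cellSize;
    return volume;
}

int main()
{
    // A tiny 4x4 DEM patch (elevations in metres) upstream of the dam.
    std::vector<std::vector<double>> dem = {
        {562.0, 561.5, 561.0, 560.5},
        {561.5, 561.0, 560.5, 560.0},
        {561.0, 560.5, 560.0, 559.5},
        {560.5, 560.0, 559.5, 559.0},
    };
    double cellSize = 5.0;             // 5 m grid spacing
    double fullSupplyLevel = 562.0;    // trial water-surface elevation
    std::printf("Stored volume ~ %.1f cubic metres\n",
                pondedVolume(dem, fullSupplyLevel, cellSize));
    return 0;
}
```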

So I did some searching in this respect, and found some papers and even open-source software. For instance, Peter Selinger's POTrace, and the further off-shoots from it.

I then realized that since the contour lines in the scanned image (whether original or traced) wouldn’t be just one-pixel wide, I would have to run some kind of a line thinning algorithm.

Suitable ready-made solutions are absent, and building one from scratch would be too time consuming—it can possibly be a good topic for a master's project in the CS/Mech departments, in the computer graphics field. Here is one idea I saw implemented somewhere. To fix our imagination, launch MS Paint (or GIMP on Ubuntu), manually draw a curve with a thick brush, or type a letter in a huge font like 48 points or so, and save the BMP file. Our objective is to make a single-pixel-thick line drawing out of this thick diagram. The CS folks apparently call this a centerlining algorithm. The idea I saw implemented was something like this: (i) Do edge detection to get single-pixel-wide boundaries. The "filled" letter in the BMP file would now become "hollow;" it would have only the outlines, which are single-pixel wide. (ii) Do a raster-to-vector conversion, say using POTrace, on this hollow letter. You would thus have a polygon representation of the letter. (iii) Run a meshing software (e.g. Jonathan Shewchuk's Triangle, or something in the CGAL library) to fill the interior of this hollow polygon with a single layer of triangles. (iv) Find the centroids of all these triangles, and connect them together. This gets us the line running through the central portion of each arm of the letter. Keep this line and delete the triangles. What you have now got is a single-pixel-wide vector representation of what once was a thick letter—or a contour line in the scanned image.
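
Just to make step (iv) above concrete: assuming the meshing step has already handed us the triangles (as triples of vertex indices), connecting the centroids of edge-adjacent triangles is a small piece of bookkeeping. Only that one step is sketched below, with a made-up strip of triangles standing in for what Triangle/CGAL would actually produce.

```cpp
// Sketch of step (iv): given a single-layer triangulation of the hollow
// outline, connect the centroids of triangles that share an edge. The
// resulting polyline approximates the centerline. Input here is made up.
#include <algorithm>
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

struct Point { double x, y; };
struct Tri { int a, b, c; };                     // vertex indices into the point list

int main()
{
    std::vector<Point> pts = { {0,0}, {4,0}, {8,0}, {0,2}, {4,2}, {8,2} };
    std::vector<Tri> tris = { {0,1,4}, {0,4,3}, {1,2,5}, {1,5,4} };

    // Centroid of each triangle.
    std::vector<Point> cen;
    for (const auto& t : tris)
        cen.push_back({ (pts[t.a].x + pts[t.b].x + pts[t.c].x) / 3.0,
                        (pts[t.a].y + pts[t.b].y + pts[t.c].y) / 3.0 });

    // Map each undirected edge (min, max vertex index) to the triangles using it.
    std::map<std::pair<int,int>, std::vector<int>> edgeToTris;
    for (int i = 0; i < (int)tris.size(); ++i) {
        int v[3] = { tris[i].a, tris[i].b, tris[i].c };
        for (int k = 0; k < 3; ++k) {
            int u = v[k], w = v[(k + 1) % 3];
            edgeToTris[{ std::min(u, w), std::max(u, w) }].push_back(i);
        }
    }

    // An edge shared by two triangles yields one centerline segment.
    for (const auto& e : edgeToTris)
        if (e.second.size() == 2) {
            const Point& p = cen[e.second[0]];
            const Point& q = cen[e.second[1]];
            std::printf("segment (%.2f, %.2f) -- (%.2f, %.2f)\n", p.x, p.y, q.x, q.y);
        }
    return 0;
}
```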

Since this algorithm seemed too complicated, I wondered whether it wouldn't be possible to simply apply a suitable diffusion algorithm to erode away the thickness of the line. For instance, think of the thick-walled letter as initially being uniformly cold, and then being placed in uniformly heated surroundings. Since the heat enters from the boundaries, the outer portions become hotter than the interior. As the temperature goes on increasing, imagine the thick line beginning to melt. As soon as a pixel melts, check whether there is any solid pixel still left in its neighbourhood. If yes, remove the molten pixel from the thick line. In the end, you would get a raster representation one pixel thick, which you can easily convert to a vector representation. This is a simplified version of the algorithm I had implemented for my paper on the melting snowman, with that check for neighbouring solid pixels now thrown in.
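
For comparison, here is a minimal sketch of a standard thinning pass (the Zhang-Suen algorithm). It is not the melting analogy above, but a commonly used baseline that also ends up with a one-pixel-wide skeleton while guarding against the thick line breaking apart. The input image is a made-up blob.

```cpp
// A standard Zhang-Suen thinning pass on a binary raster (1 = ink, 0 = background).
// Not the melting analogy from the text, but a commonly used baseline that yields
// a one-pixel-wide skeleton without breaking the line. Toy input below.
#include <cstdio>
#include <utility>
#include <vector>

using Image = std::vector<std::vector<int>>;

void thin(Image& img)
{
    int H = (int)img.size(), W = (int)img[0].size();
    bool changed = true;
    while (changed) {
        changed = false;
        for (int step = 0; step < 2; ++step) {
            std::vector<std::pair<int,int>> toClear;
            for (int r = 1; r < H - 1; ++r)
                for (int c = 1; c < W - 1; ++c) {
                    if (!img[r][c]) continue;
                    // Neighbours in circular order: N, NE, E, SE, S, SW, W, NW.
                    int p[8] = { img[r-1][c], img[r-1][c+1], img[r][c+1], img[r+1][c+1],
                                 img[r+1][c], img[r+1][c-1], img[r][c-1], img[r-1][c-1] };
                    int B = 0, A = 0;
                    for (int k = 0; k < 8; ++k) {
                        B += p[k];
                        if (p[k] == 0 && p[(k + 1) % 8] == 1) ++A;   // 0->1 transitions
                    }
                    bool cond = (step == 0)
                        ? (p[0] * p[2] * p[4] == 0 && p[2] * p[4] * p[6] == 0)
                        : (p[0] * p[2] * p[6] == 0 && p[0] * p[4] * p[6] == 0);
                    if (B >= 2 && B <= 6 && A == 1 && cond)
                        toClear.push_back({r, c});
                }
            for (auto& rc : toClear) img[rc.first][rc.second] = 0;
            if (!toClear.empty()) changed = true;
        }
    }
}

int main()
{
    Image img(8, std::vector<int>(12, 0));
    for (int r = 2; r <= 5; ++r)                 // a thick horizontal bar
        for (int c = 1; c <= 10; ++c) img[r][c] = 1;
    thin(img);
    for (auto& row : img) {
        for (int v : row) std::printf("%c", v ? '#' : '.');
        std::printf("\n");
    }
    return 0;
}
```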

Pursuing either would be too much work for the time being; I could either offload it to a student for his project, or work on it at a later date.

Thus ended my present line of thinking on micro-level water-resources engineering.

2. Quantum mechanics:

You knew that I was fooling you when I had noted in my post dated the first of April this year, that:

“in the course of attempting to build a computer simulation, I have now come to notice a certain set of factors which indicate that there is a scope to formulate a rigorous theorem to the effect that it will always be logically impossible to remove all the mysteries of quantum mechanics.”

Guess people know me too well—none fell for it.

Well, though I haven't quite built a simulation, I have been toying with certain ideas about simulating quantum phenomena using what seems to be a new fluid dynamical model. (I think I had mentioned using CFD to do QM on my blog here a little while ago.)

I pursued this idea, and found that it indeed should reproduce all the supposed weirdities of QM. But then I also found that this model looks a bit too contrived for my own liking. It's just not simple enough. So, I have to think more about it before committing to any specific or concrete research activity on it.

That is another dead-end, as far as blogging is concerned.

However, in the meanwhile, if you must have something interesting related to QM, check out David Hestenes’ work. Pretty good, if you ask me.

OK. Physicists, go away.

3. Homeopathy:

I had ideas about computational modelling for the homeopathic effect. By homeopathy, I mean: the hypothesis that water is capable of storing an "imprint" or "memory" of a foreign substance via a structuring of its dipolar molecules.

I have blogged about this topic before. I had ideas of doing some molecular dynamics kind of modelling. However, I now realize that, given the current computational power, any MD modelling would cover far too short a time period. I am not sure how useful that would be unless some good scheme (say a variational scheme) for coarse-graining, or for coupling a coarse-grained simulation with the fine-grained MD simulation, is available.

Anyway, I didn't have much time available to look into these aspects. And so, there goes another line of research; I don't have much blogging to do about it.

4. CFD:

This is one more line of research/work for me. Indeed, as far as my professional (academic research) activities go, this one is probably the most important line.

Here, too, there isn't much left to blog about, even though I have been pursuing some definite work on it.

I would like to model some rheological flows as they occur in ceramics processing, starting with ceramic injection moulding. A friend of mine at IIT Bombay has been working in this area, and I should have easy access to the available experimental data. The phenomenon, of course, is much too complex; I doubt whether an institute with relatively modest means like an IIT could possibly conduct experimentation to the required level of accuracy or sophistication. Accurate instrumentation means money. In India, money is always much more limited as compared to, say, the USA—the place where neither money nor dumbness is ever in short supply.

But the problem is very interesting to a computational engineer like me. Here goes a brief description, suitably simplified (but hopefully not too dumbed down (even if I do have American readers on this blog)).

Take a little bit of wax in a small pot, melt it, and mix some fine sand into it. The paste should have the consistency of a toothpaste (the limestone version, not the gel version). Just like you squeeze the toothpaste tube and out pops the paste—technically, this is called an extrusion process—similarly, you have a cylinder-and-ram arrangement that holds this (molten wax + sand) paste and injects it into a mould cavity. The mould is metallic; aluminium alloys are often used in research because making a precision die in aluminium is less expensive. The hot molten wax+ceramic paste is pushed into the mould cavity under pressure, and fills it. Since the mould is cold, it takes the heat out of the paste, and so the paste solidifies. You then open the mould, take out the part, and sinter it. During sintering, the wax melts and evaporates, and the sand (ceramic) then gets bound together by various sintering mechanisms. Materials engineers focus on the entire process from a processing viewpoint. As a computational engineer, my focus is only up to the point that the paste solidifies. So many interesting things happen up to that point that it already makes my plate too full. Here is an indication.

The paste is a rheological material. Its flow is non-Newtonian. (There sinks in his chair your friendly computational fluid dynamicist—his typical software cannot handle non-Newtonian fluids.) If you want to know, this wax+sand paste shows a shear-thinning behaviour (which is in contrast to the shear-thickening behaviour shown by, say, a thick cornstarch-and-water paste).
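
To make "shear-thinning" concrete, here is the simplest non-Newtonian description I can think of, the power-law (Ostwald-de Waele) model, in which the apparent viscosity falls as the shear rate rises whenever the exponent n < 1. The consistency index and exponent below are placeholders, not measured values for any actual wax-sand paste.

```cpp
// Power-law (Ostwald-de Waele) model: tau = K * (shear_rate)^n, so the
// apparent viscosity mu_eff = K * (shear_rate)^(n-1) falls with shear rate
// when n < 1 (shear-thinning). K and n below are placeholder values only.
#include <cmath>
#include <cstdio>

double apparentViscosity(double shearRate, double K, double n)
{
    return K * std::pow(shearRate, n - 1.0);
}

int main()
{
    const double K = 100.0;   // consistency index, Pa.s^n (made up)
    const double n = 0.4;     // power-law exponent, n < 1 => shear-thinning (made up)
    for (double gammaDot : {0.1, 1.0, 10.0, 100.0})
        std::printf("shear rate %7.1f 1/s  ->  mu_eff %8.2f Pa.s\n",
                    gammaDot, apparentViscosity(gammaDot, K, n));
    return 0;
}
```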

Further, the flow of the paste involves moving boundaries, with pronounced surface effects, as well as coalescence or merging of boundaries when streams progressing on different arms of the cavity eventually come together during the filling process. (Imagine the simplest mould cavity in the shape of an O-ring. The paste is introduced from one side, say from the dash placed on the left hand side of the cavity, as shown here: “-O”. First, after entering the cavity, the paste has to diverge into the upper and lower arms, and as the cavity filling progresses, the two arms then come together on the rightmost parts of the “O” cavity.)

Modelling moving boundaries is a challenge. No textbook on CFD would even hint at how to handle it right, because all of them are based on rocket science (i.e. the aerodynamics research that NASA and others did from the fifties onwards). It's a curious fact that aeroplanes always fly in air. They never fly at the boundary of air and vacuum. So, an aeronautical engineer never has to worry about a moving fluid boundary problem. Naval engineers have a completely different approach; they have to model a fluid flow that occurs only near a surface—they can afford to ignore what happens to the fluid that lies any deeper than a few characteristic lengths of their ships. Handling both moving boundaries and the interiors of fluids at the same time with sufficient accuracy, therefore, is a pretty good challenge. Ask anyone doing CFD research in casting simulation.

But simulation of the flow of molten iron in gravity sand-casting is, relatively, a less complex problem. Do a dimensional analysis and verify that molten iron has roughly the same fluid-dynamical characteristics as plain water. In other words, you can always look at how water flows inside a cavity, and the flow pattern would remain essentially the same for molten iron too, even though the metal is so heavy. The implication: surface-tension effects are OK to handle for the flow of molten iron. Also, pressures are negligibly small in gravity casting.
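
Here is the sort of back-of-the-envelope check I mean, comparing the kinematic viscosity and Reynolds number of water and molten iron for the same filling velocity and channel size. The property values are only rough, order-of-magnitude figures (to be checked against a handbook), and the velocity and length scale are assumptions.

```cpp
// Quick dimensional-analysis check: kinematic viscosity (nu = mu/rho) and
// Reynolds number for water vs. molten iron at the same gate velocity and
// channel size. Property values are rough order-of-magnitude figures and
// should be checked against a handbook before being relied upon.
#include <cstdio>

struct Fluid { const char* name; double rho; double mu; };  // kg/m^3, Pa.s

int main()
{
    const Fluid fluids[] = {
        { "water (room temp.)", 1000.0, 1.0e-3 },   // assumed
        { "molten iron",        7000.0, 5.0e-3 },   // assumed, near melting point
    };
    const double U = 0.5;    // characteristic filling velocity, m/s (assumed)
    const double L = 0.02;   // characteristic channel size, m (assumed)

    for (const auto& f : fluids) {
        double nu = f.mu / f.rho;            // kinematic viscosity, m^2/s
        double Re = U * L / nu;              // Reynolds number
        std::printf("%-20s nu = %.2e m^2/s, Re = %.0f\n", f.name, nu, Re);
    }
    return 0;
}
```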

But with the rheological paste being so thick, and with it flowing under pressure, handling the surface-tension effects right is an even bigger challenge. Especially at those points where multiple streams join together under pressure.

Then, there is also heat transfer. You can't get away with solving only the momentum equations; you have to couple in the energy equation too. And, the heat transfer obviously isn't steady-state; it's necessarily transient—the whole process of cavity filling and paste solidification gets over within a few seconds, sometimes within even a fraction of a second.

And then, there is this phase change from the liquid state to the solid state too. Yet another complication for the computational engineer.

Why should he address the problem in the first place?

Good question. Answer is: Economics.

If the die design isn't right, the two arms of the fluid paste lose heat and become sluggish, even partly solidify at the boundary, before joining together. The whole idea behind doing computational modelling is to help the die designer improve his design, by allowing him to try out many different die designs and their variations on a computer, before throwing money into making an actual die. Trying out die designs on a computer takes time and money too, but the expense is much smaller compared to actually making a die and trying it out. Precision machining is expensive, and taking a manufacturing trial takes too much time—it ties up an entire engineering team and a production machine in just trials.

So, the idea is that the computational engineer could help by telling in advance whether, given a die design and process parameters, defects like cold-joins are likely to occur.

The trouble is, the computational modelling techniques happen to be at their weakest exactly at those spots where important defects like cold-joins are most likely. These are the places where all the armies of the devil come together: non-Newtonian fluid with temperature dependent properties, moving and coalescing boundaries, transient heat transfer, phase change, variable surface tension and wall friction, pressure and rapidity (transience would be too mild a word) of the overall process.

So, that’s what the problem to model itself looks like.

Obviously, the ready-made software isn't yet sophisticated enough. The best available packages are those that do some ad-hoc tweaking of the existing software for plastic injection moulding. But the material and process parameters differ, and it shows in the results. And, that way, validation of these tweaks is still an on-going activity in the research community.

Obviously, more research is needed! [I told you the reason: Economics!]

Given the granular nature of the material, and the rapidity of the process, some people thought that SPH (smoothed particle hydrodynamics) should be suitable. They have tried, but I don’t know the extent of the sophistication thus far.

Some people have also tried finite-difference-based approaches, with some success. But FDM has its limitations—fluxes aren't guaranteed to be conserved, and in a complex process like this, it would be next to impossible to tell whether a predicted result is a feature of the physical process or an artefact of the numerical modelling.

FVM should do better because it conserves fluxes by construction. But the existing FVM software is too complex to try out the required material- and process-specific variations. Try introducing just one change to a material model in OpenFOAM, and simulating the entire filling process with it. Forget it. First, try just mould filling with coupled heat transfer. Forget it. First, try just mould filling with OpenFOAM. Forget it. First, try just debug-stepping through a steady-state simulation. Forget it. First, try just compiling it from the sources, successfully.

I did!

Hence, the natural thing to do is to first write some simple FVM code, initially only in 2D, and then go on adding the process-specific complications to it.
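
In that spirit, here is about the smallest FVM fragment I can think of: explicit, transient, 1D diffusion written in flux form, so that whatever flux leaves one cell face enters the neighbouring cell and the total stays conserved by construction. It is only a sketch (1D rather than 2D, constant properties, fixed-temperature ends), not the planned code.

```cpp
// Minimal 1D finite-volume sketch: transient diffusion (heat conduction),
// explicit time stepping, written in flux form so that the flux leaving one
// cell face is exactly the flux entering the neighbouring cell.
// Constant properties and fixed-temperature ends; a sketch, not the real code.
#include <cstdio>
#include <vector>

int main()
{
    const int    N     = 20;       // number of cells
    const double L     = 0.1;      // domain length, m
    const double dx    = L / N;
    const double alpha = 1.0e-5;   // thermal diffusivity, m^2/s (made up)
    const double dt    = 0.4 * dx * dx / alpha;   // below the explicit limit ~0.5*dx^2/alpha
    const double Tleft = 500.0, Tright = 20.0;    // boundary temperatures, deg C

    std::vector<double> T(N, 20.0);               // initial field

    for (int step = 0; step < 2000; ++step) {
        std::vector<double> flux(N + 1);          // diffusive flux at each face
        flux[0] = -alpha * (T[0] - Tleft) / (0.5 * dx);        // left boundary face
        flux[N] = -alpha * (Tright - T[N - 1]) / (0.5 * dx);   // right boundary face
        for (int f = 1; f < N; ++f)
            flux[f] = -alpha * (T[f] - T[f - 1]) / dx;         // interior faces

        for (int i = 0; i < N; ++i)
            T[i] += dt * (flux[i] - flux[i + 1]) / dx;         // conservative update
    }

    for (int i = 0; i < N; ++i) std::printf("%6.1f ", T[i]);
    std::printf("\n");
    return 0;
}
```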

Now this is something I have got going, but by its nature, it also is something you can't blog a lot about. It will be at least a few months before even a preliminary version 0.1 of the code becomes available, at which point some blogging could be done about it—and, hopefully, also some bragging.

Thus, in the meanwhile, that line of thought, too, comes to an end, as far as blogging is concerned.

Thus, I don’t (and won’t) have much to blog about, even if I remain (and plan to remain) busy (to very busy).

So allow me to blog only sparsely in the coming weeks and months. Guess I could bring in the comments I made at other blogs once in a while to keep this blog somehow going, but that’s about it.

In short, nothing new. And so, it all is (and is going to be) crap.

More of it, later—much later, maybe a few weeks later or so. I will blog, but much more infrequently; that's the takeaway point.

* * * * *   * * * * *   * * * * *

(Marathi) “madhu maagashee maajhyaa sakhyaa pari…”
Lyrics: B. R. Tambe
Singer: Lata Mangeshkar
Music: Vasant Prabhu

[I just finished writing the first cut; an editing pass or two is still due.]

[E&OE]

 

Getting dusty…

I have been getting dusty for some time now.

… No, by “dusty” I don’t mean that dust of the “Heat and Dust” kind, even though it’s been quite the regular kind of an “unusually hot” summer this year, too.

[In case you don't know, "Heat and Dust" was a neat movie that I vaguely recall I had liked when it came on the scene some 2–3 decades ago. Guess I was an undergrad student at COEP back then, or thereabouts. (Google-devataa now informs me that the movie was released in 1983, the same year that I graduated from COEP.)]

Anyway, about the title of this post: By getting dusty, I mean that I have been trying to make some definite and very concrete beginning, on the software development side, on modelling things using "dust." That is, particles. I mean to say, methods like molecular dynamics (MD), smoothed particle hydrodynamics (SPH), the lattice Boltzmann method (LBM), etc. … I kept on postponing writing a blog post here in the anticipation that I would succeed in tossing off a neat toy code for a post.

… However, I soon came face-to-face with the  sobering truth that since becoming a professor, my programming skills have taken a (real) sharp dip.

Can you believe that I had trouble simply getting wxWidgets to work on Ubuntu/Win64? Or to get OpenGL to work on Ubuntu? It took almost two weeks for me to do that! (And, I haven’t yet got OpenGL to work with wxWidgets on Ubuntu!) … So, finally, I (once again) gave up the idea of doing some neat platform-neutral C++ work, and, instead (once again) went back to VC++. Then there was a problem awaiting me regarding VC++ too.

Actually, I had written a blog post against the modern VC++ at iMechanica and all (link to be inserted), but that was quite some time back (maybe a couple of years or so). In the meanwhile, I had forgotten how bad VC++ has really become over the years, and I had to rediscover that discouraging fact once again!

So, I then tried installing VC++ 6 on Win7 (64-bit)—and, yet another ugly surprise. VC++ 6 won’t run on Win7. It’s possible to do that using some round-about ways, but it all is now a deprecated technology.

Finally, I resigned myself to using VC++ 10 on Win7. Three weeks of precious vacation time already down!

That's what I meant when I said how bad a programmer I have turned out to be, these days.

Anyway, that’s when I finally could begin writing some real though decidedly toy code for some simple MD, just so that I could play around with it a bit.

Though writing MD code seems such a simple, straight-forward game (what’s the Verlet algorithm if not plain and simple FDM?), I soon realized that there are some surprises in it, too.
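
For the record, here is the velocity-Verlet update in its barest form, for a single particle moving in a placeholder force field (a simple spring toward the origin, not LJ). A real MD loop would, of course, sum the pair forces over all particles.

```cpp
// Bare-bones velocity-Verlet time stepping for one particle in 2D.
// The force law below (a simple spring toward the origin) is just a
// placeholder; a real MD loop would sum pair forces (e.g. Lennard-Jones).
#include <cstdio>

struct Vec2 { double x, y; };

Vec2 force(const Vec2& pos)               // placeholder: F = -k * x
{
    const double k = 1.0;
    return { -k * pos.x, -k * pos.y };
}

int main()
{
    const double dt = 0.01, mass = 1.0;
    Vec2 r = {1.0, 0.0}, v = {0.0, 1.0};
    Vec2 f = force(r);

    for (int step = 0; step < 1000; ++step) {
        // 1) half-kick, 2) drift, 3) recompute force, 4) half-kick
        v.x += 0.5 * dt * f.x / mass;  v.y += 0.5 * dt * f.y / mass;
        r.x += dt * v.x;               r.y += dt * v.y;
        f = force(r);
        v.x += 0.5 * dt * f.x / mass;  v.y += 0.5 * dt * f.y / mass;
    }
    std::printf("r = (%.3f, %.3f), v = (%.3f, %.3f)\n", r.x, r.y, v.x, v.y);
    return 0;
}
```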

All of the toy MD code to be found on the Internet (and some of the serious code, too) assumes only a neat rectangular or brick-like domain. If you try to accommodate an arbitrarily shaped domain (even if only with straight lines for the boundaries), you immediately run into run-time efficiency issues. The matter gets worse if you try to accommodate holes in the domain—e.g., a square hole in a square domain, like what my Gravatar icon shows. (It was my initial intention to quickly do an MD simulation of the flow through a square cavity having a square hole.)

Next, once you are able to handle arbitrarily shaped domains with arbitrarily shaped holes, small bugs begin to show up. Sometimes, despite the no-normal-flux condition, my particles were able to slip past the domain boundaries, esp. near the points where two bounding edges meet. However, since the occurrence was rare, and hard to debug (what do you do if it happens only in the 67,238th iteration? hard-code a break after 67,237, recompile, run, go off for a coffee?), I decided to downscale the complexity of the basic algorithm.

So, instead of using the Lennard-Jones potential (or some other potential), I decided to switch off the potential field completely, and have some simple, perfectly hard, elastically colliding disks. (I did implement separate shear and normal frictions at the time of collisions, though. (Of course, the fact that frictions don't ever let the particles be attracted is a different story, but, remember, we are trying to be simple here.)) The particles were still managing to escape the domain at rare times. But at least, modelling with the hard elastic disks allowed me to locate the bugs better. It turned out to be a very, very stupid matter: my algorithm had to account for a finite-sized particle interacting with the boundary edges.
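
Here is the kind of check that bug needed, as a minimal, frictionless sketch: treat the particle as a disk of finite radius, find the closest point on the bounding edge treated as a segment (so that the end-points near corners get handled too), and reflect the velocity about the contact normal if the disk overlaps the edge. The numbers in main() are made up.

```cpp
// Finite-radius particle vs. a boundary edge: find the closest point on the
// segment (so edge end-points near corners are handled too), and if the disk
// overlaps it, reflect the velocity about the contact normal.
// Frictionless, perfectly elastic; a sketch only.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec2 { double x, y; };

Vec2 closestPointOnSegment(Vec2 a, Vec2 b, Vec2 p)
{
    double abx = b.x - a.x, aby = b.y - a.y;
    double t = ((p.x - a.x) * abx + (p.y - a.y) * aby) / (abx * abx + aby * aby);
    t = std::max(0.0, std::min(1.0, t));          // clamp to the segment
    return { a.x + t * abx, a.y + t * aby };
}

// Returns true (and updates the velocity) if the disk touches the edge.
bool collideDiskWithEdge(Vec2 centre, double radius, Vec2& vel, Vec2 a, Vec2 b)
{
    Vec2 q = closestPointOnSegment(a, b, centre);
    double nx = centre.x - q.x, ny = centre.y - q.y;
    double dist = std::sqrt(nx * nx + ny * ny);
    if (dist >= radius || dist == 0.0) return false;
    nx /= dist; ny /= dist;                        // unit contact normal
    double vn = vel.x * nx + vel.y * ny;
    if (vn >= 0.0) return false;                   // already separating
    vel.x -= 2.0 * vn * nx;                        // elastic reflection
    vel.y -= 2.0 * vn * ny;
    return true;
}

int main()
{
    Vec2 centre = {1.0, 0.4}, vel = {0.2, -1.0};
    Vec2 a = {0.0, 0.0}, b = {5.0, 0.0};           // a bounding edge of the domain
    if (collideDiskWithEdge(centre, 0.5, vel, a, b))
        std::printf("bounced: v = (%.2f, %.2f)\n", vel.x, vel.y);
    return 0;
}
```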

But, yes, I could then get something like a thousand-plus particles happily colliding with each other for more than 10^8 collisions. (No, I haven't parallelized the code. I merely let it run on my laptop while I was attending a longish academics-related meeting.)

Another thing: Some of the code on the 'net, I found, simply won't work for even a modestly longer simulation run. For instance, Amit Kumar's code here [^]. (Sorry Amit, I should have written you first. Will drop you a line as soon as this over-over-overdue post is out the door.) The trouble with such codes is with the time-step, I guess. … I don't know for sure, yet; I am just thinking aloud about the reason. And, I know for a fact that if you use Amit's parameter values, a gas explosion is going to occur rather soon, maybe right after something like 10^5 collisions or so. Java runs too slowly and so Amit couldn't have noticed it, but that's what happens with those parameter values in my C++ code.

I haven’t yet fixed all my bugs, and in fact, haven’t yet implemented the Lennard-Jones model for the arbitrarily shaped domains (with (multiple) holes). I thought I should first factor out the common code well, and then proceed. … And that’s when other matters of research took over.

Well, why did I get into MD, in the first place? Why didn’t I do something useful starting off with the LBM?

Well, the thing is like this. I know from my own experience that this idea of a stationary control volume vs. a moving control volume is difficult for students to get. I thought it would be easy to implement an MD fluid, and then, once I build in the feature of "selecting" (or highlighting) a small group of molecules close to each other, with these molecules together forming a "continuous" fluid parcel, I could directly show my students how this parcel evolves with the fluid motion—how the location, shape, momentum and density of the parcel undergo changes. They could visually see the difference between the Eulerian and Lagrangian descriptions. That, really speaking, was my motivation.
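
A minimal sketch of just the bookkeeping I have in mind (the MD update itself is omitted): the Lagrangian "parcel" is a fixed list of particle indices tagged once at the start, whereas the Eulerian "probe" is a fixed region that gets re-populated every step by whichever particles currently happen to sit inside it. The positions below are dummies.

```cpp
// Bookkeeping sketch for the teaching demo: a Lagrangian parcel is a fixed
// set of particle indices tagged at t = 0; an Eulerian probe is a fixed box
// re-populated every step with whichever particles happen to be inside it.
// Particle positions here are dummies; the MD update itself is omitted.
#include <cstdio>
#include <vector>

struct Vec2 { double x, y; };
struct Box  { double xmin, xmax, ymin, ymax;
              bool contains(Vec2 p) const
              { return p.x >= xmin && p.x <= xmax && p.y >= ymin && p.y <= ymax; } };

std::vector<int> particlesInside(const std::vector<Vec2>& pos, const Box& box)
{
    std::vector<int> ids;
    for (int i = 0; i < (int)pos.size(); ++i)
        if (box.contains(pos[i])) ids.push_back(i);
    return ids;
}

int main()
{
    std::vector<Vec2> pos = { {0.1,0.1}, {0.2,0.15}, {0.8,0.9}, {0.15,0.2}, {0.5,0.5} };
    Box probe = {0.0, 0.3, 0.0, 0.3};

    // Lagrangian parcel: tag once, then follow these same indices forever.
    std::vector<int> parcel = particlesInside(pos, probe);

    // ... MD steps would update 'pos' here ...

    // Eulerian probe: whoever is inside the fixed box right now.
    std::vector<int> inProbe = particlesInside(pos, probe);

    std::printf("parcel members (fixed): %zu, currently in probe: %zu\n",
                parcel.size(), inProbe.size());
    return 0;
}
```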

But then, as I told you, I discovered that I have regressed low enough to have become a very bad programmer by now.

Anyway, playing around this way also gave me some new ideas. If you have been following this blog for some time, you would know that I have been writing in favor of homeopathy. While implementing my code, I thought that it might be a good idea to implement not just LJ, but also the dipole nature of water molecules, and see how the virtual water behaves: does it show the hypothesized character of persistence of structure or not. (Yes, you read it here first.) But, what the hell, I have far too many other things lined up for me to pursue this thread right away. But, sure, it’s very interesting to me, and so, I will do something in that direction some time in future.

Once I get my toy MD code together (for both the hard-disk and LJ/other-potential models, with some refactorability/extensibility thrown in), I guess I would then move towards a toy SPH code. … Or at least that's what I guess would be the natural progression in all this toy-building activity. This way, I could reuse many of my existing MD classes. And so, LBM should logically follow after SPH—what do you think? (And, yes, I will have to take the whole thing from the current 2D to 3D sometime in the future, too.)

And, all of that is, of course, assuming that I manage to become a better programmer.

Before closing, just one more note: This vacation, I tried Python too, but found that (i) to accomplish the same given functional specs, my speed of development using C++ is almost the same as (if not better than) my speed with Python, and (ii) I keep missing the nice old C/C++ braces for the local scope. Call it the quirky way in which an old programmer's dusty mind works, but at least for the time being, I am back to C++ from that brief detour to Python.

Ok, more, later.

* * * * *   * * * * *   * * * * *

A Song I Like:
(Marathi) “waaT sampataa sampenaa, kuNi waaTet bhetenaa…”
Music: Datta Dawajekar
Lyrics: Devakinandan Saaraswat
Singer: Jayawant Kulkarni

[PS: A revision is perhaps due for streamlining the writing. May be I will come back and do that.]

[E&OE]

 

A Hypothesis on Homeopathy, Part 4

OK, back to the exercise on the three states of matter. (If joining late, first go through my last post [^].)

First of all, take out your sketches. Then, do a Google search on “states of matter.” Click open the Google link to “Images for states of matter,” and browse further, esp. the Encyclopaedia Britannica entry. Also, see the Wiki entry on States of Matter [^] (version, today’s!!). Compare your sketches with these diagrams.

I can bet a coin of (Marathi) "chaar aaNe" (i.e., in Hindi, "chaar aanaa" or "chawanni") that for most of you, the sketches you made would look very similar to those shown by the Wiki, and also by the Encyclopaedia Britannica.

To make matters somewhat more convenient, I also made a small diagram myself, using MS Paint. It appears below, and shows, from top to bottom, a solid, a liquid, and a gas. This diagram is not my idea of how they ought to be drawn but, as per my aforementioned bet, it does show what yours look like!!:

[Figure: States of Matter, as Usually Shown]

Now, on to the main point I said you all were going to miss (well, almost all of you, anyway!). To illustrate it, let’s take an example.

Take any metal, say, iron. When a piece of solid iron is heated, it melts into liquid iron. Does the above sketch adequately represent this change of state? Think about it.

If you know materials science, you would know that iron exists as a BCC (body-centered cubic) crystal at room temperature and as an FCC (face-centered cubic) crystal at intermediate, higher temperatures. Etc. In contrast, my diagram shows iron as an SC (simple cubic) crystal. Point granted. But that's not the point I have in mind here.

So, my question still is: Do you find anything else wrong with that picture, esp. the sketches for the solid and liquid states?

In case you didn’t, here’s a hint.

What is the density of solid iron at room temperature? (Did at least that question give you a hint, now?) The density is, say, 7800 kg/m^3. OK. Now, what is the density of liquid iron at a temperature a little beyond its melting point, say, at 1725 °C? It is, say, 6900 kg/m^3. Given these data, suppose you begin with one kg of iron, melt it, and further heat the liquid iron to 1725 °C. What do you think is the overall volume expansion, in percentage terms, for such a solid-to-liquid change of state? (Note: volume and density are inversely related.) Say, roughly, 10 percent (call it a factor of 1.1 in round figures). That is, on the volume basis.

How does a volume expansion translate into lineal terms? For a unit cube to have a 10 percent increase in volume, what must be the increase in the length of each of its sides? I will provide some details for the benefit of some of my readers. Does it go like: \sqrt[3]{0.1} \approx 0.464 \approx 46 \% ? Or does it go like this: (1.0+e)^3 = 1.1 where e is the per-unit expansion (in fractional terms), and hence, 1.0 + e = \sqrt[3]{1.1} \approx 1.03228 , implying that e = 0.03228 \approx 3.2 \% ? 🙂

The second answer is correct. In lineal terms, the expansion accompanying the solid-to-liquid change is just about 3.2 %. Thus, when a cube of solid iron melts, if the molten liquid is to be held in a cube-shaped container, the container will have sides barely 3.2 % longer. That’s all!

For a viewpoint more convenient for us here: If you keep the volume of the liquid the same as that of the solid, how many atoms from the original solid would you expect to find in that same volume? The answer: in volume terms, about 91% (because the inverse of 1.1 is 0.9090…); hence, in lineal terms, about 97%; and hence, in areal terms, about 94%.
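
A quick check of these round numbers (using the same 10% volume expansion as above):

```cpp
// Check of the round numbers used in the text: a 10% volume expansion on
// melting means the same volume holds ~91% of the atoms, i.e. ~97% in
// lineal terms and ~94% in areal (2D-sketch) terms.
#include <cmath>
#include <cstdio>

int main()
{
    const double volumeExpansion = 1.1;                           // round figure from the text
    double volumeFraction  = 1.0 / volumeExpansion;               // ~0.909
    double linealFraction  = std::cbrt(volumeFraction);           // ~0.969
    double arealFraction   = std::pow(volumeFraction, 2.0 / 3.0); // ~0.938
    double linealExpansion = std::cbrt(volumeExpansion) - 1.0;    // ~0.032

    std::printf("volume %.1f%%, lineal %.1f%%, areal %.1f%%, lineal expansion %.1f%%\n",
                100 * volumeFraction, 100 * linealFraction,
                100 * arealFraction, 100 * linealExpansion);
    return 0;
}
```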

What we have drawn above is a 2D representation, and hence, areal terms apply. In the sketch for the solid state, there are exactly 100 atoms (10 rows and 10 columns). Hence, in the corresponding sketch for the liquid state, there should be 94 atoms.

Now, go, refer to the above sketch, and count the number of atoms actually drawn for the liquid state. I did. There are 63 of them.

The question is: How do you squeeze the extra 31 atoms—or approximately 50% more atoms—into the same space? Doesn't it look impossible?

One way to look at it is this. Why not start with the sketch of those end-to-end stacked atoms in the solid state, and simply draw the area they would be permitted to play in once the state of the matter changes from solid to liquid? That is precisely what I have done in the sketch below. The side of the square in which the solid atoms fit was 480 pixels. At about a 3.2% increase, it becomes 496 pixels. I have drawn a blue square of 496 pixels to show you the room allowed to the 100 atoms once they get into their liquid state.

[Figure: Extra Space Available to a Solid When It Becomes Liquid]

Conclusion: Contrary to what (i) your high-school teacher, (ii) Wikipedia, (iii) Encyclopaedia Britannica, (iv) etc. told you (and contrary to what their high-school teachers, in turn, had told them), atoms simply don't find as much room to roam around as the usual sketches show when they go from the solid state to the liquid state. Indeed, in great contrast to gases, the liquid and solid states are quite similar in terms of how tightly packed their atoms are.

Now, let's try to integrate what that conclusion means, assuming the neat hypothetical substance we have sketched above. (Don't rush to apply this discussion directly to water. We will cover water in some future post in this series. Water shows too much anomalous behavior. The hypothetical substance we took here is based on the hard-spheres assumption, and as such, for some purposes, it might provide a good abstraction for metals—but not for salts, minerals, silicates, glasses, plastics, liquid crystals, etc., and, above all, in most important respects, not for water either.)

Is our model for the liquid state consistent with the fact that liquids flow readily? What really happens at the atomic level when liquids flow?

If the question includes "at the atomic level," the answer is: "not much." While you have been taught, perhaps right from your high-school science, that both liquids and gases are fluids, there is a great difference between how the flow processes must be occurring in them at the atomic level. Let me explain.

Here, you must first understand that, in the general sense of the term, a flow cannot be generated by simple direct pulls or pushes, i.e. by applying a normal load. Pressure does not result in transport of matter from one region of space to another. Pressure differences might. But not pressure by itself (whether it is positive or negative). Flow, in general, involves transport of a quantity of matter from one spatial region to another, together with a change of shape too—not just a local change of volume. For flow in this sense to occur, it is shear loading—and not normal loading—that must get applied. Liquids can flow only when the loading type is shear.

Do a physical experiment. Construct a large rectangular frame out of four wooden battens that are hinged at each of its four corners. It should also be possible to easily disassemble the sides, if necessary. Place the frame on a flat glass top, so as to make a rectangular tray. The frame should be large enough to carry a great number (hundreds) of small glass marbles forming a monolayer on the surface of the table-top. Place the marbles in the tray in different systematic ways. For instance, first, let all the blue marbles be together and all the red marbles be together. Next, create one or two circular clusters of blue marbles surrounded by the red marbles. Etc.

If the lengths of the frame-sides are such that they exceed the dense packing of marbles by a few percent, the setup will represent the liquid state. If the marbles instead fit snugly, it will represent the solid state. (For representing the solid state, ideally, hexagonal packing should be considered, not the square packing shown above. (Exercise: Find out the lineal expansion factor e when a hexagonal solid becomes liquid.))

Starting with a setup for the liquid state (i.e. with, say, 3% extra length), first, shake the frame horizontally for some time. Observe how the marbles move. In particular, mark a few marbles and observe how they move with respect to their neighbouring marbles. Do the initial neighbours remain neighbours? (To CS guys: The issue here is not primarily the number of connected elements; it is: whether each connection keeps the same neighbouring element all throughout, or not.)

Notice: even though in this experiment you have not allowed for interatomic bonds between marbles (which is an extremely important point), you would still find something interesting. Unlike in gases, thermal energy (or vibration) does not serve to break "neighbourhoods." For most atoms, even over very, very long times of shaking, and even when there is no attractive interatomic potential present, the originally directly neighbouring atoms continue to remain direct neighbours. Even in the liquid state, not just the solid one. This is an important point, and though it is so easy to see, it is also very easily possible that you read it here first. Science is like that.

Next, take a fresh setup of liquid-like marbles in a frame. Now, completely remove two opposite sides of the frame, and, using a couple of extra battens, slowly apply shear to the marbles so that the "liquid" deforms in shear in such a way that two large chunks of marbles are made to merely slide against each other. What do you observe?

Obviously, even in such a shear deformation, most atoms remain as closely connected as ever. Only a very few atoms, i.e. precisely those which happen to lie on the line of the shear-slip, slide against each other. Only these few atoms acquire new individual neighbours. The overwhelming majority of the neighbourhoods remain intact.

Lesson: When liquids flow, they do not do so via independent motions of individual atoms, as happens in gases, but via sliding motions of large blocks of atoms. There simply is not enough room for liquid atoms to behave in neat conformance with your high-school teachers' imagination. Consequently, the mechanism of each atom moving independently of the others is ruled out. And so, the only deformation mode left to explain flow is: movement of large blocks of neighbourhood-preserving atoms, sliding against each other in shear.

Suppose you argue that this is not the only shear deformation possible. Suppose you advise this course of action: keep all four sides of the frame intact and simply apply shear to the overall frame, so as to deform it from a rectangular shape to a general parallelogram shape. Now, all atoms would move, albeit in a sliding mode. Accordingly, suppose that the battens originally aligned with the y-axis become, during the shear deformation, oblique—their angle with the x-axis changes from 90 degrees to, say, 30 degrees. In this case, horizontal rows of atoms slide, and yet all atoms move. If such a shear movement is carried out to a sufficiently large extent (large, as necessary to explain the magnitudes experienced in real flows), then all atoms would have changed all their direct neighbours.

Why shouldn't a liquid flow in this manner? You may try to buttress your argument by making a reference to the diagram they show while teaching Newton's law of viscosity: the liquid at the ground surface is shown to remain stationary throughout, whereas planes of liquid are shown sliding at ever-increasing velocity as you go away from the surface. Why not apply the suggested idea to the atomic model, you may ask.

To find the answer, just try it. Try it with actual marbles. See whether producing a shear this way is easier or more difficult. You will find that the mode I suggested—two blocks sliding on only one plane—is easier. And, the rows-sliding mode becomes plane-sliding mode for a 3D fluid. Between the block-sliding mode and the rows-sliding mode, the former requires far less energy, and hence, Nature would pursue it.

Thus, real liquids do not flow in the rows-sliding mode, despite the nice abstract illustration they use while teaching viscosity. (The illustration accompanying viscosity assumes a continuum, just in case you missed it!)

In fact, even the description of two blocks shearing away, the way we presented it above, is not an exact description. In reality, the boundary conditions are such that liquids can split into 3D regions—call them globules or cells or whatever. Each of the 3D globules might itself suffer deformation, but it would do so only as an independent block. Thus, within each globule/3D region, it's not just planes of atoms but entire 3D blocks that remain together. When liquid flow occurs, for most atoms there are blocks within which neighbouring atoms are never replaced by other atoms.

The idea of blocks makes it necessary to think in terms of scales. At what scale does separation via sliding occur? How? Etc.

It’s perhaps possible that real liquids flow with not just micro- but also meso- and even macro-scale structures.

However, even if you take only nano-scale blocks, precisely because the Avogadro number is so mind-bogglingly large, it still means that literally hundreds of millions of atoms remain together even in extremely turbulent motion. Not convinced? Do a simple calculation. Take 100 pm (1 angstrom, or 0.1 nm) as a reasonable effective size for an atom while in the liquid state. For a 50 nm cubic block, it means: 500^3 \approx 125 million atoms. For a 100 nm cubic block, it means: 1 billion atoms.

Atoms this many (hundreds of millions) move together even in the most turbulent motion. Even if there are nasty mechanisms like generation of gas bubbles and their collapse. (If not convinced, estimate the overall volume fraction of gas bubbles in the given liquid, and the size of the tiniest bubble possible in it.) (BTW, in case you didn’t know, gas-bubbles are more disruptive of a liquid’s fabric than the motions of second phase particles such as undissolved solids, colloidally dispersed particles, etc.)

Now, keep aside the hard-spheres assumption for a while and think what it means for hundreds of millions (or even billions) of atoms to move together i.e. without destroying their neighbourhoods. (Even in the worst turbulent motion.)

It means: their quantum mechanical wavefunction involves a large enough number of particles that the configuration space is large enough to support an incredibly great number of at least metastable states brought about by whatever external means. In highly simplified terms, this in turn means two things: (i) structures undestroyed by most turbulent flows are possible in liquids in general (let alone in water), and (ii) such structures can be of an incredibly great variety. In other words, it is possible that liquids can carry "memory," at a suitable scale.

Notice, we made an extremely pathetic (i.e. unrealistic) assumption in the development of our argument above: a complete absence of interatomic forces (which allows for great slideability). Real liquids are not like that.

Further, to stay close to what could be happening in the homeopathic preparation process, we thought of only nano-scale blocks. Despite this worst-case scenario, we found that we still have millions of atoms staying together, even during the most violent liquid motion, and within a space hardly different from that of the most rigidly structured solids.

Structure and liquid can only be rather thick friends.

Not surprising. Though liquids don't keep their shape, they do keep their volume; unlike gases, they don't expand to fill the whole of the container they are poured into. (Philosophers must be thankful for this circumstance; it allows them the debate as to whether the glass is half-full or half-empty.)

The structures formed in liquids may be different in several respects from those in solids. The structures may not be sufficiently periodic. So, it may be impossible to catch them via diffraction techniques (like XRD/TEM). Yet, based on even the simplest of considerations such as those touched upon above, a counter-question naturally arises: how can there not be structures (mers/linkages, rings, "fabrics," clusters, etc.) involving not just hundreds or thousands, but millions and billions of atoms per block of those structures?

Your high-school teachers gave you an incredibly wrong idea when they drew those 60%-volume-filled representations of the liquid state—which, in turn, reinforced in your mind the idea that liquids are structureless.

And, mind you, the "liquids" we considered here were very neat ones. Very hypothetical—without interatomic forces, and with all atoms spherical in shape, i.e., isotropic in their charge distribution. On both these counts, real water differs. A lot. Think, then, how strong the case can be that 100% pure water, too, could carry structural imprints.

[PS: Might streamline this a bit or correct any minor mistakes still left, within a few days. If there is any major change, will note it as such.]

Links to my earlier posts on this topic:

A comment on homeopathy [^]
A Hypothesis on Homeopathy, Part 1 [^]
A Hypothesis on Homeopathy, Part 2 [^]
A Hypothesis on Homeopathy, Part 3 [^]

* * * * *    * * * * *    * * * * *

A Song I Like:

(Marathi) “Tap Tap paDati, angaavarati, praajaktaachi phule…”
Lyrics: Mangesh Padgaonkar
Music: Shrinivas Khale
Singer: ?? (Shruti Sadolikar ??)

[E&OE]