Blogging some crap…

I had taken a vow not to blog very frequently any more—certainly not this month, in April, at least.

But then, I am known to break my own rules.

Still, guess I really am coming to a point where quite a few threads on which I wanted to blog are, somehow, sort of coming to an end, and fresh topics are still too fresh to write anything about.

So, the only things to blog about would be crap. Thus the title of this post.

Anyway, here is an update on my interests, and the reason why it actually is, and will continue to be, difficult for me to blog very regularly over the coming months, maybe even a year or so. [I am being serious.]

1. About micro-level water resources engineering:

Recently, I blogged a lot about it. Now, I think I have more or less completed my preliminary studies, and pursuing anything further would take definitely targeted and detailed research—something that can only be pursued once I have a master’s or PhD student to guide. Which will only happen once I have a job. Which will only happen in July (when the next academic term of the University of Mumbai begins).

There is only one idea that I might mention for now.

I have installed QGIS, and worked through the relevant exercises to familiarize myself with it. Ujaval Gandhi’s tutorials are absolutely great in this respect.

The idea I can blog about right away is this. As I mentioned earlier, DEM maps with 5 m resolution are impossible to find. I asked my father to see if he had any detailed map at the sub-talukaa level. He gave me an old official map from GSI; it is on a 1:50,000 scale, with contours at 20 m intervals. Pretty detailed, but still, since we are looking for check-dams of heights up to 10 m, not so helpful. So, I thought of interpolating the contours, and the best way to do that would be through some automatic algorithms. The map anyway has to be digitized first.
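Just to fix ideas, linear interpolation between two known contour crossings (a toy 1D transect with made-up numbers; the real job is 2D raster interpolation in QGIS) would go something like this:

```python
def interpolate_contour(x1, z1, x2, z2, z_target):
    """Linearly interpolate the position along a transect where the
    elevation equals z_target, given two known contour crossings
    (x1, z1) and (x2, z2)."""
    t = (z_target - z1) / (z2 - z1)
    return x1 + t * (x2 - x1)

# A transect crosses the 40 m contour at 100 m and the 60 m contour
# at 300 m from the start; where would a 50 m contour fall?
x50 = interpolate_contour(100.0, 40.0, 300.0, 60.0, 50.0)
print(x50)  # 200.0 -- midway, since elevation varies linearly here
```

The same idea, applied cell-by-cell over the digitized contour raster, is what the automatic interpolation algorithms do, only with smarter (spline or TIN-based) interpolants.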

That means scanning it at a high enough resolution, and then performing a raster-to-vector conversion so that DEM heightfields can be viewed in QGIS.

The trouble is, the contour lines are too faint. That means automatic image processing to extract the existing contours would be of limited help. So, I hit on an idea: why not lay a tracing paper on top, trace out only the contours with a black pen, and then scan the tracing separately? It turned out that this very idea had already been mentioned in an official Marathi document by the irrigation department.

Of course, they didn’t mean to go further and do the raster-to-vector conversion and all.  I would want to adapt/create algorithms that could simulate rainfall run-offs after high intensity sporadic rains, possibly leading also to flooding. I also wanted to build algorithms that would allow estimates of volumes of water in a check dam before and after evaporation and seepage. (Seepage calculations would be done, as a first step, after homogenizing the local geology; the local geology could enter the computations at a more advanced stage of the research.) A PhD student at IIT Bombay has done some work in this direction, and I wanted to independently probe these issues. I could always use raster algorithms, but since the size of the map would be huge, I thought that the vector format would be more efficient for some of these algorithms. Thus, I had to pursue the raster-to-vector conversion.

So I did some searching in this respect, and found some papers and even open-source software—for instance, Peter Selinger’s Potrace, and the further offshoots from it.

I then realized that since the contour lines in the scanned image (whether original or traced) wouldn’t be just one-pixel wide, I would have to run some kind of a line thinning algorithm.

Suitable ready-made solutions are absent, and building one from scratch would be too time-consuming—it could make a good topic for a master’s project in the CS/Mech departments, in the computer-graphics field. Here is one idea I saw implemented somewhere. To fix our imagination, launch MS Paint (or GIMP on Ubuntu), manually draw a curve with a thick brush, or type a letter in a huge font like 48 points or so, and save the BMP file. Our objective is to make a single-pixel-thick line drawing out of this thick diagram. The CS folks apparently call this the centerlining algorithm. The idea I saw implemented was something like this: (i) Do edge detection to get single-pixel-wide boundaries. The “filled” letter in the BMP file would now become “hollow;” it would have only the outlines, a single pixel wide. (ii) Do a raster-to-vector conversion, say using Potrace, on this hollow letter. You would thus have a polygon representation of the letter. (iii) Run meshing software (e.g. Jonathan Shewchuk’s Triangle, or something in the CGAL library) to fill the interior of this hollow polygon with a single layer of triangles. (iv) Find the centroids of all these triangles, and connect them together. This gets us the line running through the central portion of each arm of the letter. Keep this line and delete the triangles. What you have now got is a single-pixel-wide vector representation of what was once a thick letter—or a contour line in the scanned image.
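Step (iv) above is simple enough to sketch. Assuming the triangulation is already in hand (here, a hand-made strip of four triangles standing in for Triangle’s output), connecting centroids of edge-sharing triangles gives the centerline:

```python
from itertools import combinations

def centerline(triangles, points):
    """Given a triangulation of a thin region (triangles as vertex-index
    triples, points as (x, y) tuples), connect the centroids of triangles
    that share an edge -- step (iv) of the centerlining idea."""
    def centroid(tri):
        xs = [points[i][0] for i in tri]
        ys = [points[i][1] for i in tri]
        return (sum(xs) / 3.0, sum(ys) / 3.0)

    segments = []
    for a, b in combinations(range(len(triangles)), 2):
        # two triangles share an edge iff they share exactly two vertices
        if len(set(triangles[a]) & set(triangles[b])) == 2:
            segments.append((centroid(triangles[a]), centroid(triangles[b])))
    return segments

# A thick horizontal bar meshed as a strip of four triangles.
pts = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
tris = [(0, 1, 3), (1, 4, 3), (1, 2, 4), (2, 5, 4)]
segs = centerline(tris, pts)
print(len(segs))  # 3 segments chaining the 4 centroids
```

The chained centroids zig-zag slightly about the true medial line; a smoothing pass over the polyline would be the natural next step.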

Since this algorithm seemed too complicated, I wondered whether it wouldn’t be possible to simply apply a suitable diffusion algorithm to erode away the thickness of the line. For instance, think of the thick-walled letter as initially uniformly cold, and then placed in uniformly heated surroundings. Since the heat enters from the boundaries, the outer portions become hotter than the interior. As the temperature goes on increasing, imagine the thick line beginning to melt. As soon as a pixel melts, check whether there is any solid pixel still left in its neighbourhood. If yes, remove the molten pixel from the thick line. In the end, you would get a raster representation one pixel thick. You can easily convert it to the vector representation. This is a simplified version of the algorithm I had implemented for my paper on the melting snowman, with the check for neighbouring solid pixels now thrown in.
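A crude sketch of this melting idea, with the heat-transfer part replaced by a purely geometric rule (melt a boundary pixel only if it still has a fully surrounded, i.e. solid, neighbour), might look like this. It thins a symmetric bar nicely, though a robust thinning algorithm would need more care at junctions and line ends:

```python
def thin(solid):
    """Erode a raster shape toward its medial line: in each pass, melt
    away boundary pixels that still have an interior (fully surrounded)
    neighbour. A simplified, geometric stand-in for the melting idea."""
    nbrs = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]

    def neighbours(p):
        return [(p[0] + dr, p[1] + dc) for dr, dc in nbrs]

    while True:
        interior = {p for p in solid if all(q in solid for q in neighbours(p))}
        removable = {p for p in solid - interior
                     if any(q in interior for q in neighbours(p))}
        if not removable:
            return solid
        solid = solid - removable

# A 5-pixel-thick, 10-pixel-long bar thins down to a single-pixel line.
bar = {(r, c) for r in range(5) for c in range(10)}
line = thin(bar)
print(sorted(line))  # [(2, 2), (2, 3), ..., (2, 7)]
```

Note the ends get eaten in too (the 10-long bar yields a 6-long line), which is exactly the kind of artefact a serious implementation would have to guard against.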

Pursuing either would be too much work for the time being; I could either offload it to a student for his project, or work on it at a later date.

Thus ended my present thinking line on the micro-level water-resources engineering.

2. Quantum mechanics:

You knew that I was fooling you when I had noted in my post dated the first of April this year, that:

“in the course of attempting to build a computer simulation, I have now come to notice a certain set of factors which indicate that there is a scope to formulate a rigorous theorem to the effect that it will always be logically impossible to remove all the mysteries of quantum mechanics.”

Guess people know me too well—none fell for it.

Well, though I haven’t quite built a simulation, I have been toying with certain ideas about simulating quantum phenomena using what seems to be a new fluid-dynamical model. (I think I mentioned using CFD to do QM on this blog a little while ago.)

I pursued this idea, and found that it indeed should reproduce all the supposed weirdnesses of QM. But then I also found that this model looks a bit too contrived for my own liking. It’s just not simple enough. So, I have to think more about it before committing any specific or concrete research activity to it.

That is another dead-end, as far as blogging is concerned.

However, in the meanwhile, if you must have something interesting related to QM, check out David Hestenes’ work. Pretty good, if you ask me.

OK. Physicists, go away.

3. Homeopathy:

I had ideas about computational modelling for the homeopathic effect. By homeopathy, I mean: the hypothesis that water is capable of storing an “imprint” or “memory” of a foreign substance via structuring of its dipole molecules.

I have blogged about this topic before. I had ideas of doing some molecular-dynamics kind of modelling. However, I now realize that given the current computational power, any MD modelling would cover far too short a time period. I am not sure how useful that would be, unless some good scheme (say a variational scheme) for coarse-graining, or for coupling a coarse-grained simulation with the fine-grained MD simulation, is available.

Anyway, I didn’t have much time available to look into these aspects. And so, there goes another line of research; I don’t have much to blog about on it.

4. CFD:

This is one more line of research/work for me. Indeed, as far as my professional (academic research) activities go, this one is probably the most important line.

Here, too, there isn’t much left to blog about, even if I have been pursuing some definite work about it.

I would like to model some rheological flows as they occur in ceramics processing, starting with ceramic injection moulding. A friend of mine at IIT Bombay has been working in this area, and I should have easy access to the available experimental data. The phenomenon, of course, is much too complex; I doubt whether an institute with relatively modest means like an IIT could possibly conduct experimentation at the required level of accuracy or sophistication. Accurate instrumentation means money. In India, money is always much more limited, as compared to, say, in the USA—the place where neither money nor dumbness is ever in short supply.

But the problem is very interesting to a computational engineer like me. Here goes a brief description, suitably simplified (but hopefully not too dumbed down (even if I do have American readers on this blog)).

Take a little bit of wax in a small pot, melt it, and mix some fine sand into it. The paste should have the consistency of a toothpaste (the limestone version, not the gel version). Just like you pinch the toothpaste tube and out pops the paste—technically, this is called an extrusion process—similarly, you have a cylinder-and-ram arrangement that holds this (molten wax + sand) paste and injects it into a mould cavity. The mould is metallic; aluminium alloys are often used in research because making a precision die in aluminium is less expensive. The hot molten wax+ceramic paste is pushed into the mould cavity under pressure, and fills it. Since the mould is cold, it takes the heat out of the paste, and so the paste solidifies. You then open the mould, take out the part, and sinter it. During sintering, the wax melts and evaporates, and then the sand (ceramic) gets bound together by various sintering mechanisms. Materials engineers focus on the entire process from a processing viewpoint. As a computational engineer, my focus is only up to the point that the paste solidifies. So many interesting things happen up to that point that it already makes my plate too full. Here is an indication.

The paste is a rheological material. Its flow is non-Newtonian. (There sinks in his chair your friendly computational fluid dynamicist—his typical software cannot handle non-Newtonian fluids.) If you want to know, this wax+sand paste shows a shear-thinning behaviour (in contrast to the shear-thickening behaviour shown by, say, a thick cornstarch-and-water suspension).
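A first-cut way to capture shear-thinning is the standard power-law (Ostwald–de Waele) model, η = K·(shear rate)^(n−1) with n < 1; the K and n values below are made up purely for illustration:

```python
def power_law_viscosity(K, n, shear_rate):
    """Apparent viscosity of a power-law fluid: eta = K * gamma_dot**(n-1).
    n < 1 gives shear-thinning, n > 1 shear-thickening, n = 1 Newtonian."""
    return K * shear_rate ** (n - 1.0)

# Illustrative (made-up) consistency K and index n for a thinning paste:
K, n = 100.0, 0.5
low = power_law_viscosity(K, n, 1.0)     # 100.0 Pa.s at a slow shear rate
high = power_law_viscosity(K, n, 100.0)  # ~10 Pa.s -- thinner when sheared fast
print(low, high)
```

A real constitutive model for the paste would also have to make K temperature-dependent, which is part of what makes the coupled problem below so nasty.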

Further, the flow of the paste involves moving boundaries, with pronounced surface effects, as well as coalescence or merging of boundaries when streams progressing on different arms of the cavity eventually come together during the filling process. (Imagine the simplest mould cavity in the shape of an O-ring. The paste is introduced from one side, say from the dash placed on the left hand side of the cavity, as shown here: “-O”. First, after entering the cavity, the paste has to diverge into the upper and lower arms, and as the cavity filling progresses, the two arms then come together on the rightmost parts of the “O” cavity.)

Modelling moving boundaries is a challenge. No textbook on CFD would even hint at how to handle it right, because all of them are based on rocket science (i.e. the aerodynamics research that NASA and others did from the fifties onwards). It’s a curious fact that aeroplanes always fly in air. They never fly at the boundary of air and vacuum. So, an aeronautical engineer never has to worry about a moving fluid-boundary problem. Naval engineers have a completely different approach; they have to model a fluid flow only near a surface—they can afford to ignore what happens to the fluid lying any deeper than a few characteristic lengths of their ships. Handling both moving boundaries and the interiors of fluids at the same time with sufficient accuracy, therefore, is a pretty good challenge. Ask anyone doing CFD research in casting simulation.

But simulation of the flow of molten iron in gravity sand-casting is, relatively, a less complex problem. Do the dimensional analysis and you can verify that molten iron has much the same fluid-dynamical characteristics as plain water. In other words, you can always look at how water flows inside a cavity, and the flow pattern would remain essentially the same for molten iron, even though the metal is so heavy. Implication: surface-tension effects are manageable for the flow of molten iron. Also, pressures are negligibly small in gravity casting.
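The dimensional-analysis point boils down to the kinematic viscosity ν = μ/ρ. With typical handbook values (approximate; the iron numbers are from memory, so treat them as indicative), molten iron and water come out within a factor of two of each other:

```python
def kinematic_viscosity(mu, rho):
    """nu = mu / rho, in m^2/s; this is what sets the flow pattern
    for a given geometry and speed (via the Reynolds number)."""
    return mu / rho

nu_water = kinematic_viscosity(1.0e-3, 1000.0)  # ~1.0e-6 m^2/s at 20 C
nu_iron = kinematic_viscosity(5.5e-3, 7000.0)   # ~7.9e-7 m^2/s (approx.)
print(nu_water / nu_iron)  # ratio close to 1 => similar flow patterns
```

The metal’s high density is cancelled by its correspondingly higher dynamic viscosity, which is why water models of casting cavities work at all.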

But with the rheological paste being so thick, and flowing under pressure, handling the surface-tension effects right is an even bigger challenge. Especially at those points where multiple streams join together, under pressure.

Then, there is also heat transfer. You can’t get away doing only momentum equations; you have to couple in the energy equations too. And, the heat transfer obviously isn’t steady-state; it’s necessarily transient—the whole process of cavity filling and paste solidification gets over within a few seconds, sometimes within even a fraction of a second.

And then, there is this phase change from the liquid state to the solid state too. Yet another complication for the computational engineer.

Why should he address the problem in the first place?

Good question. Answer is: Economics.

If the die design isn’t right, the two arms of the fluid paste lose heat and become sluggish, even partly solidifying at the boundary, before joining together. The whole idea behind doing computational modelling is to help the die designer improve his design, by allowing him to try out many different die designs and their variations on a computer, before throwing money into making an actual die. Trying out die designs on a computer takes time and money too, but the expense is relatively much, much smaller than actually making a die and trying it. Precision machining is too expensive, and taking a manufacturing trial takes too much time—it ties up an entire engineering team and a production machine in just trials.

So, the idea is that the computational engineer could help by telling in advance whether, given a die design and process parameters, defects like cold-joins are likely to occur.

The trouble is, the computational modelling techniques happen to be at their weakest exactly at those spots where important defects like cold-joins are most likely. These are the places where all the armies of the devil come together: non-Newtonian fluid with temperature dependent properties, moving and coalescing boundaries, transient heat transfer, phase change, variable surface tension and wall friction, pressure and rapidity (transience would be too mild a word) of the overall process.

So, that’s what the problem to model itself looks like.

Obviously, ready-made software isn’t yet sophisticated enough. The best available packages are those that do some ad-hoc tweaking of existing software for plastic injection moulding. But the material and process parameters differ, and it shows in the results. And, that way, validation of these tweaks is still an on-going activity in the research community.

Obviously, more research is needed! [I told you the reason: Economics!]

Given the granular nature of the material, and the rapidity of the process, some people thought that SPH (smoothed particle hydrodynamics) should be suitable. They have tried, but I don’t know the extent of the sophistication thus far.

Some people have also tried finite-differences based approaches, with some success. But FDM has its limitations—fluxes aren’t conserved, and in a complex process like this, it would be next to impossible to tell whether a predicted result is a feature of the physical process or an artefact of the numerical modelling.

FVM should do better because it conserves fluxes better. But the existing FVM software is too complex to try out the required material and process specific variations. Try introducing just one change to a material model in OpenFOAM, and simulating the entire filling process with it. Forget it. First, try just mould filling with coupled heat transfer. Forget it. First, try just mould filling with OpenFOAM. Forget it. First, try just debug-stepping through a steady-state simulation. Forget it. First, try just compiling it from the sources, successfully.

I did!

Hence, the natural thing to do is to first write some simple FVM code, initially only in 2D, and then go on adding the process-specific complications to it.
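To see why FVM conserves by construction, here is a minimal 1D sketch (first-order upwind advection on a periodic grid—nothing like the real moulding problem): every interface flux is subtracted from one cell and added to its neighbour, so the total cannot drift:

```python
def fvm_advect(u, c):
    """One first-order upwind FVM step for du/dt + a*du/dx = 0 on a
    periodic 1D grid; c = a*dt/dx is the Courant number (0 < c <= 1).
    Each interface flux leaves one cell and enters the next, so sum(u)
    is conserved up to round-off."""
    n = len(u)
    flux = [c * u[i] for i in range(n)]  # flux out of cell i (for a > 0)
    return [u[i] - flux[i] + flux[(i - 1) % n] for i in range(n)]

u = [0.0] * 50
u[10] = 1.0                   # a lump of "paste"
total0 = sum(u)
for _ in range(200):
    u = fvm_advect(u, 0.5)
print(abs(sum(u) - total0))   # conserved to round-off
```

An FDM discretization of the same equation has no such cell-by-cell bookkeeping, which is where the non-conservation sneaks in.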

Now this is something I have got going, but by its nature, it also is something you can’t blog a lot about. It will be at least a few months before even a preliminary version 0.1 of the code becomes available, at which point some blogging could be done about it—and, hopefully, also some bragging.

Thus, in the meanwhile, that line of thought, too, comes to an end, as far as blogging is concerned.

Thus, I don’t (and won’t) have much to blog about, even if I remain (and plan to remain) busy (to very busy).

So allow me to blog only sparsely in the coming weeks and months. Guess I could bring in the comments I made at other blogs once in a while to keep this blog somehow going, but that’s about it.

In short, nothing new. And so, it all is (and is going to be) crap.

More of it, later—much later, maybe a few weeks later or so. I will blog, but much more infrequently; that’s the takeaway point.

* * * * *   * * * * *   * * * * *

A Song I Like:
(Marathi) “madhu maagashee maajhyaa sakhyaa pari…”
Lyrics: B. R. Tambe
Singer: Lata Mangeshkar
Music: Vasant Prabhu

[I just finished writing the first cut; an editing pass or two is still due.]



Getting dusty…

I have been getting dusty for some time now.

… No, by “dusty” I don’t mean that dust of the “Heat and Dust” kind, even though it’s been quite the regular kind of an “unusually hot” summer this year, too.

[In case you don’t know, “Heat and Dust” was a neat movie that I vaguely recall I had liked when it had come on the scene some 2–3 decades ago. Guess I was an undergrad student at COEP or so, back then or so. (Google-devataa now informs me that the movie was released in 1983, the same year that I graduated from COEP.)]

Anyway, about the title of this post: By getting dusty, I mean that I have been trying to make some definite and very concrete beginning, on the software-development side, on modelling things using “dust.” That is, particles. I mean to say, methods like molecular dynamics (MD), smoothed particle hydrodynamics (SPH), the lattice Boltzmann method (LBM), etc. … I kept on postponing writing a blog post here in the anticipation that I would succeed in tossing out a neat toy code for a post.

… However, I soon came face-to-face with the sobering truth that since becoming a professor, my programming skills have taken a (real) sharp dip.

Can you believe that I had trouble simply getting wxWidgets to work on Ubuntu/Win64? Or to get OpenGL to work on Ubuntu? It took almost two weeks for me to do that! (And, I haven’t yet got OpenGL to work with wxWidgets on Ubuntu!) … So, finally, I (once again) gave up the idea of doing some neat platform-neutral C++ work, and, instead (once again) went back to VC++. Then there was a problem awaiting me regarding VC++ too.

Actually, I had written a blog post against the modern VC++ at iMechanica and all (link to be inserted), but that was quite some time back (maybe a couple of years or so). In the meanwhile, I had forgotten how bad VC++ has really become over the years, and I had to rediscover that discouraging fact once again!

So, I then tried installing VC++ 6 on Win7 (64-bit)—and got yet another ugly surprise. VC++ 6 won’t run on Win7. It’s possible to manage it in some round-about ways, but it all is now a deprecated technology.

Finally, I resigned myself to using VC++ 10 on Win7. Three weeks of precious vacation time already down!

That’s what I meant when I said how bad a programmer I have turned out to be, these days.

Anyway, that’s when I finally could begin writing some real though decidedly toy code for some simple MD, just so that I could play around with it a bit.

Though writing MD code seems such a simple, straightforward game (what’s the Verlet algorithm, if not plain and simple FDM?), I soon realized that there are some surprises in it, too.
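To show what I mean by “plain and simple FDM,” here is the velocity-Verlet scheme on a toy harmonic oscillator (x″ = −x); not MD proper, but exactly the integrator MD codes use, and its near-perfect energy conservation is easy to check:

```python
def velocity_verlet(x, v, dt, steps, accel):
    """Velocity-Verlet integration: a second-order, time-reversible
    finite-difference scheme -- the workhorse integrator of MD codes."""
    a = accel(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = accel(x)
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

# Harmonic oscillator x'' = -x; the energy E = (x^2 + v^2)/2 should be
# (very nearly) conserved over a great many steps.
x0, v0 = 1.0, 0.0
xf, vf = velocity_verlet(x0, v0, dt=0.01, steps=100000, accel=lambda x: -x)
drift = abs((xf * xf + vf * vf) / 2.0 - 0.5)
print(drift)  # tiny bounded energy error, not a secular drift
```

In real MD, the only change is that `accel` comes from summing inter-particle forces (e.g. Lennard-Jones) instead of the −x spring.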

All of the toy MD code to be found on the Internet (and some of the serious code, too) assumes only a neat rectangular or a brick-like domain. If you try to accommodate an arbitrary shaped domain (even if only with straight-lines for boundaries), you immediately run into the run-time efficiency issues. The matter gets worse if you try to accommodate holes in the domain—e.g., a square hole in a square domain like what my Gravatar icon shows. (It was my initial intention to quickly do an MD simulation for this flow through square cavity having a square hole.)

Next, once you are able to handle arbitrarily shaped domains with arbitrarily shaped holes, small bugs begin to show up. Sometimes, despite the no-normal-flux condition, my particles were able to slip past the domain boundaries, esp. near the points where two bounding edges meet. However, since the occurrence was rare, and hard to debug (what do you do if it happens only in the 67,238th iteration? hard-code a break after 67,237, recompile, run, and go off for a coffee?), I decided to downscale the complexity of the basic algorithm.

So, instead of using the Lennard-Jones potential (or some other potential), I decided to switch off the potential field completely, and have some simple, perfectly hard, elastically colliding disks. (I did implement separate shear and normal frictions at the time of collisions, though. (Of course, the fact that frictions never let the particles be attracted is a different story; but, remember, we are trying to be simple here.)) The particles were still managing to escape the domain at rare times. But at least modelling with the hard elastic disks allowed me to locate the bugs better. It turned out to be a very, very stupid matter: my algorithm had not been taking proper care of a finite-sized particle interacting with the boundary edges.
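The frictionless version of the hard-disk collision rule is itself a two-liner: for equal masses, exchange the velocity components along the line of centres and keep the tangential components. (My actual code added normal and shear frictions on top of this; that part is omitted here.)

```python
import math

def collide(p1, v1, p2, v2):
    """Elastic collision of two equal-mass hard disks at contact: swap
    the velocity components along the line of centres, keep the
    tangential components. Positions/velocities are (x, y) tuples."""
    nx, ny = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(nx, ny)
    nx, ny = nx / d, ny / d                 # unit normal, centre to centre
    u1 = v1[0] * nx + v1[1] * ny            # normal velocity components
    u2 = v2[0] * nx + v2[1] * ny
    dv = u2 - u1
    return ((v1[0] + dv * nx, v1[1] + dv * ny),
            (v2[0] - dv * nx, v2[1] - dv * ny))

# Head-on collision along x: the velocities simply swap.
w1, w2 = collide((0.0, 0.0), (1.0, 0.0), (1.0, 0.0), (-1.0, 0.0))
print(w1, w2)  # (-1.0, 0.0) (1.0, 0.0)
```

The rule conserves both momentum and kinetic energy exactly; the hard part in a simulator is detecting the contact instant, not resolving it.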

But, yes, I could then get something like more than 1000 particles happily colliding with each other for more than 10^8 collisions. (No, I haven’t parallelized the code. I merely let it run on my laptop while I was attending a longish academics-related meeting.)

Another thing: Some of the code on the ‘net, I found, simply won’t work for even a modestly longer simulation run. For instance, Amit Kumar’s code here [^]. (Sorry Amit, I should have written you first. Will drop you a line as soon as this over-over-overdue post is out the door.) The trouble with such codes is with the time-step, I guess. … I don’t know for sure yet; I am just thinking aloud about the reason. And I know for a fact that if you use Amit’s parameter values, a gas explosion is going to occur rather soon, maybe right after something like 10^5 collisions or so. Java runs too slowly, and so Amit couldn’t have noticed it, but that’s what happens with those parameter values in my C++ code.

I haven’t yet fixed all my bugs, and in fact, haven’t yet implemented the Lennard-Jones model for the arbitrarily shaped domains (with (multiple) holes). I thought I should first factor out the common code well, and then proceed. … And that’s when other matters of research took over.

Well, why did I get into MD, in the first place? Why didn’t I do something useful starting off with the LBM?

Well, the thing is like this. I know from my own experience that this idea of a stationary control volume and a moving control volume is difficult for students to get. I thought it would be easy to implement an MD fluid, and then, once I build in the feature of “selecting” (or highlighting) a small group of molecules close to each other, with these molecules together forming a “continuous” fluid parcel, I could directly show my students how this parcel evolves with the fluid motion—how the location, shape, momentum and density of the parcel undergo changes. They could visually see the difference between the Eulerian and Lagrangian descriptions. That, really speaking, was my motivation.

But then, as I told you, I discovered that I have regressed low enough to have become a very bad programmer by now.

Anyway, playing around this way also gave me some new ideas. If you have been following this blog for some time, you would know that I have been writing in favor of homeopathy. While implementing my code, I thought that it might be a good idea to implement not just LJ, but also the dipole nature of water molecules, and see how the virtual water behaves: does it show the hypothesized character of persistence of structure or not. (Yes, you read it here first.) But, what the hell, I have far too many other things lined up for me to pursue this thread right away. But, sure, it’s very interesting to me, and so, I will do something in that direction some time in future.

Once I get my toy MD code together (for both the hard-disk and the LJ/other-potential models, with some refactorability/extensibility thrown in), then I guess I would move towards a toy SPH code. … Or at least that’s what I guess would be the natural progression in all this toy-building activity. This way, I could reuse many of my existing MD classes. And so, LBM should logically follow after SPH—what do you think? (And, yes, I will have to take the whole thing from the current 2D to 3D sometime in future, too.)

And, all of that is, of course, assuming that I manage to become a better programmer.

Before closing, just one more note: This vacation, I tried Python too, but found that (i) to accomplish the same given functional specs, my speed of development using C++ is almost the same as (if not better than) my speed with Python, and (ii) I keep missing the nice old C/C++ braces for the local scope. Call it the quirky way in which an old programmer’s dusty mind works, but at least for the time being, I am back to C++ from that brief detour to Python.

Ok, more, later.

* * * * *   * * * * *   * * * * *

A Song I Like:
(Marathi) “waaT sampataa sampenaa, kuNi waaTet bhetenaa…”
Music: Datta Dawajekar
Lyrics: Devakinandan Saaraswat
Singer: Jayawant Kulkarni

[PS: A revision is perhaps due for streamlining the writing. May be I will come back and do that.]



A Hypothesis on Homeopathy, Part 2

0. Preliminaries:

From this post on, we begin to engage in the hypothesis—if not that, at least the topics prerequisite to understanding it. The hypothesis can be stated simply, but we will have to first clarify the terms being used in it. We begin to undertake such a clarification with this post. Hope the description isn’t so general as to be vague. (I will appreciate feedback.)

1. The Idea of States Applied to Living Beings:

It will help if we approach the issue by first considering the biological nature of man (or of any sufficiently complex organism such as dogs, cats, horses, etc.) Certain salient characteristics that are pertinent to homeopathy will be brought out in the ensuing discussion.

For our purposes, we first consider a healthy i.e. a normal adult who has no unusual habits of food, daily routine, etc.

Suppose that such a man is not habituated to tobacco. Suppose further that he chews tobacco for the first time in his life. What typically happens? Immediately, he will find a different kind of a taste in his mouth. Soon thereafter, he will feel a light feeling in the head, followed by giddiness. His pulse will become both quicker and more irregular. He may experience nausea, and may even throw up. If he takes tobacco through smoking, several of these symptoms will still appear, though to a somewhat lesser degree. As for those smokers who have stopped smoking and then pick up a cigarette after a long gap (of, say, months), it never fails to surprise them that these “newbie’s” symptoms come back for a while once they restart smoking.

One somewhat abstract way to describe such a set of observations is to say that the state of the man changes from, say, “normal” (N1) to, say, “tobacco-affected” (or generally, SA, short for substance-affected). The SA state is characterized by a certain group of symptoms. As the man continues having the substance, the further occurrences of the SA state become progressively less intense. A time comes when what the man now calls his normal state (N2) is completely unaffected by his consumption of the substance. However, upon quitting, withdrawal symptoms can appear, leading him to yet another state, WS (withdrawal symptoms). If he stays quit, the WS state gradually recedes and his state falls back to N1. Now, he again is biologically ready to experience the SA state.

Of course, you may object, tobacco (or the nicotine in it) is an addictive substance. For a non-addictive substance such as tea or table salt, the state corresponding to withdrawal will not apply. To a certain extent, you are right. However, a more careful study shows that several finer symptoms still arise even if the substance is as benign as tea, table salt, hot chilli (jalapeño), etc. Overall, the idea of changes of states does remain generally applicable.

Some further comments on the living states of a man are in order. A given specific living state can be distinguished by the particular group of attributes or symptoms associated with it. If such attributes change, we may associate a change of state with the man. We are free to associate a changed state regardless of the nature of the causes underlying the change, and regardless of whether we know these causes or not. Clinical evidence ought to be considered sufficient to indicate that states do change—even if by itself it may not explain anything at all. Explanation is not the first stage of a science; observation is.

The state that a man considers as his “normal” state can itself undergo a slow change over a period of time. In the aforementioned example, the reported normal state changed from N1 to N2 over a period of time.

Philosophically, such a change does not imply a metaphysical flux: what a man considers to be his “normal” state, at any given time, is definite. However, the attributes that distinguish one state N1 from another state N2 may be so fine, or the change may occur so slowly over time, that the differences between them fall beyond the man’s capacity to grasp or distinguish. (The story of the frog unable to notice the dangerously increasing water temperature in a slowly heated pot provides an example from the animal world.) However, the failure to recognize the distinction does not mean that the states themselves are identical. They are not.

Thus, in our hypothesis, we emphasize the objectivity of states, and as an implication, we do not elevate subjectively described experiences to the same level as objective observations/existential states.

BTW, observe that a chemical substance is not necessarily a pre-requisite for a state change; radiation, heat, pressure, etc. can also bring about a change of state. More on this later, in the appropriate context.

2. Life Processes: Dynamic Equilibrium, Complexity and Non-linearity

(2.a) Dynamic Equilibrium

The next idea that observations such as the above suggest is one of equilibrium, more specifically, of a dynamic equilibrium.

In the above example, N1 is a state of dynamic equilibrium and so is N2. On the other hand, the SA or WS states do not refer to equilibrium even in a dynamic sense. They refer to departures from equilibrium. (BTW, depending on the nature of the substance involved, it is possible that both N1 and N2 may refer to a state of health. We shall mostly not dwell on such cases.)

Living organisms are complex enough that the number of metastable equilibrium states that may be assumed by them can be huge. For instance, consider the change of state introduced by keeping all your food habits the same but changing the consumption pattern of only one kind of food item. Thus, for example, consider having or not having red chillies (jalapeños) in your diet. Each such change impinging on your body leads to a fine but definite change in its state, and indicates a different possible state of dynamic equilibrium.
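
As a quick aside, the sheer count of such combinations is easy to sketch in a few lines of Python; the factor counts below are purely hypothetical, chosen only to show the exponential growth:

```python
# Toy illustration: if each of n independent factors (e.g. including or
# excluding one food item) can take one of k settings, the number of
# distinct combinations, each a candidate equilibrium state, is k**n.
def num_states(n_factors: int, settings_per_factor: int = 2) -> int:
    """Count the combinations of n independent k-ary factors."""
    return settings_per_factor ** n_factors

print(num_states(10))  # 1024 combinations from just 10 on/off choices
print(num_states(50))  # about 1.1e15: already astronomically many
```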

Notice that we still are considering only the more or less “healthy” variety of states that may be assumed by a man (or any sufficiently complex living organism). The states assumed during the various diseases simply add another set of states.

The concept of dynamic equilibrium is vital to both the medical science and our hypothesis. Our description here is not at all adequate. We shall come back to this topic again later on.

(2.b) Complexity:

Another observation that we wish to note here is that life-processes are not only dynamic but also complex.

The word complex, in general, means neither “indeterminate” nor “hard” (though the word is often used by physicists in the former sense, and by computer scientists in the latter). Indeed, the antonym of “complex” is: “simple.”

The idea here is that anything of interest may be imagined to be a system, made up of certain interrelated parts. If the number of parts is great, or their workings or interrelations too numerous, or if a realistic description of them requires too much detail, then we say that the system is complex. The politics of a village local governing body vs. that of the UN provides one good example: the moral level often is not at all different, but the latter is more complex. The difference between the simple and the complex is brought out also by considering machines: there are simple machines like the inclined plane or a system of pulleys, and there are complex machines such as a space shuttle.

BTW, as the example of machines indicates, the word “complex” does not mean “unmanageable.” Indeed, engineered systems are often intelligently designed so as to bring complexity (including any naturally occurring chaos) under control.

The biological processes of metabolism are both extremely complex and highly interdependent. Their complexity is the reason why medical science is not easy to build or to practice. The best way to appreciate the complexity of living beings is to trace in detail all that happens when the organism takes a particular action. For example, suppose you are hungry and decide to have a fruit. Trace all the biological systems involved in this simple set of actions: the level of energy-producing materials (say, sugar) available at the cellular level drops below a certain limit; this triggers a certain chemical signal; it translates into a neural signal; it reaches a certain part of the spinal cord and/or brain; this last again triggers some other process through which the biochemical states corresponding to your becoming aware of the state of hunger arise; your conscious thinking and your decision to eat a fruit again correlate with the electro-chemical signals and states in the brain; your decision further triggers some complex process in the command center… you can carry on…

But the important point is that each of these processes again is both extremely complex in itself and extremely dependent on the other parts of the overall system…

I think the fact that biological processes are complex need not be stressed any further.

Coming back to the variety of complex states assumed by a man: due to the interdependencies and complexity, it is possible that a given state may be reached via many different alternative paths. Here, a path is defined as a definite (and continuous) sequence of intermediate state changes. Due to the complexity, the existence of multiple pathways between states is not an exception but rather the norm in biological systems. Further, the attribute of interdependence means that the dynamic equilibria (corresponding to the individual states) are highly susceptible both to fleeting disturbances and to restoration of the system back to some or the other state of dynamic equilibrium. A very tiny biological stimulus may be enough to make the organism slide into another nearby state; the net effect of yet another set of stimuli may take the organism back to the original equilibrium state via some other path.
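
The multiplicity of pathways can be pictured with a toy state graph; the states and transitions below are entirely hypothetical, meant only to show that two states may be joined by several distinct routes:

```python
# A tiny directed graph of hypothetical states: N1 and N2 are equilibria,
# A and B are intermediate states. Several distinct paths join N1 to N2.
GRAPH = {"N1": ["A", "B"], "A": ["N2", "B"], "B": ["N2"], "N2": []}

def count_paths(graph: dict, src: str, dst: str) -> int:
    """Count the distinct directed paths from src to dst (graph assumed acyclic)."""
    if src == dst:
        return 1
    return sum(count_paths(graph, nxt, dst) for nxt in graph[src])

print(count_paths(GRAPH, "N1", "N2"))  # 3: via A, via B, and via A then B
```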

(2.c) Nonlinearity:

Finally, we shall touch upon yet another feature of biological processes: namely, that they often are nonlinear in nature.

The basic meaning of the term “nonlinear” is very simple; obviously, it means: not linear. Since nothing can be defined via negations, it is perfectly logical to ask: so, what’s the point?

The point has a scope large enough that we have to go step by step, taking some concrete examples along the way.

If you attach a weight to a spring, the spring elongates by a certain amount. If you attach a heavier weight, the elongation of the spring is proportionately greater. A smaller input leads to a smaller output; a bigger input leads to a proportionately bigger output.
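
In code, such a linear (Hooke’s-law-like) response looks like this; the stiffness value is hypothetical, picked only for illustration:

```python
# An ideal linear spring: elongation is directly proportional to the load.
def elongation(load_newtons: float, k: float = 100.0) -> float:
    """Elongation (in metres) of a linear spring of stiffness k (N/m)."""
    return load_newtons / k

# Doubling the input exactly doubles the output -- the mark of linearity.
assert elongation(20.0) == 2 * elongation(10.0)
```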

This property of proportionality does not always hold true for all classes of systems.

For a certain class of systems, reducing the input below a threshold level may lead to a zero output (e.g. the photoelectric effect). For others, the threshold may be present on the higher side (e.g. the electric fuse). For still others, the relationship may be more complicated than just the binary presence/absence of the output.
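
A minimal sketch of such a threshold response, loosely modelled on the fuse (the cutoff value is made up):

```python
# Below the cutoff the output simply follows the input; above it, the fuse
# blows and the output drops to zero -- proportionality no longer holds.
def fuse_output(current: float, cutoff: float = 5.0) -> float:
    return current if current <= cutoff else 0.0

print(fuse_output(3.0))  # 3.0: below threshold, linear pass-through
print(fuse_output(6.0))  # 0.0: above threshold, the circuit breaks
```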

For instance, consider the human ear. We are able to clearly hear not only whispers but also loud conversations, and even rock concerts. The emphasis is on the clarity: we are able to make out subtle nuances of speech at each of these levels. The range of input values over which the ear can function is almost impossible to emulate in the usual, linear physical systems. For example, imagine weighing a gold ring on a weigh-bridge meant for trucks of up to 10 tonnes. The difference in magnitude (a few tonnes vs. a few grams, about a 10^7-fold difference) is actually smaller than the range (10^9-fold or more) that the ear is able to handle. The reason is that the ear is a nonlinear sensor. For a hundred-fold increase in the acoustic input, the ear produces a signal that is only twice as strong. This allows the brain circuitry further up to remain sensitive over the entire hearing range. Nonlinearity does not necessarily mean weird.
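
This compressive response can be sketched with a power law; the exponent below is chosen purely so that a hundred-fold rise in input doubles the output, matching the figure above, and is not physiological data:

```python
import math

# Exponent chosen so that 100**ALPHA == 2, i.e. a hundred-fold input rise
# produces only a doubling of the response. Units are arbitrary.
ALPHA = math.log10(2) / 2  # about 0.1505

def ear_response(intensity: float) -> float:
    """A compressive (power-law) sensor response."""
    return intensity ** ALPHA

ratio = ear_response(100.0) / ear_response(1.0)
print(round(ratio, 6))  # 2.0: input up 100-fold, output up only 2-fold
```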

A still more complicated behaviour is displayed by some other nonlinear systems, ones in which the system changes its behaviour near certain ranges of input conditions. We shall look at it in the next post.

References for the Next Post:

We shall deal with the topics of dynamic instability, catastrophe theory, nonlinearity and chaos in the next post. We shall conceptually touch upon some of those ideas which are relevant to our discussion. If your background does not include any maths beyond class XII, it would be worth your while to make a list of the topics or keywords about chaos theory that you have found to be either too bizarre or too easily believable. If your background includes, say, the first two years of maths in BSc/BE courses, you may wish to note that the references I will mostly draw on are the following (in decreasing order of relevance):

1. Addison, Paul S. (2005) “Fractals and Chaos: An Illustrated Course,” New Delhi, India:Overseas Press (originally published in UK by Institute of Physics Publishing).

2. Baker, Gregory L. and Gollub, Jerry P. (1996) “Chaotic Dynamics: An Introduction, 2/e” Cambridge, UK:Cambridge University Press

3. Tel, Tamas and Gruiz, Marton (2006) “Chaotic Dynamics: An Introduction Based on Classical Mechanics,” translated by Katalin Kulacsky, Cambridge, UK:Cambridge University Press

4. Hirsch, Morris W. and Smale, Stephen and Devaney, Robert L. (2004) “Differential Equations, Dynamical Systems, and an Introduction to Chaos, 2/e” San Diego, CA, USA:Academic Press

Further, I have to dig up suitable references for the catastrophe theory… It has all become such an old thing for me by now; no touch with these topics whatsoever!…

Before closing: whether you know the required maths or not, since all of you have undoubtedly read about chaos, here are a couple of questions, a sort of “one for the road” (that got repeated!): (i) What is (or what do you think is) the relation between resonance and chaos? (ii) Can conservative systems exhibit chaotic dynamics?

Links to my earlier posts on this topic:
A hypothesis on homeopathy, part 1 [^]
A comment on homeopathy [^]

–  –  –  –  –
A Song I Like
(Marathi) “aalaa paaoos, aalaa paaoos, maatichyaa vasaat g_…”
Singer: Pushpa Pagdhare
Music: Shrinivas Khale
Lyrics: Shanta Shelke


A Hypothesis on Homeopathy, Part 1


Words used by skeptics and many physical scientists in describing homeopathy have always included terms such as “quackery”; the latest addition to this set is “witchcraft.” Any success obtained in the clinical practice of homeopathy has been either dismissed or explained away by reference to the placebo effect.

From a theoretical viewpoint, the most severe objection to homeopathy rests on the argument that homeopathic solutions are so diluted that not even a single drug molecule may be present in them. Since we will have much occasion to use the term, let’s give it a short form: ANBO (the Avogadro’s Number-Based Objection).
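
The arithmetic behind the ANBO is easy to verify; the sketch below assumes, for simplicity, one mole of the mother substance at the start:

```python
# Expected number of drug molecules left after repeated centesimal (1:100)
# dilution steps, starting from one mole (Avogadro's number of molecules).
AVOGADRO = 6.022e23

def molecules_left(c_dilutions: int) -> float:
    return AVOGADRO / (100 ** c_dilutions)

print(molecules_left(6))   # ~6.0e11: still plenty of molecules
print(molecules_left(12))  # ~0.6: on average, under one molecule survives
print(molecules_left(30))  # ~6.0e-37: effectively none at a common 30C potency
```

Past 12C, then, the chance that even one molecule of the original substance remains in a dose is essentially nil.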

There are two distinct processes involved in homeopathy: (i) conducting the provings, and (ii) the selection and application of a suitable remedy. Obviously, the ANBO applies in both cases.

As an aside, observe that all the randomized double-blind tests to which physical scientists/allopaths/skeptics refer have always been conducted in the context of application, not of provings. Further, the method of application considers as standard, and therefore mimics, the kind of specifics that are suited rather to administering allopathic medicines. Thus, these tests have a certain built-in bias in favor of the allopathic paradigm, not the homeopathic one. We shall examine such issues in somewhat more detail later on.

For the time being, let us first briefly state the purpose of this series of posts.

The purpose of these posts is not polemical but explanatory. I wish to put forth a hypothesis (more like a broad, qualitative schematic for a hypothetical mechanism) which I think is capable of explaining the efficacy of the homeopathic action.

A Clarification:

A couple of comments concerning the nature of the assumptions behind any hypothesis that claims to explain the efficacy of homeopathy are in order.

First, any subjective effects such as those arising due to psychic abilities, hypnosis, telepathy, etc. are assumed to be absent. These effects may exist, but it is assumed that in a properly designed and conducted randomized double-blind test, the differences they produce in the control- and test-groups would be similar in magnitude, and hence their effects would cancel out in an overall, statistical sense. Also included in this group is the placebo effect. For the aforementioned reason of mutual cancellation, we shall no longer concern ourselves with this set of factors, the placebo effect included.

Second, it is assumed that homeopathy is efficacious, namely, that it produces a certain “real” effect, i.e. one that exists over and above the placebo effect. It is obvious that if this premise itself is wrong, then any implications drawn from it would also be wrong. The latter can in principle be caught during empirical testing. Here, for the sake of advancing the argument, we shall assume that homeopathy does have efficacy.

What the Hypothesis Must Be Able to Explain:

Now, if homeopathy is assumed to be efficacious, what are the most troublesome implications?

Naturally, the first and foremost answer is: the ANBO. To explain homeopathy, we have to be able to indicate at least the broad nature of a mechanism that is both plausible and not in contradiction with any part of the rest of our knowledge. We should not only be able to explain why water or alcohol retains a certain kind of “memory,” but also be able to put forth a good argument as to why direct evidence of the same is so hard to obtain.

Apart from the ANBO, there are several other considerations which we list below. The list is given in a roughly decreasing order of importance.

As the second most important implication, we have to be able to explain the curious principle of “like cures like.” This principle seems to be completely at odds both with our common sense concerning the physical world and with the paradigm followed in allopathy.

Thirdly, we should be able to tell how the same hypothetical mechanism works at both the stages: provings and clinical administration.

Fourthly, we must be able to explain why homeopathy does not always seem to work in clinical practice: why there is this “hit or miss” character to it, and why finding the suitable remedy almost always involves trial and error.

Fifthly, we also must be able to explain why homeopathic drugs often do not produce any observable effect on a “third person.” Anecdotes have been put forth of people popping someone else’s homeopathic drug and yet not getting affected in any way at all. This is, potentially, a very serious issue. If an allopathic drug is found to be efficacious, we also immediately recognize the potential danger that goes with it, and we attach great importance to its proper administration. Why shouldn’t the same considerations apply to homeopathic remedies if they too are efficacious?

The answers provided by homeopathic practitioners in this regard are not at all satisfactory. Certain very dangerous ideas, such as ascribing consciousness to the homeopathic remedies, have also been put forth. As mentioned earlier, we reject this particular idea out of hand. Further, even if we do admit the plausibility of ideas such as psychic abilities, for the reason of mutual cancellation mentioned earlier, we consider them to be absent. Consequently, our hypothesis will also have to be capable of explaining this curious set of observations pertaining to the “no side-effects” feature of homeopathic drugs.

Sixthly, we should be able to offer a solution to a certain objection which is best described as “homeopathy in nature,” or what I call “stream-effected homeopathy.” The objection is something like this: if the homeopathic effect is real, then it must occur any time there is succussion and dilution. Water flowing through natural streams/rivers (and even municipal pipes and taps) is always in contact with materials that are potentially (or actually) homeopathic remedies. If so, why doesn’t the river-water (or the tap-water) get homeopathically potentiated?

Finally, we should be able to provide at least plausible explanations for certain other curious aspects of homeopathy: why alcohol? why glass bottles? why potentiation in a certain way? why the desirability of the single dose? why are coffee and wine antidotes to homeopathy but not tea and beer? Many of these things look whimsical, don’t they? Could our hypothesis accommodate at least schematics of explanations?

What We Shall Cover Next:

We shall not try to address all the details of all the issues raised by the above questions. What we shall do is provide a sketchy outline of the nature of the answers involved. Indeed, most—if not all—of the answers have already been provided by other people; some of these sources are noted below.

Overall, what we propose to do here is to give a pop-science kind of account as to why homeopathy might work. In doing so, undoubtedly, some new ideas or ways of looking at things would also get mentioned. If I find that I am stating an essentially new idea (as contrasted with a more coordinated description of something that has already been said) then I will be sure to say so. If you too find some idea here to be truly new, please drop me a line so that I could think of writing a more serious account of the same later on.

There is a lot of skeptical material too, but I am not going to specifically suggest any, not at least today. On the other hand, if you find me mentioning a new critical point, do let me know of the same too.

My plan is to first finish writing this series of blog posts and then edit and convert them into a PDF article. The next post will follow after a few days, definitely within a week or so.

In the meanwhile, do go through the few references suggested below, most of them in favor of homeopathy. Comments are welcome!

Links to my earlier posts on this topic:

A comment on homeopathy [^]


Bellavite, P. (2003) “Complexity science and homeopathy: a synthetic overview,” Homeopathy: The Journal of the Faculty of Homeopathy, vol. 92, no. 4, pp. 203–212. doi:10.1016/j.homp.2003.08.002

Hutchinson, Sarah Lyn (2008) “The memory of water: a critical analysis of the science behind a homeopathic theory,” An independent research project dated 25 April 2008, Toronto, Canada:Toronto School of Homeopathic Medicine

Bellavite, Paolo and Signorini, Andrea (2002) “The Emerging Science of Homeopathy: Complexity, Biodynamics, and Nanopharmacology (rev. ed.),” trans. by Anthony Steele. Berkeley, CA:North Atlantic Books

Chaplin, Martin (2010) Web site: “Water Structure and Science,” maintained at the South Bank University, London. URL:

Benveniste, Jacques: Wikipedia: Also see the Web site in French: and another Wiki article:
–  –  –  –  –

A Song I Like:
(Hindi) “yeh jeevan hai…”
Singer: Kishore Kumar
Music: Laxmikant-Pyarelal
Lyrics: Anand Bakshi