Finding a cozy n comfy enough a spot…

Update on 2021.02.02:

I have made a couple of inline updates in sections 5. and 7. below.


I have already begun cleaning up and reorganizing the code, and writing the boring document. The work so far has come along pretty well. I have by now achieved a satisfactory level of consistency in the numerical results for the hydrogen atom in a 3D box.

As indicated in my last post here, I had found that it’s more useful to focus on the cell-side of the mesh rather than on the physical size of the box or the number of nodes per side of the cube.

Given below are some details of the further, systematic trials which I conducted in order to arrive at optimum ranges for the numerical parameters.

Since the analytical solution is available only for the hydrogenic atoms (i.e. systems with a positively charged nucleus and just one electron, e.g. the hydrogen atom and the He+ ion), these systematic studies were conducted only for them.

If you came here expecting that I might have something to say about the reproducibility for the helium atom, then well, you will have to wait for some 2–3 weeks. The nature of the issues themselves is like that. You can’t hurry things like these too much, as the studies below sure bring out.

So, anyway, here are the highlights of the systematic studies I conducted and some of the representative results.

All the results reported in this post are in the atomic units.


1. Finitization policy to replace the singularity of the Coulomb field:

In my last post here, I had mentioned that in FDM, we have to use a finite value in place of the -\infty at the nucleus. As a necessary consequence, we have to adopt some policy for this finitization.

While conducting my studies, I now found that it is better to frame this policy not in terms of a chosen fraction of the cell-side, but in terms of a certain relevant datum and a multiplying factor. The better procedure is this:

Whatever the degree of mesh refinement, having constructed the mesh, calculate the always-finite PE value at the FDM node right next to the singularity. Then, multiply this PE value by a certain factor, and use it in place of the theoretically -\infty value at the singular node. Let’s give a name to this multiplying factor; let’s call it the Coulomb Field’s Singularity Finitization Factor (“CFSF” for short).

Notice that using this terminology, a CFSF value of 2.0 turns out to be exactly the same as using the half cell-side rule. However, framing the finitization policy in terms of the CFSF factor has the advantage that it makes it easier to compare the differences in the relative sharpness of the potential well at different mesh-refinement levels.
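
Just to make the procedure concrete, here is a minimal Python sketch of how such a finitized Coulomb PE can be set up on a uniform cubic mesh. The function and variable names (build_coulomb_pe, cfsf, etc.) are only illustrative, not the identifiers from my actual code; an odd number of nodes per side is assumed, so that one node falls exactly on the nucleus.

import numpy as np

def build_coulomb_pe(n_nodes, domain_side, Z=1.0, cfsf=2.0):
    # Uniform cubic mesh centered on the nucleus; n_nodes is assumed odd.
    h = domain_side / (n_nodes - 1)                      # cell-side
    x = np.linspace(-0.5 * domain_side, 0.5 * domain_side, n_nodes)
    X, Y, Zg = np.meshgrid(x, x, x, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Zg**2)

    V = np.zeros_like(r)
    nonzero = r > 0.0
    V[nonzero] = -Z / r[nonzero]                         # the usual Coulomb PE

    # The always-finite PE at a node right next to the singularity,
    # i.e. at a distance of exactly one cell-side from the nucleus:
    v_next_to_singularity = -Z / h

    # The CFSF policy: the singular node gets this value times the CFSF.
    # cfsf = 2.0 reproduces the "half cell-side" rule exactly.
    i0 = n_nodes // 2
    V[i0, i0, i0] = cfsf * v_next_to_singularity
    return V

Framed this way, refining the mesh changes the neighbouring PE value automatically, and the CFSF alone controls how much deeper the singular node is made relative to its immediate neighbours.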

OK.

I then found that, while using FDM, the eigenvalue solver is very sensitive to even small variations in the value of CFSF.
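
For context, by “the eigenvalue solver” I mean the sort of call sketched below: a sparse symmetric eigen-solver applied to the FDM Hamiltonian, i.e. the 7-point Laplacian plus the finitized PE on the diagonal. This is only a minimal sketch of the kind of call involved, not my actual library code; it uses scipy’s eigsh together with the illustrative build_coulomb_pe() from the sketch above, and the homogeneous Dirichlet condition (\psi = 0 just outside the outermost nodes) is what the truncated Laplacian stencil at the box walls implements.

import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def lowest_states(n_nodes, domain_side, cfsf=2.0, k=2):
    # FDM Hamiltonian H = -(1/2) * Laplacian + V on the uniform cubic mesh.
    h = domain_side / (n_nodes - 1)
    V = build_coulomb_pe(n_nodes, domain_side, cfsf=cfsf)    # from the sketch above

    # 1D three-point second-derivative operator, then the 3D (7-point) Laplacian
    # assembled as a sum of Kronecker products:
    d2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n_nodes, n_nodes)) / h**2
    eye = sp.identity(n_nodes)
    lap = (sp.kron(sp.kron(d2, eye), eye)
           + sp.kron(sp.kron(eye, d2), eye)
           + sp.kron(sp.kron(eye, eye), d2))

    H = -0.5 * lap + sp.diags(V.ravel())
    energies, states = eigsh(H, k=k, which="SA")             # k lowest eigenvalues
    return energies, states

Each “eigenvalue computation call” whose run-times I quote later in this post is, roughly speaking, a call of this kind.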

If you use a CFSF value of 1.8, then it turns out that the PE well does not go down enough in the neighbourhood of the singularity, and therefore, the reported ground-state eigenvalues can easily go to about -0.45, -0.2, or even worse. Refining the mesh doesn’t help—within the domain and mesh sizes which are both practicable on my laptop and relevant for the H and He atom modelling. Note, the analytical solution is: -0.5, exactly.

Conclusion: Using even a slightly lower CFSF spoils the results.

OTOH, if you use a CFSF of 2.2 to 2.5, then the ground-state energy can go lower than the exact value of -0.5. Now, this is a sure-shot indication that your numerical modelling has gone wrong.

In general, with FDM, you would expect that with mesh refinement, the convergence in the energy values would be not just monotonic but also one-sided, with the convergence occurring from “above”, i.e., from values algebraically greater than the exact (negative) energy. In other words, if my understanding of the theory of numerical analysis is correct, then a properly done (meaningful) numerical simulation cannot produce energies below the analytical solution of -0.5.

So, clearly, using a CFSF value even slightly greater than 2.0 is bad for the health of the numerical simulations.

In the earlier trials reported in the last post, I had simply guessed that the value of 2.0 might be good enough for my initial trials. Now it turns out that my computational modeller’s intuition was pretty much on target—or at least, that I was plain lucky! The CFSF value of 2.0 indeed happens to be quite the best value to choose, given the rest of the parameters like the cell-side, the domain size, and the rest of the details of this problem (viz., the strength of the Coulomb singularity, the nature of the Schrodinger equation, the use of uniform and structured meshes, the FDM discretization, etc.).


2. Simple-minded mesh refinement doesn’t produce consistent results:

Suppose you keep the domain size fixed at, say, 20.0, and vary the mesh refinement levels.

Now, your naive expectation might be that as you refine the mesh by increasing the number of nodes per side of the cube, you should get more and more accurate results. That’s our usual experience with problems like diffusion in continua, and even for problems like convection in fluids.

However, the category of the QM problems is different! Here, we have an eigenvalue problem that must be solved with a singular potential field. The naive expectations built on simple problems like the Poisson-Laplace equation or the diffusion equation go for a toss. Harmonic analysis might still apply in some form (frankly I don’t even know if it does!), but the singularity sure plays tricks!

This is an ugly fact, and frankly, I had not foreseen it. But it’s there. I had to keep reminding myself of the different nature of the eigenvalue problem together with the singular fields.

As you refine the mesh more and more, the absolute value of the PE at the node right next to the point of singularity increases without bound! This fact mandates that the finite value we (more or less) arbitrarily chose to use in place of the actually infinite value at the singular point has itself to increase further too.

But, for some reasons not known to me (but which by now do feel vaguely reasonable!), the eigenvalue solver begins to experience difficulties with such increases in the absolute value of the PE at the singularity. Roughly, the trouble begins to happen as the minimum potential energy (at the singular node) goes below -20 or so. In fact, I even found that a highly refined mesh might actually report a positive value for the ground-state energy—no bonding but, on the contrary, a repulsion of the electron!
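
To make the scaling explicit: with the above finitization policy (taking Z = 1 and CFSF = 2.0, and assuming the nearest node sits exactly one cell-side away from the nucleus), the finitized value at the singular node is just -2/h, where h is the cell-side. A back-of-the-envelope check of where the rough -20 threshold gets crossed:

# Depth of the finitized PE well vs. cell-side h, for Z = 1 and CFSF = 2.0.
for h in (0.5, 0.25, 0.2, 0.1, 0.05):
    v_singular = 2.0 * (-1.0 / h)
    print(f"h = {h:5.2f}  ->  finitized PE at the singular node = {v_singular:7.1f}")

# Prints -4, -8, -10, -20, -40 respectively: the trouble zone begins right
# around h = 0.1, which is consistent with the sweet spot of 0.2 to 0.25
# reported later in this post.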


3. Wavefunction fields are important in my new approach, but they don’t always converge to the analytical solution very well!:

With a reasonable level of mesh refinement, the ground-state energy does monotonically approach the exact figure of -0.5. However, I’ve found that convergence in energy is not necessarily accompanied by a good convergence trend in the absolute values of the wavefunction!

In the H-atom, for the ground-state analytical solution, the absolute value of the wavefunction has its maximum right at the nucleus; the wavefunction field forms a cusp at the nucleus, in fact. The analytical value for \psi(x)-max goes like: 0.564189584\dots. (That’s because in the atomic units, the Bohr radius a_0 is chosen to be exactly equal to 1, and so, at r = 0, the ground-state wavefunction for the H-atom becomes \psi(x_{\text{at nucleus}}) = 1/\sqrt{\pi}.)
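
For the record, here is the standard ground-state solution from which this value follows (in the atomic units, with a_0 = 1):

\psi_{100}(r) = \dfrac{1}{\sqrt{\pi a_0^3}} \, e^{-r/a_0}, \qquad \text{so that} \qquad \psi_{100}(r = 0) = \dfrac{1}{\sqrt{\pi}} \approx 0.564189584\dots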

Now the trouble I found was this:

With mesh refinement, even as the energy is nicely converging to something like -0.4938884 (against -0.5), the \psi-max might still be lingering around a lower figure like 0.516189. The \psi-max values converge more slowly, and their convergence shows the opposite trend!

For relatively coarse meshes (i.e. high \Delta x of the FDM mesh), the \psi-max value is actually much higher than the analytical solution; it can be as bad as 3.276834 or 1.393707. As you refine the mesh, these values do begin to fall and approach the analytical solution.

However, with further mesh refinement, the \psi-max values continue to fall! They cross the analytical solution level of 0.564189584, and still continue to fall further! And this behaviour occurs even as the energy result is still approaching the exact solution in the nice-and-expected monotonic manner.

So, the trouble is: Using the right mesh size is actually a trade-off! You have to sacrifice some convergence on the energy number, so as to have a good (reliable) value for the \psi-max measure.

The trouble doesn’t stop there; see the next section.


4. Energies for the excited-states don’t always come out very well:

With appropriately high levels of mesh-refinement, the ground-state energy might be showing good convergence trends. Even the \psi-max values might be good enough (like 0.52 or so). But the energy and/or \psi-max for the first excited state can still easily give trouble.

The energy for the first excited state for the hydrogen atom is, by analytical solution, -0.125, exactly.

The numerical values, when the simulation is working right, could be like -0.11, or even better, say -0.123, or thereabouts. But that happens only when the mesh is of intermediate refinement (the cell-side is neither too small nor too large).

However, with a more refined mesh (smaller cell-sides), the PE well can remain more or less rightly shaped for the ground-state energy, but it can still become too deep for the first-excited state energy! The first excited state energy can suddenly get degraded to a value like -0.04471003.

Indeed, there seems to be some kind of a numerical compensation going on between the \psi-max values and the energy values, especially for the first-excited-state energies. The ground-state energies remain much better, in relative terms. (If the mesh refinement is very high, even the ground-state energy goes off the track to something like -0.2692952 or even positive values. That’s what I meant by “appropriately” high levels of mesh refinement.)

I didn’t compare the numerical results with the analytical solutions for energies or \psi-max values for second-excited states or higher. Computation of the bonding energy makes reference only to the ground state, and so, I stopped my exploration of this side of the FDM + eigenvalue solver behaviour at this stage.


5. Atomic sizes reported by the numerical modeling show very good trends:

Another important consideration in my new approach has to do with the atomic radius of the atoms being modelled (hydrogen and helium).

After optimizing the mesh refinement (i.e., effectively, the cell-side), I conducted a series of numerical trials using different domain sizes (from 7.5 through 40.0), and implemented a rough-and-ready code to estimate the following measure:

The side of the nucleus-centered sub-cube in which roughly 95% (or 99%) of the probability cloud is contained.

This size can be taken as a good measure for the atomic size.

In the above working definition, I say roughly 95%, because I didn’t care to interpolate the wavefunction fields in between their nodal values. What this means is that the side of the sub-cube effectively changes only in integer steps (of the cell-side), and therefore, the percentage of the probability cloud contained in it may not be exactly 95%; it could be 97.5% for one domain size and 95.3% for another, just to pick some numbers.

But even while using this rough and ready measure (and implementation), I found that the results were quite meaningfully consistent.
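
In case you want to reproduce this measure, here is a minimal sketch of the kind of routine involved. The function name and the crude cell-sum normalization are my illustrative choices (not the actual code); a uniform cubic mesh with an odd number of nodes per side is assumed, so that one node sits exactly at the nucleus.

import numpy as np

def subcube_side_for_fraction(psi, h, fraction=0.95):
    # psi: 3D array of wavefunction values on the uniform cubic mesh;
    # h:   cell-side of the mesh.
    prob = np.abs(psi)**2
    total = prob.sum()                       # crude cell-sum normalization
    n = psi.shape[0]
    i0 = n // 2                              # index of the nucleus-centered node

    # Grow the nucleus-centered sub-cube in whole-cell (integer) steps,
    # with no interpolation to internodal values:
    for k in range(1, i0 + 1):
        sub = prob[i0 - k:i0 + k + 1, i0 - k:i0 + k + 1, i0 - k:i0 + k + 1]
        if sub.sum() / total >= fraction:
            return 2 * k * h                 # side of the sub-cube
    return (n - 1) * h                       # fallback: the whole domain

Because the sub-cube can grow only in whole-cell steps, the fraction it actually encloses overshoots the 95% target somewhat, which is exactly the roughness described above.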

But why conduct these trials?

Well, realize that (1) the simulation box has a finite size, and (2) the Dirichlet conditions are being imposed at all the boundary nodes. Given these two constraints, the solution is going to show boundary-/edge-effects, i.e., the solution is going to depend on the domain size.

Now, in my approach, the spread of the probability cloud enters the calculations in a crucial manner. Numerically “extracting” the size of the simulated atom was, therefore, an important part of optimizing the simulations.

The expected behaviour of the above-mentioned “size effect” was that as the domain size increases, the calculated atomic size, too, should increase. The question was: Were these differences in the numerically determined sizes important enough? Did they vary too much? If yes, by how much? The following is what I found:

First, I fixed the domain size (cube side) at 10.0, and varied the mesh refinement (from roughly 41 nodes per side to 121 and 131). I found that the calculated atomic sizes for the hydrogen atom varied but in a relatively small range—which was a big and happy surprise to me. The calculated size went from 5.60 while using a coarse mesh (requiring eigenvalue computation time of about 10 seconds) to a value like 5.25 for an intermediate refinement of the mesh (exe. time 2 min. 32 seconds i.e. 152 seconds), to 5.23 for as fine a mesh as my machine can handle (131 \times 131 \times 131, which required an exe. time of about 20 minutes i.e. 1200 seconds, for each eigenvalue computation call). Remember, all these results were for a domain size of 10.0.

Next, I changed the domain cube side to 15.0, and repeated the trials, for various levels of mesh refinements. Then, ditto, for the domain side of 20.0 and 40.0.

Collecting the results together:

  • Domain side 10.0: coarse 5.60, intermediate 5.25, fine 5.23
  • Domain side 15.0: coarse 6.0, intermediate 5.62
  • Domain side 20.0: coarse 6.40, intermediate 5.50, fine 5.67
  • Domain side 40.0: coarse 4.0, intermediate 5.5, fine 6.0, very fine 6.15

You might be expecting very clear-cut trends, but that’s not the case here. However, remember: due to the trickiness of the eigenvalue solver in the presence of a “singular” PE well, not to mention the roughness of the size-estimation procedure (only integer-sized sub-cubes considered, and no interpolations of \psi to internodal values), a monotonic sort of behaviour is simply not to be expected here.

Indeed, if you ask me, these are pretty good trends, even if they are only for the hydrogen atom.

Note, for the helium atom, my new approach would require making eigenvalue computation calls thousands of times. So, at least on this count of atomic radius computation, the fact that even the coarse or mid-level mesh refinement results didn’t vary much (they were in the range of 5.25 to 5.6) was very good. Meaning, I don’t have to sacrifice a lot of accuracy due to this one factor taken by itself.

For comparison, the atomic size (diameter) for the hydrogen atom as given in the literature (Wiki), when translated into atomic units, comes out variously as: (1) 0.94486306, using some “empirical” curve-fitting to some indirect properties of gases; (2) 4.5353427, using the van der Waals criterion; and (3) 2.0031097, using “calculations” (whose basis or criteria I do not know in detail).

Realize, the van der Waals measure is the closest to the criterion used by me above. Also, given the numerical approximations in FDM, it is only to be expected that: just as the FDM ground-state energy values should come out algebraically greater (they do, say -0.49 vs. the exact datum of -0.5), and the FDM \psi-max measure should come out smaller (it does, say 0.52 vs. the analytical solution of \approx 0.56), so also, for the same reasons, the rough-and-ready estimated atomic size should come out greater (it does, say 5.25 to 5.67 as the domain size increases from 10.0 to 40.0, the van der Waals value being 4.54).

Inline update on 2021.02.02 19:26 IST: After the publication of this post, I compared the above-mentioned results with the analytical solution. I now find that the sizes of the sub-cubes found using FDM and those found using the analytical solution for the hydrogen atom come out as identical! This is very happy news. In other words, making comparisons with the van der Waals size and the other measure was not so relevant anyway; I should have compared the atomic sizes (found using the sub-cubes method) with the best datum, which is the analytical solution! To put this finding in some perspective, realize that the FDM-computed wavefunctions still do differ a good deal from the analytical solution, but the volume integral for an easy measure like 95% does turn out to be the same. The following provisos apply to this finding: The good match between the analytical solution and the FDM solution holds only (i) for the range of domain sizes considered here (roughly, 10 to 40), not for smaller box sizes (though the two solutions would match even better for bigger boxes), and (ii) when using the Simpson procedure for numerically evaluating the volume integrals. I might as well also note that the Simpson procedure is, relatively speaking, pretty crude. As the sizes of the sub-cubes go on increasing, the Simpson procedure can give volume integrals in excess of 1.0, for both the FDM and the analytical solutions. Inline update over.
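
For completeness, here is the kind of Simpson-rule volume integration I have in mind. Again, this is only an illustrative sketch under my own naming assumptions (probability_in_subcube(), psi_analytical()), and not the actual code; it applies scipy’s composite Simpson routine successively along the three axes.

import numpy as np
from scipy.integrate import simpson

def probability_in_subcube(psi, x, half_side):
    # psi: 3D array of wavefunction values on the uniform cubic mesh;
    # x:   1D array of nodal coordinates (the same along all three axes),
    #      centered on the nucleus;
    # half_side: half the side of the nucleus-centered sub-cube.
    prob = np.abs(psi)**2
    keep = np.abs(x) <= half_side + 1e-12          # nodes lying inside the sub-cube
    xs = x[keep]
    sub = prob[np.ix_(keep, keep, keep)]
    # Integrate out z, then y, then x, each time with the 1D Simpson rule:
    iz = simpson(sub, x=xs, axis=2)
    iy = simpson(iz, x=xs, axis=1)
    return simpson(iy, x=xs, axis=0)

# The analytical ground-state wavefunction of the H-atom (atomic units),
# for the side-by-side comparison mentioned above:
def psi_analytical(X, Y, Z):
    r = np.sqrt(X**2 + Y**2 + Z**2)
    return np.exp(-r) / np.sqrt(np.pi)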

These results are important because now I can safely use even a small sized domain like a 10.0-side cube, which implies that I can use a relatively crude mesh of just 51 nodes per side too—which means a sufficiently small run-time for each eigenvalue function call. Even then, I would still remain within a fairly good range on all the important parameters.

Of course, it is already known with certainty that the accuracy of the bonding energy for the helium atom is thereby going to get affected adversely. The accuracy will suffer, but the numerical results would be based on a sweet-zone of all the relevant numerical parameters, as validated against the hydrogen atom. So the numerical results, even for the helium atom, should have greater reliability.

Conformance to expected convergence behaviour, stability, and reliability are far more important considerations in numerical work of this nature. As to sheer accuracy itself, see the next section too.


6. Putting the above results in perspective:

All in all, for the convergence behaviour of this problem (an eigenvalue-eigenvector problem with singular potentials), there are no easy answers. Not even for just the hydrogen atom. There are trade-offs to be made.

However, for the computation of bonding energy using my new approach, it’s OK even if a good trade-off can be reached only for the ground state.

On this count, my recent numerical experimentation seems to suggest that using a mesh cell-side of 0.2 or 0.25 should give the most consistent results across a range of physical domain sizes (from 7.5 through 30.0). The atomic sizes extracted from the simulations also show good behaviour.

Yes, all these results are only for the hydrogen atom. But it was important that I understand the solver behaviour well enough. It’s this understanding which will come in handy while optimizing for the helium atom—which will be my next step on the simulation side.

The trends for the hydrogen atom would be used in judging the results for the bonding energy for the helium atom.


7. The discussed “optimization” of the numerical parameters is strictly for my laptop:

Notice, if I were employed in a Western university or even at an IIT (or in an Indian government/private research lab), I would have easy access to supercomputers. In that case, much of this study wouldn’t be so relevant.

The studies regarding the atomic size determination, in particular, would still be necessary, but the results are quite stable there. And it is these results which tell me that, had I had access to powerful computational resources, I could have used larger boxes (which would minimize the edge-/size-effect due to the finite size of the box), and I could have used much, much bigger meshes, while still maintaining the all-important mesh cell-side parameter near the sweet spot of about 0.20 to 0.25. So, yes, optimization would still be required. But I would be doing it at a different level, and much faster. And, with much better accuracy levels to report for the helium atom calculations.

Inline update on 2021.02.02 19:36 IST: Addendum: I didn’t write this part very well, and a misleading statement crept in. The point is this: If my computational resources allowed me to use very big meshes, then I would also explore cell-sides that are smaller than the sweet-spot of 0.20 to 0.25. I’ve been having a hunch that the eigenvalue solver would still not show the kind of degeneracy due to a very deep PE well, provided that the physical domain size also were to be made much bigger. In short, if very big meshes are permissible, then there is a possibility that another sweet-spot at smaller cell-sides could be reached too. There is nothing physical about the 0.20 to 0.25 range alone; that’s the point. Inline update over.

The specifics of the studies mentioned in this post were largely chosen keeping in mind the constraint of working within the limits of my laptop.

Whatever accuracy levels I eventually end up getting for the helium atom using my laptop, I’ll be using them not just for my planned document but also for my very first arXiv-/journal-paper. The reader of the paper would then have to make a mental note that my machine could support a mesh size of only 131 nodes per side at its highest end. For FDM computations, that still is a very crude mesh.

And, indeed, for the reasons given above, I would in fact be reporting the helium atom results for meshes of between 41 and 81 nodes per side of the cube, not even 131 nodes. All the rest of the parameter choices were made keeping this limitation in view.


8. “When do you plan to ship the code?”

I should be uploading the code eventually. It may not be possible to upload the “client-side” scripts for all the trials reported here (simply because once you upload some code, the responsibility to maintain it comes too!). However, exactly the same “server”- or “backend”-side code will surely be distributed, in its entirety. I will also give some indication of the kind of code-snippets I used in order to implement the above-mentioned studies. So, all in all, it should be possible for you to conduct the same/similar trials and verify the above-given trends.

I plan to clean up and revise the code for the hydrogen atom a bit further, finalize it, and upload it to my GitHub account within, say, a week’s time. The cleaned-up and revised version of the helium-atom code will take much longer, maybe 3–4 weeks. But notice, the helium-atom code would be making calls to exactly the same library as the hydrogen-atom code.

All in all, you should have a fairly good amount of time to go through the code for the 3D boxes (something which I have never uploaded so far), run it, run the above kind of studies on the solid grounds of the hydrogen atom, and perhaps even spot bugs or suggest better alternatives to me. The code for the helium atom would arrive by the time you run through this gamut of activities.

So, hold on just a while, maybe just a week or even less, for the first code to be put on GitHub.


On another note, I’ve almost completed compiling a document on the various sets of statements of the postulates of QM. I should be uploading it soon too.


OK, so look for an announcement here and on my Twitter thread, regarding the shipping of the basic code library and the user-script for the hydrogen atom, say, within a week’s time. (And remember, all of this comes to you without any charge! (For that matter, I am not even in any day-job.))


A song I like:

(Hindi) दिल कहे रुक जा रे रुक जा (“dil kahe ruk jaa re ruk jaa”)
Lyrics: Sahir Ludhiyanvi
Music: Laxmikant-Pyarelal
Singer: Mohammed Rafi

[Another favourite right from my high-school days… A good quality audio is here [^]. Many would like the video too. A good quality video is here [^], but the aspect-ratio has gone awry, as usual! ]


History:
— 2020.01.30 17:09 IST: First published.
— 2021.02.02 20:04 IST: Inline updates to sections 5. and 7 completed. Also corrected a couple of typos and streamlined just a few sentences. Now leaving this post in whatever shape it is in.

 

Micro-level water-resources engineering—10: A bridge to end droughts?…

Let me ask you a simple question: Why are bridges at all necessary? I mean to refer to the bridges that get built on rivers. …Why do you at all have to build them?

Your possible answer might be this: Bridges are built on rivers primarily because there is water in the rivers, and the presence of the water body makes it impossible to continue driving across the river. Right? OK. Good.

In India, “kachchaa” (untarred) roads often exist on the sides of the main road or highway as we approach a bridge on a river. These side-roads usually aren’t built according to any plan; they simply result from the tracks left by the bullock-carts plying through the fields on both sides of the road. People from nearby villages often find such side roads very convenient for their purposes, including accessing the river. The sand-smugglers too find such approach-roads very convenient for their purposes. The same roads are also found convenient by journalists and NGO workers who wish to visit and photograph the same river-bed as it turns totally dry, for quite some time before summer even approaches.

Somewhere in there lies a certain contradiction—a technical contradiction, I should add.

If there were to be no water, ever, in these rivers, then no bridges would at all be necessary. Yet, these bridges are there. That’s because, in monsoon, it rains so much that these rivers begin to flow with full capacity; they even overflow and cause extensive flooding in the adjacent areas. So, naturally, bridges have to be built.

Yet, come even just late winter time, and the river-bed is already on its way to going completely dry. The bridge might as well not have been there.

Thus, the bridges, it would seem, are both necessary and not necessary in India. That’s the contradiction I was talking about.


But why not turn this entire situation to your advantage, and use the very site of a bridge for building a small check-dam?

After all, the very fact that there is a bridge means:

there is enough water flowing through that river, at least during monsoons. We only have to find a way to use it.


Here are some of the advantages of building check-dams near a bridge—or maybe even directly underneath its span:

  • The patterns of water-flow across the pillars of the bridge, and even the pattern of flooding near the site of the bridge, have become well known, if only because there is better access to this site (as compared to other potential sites for a check-dam)—because of the existence of the main road.
  • There is already a built structure in place. This means that the nature of the rocks and of the soil at the site is already well studied. You don’t have to conduct costly geological surveys afresh; you only have to refer to the ready-made past reports.
  • Another implication of there being a pre-existing structure is this: The nearby land has already been acquired. There is no cost to be incurred in land acquisition, nor the costs and other concerns involved in relocating people.
  • Columns/pillars of the bridge already exist, and so, the cost of building the wall of a check-dam can come down at least a bit—especially if the wall is constructed right underneath the bridge.
  • Many times, there also is a lower-level cause-way, or an older and abandoned bridge lying nearby, which is no longer used. It can be dismantled so that the stones used in its construction can be recycled for building the wall of the check-dam. It’s another potential reduction in cost (including in the material transportation cost).
  • The existence of a bridge at a site can often mean that there is likely to be a significant population on either side of the river—a population which had demanded that the bridge be built in the first place. Implication: If a water body comes to exist at this same site, then the water doesn’t have to be transported over long distances, because a definite demand would exist locally. Even if not, if the check-dam is equipped with gates, then the stored water can be supplied to distant locations downstream using the same river—you don’t have to build canals (starting from the acquisition of land for them, and further costs and concerns down the way).
  • Easy access to transportation would be good for side-businesses like fisheries, even for building recreational sites. (Think agro-tourism, boating, etc.)

Of course, there are certain important points of caution or concern, too. These must be considered on a case-by-case basis:

  • The local flow pattern would get adversely affected, which can prove to be dangerous for the bridge itself.
  • There is a likelihood of greater flooding occurring in the nearby locations—esp. upstream! A blocked river swells easily, and does not drain as rapidly as it otherwise would—the causeway or the spillway can easily turn out to be too small, especially in the case of small dams or check-dams.
  • The height of the bridge itself may be good, but the river may still turn out to be a little too shallow at a given location for a check-dam to be technically feasible there. Given the importance of evaporation losses, the site still may not turn out to be suitable for building a check-dam. (For evaporation losses, see my last post in this series [^].)

But overall, I think that the idea is attractive enough that it should be pursued very seriously, especially by students and faculty of engineering colleges.


We all know that there has been a great proliferation of engineering colleges all over the country. The growth is no longer limited to only big cities; many of them are situated in very rural areas too.

When a problem to be studied touches on the lives of people, say a student or two, it becomes easy for them to turn serious about it. Speaking from my own personal experience, I can say that BE project-reports from even relatively lower-quality engineering colleges have been surprisingly (unexpectedly) good, when two factors were present:

(i) When the project topic itself dealt with some issue which is close to the actual life of the students and the faculty, to their actual concerns.

For instance, consider the topic of studies of design of check-dams and farm-ponds, and their effectiveness.

During my stint as a professor, I found that rural students consistently (across batches) report the actual data (i.e., not a copy-paste job).

In fact, even if they were not otherwise very bright academically, they did show unexpectedly better observation abilities. The observation tables in their reports would not fail to show the more rapidly falling water levels in check-dams. Invariably, they had backed the data in the tables with photos of the almost dried-up check-dams too.

Yes, the photos were often snapped unprofessionally—invariably, using their cell-phones. (Their parked bikes would be easily visible in the photos, and sometimes the Sun too.) No, these rural students typically didn’t use photo-quality glossy paper for their printouts—which was very unlike the students from the big cities. The rural students typically used only ordinary bond-paper even for taking color printouts of their photos (invariably using lower-resolution ink-jet printers).

But still, typically, the set of photos would unambiguously bring out the fact of the multiple field visits they had made, as per their teacher’s instructions. The background shrubs showed seasonal variations, for instance; so did the falling water levels, and the marks of the salt on the dam walls.

Invariably, the photos only corroborated—and not even once contradicted—the numbers or trends reported in their observation tables.

This gives me the hope that one relatively easy way to identify suitable bridges would be to rely on students like these.

(ii) The second factor (for good, reliable field studies) was: the presence of a teacher who guides the students right.

No, he doesn’t have to have a PhD, or even an ME for that matter. But he has to know for himself, and pass on to his students, the value of actual, direct and unadulterated observations, the value of pursuing a goal sincerely over the course of 6–8 months—and the fun one can have in doing that.


OK, a bit of a digression all this was. But the point I wanted to come to was academics, anyway.

I think academic institutions should take the lead in undertaking studies of the feasibility of converting a bridge site into a check-dam site. Each academic team should pick some actual location and study it thoroughly from different viewpoints, including (but not limited to):

  • CFD analysis for predicting the altered water-flow and flooding patterns (with the water flow possibly designed to occur over the main wall itself, i.e. without a side-weir), especially for a dam which is situated right under a bridge;
  • FEM analysis for strength and durability of the structures;
  • Total costs that will be incurred; total savings due to the site (near a bridge vs. far away from it at some location that is not easy to access); an overall cost–benefit analysis; etc.

The initiative for such studies could possibly begin from IITs or other premier engineering colleges, and then, via some research collaboration schemes, it could get spread over to other engineering colleges. Eventually, this kind of a research—a set of original studies—could come to take hold in the rural engineering colleges, too. … Hopefully.


Should the government agencies like PWD, Irrigation Dept., or “private,” American concerns like the Engineers India Limited, etc., get involved?

Here, I think that the above-mentioned academic teams certainly are going to benefit from interactions with certain select institutes like (speaking of Maharashtra) CDO Nasik, and CWPRS Pune.

However, when it comes to the PWD etc. proper, I do think that they operate rather in a direct project-execution mode, and not so much in a “speculative” research mode. Plus, their thinking still remains grooved in the older folds, such as: either have multi-purpose large dams, or have no dams at all!, etc.

But, yes, CWPRS Pune has simulation facilities (both with physical scale-models, and also via computational simulation methods), and CDO Nasik has not only design expertise but also data on all the bridges in the state. (CDO is the centralized design services organization that is responsible for engineering designs of all the dams, canals, bridges and similar structures built by the state government in Maharashtra.) The cooperation of these two organizations would therefore be important.


In the meanwhile, if you are not an engineering student or a faculty member, but are still enthusiastic about this topic, then you can do one thing.

The next time you run into a site that fulfills the following criteria, go ahead, discuss it with people from the nearby villages, take a good set of snaps of the site from all sides, write a very small and informal description including the location details, and send it over by email to me. I will then see what best can be done to take it further. (The fact that there were so few engineering colleges in our times has one advantage: Many of the engineers today in responsible positions come from the COEP network.)

The absolutely essential criteria that your site should fulfill are the following two:

  1. The river gorge must be at least 25 feet deep at the candidate location.
  2. The underside of the bridge-girder should itself be at least 35 feet above the ground, or higher (so that there is, at least prima facie, enough of a clearance for the flood water to safely pass through the bridge). But please note, this figure is purely my hunch, as of now. I may come back and revise this figure after discussing the matter with some researchers/IIT professors/experienced engineers. For visualization, remember: 10 feet means one storey, or the height of a passenger bus. Thus, the road should lie some 4 storeys above the river-bed. Only then can you overcome evaporation losses and also have enough clearance for flood water to safely pass through without doing any damage to the bridge or the dam.

Further, the preferred criteria (in site selection) would be these:

  1. The upstream of the site should not have too steep a gradient—else, the storage volume might turn out to be too small, or, severe flooding might occur upstream of the check-dam! For the same reason, avoid sites with water-falls nearby (within 1–2 km) upstream.
  2. The site should preferably be situated in a drought-prone region.
  3. Preferably, there should be an older, abandoned bridge of a much lower height (or a cause-way) parallel to the new bridge. Though not absolutely necessary, I do include this factor in searches for the initial candidate locations, because it indirectly tells us that enough water flows through the river during the monsoons that the cause-way wouldn’t be enough (it would get submerged), and that, therefore, a proper bridge (one tall enough) had to be built. This factor thus indirectly tells us that there is enough rainfall in the catchment area that the check-dam would surely get filled to its design capacity—so that one wouldn’t have to do any detailed rainfall assessment for the catchment region and all.

So, if you can spot such a site, please do pursue it a bit further, and then, sure do drop me a line. I will at least look into what all can be done.


But, yes, in India, bridges do get built in the perennially drought-prone regions too. After all, when the monsoon arrives, there is flooding even in the drought-prone regions. It’s just that we haven’t applied enough engineering to convert the floods into useful volumes of stored water.

… For a pertinent example, see this YouTube video of a bridge getting washed away near Latur in the Marathwada region of Maharashtra, in September 2016 [^]. Yes, Latur is the same city where even drinking water had to be supplied using trains, starting from early April 2016 [^].

So, we supplied water by train to Latur in April 2016. But then, in September 2016 (i.e. the very next monsoon), their local rivers swelled so much that an apparently well-built bridge got washed away in the floods. … Turns out that the caution I advised above, concerning simulating flooding, wasn’t out of place. … But coming back to drought-prone Latur: though I didn’t check it, I feel sure that come April 2017, it was all back to a drought in Latur—once again. Fatigue!


PS: In fact, though this idea (of building check-dams near bridges) had occurred to me several years ago, I think I never wrote about it, primarily because I wasn’t sure whether it was practical enough to be deployed in a relatively flat region like Marathwada, where the drought is most acute and suitable sites for dams are not so easy to come by. (See my earlier posts covering the Ujani and Jayakawadi dams.) However, as it so happened, I was somewhat surprised to find someone trying to advocate this idea within the government a year or so ago. … I vaguely remember the reports in the local Marathi newspapers in Pune, though I can’t off-hand give you the links.

On second thoughts, I googled for “check dams near bridges” today. Here are a couple of the links this search throws up as of today: [^] and [^].

… Also, make sure to check the “images” tab produced by this Google search too. … As expected, the government agencies have been dumb enough to throw at least some money at at least a few shallow check-dams too (not good for storage due to evaporation losses) that were erected seemingly in the regions of hard rocks and all (generally, not so good for seepage and ground-water recharge either). As just one example, see here [^]. I am sure there are many, many other similar sites in many other states too. Government dumb-ness is government dumb-ness. It is not constrained by this government or that government. It is global in its reach—it’s even universal!

And that’s another reason why I insist on private initiative, and on the involvement of local engineering college students and faculty members. They can be motivated when the matter is close to their concerns, their life, and so, with their involvement, the results can turn out to be very beneficial. If nothing else, a project experience like this would help the students become better engineers—less wasteful ones. That too is such an enormous benefit that we could even be aiming for it separately. Here, it can come as a part of the same project.


Anyway, to close this post: Be on the lookout for good potential sites, and feel free to get in touch with me for further discussions on any technical aspects related to this issue. Take care, and bye for now…


A song I like:

(Hindi) “chori chori jab nazare mili…”
Lyrics: Rahat Indori
Music: Anu Malik
Singers: Kumar Sanu, Sanjeevani

[A song with a very fresh feel. Can’t believe it came from Anu Malik. (But, somehow, the usual plagiarism reporting sites don’t include this song! Is it really all that original? May be…)]

 

 

I need a [very well paying] job in data science. Now.

I need a very well paying job in data science. Now. In Pune, India.


 



Yes, I was visiting Kota for some official work when, at the railway station of the [back then, a simple little] town, on a “whim” (borne out of a sense of curiosity, having heard the author’s name), I bought it. That was on 14th July 1987. The stamp of the A. H. Wheeler and Company (Rupa Publications), so well known to us all (including IITians and IIM graduates) back then, stands in mute testimony to the same—the price, and the fact that this little book was imported by them. As to bearing testimony to the event, so do my signature and the noting of the date. (I would ordinarily have no motivation to note a fake date, what do you say?) Also notable is the price of the book: Rs. 59/-. Bought out of Rs. 1800/- per month, if I remember those days right (and plainly because I was an M. Tech. from (one of the then five) IITs. My juniors from my own UG college, COEP, would have had to start with a Rs. 1200/- or Rs. 1400/- package, and rise to my level in about 3 years, back then.)

Try to convince my back-then self that I would be jobless today.

No, really. Do that.

And see if I don’t call you names. Right here.

Americans!


A song I like:

(English, pop-song): “Another town, another train…”
Band (i.e. music, composition, lyrics, etc., to the best of my knowledge): ABBA

Bye for now.


And develop a habit of reading—and understanding—books. That’s important, as my example serves to illustrate. Whether I go jobful or jobless, it’s a good habit to cultivate.

But then, Americans have grown so insensitive to the authentic pains of others—including real works by others. The said attitude must reflect inwards too. The emphasis is on the word “authentic.” If a man doesn’t care for another honest, really very hard-working man in pain, but spends his intellect and time in finding rationalizations to enhance his own prestige and money-obtaining powers, then, by the law of the integrative mechanism of consciousness that is the law of “karma,” the same thing must haunt him back—whether he be a Republican, or a Democrat. (Just a familiarity with the word “karma” is not enough to escape its bad—or good—effects. What matters are actions (“karma”s), ultimately. But given the fact that man has intellect, these are helped, not obscured, by it.)

Go, convince Americans to give me a good, well-paying job, in data science, and in Pune—the one that matches my one-sentence profile (mentioned here) and my temperament. As to the latter, simple it is, to put it in one sentence: “When the time calls for it, I am known to call a spade a spade.”

And, I can call Americans (and JPBTIs) exactly what they have earned.

But the more important paragraph was the second in this section. Starting from “But then, Americans have grown so insensitive to the authentic… .”

Instead of “pains,” you could even add a value / virtue. The statement would hold.