Papers must fall out…

Over the past couple of weeks or so, I’ve been going over SPH (smoothed particle hydrodynamics).

I once again went through the beginning references noted in my earlier post, here [^]. However, instead of rewriting my notes (which I lost in the last HDD crash), this time round, I went straight to programming. … In this post, let me recap what all I did.


First, I went through the great post “Why my fluids don’t flow” [^] by Tom Madams. … His blog has the title: “I am doing it wrong,” with the sub-text: “most of the time.” [Me too, Tom, me too!] This post gave a listing of what looked like a fully working C++ code. Seeing this code listing (even if the videos are no longer accessible), I had to try it.

So, I installed the Eclipse CDT. [Remember my HDD crash?—the OS on the new HDD had no IDEs/C++ compilers installed till now; I had thus far installed only Python and PyCharm]. I also installed MinGW, freeglut and the Java JRE, though not all of them in the correct order. [Remember, I too do it wrong, most of the time.] I then created a “Hello World” project, and copy-pasted Tom’s code.

The program compiled well. [If you are going to try Tom’s code in Eclipse CDT + MinGW on Windows, the only issue you would now (in 2016) run into would be in the Project Settings, both in the compiler and linker settings parts, and both for OpenGL and freeglut. The latest versions of Eclipse & MinGW have undergone changes and don’t work precisely as explained even in the most helpful Web resources about this combination. … It’s not a big deal, but it’s not exactly what the slightly out-of-date online resources on this topic continue telling you either. … The options for the linker are a bit trickier to get than those for the compiler; the options for freeglut certainly are trickier to get than those for OpenGL. … If you have problems with this combination (Eclipse + MinGW on Windows 7 64-bit, with OpenGL and freeglut), then drop me a line and I will help you out.]

Tom’s program not only compiled well, but it also worked beautifully. Quite naturally, I had to change something about it.

So I removed his call to glDrawArrays(), and replaced the related code with the even older glBegin(GL_POINTS), glVertex2d(), glEnd() sort of code. As I had anticipated, there indeed was no noticeable performance difference. If the fluid in the original code required something like a minute (of the computer’s physical time) to settle down to a certain quiescent state, then so did the one with the oldest-style usage of OpenGL. The FPS in the two cases were identical in almost all of the release runs, and they differed by less than 5–7% in the debug runs as well, when the trials were conducted on absolutely fresh cold-starts (i.e. with no ready-to-access memory pages in either physical or virtual memory).
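[For the curious: what I mean by the oldest-style usage is, schematically, something like the following. This is only a minimal sketch, not Tom’s actual code; the Particle struct and the function name here are purely illustrative.]

```cpp
// A minimal sketch of the oldest-style, immediate-mode point rendering
// (illustrative names only; not Tom's actual code).
#include <vector>
#include <GL/glut.h>   // freeglut header, as set up for Eclipse CDT + MinGW

struct Particle { double x, y; };

void drawParticles(const std::vector<Particle>& particles)
{
    glBegin(GL_POINTS);                    // one vertex per particle
    for (const Particle& p : particles)
        glVertex2d(p.x, p.y);              // per-vertex call, pre-VBO style
    glEnd();
}
```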

Happy with my change, I then came to study Tom’s SPH code proper. I didn’t like the emitters. True to my core engineering background, what I wanted to simulate was the dam break. That means all the 3000 particles would be present in the system right from the word go, which also makes the performance slower throughout, including in the beginning. But Tom’s code was too tied up with the emitters. True to my software engineering background, rather than search for and remove the emitters-related portions and waste my time fixing the resulting compiler errors, I felt like writing my own code. [Which true programmer doesn’t?]

So I did that, writing only stubs for the functions involving the calculations of the kernels and the accelerations. … I, however, did implement the grid-based nearest-neighbor search. Due to laziness, I simply reused the STL lists, rather than implementing the more basic (and perhaps slightly more efficient) “p->next” idiom.
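[Concretely, the grid I mean is just this sort of thing. A minimal sketch, with illustrative names; my actual code differs in details such as how the particle positions are stored.]

```cpp
// A minimal sketch of a uniform-grid neighbour search using STL lists
// (illustrative names; a square 2-D domain of side L and cell size h assumed).
#include <algorithm>
#include <cmath>
#include <list>
#include <vector>

struct NeighbourGrid {
    double cellSize;                        // typically the smoothing length h
    int    nx, ny;                          // cells per side
    std::vector<std::list<int>> cells;      // particle indices, one list per cell

    NeighbourGrid(double L, double h)
        : cellSize(h), nx(int(std::ceil(L / h))), ny(nx), cells(nx * ny) {}

    int cellOf(double x, double y) const {
        int i = std::min(nx - 1, std::max(0, int(x / cellSize)));
        int j = std::min(ny - 1, std::max(0, int(y / cellSize)));
        return j * nx + i;
    }

    void insert(int id, double x, double y) { cells[cellOf(x, y)].push_back(id); }

    // Candidates from the particle's own cell and the 8 surrounding cells;
    // the caller still applies the exact distance test r < h.
    std::vector<int> candidates(double x, double y) const {
        std::vector<int> result;
        int ci = std::min(nx - 1, std::max(0, int(x / cellSize)));
        int cj = std::min(ny - 1, std::max(0, int(y / cellSize)));
        for (int dj = -1; dj <= 1; ++dj)
            for (int di = -1; di <= 1; ++di) {
                int i = ci + di, j = cj + dj;
                if (i < 0 || i >= nx || j < 0 || j >= ny) continue;
                for (int id : cells[j * nx + i]) result.push_back(id);
            }
        return result;
    }
};
```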

Then I once again came back to Tom’s code, and began looking more carefully at his SPH-specific computations.

What I now didn’t like was the variables defined for the near-density and the near-pressure. These quantities didn’t fit very well into my preconceived notions of how a decent SPH code ought to look.

So, I decided to deprove [which word is defined as an antonym of “improve”] this part, by taking this 2010 code from its 2007 (Becker et al.) theoretical basis to a 2003 basis (Müller et al., Eurographics).
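[For reference, and only as far as I understand the 2003 paper, the formulation I fell back on amounts to the plain density summation plus a simple equation of state; here k is the gas-stiffness constant and ρ₀ the rest density:]

```latex
\rho_i = \sum_j m_j \, W(\mathbf{r}_i - \mathbf{r}_j, h),
\qquad
p_i = k \,(\rho_i - \rho_0)
```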

Further following my preconceived notions, I also decided to keep the values of the physical constants (density, gas stiffness, viscosity, surface tension) the same as those for actual water.

The code, of course, wouldn’t work. The fluid would explode as if it were a gas, not water.

I then turned my learner’s attention to David Bindel’s code (see the “Resources” section at the bottom of his page here [^]).

Visiting Bindel’s pages once again, this time round, I noticed that he had apparently written this code only as background material for a (mere) course-assignment! It was not even an MS thesis! And here I was, still struggling with SPH, even after having spent something like two weeks of full-time effort on it! [The difference was caused by the use of the realistic physical constants, of course. But I didn’t want to simply copy-paste Tom’s or Bindel’s parameter values; I wanted to understand where they came from—what kind of physical and computational contexts made those specific values give reasonable results.]

I of course liked some of the aspects of Bindel’s code better—e.g. kernels—and so, I happily changed my code here and there to incorporate them.

But I didn’t want to follow Bindel’s normalize_mass routine. Two reasons: (i) Once again according to my preconceived notions, I wanted to first set aside a sub-region of the overall domain for the fluid; then decide with how many particles to populate it, and what lattice arrangement to follow (square? body-centered cubic? hexagonal close-packed?); based on that, calculate each particle’s radius; then compute the volume of each particle; and only then set its mass using the gross physical density of the material from which it is composed (using the volume the particle would occupy if it were to be isolated from all others, as an intermediate step). The mass of a particle, thus computed (and not assumed), would remain fixed for all the time-steps in the program. (ii) I eventually wanted a multi-phase dam-break, and so wasn’t going to assume a global constant for the mass. Naturally, my code wouldn’t be able to blindly follow Bindel on all counts.
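[A minimal sketch of what I mean by computing, rather than assuming, the mass; a 2-D square lattice is assumed here, and the names are only illustrative:]

```cpp
// A minimal sketch: per-particle mass computed (not assumed) from the lattice
// spacing and the bulk density of the phase (illustrative names, 2-D square
// lattice assumed).
double particleMassSquareLattice2D(double dx, double rho0)
{
    // dx   : lattice spacing, fixed by the fluid sub-region and particle count
    // rho0 : gross physical density, e.g. about 1000 kg/m^3 for water
    const double volumePerParticle = dx * dx;   // area in 2-D; dx*dx*dx in 3-D
    return rho0 * volumePerParticle;            // held fixed for all time-steps
}
```

[For a body-centered cubic or hexagonal close-packed arrangement, only the per-particle volume factor changes; the mass still follows from the bulk density and is then held constant.]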

I also didn’t like the version of the leapfrog he has implemented. His version requires you to maintain additional quantities of the velocities at the half time-steps (I didn’t mind that), and also to implement a separate leapfrog_start() function (which I did mind—an additional sequence of very similar-looking function calls becomes tricky to modify and maintain). So, I implemented the other version of the leapfrog, viz., the “velocity Verlet.” It has exactly the same computational properties (of being symplectic and time-reversible), the same error/convergence properties (it too is second-order accurate), but it comes with the advantage that the quantities are defined only at the integer time-steps—no half-time business, and no tricky initialization sequence to be maintained.
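[Here is a minimal sketch of the update I mean, with every quantity defined only at the integer time-steps. The Particle fields and computeAccel() are illustrative placeholders, not the names from my actual code; the placeholder acceleration keeps only gravity, as in the free-fall check described further below.]

```cpp
// A minimal sketch of the velocity Verlet update, with every quantity defined
// only at integer time-steps. The Particle fields and computeAccel() are
// illustrative placeholders, not the names from my actual code.
#include <utility>
#include <vector>

struct Particle { double x, y, vx, vy, ax, ay; };

// Placeholder: the real routine sums the pressure, viscous, surface-tension
// and gravity accelerations over the neighbours; here, gravity alone is kept.
std::pair<double, double> computeAccel(const Particle&, const std::vector<Particle>&)
{
    return std::make_pair(0.0, -9.81);
}

void velocityVerletStep(std::vector<Particle>& ps, double dt)
{
    // x_{n+1} = x_n + v_n dt + (1/2) a_n dt^2
    for (Particle& p : ps) {
        p.x += p.vx * dt + 0.5 * p.ax * dt * dt;
        p.y += p.vy * dt + 0.5 * p.ay * dt * dt;
    }
    // v_{n+1} = v_n + (1/2) (a_n + a_{n+1}) dt, with a_{n+1} from the new positions
    for (Particle& p : ps) {
        const std::pair<double, double> a = computeAccel(p, ps);
        p.vx += 0.5 * (p.ax + a.first)  * dt;
        p.vy += 0.5 * (p.ay + a.second) * dt;
        p.ax = a.first;  p.ay = a.second;       // stored for the next step
    }
}
```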

My code, of course, still didn’t work. The fluid would still explode. The reason, still, was: the parameter values. But the rest of the code now was satisfactory. How do I know this last part? Simple. Because I commented out the calls to all the functions computing all the other accelerations, and retained only the acceleration due to gravity. I could then see the balls experiencing the correct free-fall under gravity, with the correct bouncing-back from the floor of the domain. Both the time for the ball to hit the floor and the height reached after bouncing were in line with what physics predicts. Thus I knew that my time-integration routines were bug-free. Using some debug tracings, I also checked that the nearest-neighbour routines were working correctly.
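[For the record, the sanity check is just elementary kinematics: for a ball released from rest at a height h₀ above the floor, with gravity alone acting,]

```latex
t_{\text{fall}} = \sqrt{\frac{2 h_0}{g}},
\qquad
v_{\text{impact}} = g\, t_{\text{fall}} = \sqrt{2 g h_0}
```

[and with a perfectly elastic reflection at the floor, the ball should climb back to h₀; proportionately less if a restitution factor is applied at the wall.]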

I then wrote a couple of Python scripts to understand the different kernels better; I even plotted them using MatPlotLib. I felt better. A program I wrote was finally producing some output that I could in principle show someone else (rather than having just randomly exploding fluid particles). Even if it was doing only kernel calculations and not the actual SPH simulation. I had to feel [slightly] better, and I did.
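[For completeness, the 3-D kernels from Müller et al. (2003) that are involved here are the following, if I am reproducing the expressions correctly (check the paper before trusting my constants); both vanish for r > h:]

```latex
W_{\text{poly6}}(r, h) = \frac{315}{64 \pi h^{9}} \left(h^{2} - r^{2}\right)^{3},
\qquad
W_{\text{spiky}}(r, h) = \frac{15}{\pi h^{6}} \left(h - r\right)^{3},
\qquad 0 \le r \le h
```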

At this stage, I stopped writing programs. I began thinking. [Yes, I do that, too.]


To cut a long story short, I ended up formulating two main research ideas concerning SPH. Both these ideas are unlike my usual ones.

Usually, when I formulate some new research idea, it is way too conceptual—at least as compared to the typical research reported in the engineering journals. Typically, at that stage (of my formulation of a new research idea), I am totally unable to see even an outline of what kind of a sequence of journal papers could possibly follow from it.

For instance, in the case of my diffusion equation-related result, it took me years before an outline for a good conference paper—one, really speaking, almost at par with a journal paper—could at all evolve. I did have the essential argument ready. But I didn’t know what all context—the specifically mathematical context—would be expected in a paper based on that idea. I (and all the mathematicians I contacted) also had no idea as to how (or where) to go hunting for that context. And I certainly didn’t have any concrete idea as to how I would pull it all together to build a concrete and sufficiently rigorous argument. I knew nothing of that; I only knew that the instantaneous action-at-a-distance (IAD) was now dead; summarily dead. Similarly, in the case of QM, I do have some new ideas, but I am still light-years away from deciding on a specific sequence of what kind of papers could be written about it, let alone having a good, detailed idea for the outline of the next journal paper to write on the topic.

However, in this case—this research on SPH—my ideas happen to be more like the ones [other] people typically use when they write papers for [even very high-impact] journals: the kind of ideas that lie behind the routine journal papers. So, papers should follow easily, once I work on these ideas.


Indeed, papers must follow those ideas. … There is another reason for it, too.

… Recently, I’ve come to develop an appreciation, a very deep kind of an appreciation, of the idea of having one’s own Google Scholar page, complete with a [fairly] recent photo, a verified email account at an educational institution (preferably with a .edu, or a .ac.in (.in for India) domain, rather than a .org or a .com domain), and a listing of one’s own h-index. [Yes, my own Google Scholar page, even if the h-index be zero, initially. [Time heals all wounds.]] I have come to develop that—an appreciation of this idea of having a Google Scholar page. … One could provide a link to it from one’s personal Web site, one could even cite the page in one’s CV, it could impress UGC/NBA/funding folks…. There are many uses to having a Google Scholar page.

…That is another reason why [journal] papers must come out, at least now.

And I expect that the couple of ideas regarding SPH should lead to at least a couple of journal papers.

Since these ideas are more like the usual/routine research, it would be possible to even plan for their execution. Accordingly, let me say (as of today) that I should be able to finish both these papers within the next 4–5 months. [That would be the time-frame even if I have no student assistant. Having a student assistant—even a very brilliant student studying at an IIT, say at IIT Bombay—would still not shorten the time to submission, nor would it reduce my own work-load by any more than about 10–20% or so. That’s the reason I am not planning on a student assistant for these ideas.]

But, yes, with all this activity in the recent past, and with all the planned activity, it is inevitable that papers would fall out. Papers must, in fact, fall out. …. Journal papers! [Remember Google Scholar?]


Of course, when it comes to execution, it’s a different story that even before I begin any serious work on them, I still have to first complete writing my CFD notes, and also have to write a few FDM, FVM and VoF/LevelSet programs, whether as my own scripts or as OpenFOAM cases. Whatever I had written in the past, most of it was lost in my last HDD crash. I thus have a lot of territory to recover first.

Of course, rewriting notes/codes is fast. I could so rapidly progress on SPH this year—a full working C++ code in barely 2–3 weeks flat—only because I had implemented some MD (molecular dynamics) code in 2014, no matter how simple that MD was. The algorithms for collision detection and reflections at boundaries remain the same for all particle approaches: MD with hard disks, MD with the LJ potential, and SPH. Even if I don’t have the previously written code, the algorithms are no longer completely new to me. As I begin to write code, the detailed considerations and all come back very easily, making the progress very swift, as far as programming is concerned.

When it comes to notes, I somehow find that writing them down once again takes almost the same length of time—just because you had written the notes once before, it doesn’t mean that writing them down afresh takes any less time.

Thus, overall, recovering the lost territory would still take quite some effort and time.

My blogging would therefore continue to remain sparse even in the near future; expect at the most one more post this month (May 2016).

The work on the journal papers itself should begin in late June/early July, and it should end by mid-December. It could. Nay, it must. … Yes, it must!

Papers must come out of all these activities, else it’s no research at all—it’s nothing. It’s a zero, a naught, a nothing, if there are no papers to show that you did research.

Papers must fall out! … Journal papers!!


A Song I Like:

(Western, Instrumental) “The rain must fall”
Composer: Yanni


[Maybe one quick editing pass later today, and I will be done with this post. Done on 12th May 2016.]

[E&OE]


12 thoughts on “Papers must fall out…”

  1. Hello sir, as written in your previous blog you were supposed to complete the CFD notes during these weekends. If you have completed the notes, can you please share them with us, especially the wave equation part, with solved examples using the MacCormack scheme…

    [Admin note: Saw tonight—on 20–21 May 2016.]

    • No, Shreenath, I didn’t mention anything like week-ends.

      The notes I am now writing aren’t meant for classes or examinations.

      The current notes are of quite uneven coverage (in terms of both the range and the depth of the topics covered). They are actually more like the hurried scribblings that one makes as one goes about planning (or even implementing) one’s code. These are not meant to be shared—not in their current form anyway. Once the code is complete and the initial testing is over, and if I have time at that time, I may then come back to this material and revise it. In that eventuality, it may become “shareable.” Right now, it’s not sharing-worthy!

      If I were to have a student who had all of these: excellent C++ skills, some previous knowledge of CFD, and then also an interest in writing CFD code (as against using the ready-made CFD packages), then I might have considered having him on this project, even though, as I said in the post above in the context of SPH, having a student does not typically reduce the time to completion. It’s a huge misconception that students help complete a project faster. No, they don’t. (Neither are they supposed to. They are supposed to do their own learning, not finish their professor’s pet projects.) But the misconception is quite prevalent. So let me concretize the point for the reader.

      Suppose I am writing a toy FVM code. The idea is to have a code base that is rich enough to be useful in research (though not in production), and small enough that novel ideas and features could be easily built into it for testing them out. (By way of a contrast, think changing OpenFOAM, for instance.) In what ways could a student possibly help me write such a code-base?

      Since the student isn’t as experienced as I am, I would have to first do the entire system architecture, build a coherent framework, write the first set of helper classes, and get some preliminary output. The student wouldn’t come into the picture until this part was over. (Neither would the notes written for this part be very useful to him in writing his university examinations.)

      Assuming the above part is over, i.e. after some 40–60% of the work is already done, he could perhaps now come on board. He would take some time to absorb the architecture and the problem. Only then could he contribute. By that time, the project would have moved to something like the 60–70% completion stage.

      In what way could he now possibly help? He could add some modules to handle some additional kinds of boundary conditions. (A demanding task, but not very core—one can always produce a paper even before the code is able to handle every type of boundary condition.)

      … So, anyway, if I were to have a student already working this way with me, I would have shared these notes with him….

      For the typical course work-related notes, well, in my experience, the best time to write them is when the semester is in progress, and the topics have begun being covered on a regular basis—without changes in the work-loads of the professor in the middle of a semester (by the college management), and without continuous mass-bunks for weeks, or requests to postpone a lecture or club 2–3 lectures together (by the students).

      … So, writing the course-notes would have to wait until the next time I am given the opportunity to teach CFD, without many hiccups. (Though I don’t insist on it, a very good way at the UG level is for the professor to teach and the students to act as scribes in rotation, submit the material to the professor in a timely manner, and for the professor to simply revise and expand upon the material and post it on the college intra-net first, before posting it publicly. At the PG level, an effective way is for the students to draft notes first, for the professor to expand on the material in class/personal discussion, and only then to finalize the notes. In every case, effective notes require regularity—no interruptions with work-load changes, and no mass-bunkings.)

      But coming back to the specific request you made: Guess you are not at a loss. Just refer to Anderson, chapter 6; that should be enough for your exams.

      [Last night, I didn’t recall the chapter number and hated to open the book-rack and search through the contents, so I just published your comment without answering it. (I don’t check for comments every day; I check maybe once in a week or longer. And, WordPress has no feature to notify me of new comments by email.) … Today, when I began answering, I ended up taking big detours as usual 🙂 … ]

      Best,

      –Ajit
      [E&OE]

    • Thanks for your comments (and your patience until the time I got to the comments queue here).

      No, a general conclusion like that isn’t called for. Check out:
      http://spheric-sph.org/validation-tests. These are perfectly “real” fluid simulations—check out the experimental validations. (They also are fairly big, not “toy.”) In fact, if I recall it right, Hirt and Nichols’s first paper on VoF had perhaps shown a coarser conformance to the experimental data. (Of course, computing power in those days was quite limited.)

      So the question to ask isn’t whether SPH works or not; it is: why does it work so well in some situations even if it does not in others (e.g., in cases such as the ones you point out).

      The basic answer is that the NS equations are nonlinear, and so, the parameter-space or the regime of simulation is the king—not this method or that method. … I have yet to read the papers that are cited in the blog-post you point out, but even on an initial rapid browsing, it’s clear that the cited study focuses on subsonic turbulence in gasdynamics, whereas the “SPHERIC” examples seem to be about laminar (or at most transitional) flows in hydrodynamics.

      Talking in terms of this method vs. that, (i) how well do the methods for gasdynamics adapt to the requirements of multi-phase flows with free (i.e. moving) interfaces (between several fluids) and moving boundaries (e.g. those of a solid object getting tossed inside a container being sloshed)? And, (ii) Are they suitable for real-time simulation and rendering?

      It’s just a fortunate circumstance that SPH happens to be both fast and “sufficiently” accurate within the regime which is of interest to the games and graphics community. It is fast, but not yet fast enough for real-time rendering. Hence the research from that angle, in that sector.

      … But, yes, I too am interested in understanding the basics/fundamentals of SPH, including knowing its limitations, in an objective manner. In fact, one of the ideas for my planned papers (mentioned in the main post) also involves taking a second look at the basics. … Though I haven’t read it yet, I feel that papers such as those pointed out in Nathan Goldbaum’s post (e.g. Abel’s arXiv:1003.0937) could only add to my understanding of the basics.

      So, to summarize, all the following are true, even if they seem to contradict each other:

      (i) The games community doesn’t aim for, and therefore doesn’t care for, physical accuracy.

      (ii) Even after decades of development, the basics of SPH still aren’t fully clear.

      The method addresses a non-linear differential equation, is explicit, and involves N bodies (where N can be millions). Therefore, variational techniques can’t be as easily used to shed physical light as in, say, solid mechanics. (Variational methods have been the favorite “physical basis” to explain almost anything and everything in physics and engineering right since the 19th century—even if the Method of Weighted Residuals itself has never been supplied with any demonstrable physical basis, and there is no functional for nonlinear problems.) Thus, overall, the basics of SPH continue to invite scrutiny in their own right—i.e., apart from the further “grand challenges” mentioned here: http://spheric-sph.org/grand-challenges.

      (iii) There is something to SPH that demonstrably makes it a “real” simulation method (at par with the established techniques of CFD) when the simulations are conducted over certain regimes, but demonstrably not so, over others.

      If you run into any recent papers that look into the basics of SPH, please drop me a line or two; thanks in advance.

      Best,

      –Ajit
      [E&OE]

      • Briefly speaking, if you want to get somewhat decent results with SPH, you would need to incorporate special techniques to improve its accuracy followed by the use of tremendous computing power, which is beyond the reach of conventional users.

        I don’t deal with the development of numerical methods, but only as a user. My experience with SPH as a tool to model physical experiments and engineering problems is that it generally gives only qualitative results (nice pictures). Bear in mind that there is a big gap between test problems and real industrial-strength problems. You are better off using a conventional grid-based approach – these methods are now quite mature, much more robust, and show the expected performance as predicted by standard numerical analysis arguments.

        If you are developing a numerical method, then that’s another story. For then, you would be facing those problems as described in the aforementioned websites.

      • 1. True. I try to see if numerical methods can be improved. That’s my focus.

        2. What software did you use for SPH? For what problems? Would like to know.

        3. SPH is peculiar. As Dan Price mentions (arXiv:1111.1259v1 [astro-ph.IM], p. 7), in SPH, there typically aren’t code crashes, only noisy particle distributions.

        But as Liu & Liu (DOI 10.1007/s11831-010-9040-7, p. 29 (i.e. p. 5 in the PDF)) and Peter Cossins (arXiv:1007.1245v2 [astro-ph.IM]) show, for an even kernel (and practically all SPH kernels are that), the error for the kernel approximation scales as h^2, i.e., the kernel approximation is second-order accurate. As Cossins further discusses, the sampling errors are reduced with an increasing number of particles within the smoothing kernel (the domain of support).
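        [In 1-D notation, and only as a rough sketch of the standard argument: the kernel estimate of a field f(x) is]

        ```latex
        \langle f \rangle (x) = \int f(x')\, W(x - x', h)\, dx'
        = f(x) \int W\, dx' + f'(x) \int (x' - x)\, W\, dx'
        + \frac{f''(x)}{2} \int (x' - x)^{2}\, W\, dx' + \dots
        = f(x) + \mathcal{O}(h^{2})
        ```

        [since the kernel is normalized to unity and, being even, kills the first-order moment; the first surviving correction carries the second moment of the kernel, which scales as h^2.]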

        At coarse mesh-spacing (large h), grid-based methods are just as bad. Indeed, in the grid-based methods, the theoretical ideal is to systematically reduce h to the point that grid-independence is achieved. (Practically, how many people bother achieving grid-independence in CFD, in contrast to FEM for solids?)

        If people can allow a large computation time to a grid-based method, why wouldn’t they show the same generosity to SPH? … The question is not posed rhetorically. … There is a reason for it. SPH’s success in games and real-time (or near real-time) rendering actually has a way of becoming its disadvantage in the core engineering fields. The success captures people’s minds, and they then simply come to associate only inaccuracies with SPH.

        Sure, there will be certain forms of inaccuracies that are not easily foreseeable to an intuition which was developed in reference to FDM/FVM/FEM. After all, SPH is Lagrangian in form, and the intuition is based on the Eulerian form.

        But, looking at the very basics, I am driven to ask this question. If I know that errors scale as h^2, it should be pretty much as good/bad as a second-order accurate FDM/FVM scheme. If so, why not throw the same kind of computational resources at SPH then? If, for simulating one physical minute of a flow, a grid-based solver takes 2 days on a cluster/supercomputer (with a large chunk of RAM being used which these days comes almost dirt cheap), then why not give SPH, too, the same hardware and the same amount of execution time?

        After all, even if its errors don’t reduce to an order higher than the second, realize, the SPH particles also aren’t required to fill the entire domain—only the region where the fluid actually exists. Another thing: if you have octree-based adaptive refinement in grid-based methods, you can also have variable smoothing lengths and adaptive splitting/coarsening of the SPH particles. (For example, see Nikhil Galgali’s MS thesis at MIT: http://18.7.29.232/handle/1721.1/55074; there have been others in Europe and elsewhere pursuing the same idea.)

        … It all is at an early stage. People haven’t used SPH enough to have a good, representative database of its results vis-a-vis those of the grid-based methods, using the same amount of computational resources, and covering different kinds of flow-regimes/problems. But since the basic theoretical error analysis isn’t bad, I would sure give SPH a chance to prove itself.

        In the meanwhile, for solving practical fluids problems arising in the core engineering, I would advise going with the established techniques. (In fact, I would rather advise FVM with block-structured meshes in comparison to even FEM—FEM too is, relatively, a newcomer as far as CFD goes.)

        All the same, I remain curious as to which SPH software you used, and for what problem (with what computational resources). See if you can share a bit about it, at your own convenience.

        Best,

        –Ajit
        [E&OE]

  2. The main problem I have with SPH is the noisy solution generated by the discrete-sum approximation of local kernel interpolants over the nearest neighbours. This problem can be quite pronounced in 3-dimensional problems. This noise makes it very difficult to know whether the solution you get is real or simply a numerical artifact. The other problem is that there is too much diffusion in the solution, because the artificial viscosity is present everywhere – this puts a limit on how high a Reynolds number you can model. But if you just want to work in gaming, where physics isn’t the priority, SPH or any other method is fine.

    The LAMMPS MD code from Sandia provides an implementation of SPH, which you can run on clusters or supercomputers if you have such a resource.

  3. Model gas flow at high Reynolds number. Resource is typically 50-200 cores, depending on availability. Again, did not use SPH because of aforementioned issues.

  4. Sorry for the multiple posts; I forgot to mention that the artificial viscosity being present everywhere is one of the reasons why SPH doesn’t converge well at all.

  5. Thanks for sharing your valuable insights. I really appreciate it. [No, multiple comments aren’t a bother at all. … Keep them coming in any which way you write!]

    I am actually a newbie in SPH. I got going in a real way only from Feb–March this year, part-time, and from April-end onwards, full-time. That makes me barely months-old in SPH. … I will sure keep in mind your experience of using SPH for high-Re gas flows, though let me hasten to add that I am pretty sure that I will not get into that application field (aerodynamics/gasdynamics) in any foreseeable future. Mostly not even that flow regime or those study objectives (high-Re flows/study of turbulence). For me, it’s mostly only simple hydrodynamics, mostly laminar (or so).

    From whatever I have understood about the SPH theory, coupled with my little bit of experience with it, I have a feeling that the issue _should_ not be 2D vs. 3D. I don’t expect greater noise in 3D simply because it’s 3D per se—i.e., if I use a large enough number of particles (and I’ve by now run a bit of a toy 3D code too). To me (and within the flow regimes I’m concerned with), the main deciding factors are (i) the choice of kernels and (ii) the values of the parameters. They really control the show; the dimensionality of the domain, I expect, _should_ not, given enough particles. (I could be proved wrong, but that’s what I do expect as of today.)

    I don’t know enough about artificial viscosity, but I vaguely recall that someone (Dan Price?) has written something about it which goes something like this: in SPH, the artificial viscosity isn’t necessarily all that artificial. [I have yet to study this part.]

    In any case, as I gather, yes, there still are several real issues with SPH: overdamping, tensile instability, particle-clustering, particle-pairing. The noise due to clustering is significant in engineering applications even if not in gaming.

    But does it make SPH suitable only for gaming?

    If SPH works, i.e., if it gives _physically_ correct results (for the physically _relevant_ parameters) for the relatively low-velocity water flows, say those occurring under gravity (e.g. pouring water in a glass, or dam-break, or sloshing—and it does demonstrably work fairly well in these cases), then this fact does have very direct practical implications in certain industrial applications too. Consider just this application: Modeling the molten metal flow in sand casting of iron. [“Water modeling” has actually been used for validation of the castings simulation done with the traditional CFD methods, because of a certain dynamic similitude between water and molten iron/steel.] … As another application (barely coming under the research radars now): ceramic slurry processing. Even newer: ceramic injection molding. (Hardly a few reports so far, on each of these topics.)

    Clearly, the choice isn’t: either gasdynamics, or nothing.

    Overall, it obviously is a method at a relatively early stage of development: for hydrodynamics applications, hardly 1 or 2 decades old (as against the astrophysical ones, where it is about 4 decades old). In contrast, VOF is by now 3.5 decades old, and FVM, a good 5 decades old. FDM is even older (Richardson (?) used it around the time of WWI). So, let’s not keep too many expectations from the new kid on the block. Let’s give it some time. It shows enough promise to be useful in some fields. If that promise doesn’t come true eventually (even in those sub-fields), then the technique will “automatically” fall by the wayside. I, for one, sure don’t mind giving it enough rope with which to hang itself; it’s just research as far as I am concerned. And _papers_! (_Journal_ papers!!)

    –Ajit
    [E&OE]

  6. For low-Re/laminar flows, SPH may be fine, but I haven’t worked in this regime. For some applications, you may need to address the incompressibility constraint, which, in standard SPH, is handled using a stiff equation of state (EOS) for the pressure term, and this makes the time step really small if you are using an explicit time integration scheme. But if you try to use a less stiff EOS, then your wave speeds are incorrect and you may get unphysical results. There are some implicit SPH schemes you may want to look at.

    It looks like there are plenty of SPH applications in the area of solid dynamics problems with large deformations, but there are other “meshfree” methods that look somewhat similar to SPH.

  7. Incompressibility? Exactly as you say. Less stiff EOS? Exactly as you say, again. Implicit SPH? Yes, it’s on my radar, but I will probably go after it only after I submit a couple of journal papers.

    Solid mechanics? No, not on my plate, certainly not as of now. Yes, in solids, there has been an explosion of meshfree variants. Xiujun Fang actually took pains in 2007 to make a neat chart of them and posted it at iMechanica (http://imechanica.org/node/1680). Of course, it’s almost a decade old by now. (How time flies!) On the second look, even this chart now seems incomplete—e.g., I can’t see DPD (dissipative particle dynamics) in it! But no, as of today, solid mechanics would be a digression for me (as also geophysical simulations like river-floods, shore-waves, and all).

    Oh, BTW, in my previous comment, I forgot to thank you for pointing out LAMMPS to me. I had browsed about it quite some time ago, in fact many years ago. I certainly didn’t know that it now had SPH available in it. LAMMPS sure seems like a pretty good platform to try out SPH. … I will use this blog to post my progress with compiling and running it (maybe as a library accessed with Python scripts).

    … No, I don’t have access to any clusters, and in fact my research is not funded at this time. So the only thing feasible for me right now is to force all the cores (4 actual, 8 virtual) on my Core i7 laptop to work a little bit. … Maybe I should drop an email to the friendly folks at nVidia to let me borrow a cluster, or time on a cluster/cloud. (They have a development center in Pune, though it focuses only on the hardware side like device driver development, but I know that they do in general have an academic partnership program. I will approach them once I land my next academic job. (I am jobless from 21st April; hence my more rapid progress on the research side in the recent weeks. … Well, the current joblessness is not a worry; I should get my next job pretty soon, within a month or so, because the fresh terms anyway begin in June. Also, the Metallurgy-to-Mechanical branch-jumping issue no longer exists in the Maharashtra state, at least not in a technical sense (though it is possible that it continues to exist in the idiotic and foggy old minds of the entrenched mechanical engineering academics in the S. P. University of Pune), and so, the big lunacy-and-bureaucracy-erected hurdle which I faced in the past for so many years is no longer an issue. Conclusion? I should get my next job soon, and then, I will approach nVidia.))

    … Anyway, thanks a bunch for all your help! I will come back with a new post once I finish trying out LAMMPS.

    Best,

    –Ajit
    [E&OE]
