A small utility for blockMesh

blockMesh is a utility program included in the OpenFOAM distribution. It takes a specification of the domain geometry as its input, and produces a block-structured mesh as its output. The OpenFOAM tutorials illustrate its usage.

However, writing the input file for the blockMesh program (i.e., the blockMeshDict file) gets very tedious as soon as you add even a bit of complexity to the domain geometry.

So, I wrote a small utility in Python. It takes a metadata file as its input and produces, as its output, a blockMeshDict file suitable for the interFoam solver.

The main hassle the script saves you is that you no longer have to manually list all the vertices, let alone refer to them using the implicit indexing scheme demanded by the whimsical file format of the blockMesh utility. Instead, you directly specify the blocks themselves. Then, when it comes to specifying the BCs, once again you don’t have to specify the faces or patches using vertices. Instead, you directly identify each face as the local “left/right/front/etc.” face of a particular block. The script automatically locates and uses the right IDs for the vertices. It also lists the vertices in the right order, so you no longer have to worry about the face normal possibly pointing in the direction opposite to what is required. Further, the script lists all the IDs using inline comments in the generated blockMesh dictionary file, which makes the file more readable.

The output was tested, though not very extensively, using the version of the blockMesh program bundled with the blueCFD-Core distribution, on Win7 SP1 (64-bit). It seems to work, though bugs are possible.

Here is the script. I call it “GBMDF.py” (Generate BlockMesh Dictionary File):

'''
Reads a simple metadata file and generates the blockMeshDict file for 
OpenFOAM simulations

Format of the metadata file:
Comment lines begin with # and are ignored
Data are comma separated values, line by line, in the following sequence:
The float value of the "Convert to Meters" factor
Number of Blocks
BlockID, xOrig, yOrig, zOrig, xLen, yLen, zLen, entire string for number of cells and grading (reproduced as is)
Number of different sets (patches) of faces to list for BCs
PatchID, Name, OF type, no. of faces in the patch
List of faces in this patch: block (hex) ID, face ID (one of 0 through 5)

The face ID is local to the block. The convention is that of OpenFOAM, taken from:
https://github.com/OpenFOAM/OpenFOAM-dev/blob/master/applications/utilities/mesh/generation/blockMesh/blockMesh.C
Accessed on 05 Feb 2018 18:21 HRS IST. 

Here is the copy-paste of the relevant portion from blockMesh.C 
argList::addNote
    (
        "Block description\n"
        "\n"
        "  For a given block, the correspondence between the ordering of\n"
        "  vertex labels and face labels is shown below.\n"
        "  For vertex numbering in the sequence 0 to 7 (block, centre):\n"
        "    faces 0 (f0) and 1 are left and right, respectively;\n"
        "    faces 2 and 3 are front and back; \n"
        "    and faces 4 and 5 are bottom and top::\n"
        "\n"
        "                 7 ---- 6\n"
        "            f5   |\\     |\\   f3\n"
        "            |    | 4 ---- 5   \\\n"
        "            |    3 |--- 2 |    \\\n"
        "            |     \\|     \\|    f2\n"
        "            f4     0 ---- 1\n"
        "\n"
        "       Z         f0 ----- f1\n"
        "       |  Y\n"
        "       | /\n"
        "       O --- X\n"
    );

Excerpt from blockMesh.C is over.

So, what the OpenFOAM convention means is this:
Orient the block such that x runs to the right, 
y to the far (through the page), and z upwards. 
Then,
face 0 is Left (-ve x-face):    (0 4 7 3)
face 1 is Right (+ve x-face):   (1 2 6 5)
face 2 is Front (-ve y-face):   (0 1 5 4)
face 3 is Back (+ve y-face):    (2 3 7 6)
face 4 is Bottom (-ve z-face):  (0 3 2 1)
face 5 is Top (+ve z-face):     (4 5 6 7)

Here is an example MetaData file. Its content begins from the next line:

# This metadata file describes a cube
# The float value of the "Convert to Meters" factor
1.0
# Number of Blocks
1
# BlockID, xOrig, yOrig, zOrig, xLen, yLen, zLen, string for number of cells and grading (reproduced as is)
0, 0.0, 0.0, 0.0, 10.0, 5.0, 3.0, (10 5 3) simpleGrading (1 1 1)
# Number of different sets (patches) of faces to list for BCs
2
# PatchID, Name, OF type, no. of faces in the patch
1, sides, wall, 5
# List of faces in this patch: block (hex) ID, face ID (0 through 5)
0, 0
0, 1
0, 4
0, 5
0, 2
# PatchID, Name, OF type, no. of faces in the patch
2, topInlet, patch, 1
# List of faces in this patch: block (hex) ID, face ID (0 through 5)
0, 3

The example file ends at the above line.

Usage: Open a command line terminal and type:
python GBMDF.py InputMetaDataFileName OutputFileName
'''

import sys

#########################################
# String constants

str0 = '''/*--------------------------------*- C++ -*----------------------------------*\\
| =========                 |                                                 |
| \\\\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\\\    /   O peration     | Version:  5                                     |
|   \\\\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\\\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

convertToMeters\t'''

str1 = "vertices\n(\n"

str2 = ");\n\nblocks\n(\n"

str3 = ");\n\nedges\n(\n);\n\nboundary\n(\n"

str4 = "\n);\n\nmergePatchPairs\n(\n);\n\n// ************************************************************************* //\n"

#################################################
# Global Data

# Basically, these are maps of vertex coordinates to IDs, and vice-versa.
# These maps are stored as Python dictionaries.
# We create the key for the floating point vertex coordinates 
# (fx, fy, fz) by printing them to a consistently formatted 
# string ("fx fy fz"). The vertex ID is an int
mapVtxToNum = {}
mapNumToVtx = {}
# Precision to use in formatting vertex key strings and in writing 
# floating point numbers in the output file
gnPrec = 6

# This is a list of blocks. Each block is a list of vertex IDs (int's), 
# in the OpenFOAM order (see the doc comment above)
aBlocks = []


#################################################
# Helper functions

def readNextDataLine(theFile):
    # Returns the next non-empty, non-comment data line, with any
    # inline comment (text following a #) stripped off
    while True:
        sLine = theFile.readline()
        if not sLine:
            # readline() returns "" only at EOF
            raise EOFError("Unexpected end of the metadata file")
        nIndex = sLine.find("#")
        if nIndex != -1:
            sLine = sLine[:nIndex]
        sLine = sLine.strip()
        # ignore lines that are empty or comment-only
        if len(sLine) == 0:
            continue
        return sLine

def tokenize(sLine):
    toks = sLine.split(",")
    for t in range(len(toks)):
        toks[t] = toks[t].strip()
    return toks

#########################################################
# Vertices-related functions

# Input is floating point numbers. Returns the key string 
# to be used in the vertex (coordinates) -> vertex ID map.
def createVertexKeyString(fx, fy, fz, nPrec=gnPrec):
    fx = round(fx, nPrec)
    fy = round(fy, nPrec)
    fz = round(fz, nPrec)
    sFormat = "%%0.%df %%0.%df %%0.%df" % (nPrec, nPrec, nPrec)
    sKey = sFormat % (fx, fy, fz)
    return sKey

def checkAddVertex(fx, fy, fz):
    vtxKeyStr = createVertexKeyString( fx, fy, fz )
    if vtxKeyStr in mapVtxToNum:
        nVtx = mapVtxToNum[vtxKeyStr]
        return nVtx, vtxKeyStr
    else:
        nNewVtx = len(mapVtxToNum)
        mapVtxToNum[vtxKeyStr] = nNewVtx
        mapNumToVtx[nNewVtx] = vtxKeyStr
        return nNewVtx, vtxKeyStr

# Get the set of vertex IDs for a given local face ID 
# of a given block (hex)
def getFace(blk,nFaceCode):
    f = []
    if 0 == nFaceCode:
        f = [blk[0],blk[4],blk[7],blk[3]]
    elif 1 == nFaceCode:
        f = [blk[1],blk[2],blk[6],blk[5]]
    elif 2 == nFaceCode:
        f = [blk[0],blk[1],blk[5],blk[4]]
    elif 3 == nFaceCode:
        f = [blk[2],blk[3],blk[7],blk[6]]
    elif 4 == nFaceCode:
        f = [blk[0],blk[3],blk[2],blk[1]]
    elif 5 == nFaceCode:
        f = [blk[4],blk[5],blk[6],blk[7]]
    else:
        f = None
    return f


############################## MAIN #################

if __name__ == "__main__":

    sInFile = sys.argv[1]
    sOutFile = sys.argv[2]

    inFile = open(sInFile, "r")
    outFile = open(sOutFile, "w+")

    sConvertToMeters = readNextDataLine(inFile)
    outFile.write(str0)
    outFile.write("%s;\n\n" % sConvertToMeters)
    outFile.write(str1)

    nBlocks = int(readNextDataLine(inFile))
    for b in range(nBlocks):
        sLine = readNextDataLine(inFile)
        toks = tokenize(sLine)

        nBlockID = int(toks[0])
        x0 = float(toks[1])
        y0 = float(toks[2])
        z0 = float(toks[3])
        x1 = x0 + float(toks[4])
        y1 = y0 + float(toks[5])
        z1 = z0 + float(toks[6])

        n0, vtx0 = checkAddVertex(x0, y0, z0)
        n1, vtx1 = checkAddVertex(x1, y0, z0)
        n2, vtx2 = checkAddVertex(x1, y1, z0)
        n3, vtx3 = checkAddVertex(x0, y1, z0)

        n4, vtx4 = checkAddVertex(x0, y0, z1)
        n5, vtx5 = checkAddVertex(x1, y0, z1)
        n6, vtx6 = checkAddVertex(x1, y1, z1)
        n7, vtx7 = checkAddVertex(x0, y1, z1)

        blk = [n0, n1, n2, n3, n4, n5, n6, n7, toks[7], nBlockID]
        aBlocks.append( blk )

    nVertices = len(mapNumToVtx)
    for n in range(nVertices):
        vtxKeyStr = mapNumToVtx[n]
        strVtx = "\t( %s ) \t// vertex ID: %d\n" % (vtxKeyStr, n)
        outFile.write(strVtx)
    outFile.write(str2)
    for b in range(nBlocks):
        blk = aBlocks[b]
        sBlock = "\thex (%2d %2d %2d %2d %2d %2d %2d %2d) %s\t// Block ID: %d\n" \
                 % (blk[0], blk[1], blk[2], blk[3], blk[4], blk[5], blk[6], blk[7], blk[8], blk[9])
        outFile.write(sBlock)
    outFile.write(str3)

    nPatchListings = int(readNextDataLine(inFile))
    for p in range(nPatchListings):
        sLine = readNextDataLine(inFile)
        toks = tokenize(sLine)
        nPatchNumber = int(toks[0])
        sPatchName = toks[1]
        sOFType = toks[2]
        nFaces = int(toks[3])

        outFile.write("\t%s\t// Patch Number: %d\n\t{\n" % (sPatchName, nPatchNumber) )
        outFile.write("\t\ttype\t%s;\n\t\tfaces\n\t\t(\n" % sOFType )

        asFaceCodes = ["Left", "Right", "Front", "Back", "Bottom", "Top"]

        for f in range(nFaces):
            sLine = readNextDataLine(inFile)
            toks = tokenize(sLine)
            nBlockID = int(toks[0])
            nFaceCode = int(toks[1])
            blk = aBlocks[nBlockID]
            face = getFace(blk,nFaceCode)
            outFile.write("\t\t\t(%2d %2d %2d %2d) // BlockID: %d Face: %s\n" \
                           % (face[0],face[1],face[2],face[3], nBlockID, asFaceCodes[nFaceCode] ))
        outFile.write("\t\t);\n\t}\n\n")
    outFile.write(str4)
    inFile.close()
    outFile.close()
    print("Done! See the file: %s" % sOutFile)

# EOF
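Incidentally, here is a quick way to convince yourself that the face orderings reproduced in the script’s doc-comment are consistent. This is my own sketch for verification; it is not part of GBMDF.py. For a unit cube with vertices numbered in the OpenFOAM order, the cross product of the first two edges of every face, as listed, should point out of the block:

```python
# Sanity check of the OpenFOAM face convention (my own sketch, not part
# of GBMDF.py): for a unit cube with vertices in the OpenFOAM order, the
# cross product of the first two edges of each face should point outward.

verts = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # vertices 0..3 (z = 0)
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # vertices 4..7 (z = 1)
]

faces = {
    0: [0, 4, 7, 3],  # Left   (-ve x)
    1: [1, 2, 6, 5],  # Right  (+ve x)
    2: [0, 1, 5, 4],  # Front  (-ve y)
    3: [2, 3, 7, 6],  # Back   (+ve y)
    4: [0, 3, 2, 1],  # Bottom (-ve z)
    5: [4, 5, 6, 7],  # Top    (+ve z)
}

outward = {
    0: (-1, 0, 0), 1: (1, 0, 0),
    2: (0, -1, 0), 3: (0, 1, 0),
    4: (0, 0, -1), 5: (0, 0, 1),
}

def face_normal(vtx_ids):
    # normal from the cross product of the first two edges of the face
    a, b, c = (verts[i] for i in vtx_ids[:3])
    e1 = tuple(b[k] - a[k] for k in range(3))
    e2 = tuple(c[k] - b[k] for k in range(3))
    return (e1[1] * e2[2] - e1[2] * e2[1],
            e1[2] * e2[0] - e1[0] * e2[2],
            e1[0] * e2[1] - e1[1] * e2[0])

for nFace, vtx_ids in faces.items():
    assert face_normal(vtx_ids) == outward[nFace]
print("All six faces have outward-pointing normals")
```

This is exactly the property the script relies on when it emits patch faces: since the listed vertex order already yields an outward normal, you never have to reorder the vertices by hand.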


And here is an example metadata file, the one for the damBreak case of interFoam. Go through it once; a small explanatory note appears in the next section.

# This metadata file describes the geometry used for the interFoam 
# solver of OpenFOAM. Use this file as an input to the Python
# GBMDF.py script, to produce the blockMeshDict file.
# Written by and copyright (c) Ajit R. Jadhav. All rights reserved.
# This is a freeware. Use it at your own risk.
# Version: 13 April 2018.
#####################################################################

# The case given here is that of the damBreak tutorial of OpenFOAM.
# Note, the tutorial assumes that the gravity acts along the -ve
# y-axis, whereas the blockMesh source code shows that top and bottom
# are taken, in the local coordinate system for each block, along the
# +ve and -ve z-axis, respectively. Thus, this tutorial is a bit 
# confusing as far as the blockMesh part is concerned.


# ConvertToMeters
0.146

# Number of Blocks
5

# BlockID, xOrig, yOrig, zOrig, xLen, yLen, zLen, string for number of cells and grading (reproduced as is)
0, 0, 0, 0, 2, 0.32876, 0.1, (23 8 1) simpleGrading (1 1 1)
1, 2.16438, 0, 0, 1.83562, 0.32876,  0.1, (19 8 1) simpleGrading (1 1 1)
2, 0, 0.32876, 0, 2, 3.67124, 0.1, (23 42 1) simpleGrading (1 1 1)
3, 2, 0.32876, 0, 0.16438, 3.67124, 0.1, (4 42 1) simpleGrading (1 1 1)
4, 2.16438, 0.32876, 0, 1.83562, 3.67124, 0.1, (19 42 1) simpleGrading (1 1 1)

# Number of different sets (patches) of faces to list for BCs
4

# PatchID, Name, OF type, no. of faces in the patch
0, leftWall, wall, 2
# List of faces in this patch: block (hex) ID, local face ID (0 through 5)
0, 0
2, 0

# PatchID, Name, OF type, no. of faces in the patch
1, rightWall, wall, 2
# List of faces in this patch: block (hex) ID, local face ID (0 through 5)
1, 1
4, 1

# PatchID, Name, OF type, no. of faces in the patch
# Note, for the damBreak case as presented in the OpenFOAM tutorials,
# the "bottom" is the -ve y-axis, not the -ve z-axis
2, lowerWall, wall, 5
# List of faces in this patch: block (hex) ID, local face ID (0 through 5)
0, 2
0, 1
3, 2
1, 0
1, 2

# PatchID, Name, OF type, no. of faces in the patch
3, atmosphere, patch, 3
# List of faces in this patch: block (hex) ID, local face ID (0 through 5)
# Note, for the damBreak case as presented in the OpenFOAM tutorials,
# the "top" is the +ve y-axis, not the +ve z-axis. 
2, 3
3, 3
4, 3

In creating the metadata file, follow these steps:

  1. First, draw the global coordinate system using the OpenFOAM convention (given in its source code, here [^]).
  2. Then, prepare a sketch of the domain geometry. Remember, with every protrusion or dent you add, you have to add a number of blocks. That’s the way blockMesh works.
  3. Number all these blocks suitably, starting from 0.
  4. Make a table of blocks, having the following columns: blockID, the three coordinates of its origin, the three lengths along the three coordinate axes, and the string for the number of cells and the grading.
  5. Mark the faces on which you will be applying boundary conditions. Number them suitably, starting with 0.
  6. Make a table of faces carrying BCs, having the following columns: face ID, the block ID of which this is a face, and the local face code for this face, according to the OpenFOAM convention
  7. Now you are set. You can write the metadata file.

One reason I didn’t write a ReadMe.txt file is that I wanted to upgrade the format of the input (metadata) file from the current ad-hoc format to a more descriptive, XML-based one. However, I didn’t get the time to do that. (I wrote this script some time in February.) Also, the last time I thought about it, I was not sure whether I should continue with the current way of specifying a block: giving the coordinates of its origin and the three lengths. Maybe there are easier ways to specify blocks. For instance, you could specify the first block by giving its origin and three lengths (as is the case now), and then specify the next block as being adjacent to the first, touching it on the left/right/front/etc. … I will see whether such simplifications can be easily added.

Once I extend the script and XML-ize the input data format, I may publish the final version of this script via a journal paper or so. (Yes, I need to enhance my resume!)

In the meanwhile, try it out. I would appreciate receiving any feedback, including suggestions for improvements.

Bye for now…


A Song I Like:

[It was a song I liked when I was a school-boy. I still do.]

(Hindi) “chaahe raho door, chaahe raho paas…”
Music: R. D. Burman
Singers: Kishore Kumar, Lata Mangeshkar
Lyrics: Majrooh Sultanpuri

 


Exactly what does this script show?

Update on 02 March 2018, 15:34 IST: I have now added another, hopefully better, version of the script (but also kept the old one intact); see in the post below. The new script too comes without comments.


Here is a small Python script which helps you visualize something about a state of stress in 2D.

If interested in understanding the concept of stress, then do run it, read it, try to understand what it does, and then, if still interested in the concept of stress, try to answer this “simple” little question:

Exactly what does this script show? Exactly what it is that you are visualizing, here?


I had written a few more notes and inline comments in the script, but have deliberately deleted most of them—or at least the ones which might have given you a clue towards answering the above question. I didn’t want to spoil your fun, that’s why.

Once you all finish giving it a try, I will then post another blog-entry here, giving my answer to that question (and in the process, bringing back all the deleted notes and comments).

Anyway, here is the script:


'''
A simple script to help visualize *something* about
a 2D stress tensor.

--Ajit R. Jadhav. Version: 01 March 2018, 21:39 HRS IST.
'''

import math
import numpy as np
import matplotlib.pyplot as plt

# Specifying the input stress
# Note:
# While plotting, we set the x- and y-limits to -150 to +150,
# and enforce the aspect ratio of 1. That is to say, we do not
# allow MatPlotLib to automatically scale the axes, because we
# want to appreciate the changes in the shapes as well as sizes in
# the plot.
#
# Therefore, all the input stress-components should be kept
# to within the -100 to +100 (both inclusive) range.
#
# Specify the stress state in this order: xx, xy; yx, yy
# The commas and the semicolon are necessary.

sStress = "-100, 45; 90, 25"

axes = plt.axes()
axes.set_xlim((-150, 150))
axes.set_ylim((-150, 150))
axes.set_aspect('equal', 'datalim')
plt.title(
    "A visualization of *something* about\n" \
    "the 2D stress-state [xx, xy; yx, yy] = [%s]" \
    % sStress)

mStress = np.matrix(sStress)
mStressT = np.transpose(mStress)

mUnitNormal = np.zeros((2, 1))
mTraction = np.zeros((2, 1))

nOrientations = 18
dIncrement = 360.0 / float(nOrientations)
for i in range(0, nOrientations):
    dThetaDegrees = float(i) * dIncrement
    dThetaRads = dThetaDegrees * math.pi / 180.0
    mUnitNormal = [round(math.cos(dThetaRads), 6), round(math.sin(dThetaRads), 6)]
    mTraction = mStressT.dot(mUnitNormal)
    if i == 0:
        plt.plot((0, mTraction[0, 0]), (0, mTraction[0, 1]), 'black', linewidth=1.0)
    else:
        plt.plot((0, mTraction[0, 0]), (0, mTraction[0, 1]), 'gray', linewidth=0.5)
    plt.plot(mTraction[0, 0], mTraction[0, 1], marker='.',
             markeredgecolor='gray', markerfacecolor='gray', markersize=5)
    plt.text(mTraction[0, 0], mTraction[0, 1], '%d' % dThetaDegrees)
    plt.pause(0.05)

plt.show()


Update on 02 March 2018, 15:34 IST:

Here is a second version of the script that does something similar (but continues to lack explanatory comments). One advantage of this version is that you can copy-paste the script to a file, say, MyScript.py, and invoke it from the command line, giving the stress components and the number of orientations as command-line inputs, e.g.,

python MyScript.py "100, 0; 0, 50" 12

which makes it easier to try out different states of stress.
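(The reason the commas and the semicolon are necessary, by the way, is that the script passes the string straight to np.matrix(), which accepts a MATLAB-style matrix literal. Here is a minimal illustration; note that np.matrix still works but is deprecated in recent NumPy releases in favour of regular arrays.)

```python
import numpy as np

# np.matrix parses a MATLAB-style string: commas (or spaces) separate
# the columns, and semicolons separate the rows -- which is why the
# script's input format requires them.
mStress = np.matrix("100, 0; 0, 50")

print(mStress.shape)   # (2, 2)
print(mStress[0, 0])   # 100
print(mStress[1, 1])   # 50
```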

The revised code is here:


'''
A simple script to help visualize *something* about
a 2D stress tensor.

--Ajit R. Jadhav. 
History: 
06 March 2018, 10:43 IST: 
In computeTraction(), changed the mUnitNormal code to make it np.matrix() rather than python array
02 March 2018, 15:39 IST: Published the code
'''

import sys
import math
import numpy as np
import matplotlib.pyplot as plt

# Specifying the input stress
# Note:
# While plotting, we set the x- and y-limits to -150 to +150,
# and enforce the aspect ratio of 1. That is to say, we do not
# allow MatPlotLib to automatically scale the axes, because we
# want to appreciate the changes in the shapes as well as sizes in
# the plot.
#
# Therefore, all the input stress-components should be kept
# to within the -100 to +100 (both inclusive) range.
#
# Specify the stress state in this order: xx, xy; yx, yy
# The commas and the semicolon are necessary.
# If you run the program from a command-line, you can also
# specify the input stress string in quotes as the first
# command-line argument, and no. of orientations, as the
# second. e.g.:
# python MyScript.py "100, 50; 50, 0" 12
##################################################

gsStress = "-100, 45; 90, 25"
gnOrientations = 18


##################################################

def plotArrow(vTraction, dThetaDegs, clr, axes):
    dx = round(vTraction[0], 6)
    dy = round(vTraction[1], 6)
    if not (math.fabs(dx) < 10e-6 and math.fabs(dy) < 10e-6):
        axes.arrow(0, 0, dx, dy, head_width=3, head_length=9.0, length_includes_head=True, fc=clr, ec=clr)
    axes.annotate('%d' % dThetaDegs, xy=(dx, dy), color=clr)


##################################################

def computeTraction(mStressT, dThetaRads):
    vUnitNormal = [round(math.cos(dThetaRads), 6), round(math.sin(dThetaRads), 6)]
    mUnitNormal = np.reshape(vUnitNormal, (2,1))
    mTraction = mStressT.dot(mUnitNormal)
    vTraction = np.squeeze(np.asarray(mTraction))
    return vTraction


##################################################

def main():
    axes = plt.axes()
    axes.set_label("label")
    axes.set_xlim((-150, 150))
    axes.set_ylim((-150, 150))
    axes.set_aspect('equal', 'datalim')
    plt.title(
        "A visualization of *something* about\n" \
        "the 2D stress-state [xx, xy; yx, yy] = [%s]" \
        % gsStress)

    mStress = np.matrix(gsStress)
    mStressT = np.transpose(mStress)
    vTraction = computeTraction(mStressT, 0)
    plotArrow(vTraction, 0, 'red', axes)
    dIncrement = 360.0 / float(gnOrientations)
    for i in range(1, gnOrientations):
        dThetaDegrees = float(i) * dIncrement
        dThetaRads = dThetaDegrees * math.pi / 180.0
        vTraction = computeTraction(mStressT, dThetaRads)
        plotArrow(vTraction, dThetaDegrees, 'gray', axes)
        plt.pause(0.05)
    plt.show()


##################################################

if __name__ == "__main__":
    nArgs = len(sys.argv)
    if nArgs > 1:
        gsStress = sys.argv[1]
    if nArgs > 2:
        gnOrientations = int(sys.argv[2])
    main()



OK, have fun, and if you care to, let me know your answers, guess-works, etc…..


Oh, BTW, I have already taken a version of my last post also to iMechanica, which led to a bit of an interaction there too… However, I had to abruptly cut short all the discussions on the topic because I unexpectedly got way too busy in the affiliation- and accreditation-related work. It was only today that I’ve got a bit of a breather, and so could write this script and this post. Anyway, if you are interested in the concept of stress—issues like what it actually means and all that—then do check out my post at iMechanica, too, here [^].


… Happy Holi, take care to use only safe colors—and also take care not to bother those people who do not want to be bothered by you—by your “play”, esp. the complete strangers…

OK, take care and bye for now. ….


A Song I Like:

(Marathi [Am I right?]) “rang he nave nave…”
Music: Aditya Bedekar
Singer: Shasha Tirupati
Lyrics: Yogesh Damle

 

“Math rules”?

My work (and working) specialization today is computational science and engineering. I have taught FEM, and am currently teaching both FEM and CFD.

However, as it so happens, all my learning of FEM and CFD has been derived only through self-studies. I have never sat in a class-room and learnt these topics in the usual learning settings. Naturally, there were, and are, gaps in my knowledge.


The most efficient way of learning any subject matter is through traditional and formal learning—I mean to say, our usual university system. The reason is not just that a teacher is available to teach you the material; even books can do that (and oftentimes, books are actually better than teachers). The real advantage of the usual university education is the existence of those class-mates of yours.

Your class-mates indirectly help you in many ways.

Come the week of those in-semester unit tests, at least in the hostels of Indian engineering schools, everyone suddenly goes into study mode. In the hostel hallways, you casually pass someone by, and he puts a simple question to you. It is, perhaps, his genuine difficulty. You try to explain it to him, and find that there are some gaps in your own knowledge too. After a bit of a discussion, someone else joins in, and then you all have to sheepishly go back to the notes or books, or solve a problem together. It helps all of you.

Sometimes, the friend could be even just showing off to you—he wouldn’t ask you a question if he knew you could answer it. You begin answering in your usual magnificently nonchalant manner, and soon reach the end of your wits. (A XI standard example: If the gravitational potential inside the earth is constant, how come a ball dropped in a well falls down? [That is your friend’s question, just to tempt you in the wrong direction all the way through.]… And what would happen if there is a bore all through the earth’s volume, assuming that the earth’s core is solid all the way through?) His showing off helps you.

No, not every one works in this friendly a mode. But enough of them do that one gets used to this way of studying/learning—a bit too much, perhaps.

And, it is this way of studying which is absent not only in the learning by pure self-studies alone, but also in those online/MOOC courses. That is the reason why NPTEL videos, even if downloaded and available on the local college LAN, never get referred to by individual students working in complete isolation. Students more or less always browse them in groups even if sitting on different terminals (and they watch those videos only during the examination weeks!)

Personally, I had got [perhaps excessively] used to this mode of learning. [Since my Objectivist learning has begun interfering here, let me spell the matter out completely: It’s a mix of two modes: your own studies done in isolation, and also, as an essential second ingredient, your interaction with your class-mates (which, once again, does require the exercise of your individual mind, sure, but the point is: there are others, and the interaction is exposing the holes in your individual understanding).]

It is to this mix that I have got too used to. That’s why, I have acutely felt the absence of the second ingredient, during my studies of FEM and CFD. Of course, blogging fora like iMechanica did help me a lot when it came to FEM, but for CFD, I was more or less purely on my own.

That’s the reason why, even if I am a professor today and even if I am teaching CFD not just to UG but also to PG students, I still don’t expect my knowledge to be as seamlessly integrated as for the other things that I know.

In particular, one such gap came to light recently, and I am going to share my fall—and rise—with you. In all its gloriously stupid details. (Feel absolutely free to stop reading this post—and indeed this blog—at any time.)


In CFD, the way I happened to learn it, I first went through the initial parts (the derivations part) of John Anderson, Jr.’s text. Then, skipping the application of FDM in Anderson’s text more or less in its entirety, I went straight to Versteeg and Malalasekera, and also Jayathi Murthy’s notes at Purdue. As is my habit, I was also flipping through Ferziger and Peric, and some other books in the process, but only to a minor extent. The reason for skipping the rest of the portion in Anderson was that I had gathered that FVM is the in-thing these days. OpenFOAM was already available, and its literature was all couched in terms of FVM, and so it was important to know FVM. Further, I could also see the limitations of FDM (like the requirement of a structured Cartesian mesh, or special mesh mappings, etc.).

Overall, then, I had never read through the FDM modeling of Navier-Stokes until very recent times. The Pune University syllabus still requires you to do FDM, and I thus began reading through the FDM parts of Anderson’s text only a couple of months ago.

It is when I ran into having to run the FDM Python code for the convection-diffusion equation that a certain lacuna in my understanding became clear to me.


Consider the convection-diffusion equation, as given in Prof. Lorena Barba’s Step No.8, here [^]:

\dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + v \dfrac{\partial u}{\partial y} = \nu \; \left(\dfrac{\partial ^2 u}{\partial x^2} + \dfrac{\partial ^2 u}{\partial y^2}\right)  \\  \dfrac{\partial v}{\partial t} + u \dfrac{\partial v}{\partial x} + v \dfrac{\partial v}{\partial y} = \nu \; \left(\dfrac{\partial ^2 v}{\partial x^2} + \dfrac{\partial ^2 v}{\partial y^2}\right)

I had never before actually gone through these equations until last week. Really.

That’s because I had always approached the convection-diffusion system via FVM, where the equation would be put using the Eulerian frame, and it therefore would read something like the following (using the compact vector/tensor notation):

\dfrac{\partial}{\partial t}(\rho \phi) +  \nabla \cdot (\rho \vec{u} \phi)  = \nabla \cdot (\Gamma \nabla \phi) + S
for the generic quantity \phi.

For the momentum equations, we substitute \vec{u} in place of \phi, \mu in place of \Gamma, and S_u - \nabla P in place of S, and the equation begins to read:
\dfrac{\partial}{\partial t}(\rho \vec{u}) +  \nabla \cdot (\rho \vec{u} \otimes \vec{u})  = \nabla \cdot (\mu \nabla \vec{u}) - \nabla P + S_u

For an incompressible flow of a Newtonian fluid, the equation reduces to:

\dfrac{\partial}{\partial t}(\vec{u}) +  \nabla \cdot (\vec{u} \otimes \vec{u})  = \nu \nabla^2 \vec{u} - \dfrac{1}{\rho} \nabla P + \dfrac{1}{\rho} S_u

This was the framework—the Eulerian framework—which I had worked with.

Whenever I went through the literature mentioning FDM for NS equations (e.g. the computer graphics papers on fluids), I more or less used to skip looking at the maths sections, simply because there is such a variety of reporting the NS equations, and those initial sections of the papers all go over the same background material. The meat of the paper comes only later. (Ferziger and Peric, I recall off-hand, mention that there are some 72 different ways of writing down the NS equations.)


The trouble occurred when, last week, I began really reading through (as in contrast to rapidly glancing over) Barba’s Step No. 8 as mentioned above. Let me copy-paste the convection-diffusion equations once again here, for ease of reference.

\dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + v \dfrac{\partial u}{\partial y} = \nu \; \left(\dfrac{\partial ^2 u}{\partial x^2} + \dfrac{\partial ^2 u}{\partial y^2}\right)  \\  \dfrac{\partial v}{\partial t} + u \dfrac{\partial v}{\partial x} + v \dfrac{\partial v}{\partial y} = \nu \; \left(\dfrac{\partial ^2 v}{\partial x^2} + \dfrac{\partial ^2 v}{\partial y^2}\right)

Look at the left hand-side (LHS for short). What do you see?

What I saw was an application of the following operator—an operator that appears only in the Lagrangian framework:

\dfrac{\partial}{\partial t} + (\vec{u} \cdot \nabla)

Clearly, according to what I saw, the left hand-side of the convection-diffusion equation, as written above, is nothing but this operator, as applied to \vec{u}.
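Indeed, in 2D, with \vec{u} = (u, v), expanding the operator as applied to the x-component u reproduces that LHS term by term (and similarly for v):

\left[ \dfrac{\partial}{\partial t} + (\vec{u} \cdot \nabla) \right] u = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + v \dfrac{\partial u}{\partial y}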

And with that “vision,” began my fall.

“How can she use the Lagrangian expression if she is going to use a fixed Cartesian—i.e. Eulerian—grid? After all, she is doing FDM here, isn’t she?” I wondered.

If it were to be a computer graphics paper using FDM, I would have skipped over it, presuming that they would sure transform this equation to the Eulerian form some time later on. But, here, I was dealing with a resource for the core engineering branches (like mech/aero/met./chem./etc.), and I also had a lab right this week to cover this topic. I couldn’t skip over it; I had to read it in detail. I knew that Prof. Barba couldn’t possibly make a mistake like that. But, in this lesson, even right up to the Python code (which I read for the first time only last week), there wasn’t even a hint of a transformation to the Eulerian frame. (Initially, I even did a search on the string: “Euler” on that page; no beans.)

There must be some reason to it, I thought. Still stuck with reading a Lagrangian frame for the equation, I then tried to imagine a reasonable interpretation:

Suppose there is one material particle at each of the FDM grid nodes? What would happen with time? Simplify the problem all the way down. Suppose the velocity field is already specified at each node as the initial condition, and we are concerned only with its time-evolution. What would happen with time? The particles would leave their initial nodal positions, and get advected or diffused away. In a single time-step, they would reach their new spatial positions. If the problem data are arbitrary, their positions at the end of the first time-step wouldn’t necessarily coincide with grid points. If so, how can she begin her next time iteration starting from the same grid points?

I had got stuck.

I thought through it twice, but with the same result. I searched through her other steps. (Idly browsing, I even looked up her CV: PhD from CalTech. “No, she couldn’t possibly be skipping over the transformation,” I distinctly remember telling myself for the nth time.)

Faced with a seemingly unyielding obstacle, I had to fall back on my default mode of learning—viz., the “mix.” In other words, I had to talk about it with someone—anyone—anyone who would have enough context. But no one was available. The past couple of days being holidays at our college, I was at home, and thus couldn’t catch hold of even my poor UG students.

But talking, I had to do. Finally, I decided to ask someone about it by email, and so, looked up the email ID of a CFD expert, and asked him if he could help me with something that is [and I quote] “seemingly very, very simple (conceptual) matter” which “stumps me. It is concerned with the application of Lagrangian vs. Eulerian frameworks. It seems that the answer must be very simple, but somehow the issue is not clicking-in together or falling together in place in the right way, for me.” That was yesterday morning.

It being a weekend, his reply came fairly rapidly, by yesterday afternoon (I re-checked emails at around 1:30 PM); he had graciously agreed to help me. And so, I rapidly wrote up a LaTeX document (for the equations) and sent it to him as soon as I could. That was yesterday, around 3:00 PM. Satisfied that I was finally talking to someone, I had a late lunch, and then crashed for a nice siesta. … Holidays are niiiiiiiceeeee….

Waking up at around 5:00 PM, the first thing I did, while sipping a cup of tea, was to check the emails: no reply from him. Not expected this soon anyway.

Still lingering in the daze of that late lunch and the siesta, idly, I had a second look at the document which I had sent. In that problem-document, I had tried to make the comparison as easy to see as possible, and so, I had taken care to write down the particular form of the equation that I was looking for:

\dfrac{\partial u}{\partial t} + \dfrac{\partial u^2}{\partial x} + \dfrac{\partial uv}{\partial y} = \nu \; \left(\dfrac{\partial ^2 u}{\partial x^2} + \dfrac{\partial ^2 u}{\partial y^2}\right)  \\  \dfrac{\partial v}{\partial t} + \dfrac{\partial uv}{\partial x} + \dfrac{\partial v^2}{\partial y} = \nu \; \left(\dfrac{\partial ^2 v}{\partial x^2} + \dfrac{\partial ^2 v}{\partial y^2}\right)

“Uh… But why would I keep the product terms like u^2 inside the finite difference operator?” I now asked myself, still in the lingering haze of the siesta. “Wouldn’t it complicate, say, specifying boundary conditions and all?” I was trying to pick up my thinking speed. Still yawning, I idly took a piece of paper, and began jotting down the equations.

And suddenly, even before I had finished writing down the very brief working-out by hand, the issue had already become clear to me.

Immediately, I made myself another cup of tea, and while still sipping it, launched TexMaker, wrote another document explaining the nature of my mistake, and attached it to a new email to the expert. “I got it” was the subject line of this new email. Hitting the “Send” button, I noticed the time: around 7 PM.

Here is the “development” I had noted in that document:

Start with the equation for momentum along the x-axis, expressed in the Eulerian (conservation) form:

\dfrac{\partial u}{\partial t} + \dfrac{\partial u^2}{\partial x} + \dfrac{\partial uv}{\partial y} = \nu \; \left(\dfrac{\partial ^2 u}{\partial x^2} + \dfrac{\partial ^2 u}{\partial y^2}\right)

Consider only the left hand-side (LHS for short). Instead of treating the product terms u^2 and uv as final variables to be discretized immediately, use the product rule of calculus in the same Eulerian frame, rearrange, and apply the zero-divergence property for the incompressible flow:

\text{LHS} = \dfrac{\partial u}{\partial t} + \dfrac{\partial u^2}{\partial x} + \dfrac{\partial uv}{\partial y}  \\  = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + u\dfrac{\partial u}{\partial x} + u \dfrac{\partial v}{\partial y} + v \dfrac{\partial u}{\partial y}  \\  = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + u \left[\dfrac{\partial u}{\partial x} + \dfrac{\partial v}{\partial y} \right] + v \dfrac{\partial u}{\partial y}  \\  = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + u \left[ 0 \right] + v \dfrac{\partial u}{\partial y}; \qquad\qquad \because \nabla \cdot \vec{u} = 0 \text{~if~} \rho = \text{~constant}  \\  = \dfrac{\partial u}{\partial t} + u \dfrac{\partial u}{\partial x} + v \dfrac{\partial u}{\partial y}

We have remained in the Eulerian frame throughout these steps, and yet the final equation we got in the end happens to be identical to that for the Lagrangian frame! Magic!!

The “magic” occurs only because the flow is incompressible. For a compressible flow, the equations would continue to look different, because \rho would be a variable, and so it would have to be accounted for with a further application of the product rule while evaluating \frac{\partial}{\partial t}(\rho u), \frac{\partial}{\partial x}(\rho u^2), \frac{\partial}{\partial y}(\rho uv), etc.

But as it so happens, for the current case, even if the final equations look exactly the same, we should not supply the same physical imagination.

We don’t imagine the Lagrangian particles at nodes. Our imagination continues remaining Eulerian throughout the development, with our focus not on the advected particles’ positions but on the flow variables u and v at the definite (fixed) points in space.
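Incidentally, the little derivation above can also be checked mechanically. Here is a quick sketch of such a check (my own addition, using the sympy library; not part of the document I sent to the expert): the conservative-form LHS and the advective-form LHS differ by exactly u times the divergence of the velocity field, which is what vanishes for the incompressible flow.

```python
# Mechanical check of the above identity, using sympy (a sketch,
# not part of the original exchange).
import sympy as sp

x, y, t = sp.symbols('x y t')
u = sp.Function('u')(x, y, t)
v = sp.Function('v')(x, y, t)

# Conservative (Eulerian) form of the LHS
lhs_conservative = sp.diff(u, t) + sp.diff(u**2, x) + sp.diff(u*v, y)
# Advective ("Lagrangian-looking") form of the LHS
lhs_advective = sp.diff(u, t) + u*sp.diff(u, x) + v*sp.diff(u, y)
# Divergence of the velocity field
divergence = sp.diff(u, x) + sp.diff(v, y)

# The two forms differ by u * (div u); setting div u = 0 makes them equal
assert sp.simplify(lhs_conservative - lhs_advective - u*divergence) == 0
print("identity verified")
```

The assertion passes: the difference simplifies identically to u times the divergence, term by term, exactly as in the hand-worked steps above.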


Sometimes, just expressing your problem to someone else itself pulls you out of your previous mental frame, and that by itself makes the problem disappear—in other words, the problem gets solved without your “solving” it. But to do that, you need someone else to talk to!


But how could I make so stupid and simple a mistake, you ask? This is something even a UG student at an IIT would be expected to know! [Whether they always do or not is a separate issue.]

Two reasons:

First: As I said, there are gaps in my knowledge of computational mechanics. More gaps than you would otherwise expect, simply because I had never had class-mates with whom to discuss my learning of computational mechanics, esp. CFD.

Second: I was getting deeper into SPH in the recent weeks, and thus was biased to read that expression only in the Lagrangian framework.

And a third, more minor reason: One tends to be casual with the online resources. “Hey, it is available online already. I could reuse it in a jiffy, if I want.” You say that every time you visit the resource, but indefinitely postpone actually reading through it. That’s the third reason.


And if I could make so stupid a mistake, and hold on to it for such a long time (a day or so), how could I then see through it, even if only eventually?

One reason: Crucial to that development is the observation that the divergence of velocity is zero for an incompressible flow. My mind was trained to look for it.

Even if the Pune University syllabus explicitly states that derivations will not be asked in the examinations, just for the sake of solidity in students’ understanding, I had worked through all the details of all the derivations in my class. During those routine derivations, you do use this crucial property in simplifying the NS equations, but on the right hand-side, i.e., for the surface forces term (while simplifying for the Newtonian fluid). Anderson does not work it out fully [see his p. 66], nor do Versteeg and Malalasekera. But I anyway had worked it out fully, in my class… So, my mind was already trained and primed for this observation.

So, it was easy enough to spot the same pattern—even before jotting it down on paper—once it began appearing on the left hand-side of the same equation.

Hard-work pays off—if not today, tomorrow.


CFD books always emphasize the idea that the 4 combinations produced by (i) differential-vs-integral forms and (ii) Lagrangian-vs-Eulerian forms all look different, and yet, they still are the same. Books like Anderson’s take special pains to emphasize this point.

Yes, in a way, all equations are the same: all the four mathematical forms express the same physical principle.

However, when seen from another perspective, what we saw above was an example of two equations which look exactly the same in every respect, but in fact are not to be viewed as such.

One way of reading this equation is to imagine inter-connected material particles getting advected according to that equation in their local framework. Another way of reading exactly the same equation is to imagine a fluid flowing past those fixed FDM nodes, with only the nodal flow properties changing according to that equation.

Exactly the same maths (i.e. the same equation), and of course, also the same physical principle, but a different physical imagination.

And you want to tell me “math [sic] rules?”


I Song I Like:

(Hindi) “jaag dil-e-deewaanaa, rut jaagee…”
Singer: Mohammed Rafi
Music: Chitragupt
Lyrics: Majrooh Sultanpuri

[As usual, may be another editing pass…]

[E&OE]

CFD Code Snippets—3: First-order wave equation, first-order upwind scheme

The first-order (i.e. one-way) wave equation in 1D is given by:

\dfrac{\partial T}{\partial t} + c \dfrac{\partial T}{\partial x} = 0,

where T is the transverse displacement now, not temperature, and c is a constant.

The equation describes a pulse moving at a constant speed c to the right. If c < 0, the pulse moves to the left. Gilbert Strang [^] describes this equation with terms like (momentary) “flash of light” and “wall of water.”
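For reference (a standard fact about this equation, not something quoted from Barba’s notes): the general solution of the one-way wave equation \dfrac{\partial T}{\partial t} + c \dfrac{\partial T}{\partial x} = 0 is

T(x, t) = f(x - ct)

where f is the initial profile T(x, 0). Every initial shape simply translates rigidly at the speed c, without any change of form—which is exactly Strang’s “wall of water” picture.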

It is with this problem that Prof. Lorena Barba chooses to begin [^] her 12 Steps program [^]. However, I am not so sure whether it is better to begin the course with convection rather than with diffusion.

The differential terms in convection are all of the first order, whereas those in diffusion are mixed: first-order in time and second-order in space. It would thus appear that convection is a simpler phenomenon than diffusion. In a certain sense, it is.

However, it so happens that in a course on CFD, the linear convection is only a stepping stone to the nonlinear convection. Seen from this viewpoint, it’s better to first cover diffusion, and only then go over to convection. This way, it becomes easier for the student to compare and contrast the nonlinear convection with the linear convection.

Currently, one way or the other, I am constrained to teach only according to the SPPU syllabus. So, one way or the other, the first-order wave equation could have come only after the diffusion equation.

No, don’t give too much credit to the S. P. Pune University. The sequence wouldn’t be a conscious choice of the syllabus framers. The most probable scenario is that they must have just arranged and rearranged the material until they were satisfied that every committee-member’s suggestion was taken into consideration without any consideration of merit, and that all the six units carried more or less the same amount of material. Or, less likely, until the point that they themselves got tired of arguing with each other. In any case, none could possibly have thought of ordering the material according to a proper hierarchy.

Anyway, here we go with the Python code. (I hurriedly rewrote it afresh after the recent HDD crash, and so, comments and all could be refined a bit further. As to bugs: You be the tester.)


#################################################################
'''
First-Order Wave Equation, using First-Order Upwind Scheme
Code written by and copyright (c)Ajit R. Jadhav.

Based on the code by Prof. Lorena Barba (12 Steps to NS)
For theory, also see notes/resources on the 'net, such as:
Gilbert Strang: http://ocw.mit.edu/courses/mathematics/18-086-mathematical-methods-for-engineers-ii-spring-2006/readings/am52.pdf
Olver: http://www.math.umn.edu/~olver/num_/lnp.pdf
Gardner: https://math.la.asu.edu/~gardner/wave.pdf
and others.

Version: 29-March-2016.
'''
#################################################################

import numpy
from matplotlib import pyplot as plt
import random

#################################################################
#  Program Data. All units are in SI
#

# Physical time over which the simulation is to be run, in s
totalTime = 2.0

# Domain size, in meters
domLen = 1.0

# Signal velocity of the pulse, m/s
c = 1.0

# Number of cells
nc = 50
# Number of nodes
nx = nc + 1
# Compute the width of each cell
dx = domLen / nc

# Vertical (i.e. transverse) wave displacements in m, to use
# in constructing the initial and boundary conditions
T_Level_1 = 0.0
T_Level_2 = 0.1

# Create the numpy arrays, initialized to zero, for holding
# the primary variable values (here, transverse displacements)
# The main array, holds the values after time-marching through one step
T = numpy.zeros(nx)
# A temporary array, holds the values at the previous time-step
Tn = numpy.zeros(nx)

# Compute the duration of each time-step, and the number of time-steps required
# Stability criterion: CFL \equiv c \dfrac{\Delta t}{\Delta x}
# CFL <= 1.0 for stability, but the value of 1.0 here is special!
# Try other values of CFL < 1.0, a few values > 1.0, and a value of exactly 1.0
# Explain the smoothening at CFL < 1.0.
# Explain the instability at CFL > 1.0.
# Explain the ``oddity'' of the result at CFL = 1.0.
CFL = 0.25

print("Assumed stability criterion value: %lf" % CFL)
# Given the above CFL value, back-calculate the time-step duration
dt = CFL * dx / c
print("Duration of each time-step in s, after applying the stability criterion: %lf" % dt)
# Compute the total no. of time-steps (an integer) to fit within the given time-interval
nt = int(totalTime / dt)
print("Number of time-steps required to step through: %d" % nt)

#################################################################
#  Setting up of the initial condition:
#
#  Uncomment the required IC and keep others commented

#########################
# IC1: A single pulse in the middle
# TL = T_Level_1
# TR = T_Level_1
# T[ nx//2 ] = T_Level_2

#########################
# IC2: A step function in the middle
# TL = T_Level_1
# TR = T_Level_1
# T[ nx//3 : 2*nx//3 ] = T_Level_2

#########################
# IC3: Left half at tempLevel_1; right half at tempLevel_2
# TL = T_Level_1
# TR = T_Level_2
# T[0: nx / 2] = TL
# T[nx / 2 + 1: nx] = TR

#########################
# IC4: An arbitrary function
random.seed(10)
TL = T_Level_1
TR = T_Level_1
for i in range(nc//5, nc//5 + 5):
    T[i] = T_Level_2 * random.random()

print("Initial condition is:")
print(T)
#
# Setting up of the initial condition is over
#################################################################

# Create the x-axis vector, for plotting purposes
xAxis = numpy.linspace(0, domLen, nx)

# Total time elapsed---physical time
tt = 0.0
plt.title("1D First-Order Wave Equation using\nFirst-Order Upwind Scheme. CFL: %5.2lf. Time: %5.2lf s" % (CFL, tt))
plt.xlabel("Distance, in m")
plt.ylabel("Transverse Displacement, in m")
plt.xlim(0.0, domLen)
plt.ylim(T_Level_1, T_Level_2 * 1.5)
plt.plot(xAxis, T, 'r')

#################################################################
#  Time-stepping, using upwind scheme
#

for n in range(nt):
    Tn = T.copy()
    # Note: at i = 0, the term Tn[i - 1] picks up Tn[-1], i.e. the last
    # node; this trick, specific to Python's negative indexing, makes the
    # left boundary effectively periodic. The usual way of writing the
    # loop would be: for i in range(1, nx)
    for i in range(0, nx):
        # The following line is where the first-order upwind scheme
        # computation is done. Modify this one line to implement other
        # schemes like Lax-Wendroff, etc.
        T[i] = Tn[i] - CFL * (Tn[i] - Tn[i - 1])

    tt = dt * (n + 1)
    plt.clf()
    plt.title("1D First-Order Wave Equation using\nFirst-Order Upwind Scheme. CFL: %5.2lf. Time: %5.2lf s" % (CFL, tt))
    plt.xlabel("Distance, in m")
    plt.ylabel("Transverse Displacement, in m")
    plt.xlim(0.0, domLen)
    plt.ylim(T_Level_1, T_Level_2 * 1.5)
    plt.plot(xAxis, T, 'k')

    plt.pause(0.1)

plt.plot(xAxis, T)
plt.show()

print("done")


Some things to try:

  1. Straight-forward: As mentioned in the code, change the CFL to various values all of which are strictly less than 1.0. Then, change CFL to a few values greater than 1.0. Draw conclusions about stability.
  2. Not so straight-forward: Then, also set the CFL exactly to 1.0, and see what happens. Did you expect the outcome? Why or why not?
  3. Programming-related: Implement the Lax-Wendroff/MacCormack scheme.
  4. The smoothing that occurs at CFL values < 1.0 is due to a phenomenon called false diffusion. Explain the similarities and differences between this false diffusion and the true diffusion (say, as modeled in steps 1 and 2 of this series).
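Regarding item 4 above, a hint from the standard modified-equation (Taylor-series) analysis of the first-order upwind scheme, which I state here from memory:

\dfrac{\partial T}{\partial t} + c \dfrac{\partial T}{\partial x} = \dfrac{c \, \Delta x}{2} \left( 1 - \text{CFL} \right) \dfrac{\partial^2 T}{\partial x^2} + \text{higher-order terms}

The right hand-side is a diffusion-like term, with an effective diffusion coefficient that vanishes exactly at CFL = 1.0. That, incidentally, is also a hint for item 2.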

A Song I Like:

(Marathi) “dis chaar jhaale mann…”
Music: Ashok Patki
Lyrics: Soumitra
Singer: Sadhana Sargam


[Minor editing may happen…]

[E&OE]

CFD Code Snippets—2: Transients in the Couette flow, Crank-Nicolson

Here is some Python code for modeling transients in the pure shear-driven Couette flow between two infinite horizontal flat plates.

Initially, both the plates are stationary. Then, at time t = 0, the upper plate is suddenly set into motion with a constant speed u = ue. Due to the no-slip boundary condition between the fluid and the upper plate, the topmost layer of the fluid, too, is set into motion with the same speed, i.e., u = ue. Due to viscosity (i.e. friction internal to the fluid) this topmost layer tugs on to the next layer to move in the same direction, too. (That’s sort of like how when you pull a book from the middle of the stack, the books below it tend to come out, too.) Thus, the next layer also begins to move. Then, this second layer, in turn, nudges the third layer to move. …

Yes, it is turtles—but not all the way down.

The bottom-most fluid layer feels the tug not only of the layer immediately above it, but also the cent-per-cent tug of the no-slip boundary condition of the solid plate just below it. And the lower plate remains stationary throughout.

To cut a long story short, if the situation were steady-state, i.e., if the upper plate were always moving (at a constant speed ue) and the bottom plate stationary, then the velocity profile developed would be in the shape of a straight line.

However, if both plates are initially stationary, and then if only the upper plate is suddenly moved, then the velocity profile would initially look like the Greek letter \Gamma. With the passage of time, the profile would first lose the sharp 90^{\circ} angle, becoming a bit soft at that corner. Then, the curvature at the corner would become more and more diffuse. The curved profile would become less and less curvy, and finally, it would reach the straight line of the steady-state.

The code below shows how.

The Couette flow is important because it is one of the very, very (, very, very, very) rare cases in which the Navier-Stokes equations can (at all) be solved “analytically.”

The reason is, for this kind of a Couette flow—the one driven purely by the shear force of the upper plate (i.e., when no pressure gradients are applied in the horizontal direction)—the Navier-Stokes equations reduce to nothing but (our dear old friend): diffusion equation!
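As an aside, here is a small sketch of that analytical solution (the series below is the standard separation-of-variables result for the suddenly-started Couette flow, written down from memory, so treat it as an assumption to verify rather than a quotation from Anderson’s text). It can be used to cross-check the numerical output:

```python
# Analytical transient Couette-flow profile (a sketch from memory,
# not taken from Anderson's text):
#   u(y,t)/ue = y/D + (2/pi) * sum_{n>=1} ((-1)^n / n)
#               * exp(-n^2 pi^2 nu t / D^2) * sin(n pi y / D)
import numpy as np

def couette_transient(y, t, D=0.1, ue=0.05, nu=8.90e-4/998.2, nterms=200):
    """Series solution, truncated at nterms terms; y is a numpy array."""
    s = y / D  # dimensionless distance
    u = s.copy()
    for n in range(1, nterms + 1):
        u += (2.0 / np.pi) * ((-1)**n / n) \
             * np.exp(-n**2 * np.pi**2 * nu * t / D**2) \
             * np.sin(n * np.pi * s)
    return ue * u

y = np.linspace(0.0, 0.1, 21)
# At large times the transient dies out and the profile approaches
# the straight line ue * y / D of the steady state.
u_late = couette_transient(y, t=2.0e4)
assert np.allclose(u_late, 0.05 * y / 0.1, atol=1e-6)
```

The series satisfies the no-slip conditions at both plates for all t, vanishes in the interior at t = 0, and decays to the linear steady-state profile, i.e., exactly the \Gamma-to-straight-line evolution described above.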

We did model the transient diffusion equation the last time, here [^]. However, the discretization approach used back then was FTCS—which is an explicit approach. The one now being used is: the Crank-Nicolson method—which is an implicit approach. FTCS is only conditionally stable, whereas CN is (at least for the diffusion equation, and at least within all reasonable limits) unconditionally stable. (Check out Flow Science’s take on the explicit vs. implicit issue, here [^].) That makes it interesting.
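To make the stability comparison concrete, here are the von Neumann amplification factors for the 1D diffusion equation (standard results, with r \equiv \nu \Delta t / (\Delta x)^2):

G_{\text{FTCS}} = 1 - 4 r \sin^2 \left( \dfrac{\theta}{2} \right) \qquad G_{\text{CN}} = \dfrac{1 - 2 r \sin^2 (\theta/2)}{1 + 2 r \sin^2 (\theta/2)}

FTCS keeps |G| \leq 1 only for r \leq 1/2, whereas the CN factor has magnitude at most one for every r > 0—which is the unconditional stability.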

The code once again comes with more comments on more issues than you want to read and/or know about.

The particular test case currently hard-coded is the one from John D. Anderson, Jr.’s celebrated text [^]. You can compare the output values produced by this program with the listing given in Anderson’s text (Table 9.1, p. 430.)

The Python code here even displays a MatPlotLib plot which you can directly compare with the fig. 9.4 in Anderson’s text (p. 431).

Anyway, here is the code, without further ado:

#################################################################
# Transients in the pure shear-driven Couette flow between two
# infinite flat plates, modeled using the Crank-Nicolson method.
# Here we generally follow the treatment and terminology as
# used in the text by John D. Anderson, Jr. However, please
# note, unlike in the book, here we use the _zero_-based
# index, _not_ 1-based. 
# Code written by and copyright (c)Ajit R. Jadhav.
# Version: 03-Feb-2016.
#################################################################

# Importing of modules and routines
# We use numpy arrays for efficiency
import numpy
# We use a neat plotting library
from matplotlib import pyplot as plt
from TDMASolver import TDMASolver

#################################################################
# Program Data. All units are in SI.
#
# Gap between the parallel plates, in m
D = 0.1
# Number of cells along the y-axis
nc = 20  # 4
# Number of nodes along the y-axis
N = nc + 1
# Compute the height of each cell
dy = D / nc
# Material properties
# The values below are for water
# Density, in kg / m^3
rho = 998.2
# viscosity, in Pa-s (or kg/(m.s))
mu = 8.90e-4
# speed of the moving (upper) plate, in m/s
ue = 0.05
# Compute the effective Reynolds' number
Re = rho * ue * D / mu
print("Calculated Re: %lf" % Re)

#################################################################
# Validation
# 1. The program output has been validated against manually
# worked out solutions for the first two time-steps, for these
# values of parameters: N=5, Re = 100.0, E = 1.0. For this set
# of values, and given the default truncation that happens in the
# Python print function, the convergence to the steady-state
# solution occurs at the 30th time-step.
#
# 2. Also validated with the case given in John D. Anderson's
# text. The values of parameters are: N=21, Re = 5000, nt =1.
# The program output matches the solution listing given on
# p. 430 in the book.
#################################################################

# For validation purposes, we will ignore the above-calculated
# Re, and instead set the variable with a simple value.
# Anderson's book uses 5000---quite close to the program data
# assumed above.
Re = 5000.0

# CN being implicit, has unconditional stability. However, for
# ease of comparisons, we define a parameter E anyway.
# $E \equiv \dfrac{\Delta t}{(Re)(\Delta y)^2}$
# Anderson's book uses E = 1.0
E = 1.0

# Number of time-steps to take
# Anderson's book shows graph up to 240 time-steps
nt = 241  # 36

print("Assumed Re: %lf, E: %lf, nt: %d" % (Re, E, nt))

# Calculate the time-step duration
dt = E * Re * dy * dy
print("Physical time elapsed in marching one computational time-step: %lf seconds." % dt)

#################################################################
# Create the numpy arrays, initialized to zero, for holding
# the primary variable values (here, u)

# The main array, holds the values after time-marching through one step
u = numpy.zeros(N)
# A temporary array, holds the values at the previous time-step
un = numpy.zeros(N)

#################################################################
#  Setting up of the initial condition:
#
u[N - 1] = 1.0
print("Initial u: ", u)
#################################################################

# Create the y-axis vector, for plotting purposes
yAxis = numpy.linspace(0.0, 1.0, N)
plt.title("Transients in the pure shear-driven Couette flow,\nusing the Crank-Nicolson method")
plt.xlabel("Dimensionless horizontal velocity (u/ue)")
plt.ylabel("Dimensionless distance (y/D)")
plt.grid(True)
plt.plot(u, yAxis, 'r')

# Array containing the indices of time-steps at which the
# solution should be plotted:
ShowPlotInstants = [0, 2, 12, 36, 60, 240]

#################################################################
# Using the CN (Crank-Nicolson) scheme, for the Couette Flow
#
# Compute A and B; they form a tri-diagonal matrix as below:
# B A
# A B A
#   A B A
A = -E / 2.0
B = 1.0 + E

print("A: %lf, B: %lf" % (A, B))

# a, b, c, d are the 1D arrays that define a tri-diagonal
# matrix. The tri-diagonal matrix in general looks like:
# a0|b0 c0 0 |  |x0| |d0|
#   |a1 b1 c1|  |x1|=|d1|
#   |0  a2 b2|c2|x2| |d2|
# a0 and c2 are not included in the tri-diagonal system, but
# the TDMASolver function assumes that these memory
# locations exist in the a and c arrays. Further, the
# algorithm employs the array indices in such a way that
# the first used element of the array a is a[1], not a[0],
# and similarly, the last used element of the array c is
# c[2], not c[1].
#
# For N-grid points, for the transient Couette flow problem,
# we get a system of N-2 X N-2 size, with the _global_ and
# _zero_-based _index_ going over [1, N-2]. Thus, if the
# number of grid points N = 5, we get [1,2,3].
a = numpy.zeros(N - 2)
b = numpy.zeros(N - 2)
c = numpy.zeros(N - 2)
d = numpy.zeros(N - 2)

# Initialize the a, b and c arrays
a[0:N - 2] = A
b[0:N - 2] = B
c[0:N - 2] = A

#################################################################
# Here begins the actual time-stepping...
nShowPlotIndex = 0
for n in range(nt):
    # We first save the known values of u at time n, to un,
    un = u.copy()
    # and apply the Dirichlet boundary conditions
    # to the solution at the time-step n
    un[0] = 0.0
    un[N - 1] = 1.0

    # The right hand-side vector on the CN is calculated here
    # Anderson calls the variable K
    # Note carefully the indices here. The indices used for Kj
    # are straight-forward from Anderson's book, except that here
    # they are redone as 0-based.
    # But the real change is on the index used on d. It is one
    # less, because here we don't at all store the first and the
    # last rows of the total system in it.
    for j in range(1, N - 1):
        Kj = (1.0 - E) * un[j] + E / 2.0 * (un[j + 1] + un[j - 1])
        d[j - 1] = Kj
    # For the same reason spelt above, note the indices: They are
    # respectively 1 less and 2 less.
    d[0] = d[0] - a[0] * un[0]
    d[N - 3] = d[N - 3] - c[N - 3] * un[N - 1]
    # print( d )

    # Realize, the solution array returned here has values only for
    # the interior nodes. Its elements do not include the two
    # boundary nodes.
    soln = TDMASolver(a, b, c, d)
    for j in range(1, N - 1):
        u[j] = soln[j - 1]

    # We skip plotting for all the time-steps except for those given
    # in the ShowPlotInstants array. The values are for the graph in
    # Anderson's text on p. 431
    if n == ShowPlotInstants[nShowPlotIndex]:
        nShowPlotIndex = nShowPlotIndex + 1
        # Uncomment the following line for erasing out any earlier plots
        # plt.clf()
        # Uncomment the following line if you want to print T to console
        # print( T )
        plt.plot(u, yAxis, 'k')
        plt.pause(0.1)
        print("\nVelocity profile after time-step no. %d (or physical seconds: %lf)" % (n, float(n) * dt))
        print(u)

plt.plot(u, yAxis, 'b')
plt.show()

print("done")
#################################################################
#################################################################


The above code calls on a TDMASolver() function written in a Python module of the same name. Here is the listing for that.

Initially, I tried writing my own TDMA (i.e. Thomas algorithm) code, but found that it was getting too time-consuming. So, I did the smart thing. I searched on the ‘net, found Ofan Tau’s code, here [^], and copy-pasted it!

One funny thing here was that Ofan in turn refers to some code on a Wikibooks page (here [^]), but while Ofan’s code is correct, the code on the Wikibooks page (at least the Python version of it) is not.

Incidentally, the best resource on TDMA aka Thomas’ algorithm that I found was one .PDF document by Prof. William T. Lee of the University of Limerick, Ireland, here [(.PDF) ^]. Keep it handy while going through this code.
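For quick reference while reading the code below, the two passes of the Thomas algorithm, in the same a, b, c, d notation and with the 0-based indices as used in the code (n being the system size):

b'_i = b_i - \dfrac{a_i}{b'_{i-1}} c_{i-1}, \qquad d'_i = d_i - \dfrac{a_i}{b'_{i-1}} d'_{i-1}, \qquad i = 1, \ldots, n-1  \\  x_{n-1} = \dfrac{d'_{n-1}}{b'_{n-1}}, \qquad x_i = \dfrac{d'_i - c_i \, x_{i+1}}{b'_i}, \qquad i = n-2, \ldots, 0

The forward pass eliminates the sub-diagonal; the backward pass then solves the resulting upper-triangular system.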

#################################################################
# TDMASolver.py
#
# Solves a Tri-Diagonal Matrix system.
#
# a, b, c, d are the 1D arrays that define a tri-diagonal
# matrix. The tri-diagonal matrix in general looks like:
# a0|b0 c0 0 |  |x0| |d0|
#   |a1 b1 c1|  |x1|=|d1|
#   |0  a2 b2|c2|x2| |d2|
# a0 and c2 are not included in the tri-diagonal system, but
# the TDMASolver function assumes that these memory
# locations exist in the a and c arrays. Further, the
# algorithm employs the array indices in such a way that
# the first used element of the array a is a[1], not a[0],
# and similarly, the last used element of the array c is
# c[2], not c[1].
#
# The code here was taken from Ofan Tau's blog post:
# http://ofan666.blogspot.in/2012/02/tridiagonal-matrix-algorithm-solver-in.html
# Ofan's blog post makes reference to:
# https://en.wikibooks.org/wiki/Algorithm_Implementation/Linear_Algebra/Tridiagonal_matrix_algorithm
# It so happens that the _Python_ code at the wikibooks URL is
# _wrong_, but Ofan's code is correct (though it does not check
# for stability or even just diagonal dominance.)
# I have mostly kept Ofan's code as is; just added one or two
# comments.
#
# BTW, a wonderfully illustrated (and as far as I can tell,
# correct) reference for understanding TDMA is a .PDF document
# by Prof. William T. Lee (of Industrial Mathematics unit,
# University of Limerick, Ireland):
# http://www3.ul.ie/wlee/ms6021_thomas.pdf
#
#  -- Ajit R. Jadhav
# Version 03-February-2016
#################################################################

import numpy


#################################################################
# This helper function was written by Ajit R. Jadhav
# It uses the a, b and c vectors to fill a full-size matrix.
# It allows for matrix visualization. To be used mainly for
# debugging purposes.
def FillM(DM, a, b, c):
    N = len(a)
    DM[0][0] = b[0]
    DM[0][1] = c[0]
    for i in range(1, N - 1):
        DM[i][i - 1] = a[i]
        DM[i][i] = b[i]
        DM[i][i + 1] = c[i]
    DM[N - 1][N - 2] = a[N - 1]
    DM[N - 1][N - 1] = b[N - 1]
    return DM


#################################################################
# This function was shamelessly lifted from Ofan's blog-post:
# http://ofan666.blogspot.in/2012/02/tridiagonal-matrix-algorithm-solver-in.html,
# accessed 02-February-2016
#################################################################
def TDMASolver(a, b, c, d):
    '''
    TDMA solver, a b c d can be NumPy array type or Python list type.
    refer to http://en.wikipedia.org/wiki/Tridiagonal_matrix_algorithm

    # a, b, c, d are the 1D arrays that define a tri-diagonal
    # matrix. The tri-diagonal matrix in general looks like:
    # a0|b0 c0 0 |  |x0| |d0|
    #   |a1 b1 c1|  |x1|=|d1|
    #   |0  a2 b2|c2|x2| |d2|
    # a0 and c2 are not included in the triadiagonal system, but
    # the TDMASolver function assumes that these memory
    # locations exist in the a and c arrays. Further, the
    # algorithm employs the array indices in such a way that
    # the first used element of the array a is a[1], not a[0],
    # and similarly, the last used element of the array c is
    # c[2], not c[1].

    # One test case is this:
    # a = [0, -1.0, -1.0, -1.0]
    # b = [4.0,  4.0,  4.0,  4.0]
    # c = [-1.0, -1.0, -1.0,  0.0]
    # d = [5.0,  5.0, 10.0, 23.0]
    # result: [2, 3, 5, 7]
    '''
    # number of equations
    nf = len(a)
    # print( "In TDMASolver.py: Size of the system: %d" % nf)

    # Make a copy of the arrays
    ac, bc, cc, dc = map(numpy.array, (a, b, c, d))

    ##############################################
    # The forward-elimination pass: reduce the
    # matrix to upper-triangular form
    for i in range(1, nf):
        mc = ac[i] / bc[i - 1]
        bc[i] = bc[i] - mc * cc[i - 1]
        dc[i] = dc[i] - mc * dc[i - 1]

    ##############################################
    # The back-substitution pass:
    # Reuse ac's storage for the solution vector xc.
    # (Note: in Python, the -1 index refers to the last element.)
    xc = ac
    xc[-1] = dc[-1] / bc[-1]

    for j in range(nf - 2, -1, -1):
        xc[j] = (dc[j] - cc[j] * xc[j + 1]) / bc[j]

    # DM = numpy.zeros((nf,nf))
    # DM = FillM(DM, a, b, c)
    # print(DM)
    # del DM

    # delete variables from memory
    del bc, cc, dc

    return xc
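The test case quoted in the docstring can be cross-checked independently of TDMASolver, by assembling the same 4×4 system as a dense matrix and handing it to numpy.linalg.solve (a sanity-check sketch, separate from the script above):

```python
import numpy as np

# The test case from the TDMASolver docstring
a = np.array([0.0, -1.0, -1.0, -1.0])   # sub-diagonal (a[0] unused)
b = np.array([4.0, 4.0, 4.0, 4.0])      # main diagonal
c = np.array([-1.0, -1.0, -1.0, 0.0])   # super-diagonal (c[-1] unused)
d = np.array([5.0, 5.0, 10.0, 23.0])    # right-hand side

# Assemble the full tridiagonal matrix and solve it densely
A = np.diag(b) + np.diag(a[1:], k=-1) + np.diag(c[:-1], k=1)
x = np.linalg.solve(A, d)
print(x)  # → [2. 3. 5. 7.]
```

If TDMASolver and the dense solve agree on systems like this, the elimination and back-substitution passes are very likely indexed correctly.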

Some things to try:

  1. Change E to 5.0, 10.0, and then even to 4000.0, and see if you get a solution like the one given in Anderson’s text (fig. 9.5, p. 934)
  2. Remove the hard-coded value of the Reynolds number, and start using the calculated values instead. Then, play with the physical parameters: use material properties for honey, glycerine, different SAE grades of motor oils, etc., and see whether you get reasonable-looking “settling-in” time periods for the approach to steady state.
  3. Also change the separation between the flat plates.
  4. What would happen if pressure gradients are applied to the fluid? In what way will the governing differential equation change? The velocity profile? Could you handle it using FDM?
  5. And, oh, yes! Do continue thinking about the SPPU (Savitribai Phule Pune University), and its past, present and future “authorities.”
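Regarding item 4 above, one hedged sketch: with a constant streamwise pressure gradient, the 1D momentum equation picks up a constant source term, du/dt = ν d²u/dy² − (1/ρ) dp/dx, so an explicit FDM update needs only one extra term per node. The parameter values below (nu, rho, dPdx, etc.) are illustrative, not taken from the original script:

```python
import numpy as np

# Illustrative parameters (not from the original script), SI units
nu, rho, dPdx = 1e-3, 1000.0, -2.0   # kinematic viscosity, density, dp/dx
ny, H, dt, nsteps = 21, 0.01, 1e-4, 2000
dy = H / (ny - 1)
# Stability check for the diffusion part: r = nu*dt/dy^2 = 0.4 <= 0.5

u = np.zeros(ny)              # start from rest; no-slip at both plates
for n in range(nsteps):
    un = u.copy()
    # Interior nodes: FTCS diffusion term plus the constant source term
    u[1:-1] = (un[1:-1]
               + nu * dt / dy**2 * (un[:-2] - 2.0 * un[1:-1] + un[2:])
               - dt * dPdx / rho)
    u[0] = u[-1] = 0.0        # Dirichlet (no-slip) BCs

# The steady state is the Poiseuille parabola, with peak velocity
# -dPdx * H**2 / (8 * nu * rho) = 2.5e-5 m/s at mid-channel
print(u.max())
```

With these numbers the total simulated time (0.2 s) is a couple of diffusion time-scales H²/ν, so the profile has essentially settled into the parabolic steady state.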

[As usual, this post could change—improve or deprove—without any prior [or posterior] notice.]

[E&OE]

 

CFD Code Snippets—1: 1D transient conduction, FTCS, Dirichlet BCs

This semester, I am teaching an elective UG course on CFD. For illustrating ideas concerning the title problem, I began by borrowing the Python code written by Prof. Lorena Barba (general URL: [^]; the step concerning diffusion: [^]), and then altered it a bit. I am sharing my modified version here.

In particular, here is a summary of my changes: (i) the Dirichlet BCs are now applied (so that time-stepping can go on for an indefinitely long period), (ii) graph plotting is done right within the time-marching loop, (iii) three different initial conditions can be used, etc. Variable names have been changed, and even the comments have been expanded/corrected. (For instance, I use the term “thermal diffusivity” instead of “viscosity,” and use the actual material property value for it.)


This code can be used for the laboratory component (aka “Term-Work”) for the final-year elective course on CFD, in the S. P. Pune University.

This is the same University that denies me approval as a Professor, now because they think that I don’t have the required amount of work-experience (i.e. 10 years of industrial, research and teaching experience, put together).

The thing is, I didn’t always work for companies with annual turnover of 15 crores or more.

According to the PR guy of my employers, this university is now asking for my previous employers (e.g. startups in India/abroad) to certify in writing on their original letterheads not only the fact that they had employed me, but also to identify whether they had had 15 crores or more of annual turnover during the times when they had employed me or not. The rule is new, but applies retrospectively, i.e., it is supposed to go back even to the late 1990s or early ’00s times, when I worked in the software startups, sometimes with barely 1 or 2 employees too in their initial stages.

Most of these companies are by now wound up. (E.g., QueriSoft grew to 500+ people, whereas I was maybe the 13th/14th employee in Pune; at nirWANa Inc. and Imagine Tech. I was the fourth employee and the second employee, respectively. nirWANa grew to US $1 million in VC funding (coming from Kanwal Rekhi), but only after I left, by which point it had grown to 60+ employees.) I cannot get their certificates, either because they can no longer be contacted, or because even where the VC funding was US $1 million, the sales turnover was not, etc.

Thus, the quantum of my total experience, according to the certifiable madness of the “authorities” at the S. P. University of Pune, suddenly dwindles down.

This rule, thus, allows the S. P. University of Pune to happily debar me from becoming a Professor.

It also allows employers to employ people like me not following the UGC scale, but on a consolidated, lower-level, salary.

I do not know precisely which of the two parties wishes to emphasize the rule more, to short-sell me. However, here, I am inclined to believe that it is the former. After all, it has been the government-run COEP and the (now S. P.) University of Pune who have happily supported people like Dr. Ashok Ghatol and Dr. (!) G. K. Kharate.

If the same rule were applied to professors who have worked nowhere else but in the private engineering colleges affiliated to the university, then many of them would be disqualified. This contradiction apparently does not matter to the University “authorities.” The rule is retrospective in action, but only for the new applicants, as per what our PR gentleman told me.

Anyway, I am going to share my code, but let me do so with a forthright moral denunciation of those office-bearers of the S. P. Pune University (and/or Maharashtra government) who are (or have been) responsible for framing such idiotic, harassing, anti-life and overall morally putrid rules. (For all I know, there could even be a casteist angle to it all; after all, it’s S. P. Pune University!)


Anyway, here is the Python code. It’s been written in v. 2.7 of Python. You will also need numpy and matplotlib installed. Feel free to use any IDE; I use PyCharm Community Edition.

#################################################################
# 1D Transient Conduction, using FTCS, with Dirichlet BCs.
# Code written by and copyright (c) Ajit R. Jadhav.
# Based on the code by Prof. Lorena Barba (12 Steps to NS).
# Shared with an explicit identification of how morally putrid 
# the S. P. Pune University is.
# Version: 27-Jan-2016.

#################################################################
# Importing Routines
# Use numpy arrays for efficiency
import numpy
# Use a neat plotting library
from matplotlib import pyplot as plt

#################################################################
#  Program Data. All units are in SI
#
# Domain size, in meters
domLen = 0.1
# Number of cells
nc = 20
# Number of nodes
nx = nc + 1
# Compute the width of each cell
dx = domLen / nc
# Thermal diffusivity. This value is for aluminium, in m^2/s
alpha = 8.418e-5
# Physical time over which the simulation is to be run
totalTime = 10.0
# Temperatures in degrees Celsius, to use in constructing
# the initial and boundary conditions
tempLevel_1 = 0.0
tempLevel_2 = 100.0

# Parrot out the main input data
print("Length of the domain, in m: %lf" % domLen)
print("Number of cells: %d. Number of nodes: %d" % (nc, nx))
print("Therefore, the cell-width, in m: %lf" % dx)
print("Thermal diffusivity, in m^2/s: %lf" % alpha)
print("Duration (in physical seconds) for the simulation: %lf" % totalTime)
print("Temperature levels (deg. C): Level 1: %lf, Level 2: %lf" % (tempLevel_1, tempLevel_2))
# Main program data over
#################################################################

# Compute the duration of each time-step, and the number of time-steps required
# Stability criterion: r = \alpha \dfrac{\Delta t}{(\Delta x)^2}
# r <= 0.5 for stability
r = .35
print("Assumed stability criterion value: %lf" % r)
# Time-interval between steps, in seconds
dt = r * dx ** 2 / alpha
print("Duration of each time-step in s, after applying the stability criterion: %lf" % dt)
# Compute the total no. of time-steps (an integer) to fit within the given time-interval
nt = int(totalTime / dt)
print("Number of time-steps required to step through: %d" % nt)
# No. of time-steps to skip for plotting
stepsToSkipInPlotting = 10

# Create the numpy arrays, initialized to zero, for holding
# the primary variable values (here, temperature)

# The main array, holds the values after time-marching through one step
T = numpy.zeros(nx)
# A temporary array, holds the values at the previous time-step
Tn = numpy.zeros(nx)
print(T)
#################################################################
#  Setting up of the initial condition:
#
#  Uncomment the required IC and keep others commented

#########################
# IC1: A single pulse in the middle
# TL = tempLevel_1
# TR = tempLevel_1
# T[nx // 2] = tempLevel_2

#########################
# IC2: A step function in the middle
# TL = tempLevel_2
# TR = tempLevel_2
# T[nx // 3: 2 * nx // 3] = tempLevel_1

#########################
# IC3: Left half at tempLevel_1; right half at tempLevel_2
TL = tempLevel_1
TR = tempLevel_2
T[0: nx // 2] = TL
T[nx // 2: nx] = TR

#
# Setting up of the initial condition is over
#################################################################

# Create the x-axis vector, for plotting purposes
# About the linspace function
#   numpy.linspace(start, stop, num, endpoint=True, retstep=False, dtype=None)
#   Returns evenly spaced numbers over a specified interval.
xAxis = numpy.linspace(0, domLen, nx)
plt.title("1D Transient Conduction, using FTCS")
plt.xlabel("Distance, in m")
plt.ylabel("Temperature, in deg C")
plt.plot(xAxis, T, 'r')

print("Initial condition is:")
print(T)

#################################################################
#  Time-stepping, using FTCS
#
# Multiplying factor to be used at each time-step
MF = alpha * dt / dx ** 2

# Python range(nSize) yields nSize integers, 0 through nSize-1
# Thus, range(3) yields 0, 1, 2
for n in range(nt):
    # We save the values of T at time n, to Tn,
    Tn = T.copy()
    # and apply the Dirichlet boundary conditions
    # to the solution at the time-step n
    Tn[0] = TL
    Tn[nx - 1] = TR

    # Then, we advance time to (n+1) and set T with it
    # A note on the range() function, and the index values.
    # -- The index to the right-most node in the arrays
    #    T and Tn is: nx-1. If we specify range( nx-1 ),
    #    then i will at most become _equal_ to nx-2
    #    i.e., one less than what we specified.
    for i in range(1, nx - 1):
        # This is the FTCS algorithm
        T[i] = Tn[i] + MF * (Tn[i - 1] - 2.0 * Tn[i] + Tn[i + 1])

    # We skip `stepsToSkipInPlotting' number of steps, in between two plots
    if n and not (n % stepsToSkipInPlotting):
        # Uncomment the following line for erasing out any earlier plots
        # plt.clf()
        # Uncomment the following line if you want to print T to console
        # print( T )
        plt.plot(xAxis, T, 'k')
        plt.pause(0.1)

# The solution after nt number of time-steps
print("The solution after %d steps i.e. %lf physical seconds is: " % (nt, nt * dt))
print(T)
plt.plot(xAxis, T, 'b')
plt.show()

print("done")
#################################################################
#################################################################
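An aside on the inner loop above: with numpy, the node-by-node FTCS update can also be written as a single vectorized slice operation, which is shorter and much faster for large nx. A minimal self-contained sketch of the same scheme (the numbers mirror the script; the loop count of 200 steps is just illustrative):

```python
import numpy as np

# Same setup as in the script above, in compact form
nx, domLen, alpha, r = 21, 0.1, 8.418e-5, 0.35
dx = domLen / (nx - 1)
dt = r * dx**2 / alpha        # dt implied by the chosen r
TL, TR = 0.0, 100.0

T = np.zeros(nx)
T[nx // 2:] = TR              # IC3: step in the middle
for n in range(200):
    Tn = T.copy()
    Tn[0], Tn[-1] = TL, TR    # Dirichlet BCs
    # Vectorized FTCS: replaces the "for i in range(1, nx - 1)" loop;
    # note that MF = alpha*dt/dx**2 equals r by construction
    T[1:-1] = Tn[1:-1] + r * (Tn[:-2] - 2.0 * Tn[1:-1] + Tn[2:])

print(T[0], T[-1])  # → 0.0 100.0 (boundary nodes stay at TL and TR)
```

With r ≤ 0.5 each new nodal value is a convex combination of its three old neighbours, so a monotone initial profile stays monotone as it diffuses.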

Some things to try:

  1. Try different values of the parameter `r’. For instance, set `r = 0.55’ and see the instability develop.
  2. Try mesh refinement.
  3. Try changing material properties.
  4. Try different initial conditions. For instance, try triangular profile, or a sine-wave profile.
  5. Try setting a higher temperature, and keep the `r’ parameter very close to 0.5. See whether it is the initial condition of a triangle or of a sharp pulse which first develops instability. Try to understand the fact that the von Neumann stability analysis is only linear, and therefore provides only an initial estimate for the stability criterion.
  6. Compare in detail with the analytical solutions. For instance, the analytical solution for the initial triangular profile is worked out in Kreyszig.
  7. If inclined towards programming, try using fft routines to work out analytical solutions, have the program save both the FTCS and the FFT-based solutions to output files, and then write short scripts to plot the errors as functions of, e.g., the shape of the initial profile, space (location), time-step, mesh-refinement, `r’ parameter, etc.
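Relatedly (items 1 and 5 above), the von Neumann analysis behind the r ≤ 0.5 criterion can be checked numerically: for FTCS the per-step amplification factor of a Fourier mode with phase angle θ is G(θ) = 1 − 4 r sin²(θ/2), and the scheme is stable iff |G(θ)| ≤ 1 for all θ, which happens exactly when r ≤ 1/2. A small sketch:

```python
import numpy as np

def worst_amplification(r, nthetas=1001):
    """Largest |G| over sampled wavenumbers for the FTCS scheme,
    where G(theta) = 1 - 4*r*sin(theta/2)**2."""
    theta = np.linspace(0.0, np.pi, nthetas)
    G = 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2
    return np.abs(G).max()

print(worst_amplification(0.35))  # stays at 1.0 (the constant mode); stable
print(worst_amplification(0.55))  # exceeds 1.0 at high wavenumbers; unstable
```

The high-wavenumber (θ near π) modes are the first to blow up when r > 0.5, which is why the instability shows up as a sawtooth oscillation between adjacent nodes.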

[May be one more editing pass, sometime later this week or so.]

[E&OE]