Update on 2018.01.29, 23:13 IST: No one said I must note an update when I add one here. But I will make an exception, for now. See the update at the end.
When you say “maths,” what most engineers immediately think of is, first and foremost, calculus. There are several reasons for that.
First, admissions to good engineering colleges are competitive (think JEE!), and most students find maths to be the most difficult subject to master. And then, the most difficult portion of the XII standard maths involves calculus. Calculus also is remarkably unlike the maths they already know from their school-time studies (e.g. geometry and algebra). Physics is the next most difficult subject, and at XII standard level, it is calculus-based.
Second, the courses on engineering maths also heavily involve the ideas first encountered in calculus, such as differential equations. OK, there is some statistics and linear algebra, too, both in XII and later in engineering. But their real usage (distribution functions, moments, coupled linear systems) would be inaccessible without a knowledge of differential equations.
By way of percentages, a majority of engineers do not in fact pursue any master’s. Even among those who do, a great many end up pursuing programs or specializations that don’t actually require much expansion of their mathematical repertoire. For instance, manufacturing engineering, environmental engineering, digital electronics, computer science, etc. Others simply pursue an MBA, which, if anything, dumbs down the maths side of their skills.
Thus, a large body of trained S&T people never come across some really wonderful mathematical ideas, ever in their life, even while continuing to believe that they have a fairly good idea of what the further maths might involve. Wrong. These further mathematical ideas are not actually more difficult than the maths already encountered during UG education. But their conceptual character is remarkably different. Perhaps popular science writers could help balance the situation.
Anyway, let me reserve this post for just a listing of certain important ones among such “further” mathematical ideas that I had to learn, mostly on my own, during my studies of Computational Science and Engineering (in the main: FEM and CFD), and quantum physics. Many postgraduate engineers and physicists would of course know them well. But the fact is, most engineers with only a bachelor’s in engineering wouldn’t. If they are inclined to learn maths (with no examinations to be taken!), they may consider these ideas (whether through history-of-maths books, or pop-science, or MOOC courses, blog posts, or whatever). I will try to order the topics from the simpler to the more complex, though I haven’t given the ordering much thought; in any case, a strict order is difficult to achieve, because many topics overlap heavily with each other. In advanced maths, that often happens. Thus, the topics I list here are often just different aspects of some more general techniques/approaches.
Integral Equations: The differential equations paradigm is used throughout the UG engg; think of any place where you invoked the Taylor series. Here the idea is that you capture the physics of some phenomenon over an infinitesimally small region of space, and express a simple algebraic combination of its factors (e.g. balance or conservation of quantities, or their evolution over time) via a differential equation. Then you apply this governing equation to the models of various situations arising in application. Thus, the idea implicitly reinforced is this: the _problem_ formulation proceeds through differential equations, and the _solution_ techniques involve BV/IV’s and techniques of integration.
However, once in more advanced settings, you find it routine to express the problem itself in terms of an integral equation. For instance, the RTT (Reynolds Transport Theorem) in fluid mechanics, or the path-integral approach in QM. The switch is from integrals expressing the final solution to integral terms expressing various aspects of the problem itself.
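To make the switch concrete, here is a minimal numerical sketch (my own illustration, not from any particular text): a Fredholm integral equation of the second kind, where the problem itself is posed through an integral, solved by the Nyström method (replace the integral with a quadrature sum, then solve a linear system). The kernel, forcing function, and lambda below are illustrative choices with a known closed-form answer.

```python
import numpy as np

# Sketch: a Fredholm integral equation of the second kind,
#     u(x) = f(x) + lam * \int_0^1 K(x, t) u(t) dt,
# where the *problem* is an integral equation. The Nystrom method
# replaces the integral by a quadrature sum and solves a linear system.
def solve_fredholm(K, f, lam, n=201):
    x, h = np.linspace(0.0, 1.0, n, retstep=True)
    w = np.full(n, h)
    w[0] = w[-1] = h / 2.0                         # trapezoid-rule weights
    A = np.eye(n) - lam * K(x[:, None], x[None, :]) * w
    return x, np.linalg.solve(A, f(x))             # solve (I - lam*K*W) u = f

# Illustrative case: K(x,t) = x*t, f(x) = x, lam = 1. The exact solution
# works out to u(x) = 1.5*x, which the discrete solution should reproduce.
x, u = solve_fredholm(lambda x, t: x * t, lambda x: x, lam=1.0)
```

The point to notice: the unknown appears *inside* the integral, so the whole function is the unknown at once, rather than a pointwise balance being integrated up later.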
Variational Calculus: For simpler problems (rather, for problems where simpler solution techniques are well suited) such as rigid-body mechanics in simpler fields (say a uniform and time-invariant gravity), the differential equations approach serves well. But once you come to studying fields—the spatially distributed objects or attributes—it’s the variational approach which makes things simpler. In the differential equations, you are comparing two neighbouring points or instants lying on the curve of a function. In integral equations, you begin considering the entire function at a time, else you couldn’t calculate its definite integral. But still, in a way, the idea of taking an entire function in one go remains rather implicit. In the variational calculus, it becomes a full-blown thing. A variation itself is a function—it’s a function obtained by taking the difference between the entirety of two functions in one go. Further, it’s an abstract function, because the two functions whose difference it represents themselves aren’t concretely specified. This is a big leap, and unfortunately, even the best and most helpful among books don’t point it out. The huge difference in thinking, represented by the Lagrangian approach, is simply poured onto an unsuspecting student. (Reddy’s or Lanczos’s books are no exception.)
There are several new ideas here. One of the most basic and important ones is: the idea of the delta operator.
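The delta operator can be made numerically tangible. A minimal sketch (my own illustration; the particular choices of y and eta are arbitrary): for the functional J[y] = (1/2) ∫ (y′)² dx, the first variation in the “direction” of a variation η (vanishing at the endpoints) is δJ = ∫ y′ η′ dx, and it shows up as the limit of the difference quotient (J[y + εη] − J[y])/ε.

```python
import numpy as np

# Sketch: the first variation of J[y] = (1/2) \int_0^1 (y')^2 dx along a
# variation eta (with eta(0) = eta(1) = 0) is delta J = \int y' eta' dx.
# We verify that the difference quotient in eps reproduces it.
x = np.linspace(0.0, 1.0, 2001)

def trapezoid(f):                       # \int_0^1 f dx, trapezoid rule
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0

yp   = np.cos(x)                        # y'(x)   for y(x)   = sin(x)
etap = 1.0 - 2.0 * x                    # eta'(x) for eta(x) = x*(1 - x)

def J(deriv):                           # the functional, fed y' directly
    return trapezoid(0.5 * deriv**2)

eps = 1e-6
quotient  = (J(yp + eps * etap) - J(yp)) / eps   # numerical "delta J"
variation = trapezoid(yp * etap)                 # analytical delta J
```

Notice that the perturbation εη is itself a whole function, not a number: that is precisely the conceptual leap the delta operator encodes.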
Expansion of Functions: Some idea about this is already given during UG engg. But not in the way professional/working physicists, or numerical modelers, or CSE engineers routinely use this idea. Let me illustrate with a concrete example.
To a UG engineer, say an electrical engineer, “expansion of function” means: taking an FFT. Or, taking a Fourier transform. But to a quantum physicist, what it means is: a linear combination of basis functions spanning a vector space. To both the UG electrical/electronics engineer and the quantum physicist, the basis functions are complex exponentials. They are wont to list the advantages of the complex-Fourier expansion over the real-polynomial expansion. But to a mechanical engineer doing FEM via the method of weighted residuals, the expansion mostly means only a real-valued polynomial. If he is sufficiently smart, he might even retort to the EE/QM folks: and how do you prove Euler’s identity, if not by reference to the Taylor series expansion? (Yes, his point is valid. Yes, the EE folks’ point also is valid. The thing is: the power series expansion _is_ more fundamental, but given the algebraic completeness of the complex numbers, when you do the power series expansion using complex numbers, it naturally becomes more powerful.)
But the most remarkable difference in the grasp of what “expansion of function” means, drilled down to the level of an intuitive absolute, is this: An engineer/physicist with advanced training (of QM/CSE/FEM), over a period of time, becomes _unable_ to think of a field as merely a spatially spread entity. His natural proclivity is already to think of it as an arrow in an abstract vector space spanned by basis functions—in some arbitrary basis set!
He also instinctively keeps the connection to eigenbases ready in his mind.
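Here is a minimal numeric sketch of the two pictures side by side (my own illustration; the function |x| and the truncation order are arbitrary choices): the same function seen as a spatial field sampled on a grid, and as an “arrow” of coefficients once a truncated Fourier basis {1, cos(kx), sin(kx)} is chosen.

```python
import numpy as np

# Sketch: the same function f, in the "field" picture (samples on a
# grid) and in the "arrow" picture (coefficients in a chosen basis).
x = np.linspace(-np.pi, np.pi, 2001)
f = np.abs(x)                           # the field picture

K = 8                                   # truncation order (arbitrary)
B = np.column_stack(
    [np.ones_like(x)]
    + [np.cos(k * x) for k in range(1, K + 1)]
    + [np.sin(k * x) for k in range(1, K + 1)])

coeffs, *_ = np.linalg.lstsq(B, f, rcond=None)   # the "arrow" picture
f_rec = B @ coeffs                               # back to the field picture
```

Since |x| is even, the sine coefficients come out numerically zero, and the constant coefficient approaches the mean value, pi/2: the symmetry of the field shows up as structure in the arrow.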
Ansatz: I won’t write anything new on it. Instead, I will direct you to my past writing here [^] and here [^] and Gershenfeld’s essay here [^].
Operator: I got tired of writing today, so I will expand this point later on. In any case, as I told you, this post is going to grow over a period of time. I will come back and add to it, and also edit it a lot, all unannounced. When I feel that a sufficient amount of material sufficiently well arranged has gathered here, I will then publish a separate post based on the material here.
Eigenbases of Operators: Ditto.
Tensors: The UG engineer understands (if he at all does) tensors as a 3×3 array of some differential terms, most often in a symmetrical arrangement. He may or may not understand tensors as objects that remain invariant under rotation. He certainly does not understand tensors as linear maps between vector spaces, nor does his mind immediately throw up the intimately connected contrast between the inner and the outer products. Nor does he understand a tensor containing differential terms as the first-order approximation in a power-series expansion. Nor does he grasp the tensor product over finite-dimensional spaces, let alone over infinite-dimensional function spaces in some arbitrary eigenbasis. And more (which I myself don’t understand, but the QM guys do).
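The inner/outer contrast, and the “tensor as a linear map” idea, fit in a few lines (my own illustration; the particular vectors are arbitrary):

```python
import numpy as np

# Sketch: the inner product contracts two vectors down to a scalar; the
# outer product builds a rank-2 tensor, and that tensor then acts as a
# linear map between vector spaces.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

inner = u @ v             # scalar: 1*4 + 2*5 + 3*6 = 32
T = np.outer(u, v)        # 3x3 tensor: T_ij = u_i * v_j

w = np.array([1.0, 0.0, -1.0])
Tw = T @ w                # the tensor acting on w as a linear map
same = u * (v @ w)        # the dyad (u v) on w: u scaled by (v . w)
```

The dyad built from the outer product eats a vector via an inner-product contraction and spits out another vector: the two products are two faces of the same machinery.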
A group of many intimately related ideas, here:
8.1. Catastrophe Theory: Many UG engineers might have never even heard of the term! Here is my post covering a bit on it. The UG engineering maths syllabi typically don’t cover the idea that properties such as existence, regularity, and uniqueness have to be proved! Even if the syllabus (or the text) cursorily mentions these ideas (e.g. Kreyszig does!), you can safely bet that the student never bothered to read through them, because he “knew” that no exam question would test him on that part. The idea that some neat initial condition may eventually evolve into multiple branches of solutions (i.e. non-uniqueness of solutions arising simply through evolution) is a complete unknown to him. So is the non-uniqueness arising due to differing physical contexts having the same governing differential equation.
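A tiny numeric check of the non-uniqueness idea (my own illustration; this is the standard textbook counterexample, not from my post): the innocuous-looking initial-value problem dy/dt = 3·y^(2/3) with y(0) = 0 admits at least two distinct solutions, y = 0 and y = t³. A perfectly neat initial condition evolves non-uniquely.

```python
import numpy as np

# Sketch: verify that both y = 0 and y = t**3 satisfy
#     dy/dt = 3 * y**(2/3),   y(0) = 0,
# so the solution through this initial condition is not unique.
t = np.linspace(0.0, 2.0, 201)

candidates = [
    (np.zeros_like(t), np.zeros_like(t)),   # y = 0,    dy/dt = 0
    (t**3,             3.0 * t**2),         # y = t**3, dy/dt = 3*t**2
]

# Residual of the ODE for each candidate; both should vanish identically.
residuals = [np.max(np.abs(dydt - 3.0 * np.cbrt(y)**2))
             for y, dydt in candidates]
```

(The failure is exactly where the usual uniqueness theorem warns: the right-hand side is not Lipschitz in y at y = 0.)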
8.2. Deterministic Chaos: Most UG engineers by now have come to hear of this term. But they don’t understand what it means.
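What it means can be shown in a dozen lines (a minimal sketch, my own illustration): the logistic map at r = 4, the classic chaotic regime, where two initial conditions differing by one part in 10^10 produce orbits that separate to order one.

```python
import numpy as np

# Sketch: sensitive dependence on initial conditions in the logistic
# map x -> r*x*(1-x) at r = 4. A perfectly deterministic rule, yet a
# 1e-10 difference in the start blows up to an O(1) difference.
def orbit(x0, r=4.0, n=50):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

a = orbit(0.2)
b = orbit(0.2 + 1e-10)        # perturbation of one part in 10^10
gap = np.abs(a - b)           # separation of the two orbits over time
```

No randomness enters anywhere; the divergence is baked into the deterministic rule itself. That is the whole point of the “deterministic” in deterministic chaos.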
8.3. Well- vs. Ill-Posed Problems: Some UG engineers might have occasionally run into this term.
TBD: A laundry list of things to expand on, or to insert into the right places in the above list:
Differentiation under the integral sign/operator. Integration by parts and orders of continuity. Infinite sequences of functions (via a limiting process) under the integral operator (i.e., Dirac’s delta). Operators that make sense only under an integral sign. Functionals.
Infinite matrices. Vectors and matrices that have functions as elements. Projections of vectors, esp. in function spaces.
Tensors as fluxes of vectors (more accurately: tensor fields as flux-fields of vector fields).
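One item from the list above, sketched numerically (my own illustration; the test function is an arbitrary choice): Dirac’s delta as an infinite sequence of functions that makes sense only under the integral sign. Ever-narrower unit-area Gaussians g_n, acting on a test function f, pull out f(0) in the limit.

```python
import numpy as np

# Sketch: \int g_n(x) f(x) dx -> f(0) as n -> infinity, where g_n is a
# unit-area Gaussian of width ~1/n. The "delta" exists only through
# this limiting process under the integral operator.
x = np.linspace(-5.0, 5.0, 20001)
dx = x[1] - x[0]
f = np.cos(x)                            # test function with f(0) = 1

def delta_seq(n):                        # g_n: unit area, width ~ 1/n
    return n / np.sqrt(np.pi) * np.exp(-(n * x)**2)

def act(n):                              # \int g_n f dx, trapezoid rule
    g = delta_seq(n) * f
    return np.sum(g[1:] + g[:-1]) * dx / 2.0

vals = [act(n) for n in (1, 4, 16, 64)]  # marches toward f(0) = 1
```

No individual g_n is the delta, and the pointwise limit of the g_n is not a function at all; only the sequence of integrals converges. That is why the delta lives under the integral sign and nowhere else.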
No, I am not an expert on any of the above-mentioned ideas. It’s just that I have run into all of them, have tried to think about them, and have succeeded in understanding the essence of many of them. That’s all. I claim no good mastery. So, don’t come to me with your difficulties on these topics; ask the real experts. (In fact, I can only hope that the above description has come out more right than wrong, that’s all.)
But there are other things in which I seem to know better. For instance, the physical meaning of the delta operator of the calculus of variations.
Alright, bye for now.
Update (it’s not necessary that I note updates here, but I will make an exception for now) on 2018.01.29, 23:00 HRS IST: Added the “A Song I Like” section.
A Song I Like:
(Hindi) “dil me jaagee dhaDakan aise…”
Singer: Sunidhi Chauhan
Music: M. M. Kareem (i.e. M. M. Keeravani)
Lyrics: Nida Fazli