In my last post (below), I talked about changing the way my study room is set up—removing the computer network from the desk-top and making room for writing in long-hand. Now, let me say something about a small sign-board (a reminder of sorts) that I am going to put up in my room.
The sign-board will carry a simple command: “Think It Over, and Then, Program!” It will have the sub-title: “Can you make a C++ program out of this?” (The slogan obviously is a paraphrase of sorts of the famous commandment “Shut up and calculate!”, often attributed to Dirac, though the attribution is disputed.)
The thing is that I have been reading a lot on and about classical mathematics lately. I got into it while preparing for my forthcoming FEM course. Well, it at least began that way. But then, soon, it slipped into quite general readings on the history of the calculus of variations, too. And thereafter, it began getting transformed into tracing the roots of the many subtle assumptions that we so blithely make today in the areas of mathematical physics, engineering mechanics, elasticity, computational mechanics, etc.
For instance, consider this question: Why is quantum theory linear?
When was the last time I thought about this question? Probably, it was in 1992–94. Now, I am re-picking up the threads again, and this process began when I began reading about CoV.
In mathematics, there are no monolithic blocks of periods such as a period of “classical physics,” or one of “analysis,” etc.; there often are many different streams of thought existing simultaneously at any given time. The emphasis may differ, but the strains continue to exist.
Consider here, that there is this basic idea that physical systems should be described, not using Newtonian ideas of spatially delineated agents such as particles and the forces they enact and react to or the changes in velocities or momenta they suffer, but using an alternate set of ideas such as the potential of a field, a scalar quantity called energy, the idea of action, etc. That is, the ideas originated and propagated by Leibniz, Lagrange, Hamilton, et al.
I am not sure if I am getting the division right, but at least at a cursory glance, it seems that there are these two camps: (i) Fermat, Leibniz, Bernoulli, Euler, Lagrange, Hamilton, et al., all taken together on one hand, and, (ii) in his majestic and towering isolation, Newton, alone, on the other hand.
But I am not sure if I am getting the “camping” right.
The reason I am so tentative about this two-camps “theory” of mine is that I am not quite sure how to place three or four other names within the confines of my scheme; they disturb my “theorization”; they are: d’Alembert, Fourier, Gauss, and von Helmholtz (and perhaps one or two other people). Sure, d’Alembert and Fourier were responsible for creating those basic tools which are sympathetic to the ideations of the former camp (and to the action-at-a-distance viewpoint): the technique of separation of variables (and the sin of making appear static that which actually is dynamic), and the idea of modeling using spectral analysis, respectively. But still, somehow, I carry the impression that on the whole, these two Frenchmen never quite fully belonged to the former camp. Not as fully as Leibniz and Hamilton (and their modern-day followers in the area of quantum confusion mechanics) do. The two Germans, Gauss and von Helmholtz, particularly the latter, were just too good by way of their approach and work to belong to the former camp.
Just for the record, I, of course, am on the side of Newton. I never did like the idea of CoV. It has always been very competent mathematics and still a bad (or very bad) modeling idea—that’s my opinion of it. I hate to ascribe to space what properly belongs to entities. It does no good to point out that you can always get the force by taking the gradient of a scalar, and that it reduces the labor from three component equations to a single scalar equation, but, in the process of saying so, outright evade the issue of how anyone on earth is going to know what specific potential to use. Is divination or day-dreaming the recommended manner? And if not, who is going to calculate the effort in getting to the right potential? Overall, I think it doesn’t really simplify the problem but only changes its appearance and shifts the points of inconvenience, hiding them behind a nice “global” theory… Field theory may have its technical advantages, but the manner of advocacy—the underlying philosophy—of the field-theoretic camp is often pretty bad, despite its popularity today, and needs to be exposed.
In any case, I am sure you are convinced that I am reading about a lot of classical mathematical ideas these days. Now ideas are fine as far as reading and thinking go, but, as an engineer, one also has to get something real out of them. Here, given my strengths and inclinations, I have decided that I would rather create programs out of this kind of thinking of mine than write or solve analytical mathematical problems. But no, I do not thereby mean to imply that I don’t understand analysis or cannot deal with it… Below, let me give some concretes of what I mean:
Consider the basic or starting ideas in the calculus of variations. How do people state them? How does the discussion begin?
The way they state it is, first of all, via an equation:

I[y(x)] = ∫ F(x, y(x), y′(x)) dx, the definite integral being taken between two fixed end-points x₁ and x₂.
It is as if they would rather be found dead than not write down an equation. But more on that a little later, right in this post. For the time being, look at that equation again. … Try to think like a fresh student.
The first challenge with the above equation is to decide precisely what is the unknown in or about it.
Naturally, all your learning cries out at you that it has got to be I, whatever it may be. Wrong. Plain wrong. In the great weird world of thinking along the CoV lines, the great unknown, of course, as you know, turns out to be the “y(x)”. The “I” is just a silent spectator, so to speak. The unknown, i.e. the problem, is y(x), even if it occurs deeply nested inside an integral on the right-hand side of the equation. (Here, most mathematicians would lovingly repeat that idiotic quote (from Boltzmann) about leaving elegance to, of all professionals, cobblers and tailors, and thereby evade all the further issues actually observed and raised thus far. For example, the misleading way of formulating the problem—there is direct evidence of confusion of thinking in there, don’t you think?)
Next, notice that the above equation tells only an incomplete story. The real problem is not the above. The real problem is to find stationarity or “minimality”, i.e. to say something like:

δI = 0, i.e., find that particular y(x), from among all the admissible curves joining the same two end-points, for which I takes its stationary (e.g. minimum) value.
LOL! Can you make any C++ program out of what we have said so far? If not, then, consider whether you (really) understand the matter as well as you should, or not.
Another minor point, concerning analysis and CoV. The inconvenient notation. Two sub-points here. (i) Why include both y(x) and x in the definition of the functional? Wouldn’t the context make the chain-relationship clear? Or is the whole idea to confound the reader as much as possible? Doesn’t the notation betray the (lack of) thinking? (ii) Why insist on using the form of an *equation*? You see, the whole thing becomes inconvenient precisely because the overall idea here is that you have to express everything in the format of “LHS = RHS”. But this kind of a format makes for a very bad notation when the fact to express is a choice from amongst an infinity of alternatives! Why not invent a new notation that clearly brings out the idea that there are an infinity of possible variations and that the one particular integral amongst them—the ultimate solution—is special and is to be singled out? Indeed, why at all call the choices by the name “variations”? Doesn’t using that word implicitly assume that you possess the knowledge of the desired but unknown solution in advance? (A variation on what?) The idea of teleology is entangled far too deep in this issue, even if I believe that it *can* be separated out. But far more important: Why define terms in reference to an unknown? Isn’t it a Platonic/Kantian/Hegelian kind of idealism oozing through here? Why not describe the issue straightforwardly as an infinite set of definite integrals with a common definition and a common set of specified boundary conditions? What harm would that do?
In fact, when the notion of a function is introduced for the first time to kids, it is a normal practice to create two blobs (one each for the input and output sets, i.e. the domain and the range) and connect them using one-way arrows (showing unique-valued correspondence). Everybody agrees that this kind of a diagram helps in rightly anchoring the idea in the mind. But when it comes to functionals, they never give a nomogram kind of a visualization… Why not? Do mathematicians fear losing the abstractness of their definitions if they supply one? Oh yeah? Is that the motivation? Or is the motivation to keep the definitions as far away from a concrete-reality-based understanding and as high-floating in the air as possible? What kind of motivation explains the complete lack of a good explanatory diagram (a concept map) for so basic a concept as a functional, in all the 150 years of its history? … Sometimes, at least, you have to ask whether there isn’t more than plain teaching incompetence at work here, i.e., whether there isn’t a kind of an “ideological” stink involved in here…
But, coming back to the text-book writing… What do you think they suggest by way of a “motivating” example? The same stale stuff of the brachistochrone problem!! Can’t you see that it is such an artificial problem, very specific to the constant gravitational field at the earth’s surface? That it cannot at all bring out the real essence of the idea…. Regardless, every textbook writer thinks it a terrible wrong if he doesn’t start CoV with (i) the brachistochrone, (ii) the isoperimetric, and (iii) the geodesics problems. (BTW, geodesics are *always* shown drawn on a neat sphere—not even an ellipsoid, let alone an arbitrarily shaped surface.)
Here, I think, the command “Think It Over, and Then, Program!” would come in handy… In short, it’s high time we asked mathematicians to “Shut Up” asking us to “Calculate” all the time, and instead, think about the real world and the possible connections between their ideas and the real world…
When I recently thought deeply about how this thing could be better explained, it struck me that the easiest way would perhaps be a computer simulation of a modified game of carrom.
What you can do is to simulate the game of carrom, but, say, with a soft-iron striker (or a wooden striker affixed with a thin stamping of steel on top), and a few electromagnets placed on the sides of (or underneath) the main surface of the carrom-board, so as to create a field inside the 2D domain. In the simulation, you would let the user vary the strengths and placements of the electromagnets, as well as the initial velocity (speed and direction) of the striker. You could then ask them to predict in advance where the striker will end up. (We assume c >> v so that the situation is non-relativistic.) The simulation would then show them the actual trajectory. The software could ask them to choose the direction and the speed so that the striker itself would end up in a corner pocket. The software could also allow them to manually modify the actual trajectory (i.e. introduce variations) and automatically compute the value of the “action” for each variation. The program could help visualize the potential by plotting its surface z = f(x, y) in 3D. … So on and so forth…
It amazes me how stale analytical mathematicians can get…. Incidentally, this word “stale” reminds me of a story about Hamilton that I read somewhere (in an authentic kind of a book on the history of mathematics) a long time back. The story goes that, in the times when he developed the idea of quaternions and his grand version of mechanics, Hamilton had gone half-mad of sorts. Discarded bones left over after his meals were recovered together with Hamilton’s original (and seriously meant) handwritten papers. The papers (and the rest of the stuff) were found thrown all over the floor in his room. Apparently, Hamilton was *not* so engrossed in his work that he would forget to eat his meals; apparently, he had become so *careless* that he would not distinguish between his serious work (the papers)—by his own proclamation, the finest and grandest among mechanical thoughts—and the garbage left over from his meals. (In contrast, Newton *was* known to get so engrossed in his work as to forget taking his meals. But, despite his reputation as a sort of mad-man among quantum mechanicians of the 20th century, Newton never did get into any such mental states as Hamilton evidently did. Indeed, even when Newton ran the royal mint, it was with exemplary efficiency—not with a *mad*ness as historians and quantum physicists and mathematicians have preferred to tell us.) Apparently, the staleness which appears in the teaching of this kind of mathematicization—the case-studies and the ways of presentation of the mathematical ideas—comes about because our quantum mechanicians (and mathematicians and historians) take over into mathematics what were merely Hamilton’s personal habits. That, perhaps, could be the reason why, even today, we have to go through the same stale stuff of only the vertically oriented brachistochrones and the stupid but “mandatory” sequence involving the more or less completely useless (and yawn-inducing) isoperimetric problems. Such staleness!
Our students deserve better. … What I am doing (or, rather, proposing to do) is just a beginning. … You could do better than me. … But yes, the reminder: “Think It Over, and Then, Program: Can you make a C++ program out of this?” is clearly useful in many different ways…. I won’t go as far as to suggest that you should make a software program out of every little idea of mathematics. Or, more seriously (and importantly), that this kind of a “constructivist” approach is the only way to do good mathematics. Nope, it isn’t—though it could very well be the only way to *validate* mathematical abstractions. … Anyway, I haven’t thought about this matter a great deal. BTW, there is a proper movement of sorts in mathematics which goes by the name “constructivism.” I don’t mean to use the word “constructivist” in that technical sense of the term. In fact, I haven’t read enough about constructivism to be able to form a judgement about it. All that I mean to say here is that mathematical ideas cannot come from thin air, that one must know what the referents of any mathematical concepts are, and if one does, it is easy to construct the higher-level abstractions from the lower-level ones…
But coming back to my main point here, clearly, thinking about how particular programs could be written, is a way that seems to encourage at least fresh, if not highly creative, thinking…. It certainly helps counter the menace of floating abstractions… Think about it. … After all, not just our students’ but even our own minds deserve better—better than the kind of stale treatments we have been dished out thus far…