# Learnability of machine learning is provably an undecidable?—part 3: closure

Update on 23 January 2019, 17:55 IST:

In this series of posts, which was just a step beyond the initial, brain-storming kind of a stage, I had come to the conclusion, based on certain epistemological (and metaphysical) considerations, that Ben-David et al.’s conclusion (that learnability can be undecidable) is logically untenable.

However, now, as explained here [^], I find that this particular conclusion I drew was erroneous. I now stand corrected, i.e., I now consider Ben-David et al.’s result to be plausible. Obviously, it merits a further, deeper study.

However, even while acknowledging the above-mentioned mistake, let me also hasten to clarify that I still stick to my other positions, especially the central theme of this series of posts. The central theme here was that there are certain core features of the set theory which make implications such as Godel’s incompleteness theorems possible. These features (of the set theory) demonstrably carry a glaring epistemological flaw, such that applying Godel’s theorem outside of its narrow technical scope in mathematics or computer science is not permissible. In particular, Godel’s incompleteness theorem does not apply to knowledge or its validation in the more general sense of these terms. This theme, I believe, continues to hold as is.

Update over.

Gosh! I gotta get this series out of my hand—and also head! ASAP, really!! … So, I am going to scrap the bits and pieces I had written for it earlier; they would have turned this series into a 4- or 5-part one. Instead, I am going to start entirely afresh, and I am going to approach this topic from an entirely different angle—a somewhat indirect but a faster route, sort of like a short-cut. Let’s get going.

Statements:

Open any article, research paper, book or post, and what do you find? Basically, all of these consist of sentence after sentence. That is, a series of statements, in a way. That’s all. So, let’s get going at the level of statements, from a “logical” (i.e. logic-theoretical) point of view.

Statements are made to propose or to identify (or at least to assert) some (or the other) fact(s) of reality. That’s what their purpose is.

The conceptual-level consciousness as being prone to making errors:

Coming to the consciousness of man, there are broadly two levels of cognition at which it operates: the sensory-perceptual, and the conceptual.

Examples of the sensory-perceptual level of consciousness would consist of reaching a mental grasp of such facts of reality as: “This object exists, here and now;” “this object has this property, to this much degree, in reality,” etc. Notice that what we have done here is to take items of perception and put them into the form of propositions.

Propositions can be true or false. However, at the perceptual level, a consciousness has no choice in regard to the truth-status. If the item is perceived, that’s it! It’s “true” anyway. Rather, perceptions are not subject to a test of truth or falsehood; they are, at the very base, the standards by which truth and falsehood are decided.

A consciousness—better still, an organism—does have some choice, even at the perceptual level. The choice which it has exists in regard to such things as: what aspect of reality to focus on, with what degree of focus, with what end (or purpose), etc. But we are not talking about such things here. What matters to us here is just the truth-status, that’s all. Thus, keeping only the truth-status in mind, we can say that this very idea itself (of a truth-status) is inapplicable at the purely perceptual level. However, it is very much relevant at the conceptual level. The reason is that at the conceptual level, the consciousness is prone to err.

The conceptual level of consciousness may be said to involve two different abilities:

• First, the ability to conceive of (i.e. create) the mental units that are the concepts.
• Second, the ability to connect together the various existing concepts to create propositions which express different aspects of the truths pertaining to them.

It is possible for a consciousness to go wrong in either of the two respects. However, mistakes are much easier to make when it comes to the second respect.

Homework 1: Supply an example of going wrong in the first way, i.e., right at the stage of forming concepts. (Hint: Take a concept that is at least somewhat higher-level so that mistakes are easier in forming it; consider its valid definition; then modify its definition by dropping one of its defining characteristics and substituting a non-essential in it.)

Homework 2: Supply a few examples of going wrong in the second way, i.e., in forming propositions. (Hint: I guess almost any logical fallacy can be taken as a starting point for generating examples here.)

Truth-hood operator for statements:

As seen above, statements (i.e. complete sentences that formally can be treated as propositions) made at the conceptual level can, and do, go wrong.

We therefore define a truth-hood operator which, when it operates on a statement, yields the result as to whether the given statement is true or non-true. (Aside: Without getting into further epistemological complexities, let me note here that I reject the idea of the arbitrary, and thus regard non-true as nothing but a sub-category of the false. Thus, in my view, a proposition is either true or it is false. There is no middle (as Aristotle said), or even an “outside” (like the arbitrary) to its truth-status.)

Here are a few examples of applying the truth-status (or truth-hood) operator to a statement:

• Truth-hood[ California is not a state in the USA ] = false
• Truth-hood[ Texas is a state in the USA ] = true
• Truth-hood[ All reasonable people are leftists ] = false
• Truth-hood[ All reasonable people are rightists ] = false
• Truth-hood[ Indians have significantly contributed to mankind’s culture ] = true
• etc.

For ease in writing and manipulation, we propose to give names to statements. Thus, first declaring

A: California is not a state in the USA

and then applying the Truth-hood operator to “A”, is fully equivalent to applying this operator to the entire sentence appearing after the colon (:) symbol. Thus,

Truth-hood[ A ] <==> Truth-hood[ California is not a state in the USA ] = false
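If it helps, here is a tiny sketch of the above in Python. (This is purely an illustration; the little dictionary of “facts” simply hard-codes the toy examples given above, standing in for actual knowledge of reality.)

```python
# A minimal sketch of the Truth-hood operator. Named statements map to their
# bodies, and the operator returns True or False -- with no third status,
# since the arbitrary is rejected as a category outside of true and false.

statements = {
    "A": "California is not a state in the USA",
}

# A hard-coded fact base, standing in for actual knowledge of reality.
facts = {
    "California is not a state in the USA": False,
    "Texas is a state in the USA": True,
}

def truth_hood(name_or_sentence):
    """Applying Truth-hood to a name is equivalent to applying it to the body."""
    sentence = statements.get(name_or_sentence, name_or_sentence)
    return facts[sentence]

print(truth_hood("A"))  # False -- same as Truth-hood[ California is not ... ]
```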

Just a bit of the computer languages theory: terminals and non-terminals:

To take a short-cut through this entire theory, we would like to approach the idea of statements from a somewhat abstract perspective. Accordingly, borrowing some terminology from the area of computer languages, we define and use two types of symbols: terminals and non-terminals. The overall idea is this. We regard any program (i.e. a “write-up”) written in any computer language as consisting of a sequence of statements. A statement, in turn, consists of a certain well-defined arrangement of words or symbols. Now, we observe that symbols (or words) can be either terminals or non-terminals.

You can think of a non-terminal symbol in different ways: as higher-level or more abstract words, as “potent” symbols. The non-terminal symbols have a “definition”—i.e., an expansion rule. (In CS, it is customary to call an expansion rule a “production” rule.) Here is a simple example of a non-terminal and its expansion:

• P => S1 S2

where the symbol “=>” is taken to mean things like: “is the same as” or “is fully equivalent to” or “expands to.” What we have here is an example of an abstract statement. We interpret this statement as follows. Wherever you see the symbol “P,” you may substitute it with the sequence of the two symbols, S1 and S2, written in that order (and without anything else coming in between them).

Now consider the following non-terminals, and their expansion rules:

• P1 => P2 P S1
• P2 => S3

The question is: Given the expansion rules for P, P1, and P2, what exactly does P1 mean? What precisely does it stand for?

• P1 => (P2) P S1 => S3 (P) S1 => S3 S1 S2 S1

In the above, we first take the expansion rule for P1. Then, we expand the P2 symbol in it. Finally, we expand the P symbol. When no non-terminal symbol is left to expand, we arrive at our answer that “P1” means the same as “S3 S1 S2 S1.” We could have said the same fact using the colon symbol, because the colon (:) and the “expands to” symbol “=>” mean one and the same thing. Thus, we can say:

• P1: S3 S1 S2 S1

The left hand-side and the right hand-side are fully equivalent ways of saying the same thing. If you want, you may regard the expression on the right hand-side as a “meaning” of the symbol on the left hand-side.
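The whole expansion game above can be mechanized in a few lines. (A sketch of my own, in Python; the rule names P, P1, P2, S1, S2, S3 are exactly the ones from the example above.)

```python
# Each rule maps a non-terminal to an ordered sequence of symbols.
# Anything without a rule is a terminal, and is left as-is.

rules = {
    "P":  ["S1", "S2"],
    "P1": ["P2", "P", "S1"],
    "P2": ["S3"],
}

def expand(symbol):
    """Recursively replace non-terminals until only terminals remain."""
    if symbol not in rules:        # a terminal: no further expansion possible
        return [symbol]
    result = []
    for s in rules[symbol]:        # substitute the rule's right hand-side,
        result.extend(expand(s))   # expanding each of its symbols in turn
    return result

print(expand("P1"))  # ['S3', 'S1', 'S2', 'S1']
```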

It is at this point that we are able to understand the terms: terminals and non-terminals.

The symbols which do not have any further expansion for them are called, for obvious reasons, the terminal symbols. In contrast, non-terminal symbols are those which can be expanded in terms of an ordered sequence of non-terminals and/or terminals.

We can now connect our present discussion (which is in terms of computer languages) to our prior discussion of statements (which is in terms of symbolic logic), and arrive at the following correspondence:

The name of every named statement is a non-terminal; and the statement body itself is an expansion rule.

This correspondence works also in the reverse direction.

You can always think of a non-terminal (from a computer language) as the name of a named proposition or statement, and you can think of an expansion rule as the body of the statement.

Easy enough, right? … I think that we are now all set to consider the next topic, which is: liar’s paradox.

The liar paradox is a topic from the theory of logic [^]. It has been resolved by many people in different ways. We would like to treat it from the viewpoint of the elementary computer languages theory (as covered above).

The simplest example of the liar paradox is, using the terminology of the computer languages theory, the following named statement or expansion rule:

• A: A is false.

Notice, it wouldn’t be a paradox if the same non-terminal symbol, viz. “A” were not to appear on both sides of the expansion rule.

To understand why the above expansion rule (or “definition”) involves a paradox, let’s get into the game.

Our task will be to evaluate the truth-status of the named statement that is “A”. This is the “A” which comes on the left hand-side, i.e., before the colon.

In symbolic logic, a statement is nothing but its expansion; the two are exactly and fully identical, i.e., they are one and the same. Accordingly, to evaluate the truth-status of “A” (the one which comes before the colon), we consider its expansion (which comes after the colon), and get the following:

• Truth-hood[ A ] = Truth-hood[ A is false ] = false           (equation 1)

Alright. From this point onward, I will drop explicitly writing down the Truth-hood operator. It is still there; it’s just that to simplify typing out the ensuing discussion, I am not going to note it explicitly every time.

Anyway, coming back to the game, what we have got thus far is the truth-hood status of the given statement in this form:

• A: “A is false”

Now, realizing that the “A” appearing on the right hand-side itself also is a non-terminal, we can substitute for its expansion within the aforementioned expansion. We thus get to the following:

• A: “(A is false) is false”

We can apply the Truth-hood operator to this expansion, and thereby get the following: The statement which appears within the parentheses, viz., the “A is false” part, itself is false. Accordingly, the Truth-hood operator must now evaluate thus:

• Truth-hood[ A ] = Truth-hood[ A is false] = Truth-hood[ (A is false) is false ] = Truth-hood[ A is true ] = true            (equation 2)

Fun, isn’t it? Initially, via equation 1, we got the result that A is false. Now, via equation 2, we get the result that A is true. That is the paradox.

But the fun doesn’t stop there. It can continue. In fact, it can continue indefinitely. Let’s see how.

If only we were not to halt the expansions, i.e., if only we continue a bit further with the game, we could have just as well made one more expansion, and got to the following:

• A: ((A is false) is false) is false.

The Truth-hood status of the immediately preceding expansion now is: false. Convince yourself that it is so. Hint: Always expand the inner-most parentheses first.

Homework 3: Convince yourself that what we get here is an indefinitely long alternating sequence of Truth-hood statuses: A is false, A is true, A is false, A is true… and so on, without end.
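If you like, you can watch the alternation happen mechanically. (Again, a sketch of my own; it simply encodes the fact, established above, that each further “… is false” wrapper negates the truth-status obtained so far.)

```python
# The liar statement "A: A is false" as a recursive rule. Evaluating from the
# innermost parentheses outward flips the truth-status at every expansion, so
# no unique value is ever reached.

def truth_hood_after(k):
    """Truth-status obtained after k extra expansions of the rule."""
    value = False            # equation 1: Truth-hood[ A is false ] = false
    for _ in range(k):
        value = not value    # each "... is false" wrapper negates the status
    return value

print([truth_hood_after(k) for k in range(4)])  # [False, True, False, True]
```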

What can we say by way of a conclusion?

Conclusion: The truth-status of “A” is not uniquely decidable.

The emphasis is on the word “uniquely.”

We have used all the seemingly simple rules of logic, and yet have stumbled on to the result that, apparently, logic does not allow us to decide something uniquely or meaningfully.

Liar’s paradox and the set theory:

The importance of the liar paradox to our present concerns is this:

Godel himself believed, correctly, that the liar paradox was a semantic analogue to his Incompleteness Theorem [^].

Go read the Wiki article (or anything else on the topic) to understand why. For our purposes here, I will simply point out what the connection of the liar paradox is to the set theory, and then (more or less) call it a day. The key observation I want to make is the following:

You can think of every named statement as an instance of an ordered set.

What the above key observation does is to tie the symbolic logic of propositions with the set theory. We thus have three equivalent ways of describing the same idea: symbolic logic (name of a statement and its body), computer languages theory (non-terminals and their expansions to terminals), and set theory (the label of an ordered set and its enumeration).

As an aside, the set in question may have further properties, or further mathematical or logical structures and attributes embedded in itself. But at its minimal, we can say that the name of a named statement can be seen as a non-terminal, and the “body” of the statement (or the expansion rule) can be seen as an ordered set of some symbols—an arbitrarily specified sequence of some (zero or more) terminals and (zero or more) non-terminals.

Two clarifications:

• Yes, in case there is no sequence in a production at all, it can be called the empty set.
• When you have the same non-terminal on both sides of an expansion rule, it is said to form a recursion relation.

An aside: It might be fun to convince yourself that the liar paradox cannot be posed or discussed in terms of a Venn diagram. The “sheet” on which a Venn diagram is drawn, by the simple intuitive notions we all bring to bear on such diagrams, cannot support a “recursion” relation.

Yes, the set theory itself was always “powerful” enough to allow for recursions. People like Godel merely made this feature explicit, and took full “advantage” of it.

Recursion, the continuum, and epistemological (and metaphysical) validity:

In our discussion above, I had merely asserted, without giving even a hint of a proof, that the three ways (viz., the symbolic logic of statements or propositions, the computer languages theory, and the set theory) were all equivalent ways of expressing the same basic idea (i.e. the one which we are concerned about, here).

I will now once again make a few more observations, but without explaining them in detail or supplying even an indication of their proofs. The factoids I must point out are the following:

• You can start with the natural numbers, and by using simple operations such as addition and its inverse, and multiplication and its inverse, you can reach the real number system. The generalization goes as: Natural to Whole to Integers to Rationals to Reals. Another name for the real number system is: the continuum.
• You can use the computer languages theory to generate a machine representation for the natural numbers. You can also mechanize the addition etc. operations. Thus, you can “in principle” (i.e. with infinite time and infinite memory) represent the continuum in the CS terms.
• Generating a machine representation for natural numbers requires the use of recursion.
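By way of illustrating the last factoid, here is the usual Peano-style construction, sketched in Python (my own sketch, not anything from the sources discussed): a zero, a successor constructor, and addition defined by recursion.

```python
# Machine representation of the naturals: a number n is represented as n
# nestings of the successor constructor around zero:
#   0, ('S', 0), ('S', ('S', 0)), ...

ZERO = 0

def succ(n):
    """The successor constructor: wrap n once more."""
    return ("S", n)

def add(m, n):
    """Addition by recursion on the second argument:
       m + 0 = m;  m + S(n) = S(m + n)."""
    if n == ZERO:
        return m
    return succ(add(m, n[1]))

def to_int(n):
    """Read back the ordinary integer, again by recursion."""
    return 0 if n == ZERO else 1 + to_int(n[1])

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # 5
```

Note that both `add` and `to_int` call themselves: the representation is unusable without recursion, which is the point being made above.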

Finally, a few words about epistemological (and metaphysical) validity.

• The concepts of numbers (whether natural or real) have a logical precedence, i.e., they come first. The entirety of arithmetic and the calculus must come before the computer-representation of some of their concepts does.
• A machine-representation (or, equivalently, a set-theoretic representation) is merely a representation. That is to say, it captures only some aspects or attributes of the actual concepts from maths (whether of arithmetic or of the continuum). This issue is exactly like what we saw in the first and second posts in this series: a set is a concrete collection, unlike a concept, which involves a consciously cast unit perspective.
• If you try to translate the idea of recursion into the usual cognitive terms, you get absurdities such as: you can be your own child, literally speaking. Not in the sense that, using scientific advances in biology, you can create a clone of yourself and regard that clone to be both yourself and your child. No, not that way. (Such a clone is always your twin, not your child.) The idea here is even worse: you can literally father your own self.
• Aristotle got it right. Look up the distinction between completed processes and the uncompleted ones. Metaphysically, only those objects or attributes can exist which correspond to completed mathematical processes. (Yes, as an extension, you can throw in the finite limiting values, too, provided they otherwise do mean something.)
• Recursion, by its very definition, involves not just the absence of completion but the very inability to complete.

Closure on the “learnability issue”:

Homework 4: Go through the last two posts in this series as well as this one, and figure out that the only reason the set theory allows a “recursive” relation is that a set is, by the design of the set theory, a concrete object whose definition does not have to involve an epistemologically valid process—a unit perspective as in a properly formed concept—and so, its name does not have to stand for an abstract mentally held unit. Call this happenstance “The Glaring Epistemological Flaw of the Set Theory” (or TGEFST for short).

Homework 5: Convince yourself that any lemma or theorem that makes use of Godel’s Incompleteness Theorem is necessarily based on TGEFST, and for the same reason, its truth-status is: it is not true. (In other words, any lemma or theorem based on Godel’s theorem is an invalid or untenable idea, i.e., essentially, a falsehood.)

Homework 6: Realize that the learnability issue, as discussed in Prof. Lev Reyzin’s news article (discussed in the first part of this series [^]), must be one that makes use of Godel’s Incompleteness Theorem. Then convince yourself that for precisely the same reason, it too must be untenable.

[Yes, Betteridge’s law [^] holds.]

Other remarks:

Remark 1:

As “asymptotical” pointed out at the relevant Reddit thread [^], the authors themselves say, in another paper posted at arXiv [^] that

While this case may not arise in practical ML applications, it does serve to show that the fundamental definitions of PAC learnability (in this case, their generalization to the EMX setting) is vulnerable in the sense of not being robust to changing the underlying set theoretical model.

What I now remark here is stronger. I am saying that it can be shown, on rigorously theoretical (epistemological) grounds, that the “learnability as undecidable” thesis by itself is, logically speaking, entirely and in principle untenable.

Remark 2:

Another point. My preceding conclusion does not mean that the work reported in the paper itself is, in all its aspects, completely worthless. For instance, it might perhaps come in handy while characterizing some tricky issues related to learnability. I certainly do admit of this possibility. (To give a vague analogy, this is something like running, via a mathematically somewhat novel route, into an already known type of mathematical singularity.) Of course, I am not competent enough to judge how valuable the work of the paper(s) might turn out to be, in narrow technical contexts like that.

However, what I can, and will say is this: the result does not—and cannot—bring the very learnability of ANNs itself into doubt.

Phew! First, Panpsychism, and immediately then, Learnability and Godel. … I’ve had to deal with two untenable claims back to back here on this blog!

… Code! I have to write some code! Or write some neat notes on ML in LaTeX. Only then will, I guess, my head stop aching so much…

Honestly, I just downloaded TensorFlow yesterday, and configured an environment for it in Anaconda. I am excited, and look forward to trying out some tutorials on it…

BTW, I also honestly hope that I don’t run into anything untenable, at least for a few weeks or so…

…BTW, I also feel like taking a break… Maybe I should go visit IIT Bombay or some place in Konkan. … But there are money constraints… Anyway, bye, really, for now…

A song I like:

Music: Sooraj (the pen-name of “Shankar” from the Shankar-Jaikishan pair)
Lyrics: Ramesh Anavakar

[Any editing would be minimal; guess I will not even note it down separately.] Did an extensive revision by 2019.01.21 23:13 IST. Now I will leave this post in the shape in which it is. Bye for now.

# Learnability of machine learning is provably an undecidable?—part 2


In this post, we look into the differences of the idea of sets from that of concepts. The discussion here is exploratory, and hence, not very well isolated. There are overlaps of points between sections. Indeed, there are going to be overlaps of points from post to post too! The idea behind this series of posts is not to present a long thought out and matured point of view; it is much in the nature of jotting down salient points and trying to bring some initial structure to them. Thus the writing in this series is just a step further from the stage of brain-storming, really speaking.

There is no direct discussion in this post regarding the learnability issue at all. However, the points we note here are crucial to understanding Godel’s incompleteness theorem, and in that sense, the contents of this post are crucially important in framing the learnability issue right.

Anyway, let’s get going over the differences of sets and concepts.

A concept as an abstract unit of mental integration:

Concepts are mental abstractions. It is true that concepts, once formed, can themselves be regarded as mental units, and qua units, they can further be integrated together into even higher-level concepts, or possibly sub-divided into narrower concepts. However, regardless of the level of abstraction at which a given concept exists, the concretes being subsumed under it are necessarily required to be less abstract than the single mental unit that is the concept itself.

Using the terms of computer science, the “graph” of a concept and its associated concrete units is not only acyclic and directional (from the concretes to the higher-level mental abstraction that is the concept), its connections too can be drawn if and only if the concretes satisfy the rules of conceptual commensurability.

A concept is necessarily a mental abstraction, and as a unit of mental integration, it always exists at a higher level of abstraction as compared to the units it subsumes.

A set as a mathematical object that is just a concrete collection:

Sets, on the other hand, necessarily are just concrete objects in themselves, even if they do represent collections of other concrete objects. Sets take birth as concrete objects—i.e., as objects that don’t have to represent any act of mental isolation and integration—and they remain that way till the end of their life.

For the same reason, set theory carries absolutely no rules whereby constraints can be placed on combining sets. No meaning is supposed to be assigned to the very act of placing braces around the rule which defines the admissibility of objects as members of a set (or around the enumeration of its member objects).

The act of creating the collection that is a set is formally allowed to proceed even in the absence of any preceding act of mental differentiations and integrations.

This distinction between these two ideas, the idea of a concept, and that of a set, is important to grasp.

An instance of a mental abstraction vs. a membership into a concrete collection:

In the last post in this series, I had used the terminology in a particular way: I had said that there is a concept “table,” and that there is a set of “tables.” The plural form for the idea of the set was not a typo; it was a deliberate device to highlight this same significant point, viz., the essential concreteness of any set.

The mathematical theory of sets didn’t have to be designed this way, but given the way it anyway has actually been designed, one of the inevitable implications of its conception—its very design—has been this difference which exists between the ideas of concepts and sets. Since this difference is extremely important, it may be worth our while to look at it from yet another viewpoint.

When we look at a table and, having already reached the concept of “table,” we affirm that the given concrete table in front of us is indeed a table, this seemingly simple and almost instantaneously completed act of recognition itself implicitly involves a complex mental process. The process includes invoking a previously generated mental integration—an integration which was, sometime in the past, performed in reference to those attributes which actually exist in reality and which make a concrete object a table. The process begins with the availability of this context as a pre-requisite, and now involves an application of the concept. It involves actively bringing forth the pre-existing mental integration, actively “seeing” that yet another concrete instance of a table does indeed in reality carry the attributes which make an object a table, and thereby concluding that it is a table.

In other words, if you put the concept table symbolically as:

table = { this table X, that table Y, now yet another table Z, … etc. }

then it is understood that what the symbol on the left hand side stands for is a mental integration, and that each of the concrete entities X, Y, Z, etc. appearing in the list on the right hand-side is, by itself, an instance corresponding to that unit of mental integration.

But if you interpret the same “equation” as one standing for the set “tables”, then strictly speaking, according to the actual formalism of the set theory itself (i.e., without bringing into the context any additional perspective which we by habit do, but sticking strictly only to the formalism), each of the X, Y, Z etc. objects remains just a concrete member of a merely concrete collection or aggregate that is the set. The mental integration which regards X, Y, Z as equally similar instances of the idea of “table” is missing altogether.

Thus, no idea of similarity (or of differences) among the members gets involved at all, because there is no mental abstraction “table” in the first place. There are only concrete tables, and there is a well-specified but concrete object, a collective, which is only formally defined to stand for this concrete collection (of those specified tables).

Grasp this difference, and the incompleteness paradox brought forth by Godel begins to dissolve away.

The idea of an infinite set cuts out the preceding theoretical context:

Since the aforementioned point is complex but important, there is no risk in repeating it (though there could be boredom!):

There is no place-holder in the set theory which would be equivalent to saying: “being able to regard concretes as the units of an abstract, singular, mental perspective—a perspective reached in recognition of certain facts of reality.”

The way set theory progresses in this regard is indeed extreme. Here is one way to look at it.

The idea of an infinite set is altogether inconceivable before you first have grasped the concept of infinity. On the other hand, grasping the concept of infinity can be accomplished without any involvement of the set theory anyway—formally or informally. However, since every set you actually observe in the concrete reality can only be finite, and since sets themselves are concrete objects, there is no way to conceive of the very idea of an infinite set unless you already know what infinity means (at least in some working, implicit sense). Thus, to generate the concrete members contained in a given infinite set, you of course need the conceptual knowledge of infinite sequences and series.

However, even if the set theory must use this theoretical apparatus of analysis, the actual mathematical object it ends up having still captures only the “concrete-collection” aspect of it—none other. In other words, the set theory drops from its very considerations some of the crucially important aspects of the knowledge with which infinite sets can at all be conceived of. For instance, it drops the idea that the infinite set-generating rule is in itself an abstraction. The set theory asks you to supply and use that rule. The theory itself is merely content to be supplied some well-defined entities as the members of a set.

It is at places like this that the infamous incompleteness creeps into the theory—I mean, the theory of sets, not the theory that is the analysis as was historically formulated and practiced.
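To make the point concrete in programming terms (my own illustration, with the even naturals as the example set): the “infinite set” is available to us only through its generating rule, which is an abstraction; any enumeration a machine actually produces remains a finite, concrete collection.

```python
import itertools

def evens():
    """The generating rule for the even naturals, as a never-ending generator.
    The rule is the abstraction; it is supplied from outside the set theory."""
    for n in itertools.count(0):
        yield 2 * n

# All that can ever be materialized as an actual collection is a finite,
# concrete slice of the rule's output:
concrete_slice = set(itertools.islice(evens(), 5))
print(concrete_slice)  # {0, 2, 4, 6, 8}
```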

The name of a set vs. the word that stands for a concept:

The name given to a set (the symbol or label appearing on the left hand-side of the equation) is just an arbitrary, concrete label; it is not a theoretical place-holder for the corresponding mental concept—not so long as you remain strictly within the formalism, and therefore the scope of application, of the set theory.

When they introduce you to the set theory in your high-school, they take care to choose each of the examples in such a way that there is always an easy-to-invoke and well-defined concept; this pre-existing concept can then be put into a 1:1 correspondence with the definition of that particular set.

But if you therefore begin thinking that there is a well-defined concept for each possible instance of a set, then such a characterization is only a figment of your own imagination. An idea like this is certainly not to be found in the actual formalism of the set theory.

Show me the place in the axioms, or their combinations, or theorems, or even just lemmas or definitions in the set theory where they say that the label for a set, or the rule for formation of a set, must always stand for a conceptually coherent mental integration. Such an idea is simply absent from the mathematical theory.

The designers of the set theory, to put it directly, simply didn’t have the wits to include such ideas in their theory.

Implications for the allowed operations:

The reason why the set theory allows for any arbitrary operands (including those which don’t make any sense in the real world) is, thus, not an accident. It is a direct consequence of the fact that sets are, by design, concrete aggregates, not mental integrations based on certain rules of cognition (which in turn must make a reference to the actual characteristics and attributes possessed by the actually existing objects).

Since sets are mere aggregations, not integrations, as a consequence, we no longer remain concerned with the fact that there have to be two or more common characteristics to the concrete objects being put together, or with the problem of having to pick up the most fundamental one among them.

When it comes to sets, there are no such constraints on the further manipulations. Thus arises the possibility of being able to apply any operator any which way you feel like on any given set.

Godel’s incompleteness theorem as merely a consequence:

Given such a nature of the set theory—its glaring epistemological flaws—something like Kurt Godel’s incompleteness theorem had to arrive in the scene, sooner or later. The theorem succeeds only because the set theory (on which it is based) does give it what it needs—viz., a loss of a connection between a word (a set label) and how it is meant to be used (the contexts in which it can be further used, and how).

In the next part, we will reiterate some of these points by looking at the issue of (i) systems of axioms based on the set theory on the one hand, and (ii) the actual conceptual body of knowledge that is arithmetic, on the other hand. We will recast the discussion so far in terms of the “is a” vs. the “has a” types of relationships. The “is a” relationship may be described as the “is an instance of a mental integration or concept of” relationship. The “has a” relationship may be described as “is (somehow) defined (in whatever way) to carry the given concrete” type of a relationship. If you are curious, here is the preview: concepts allow for both types of relationships to exist; however, for defining a concept, the “is an instance or unit of” relationship is crucially important. In contrast, the set theory requires and has the formal place for only the “has a” type of relationships. A necessary outcome is that each set itself must remain only a concrete collection.

# Learnability of machine learning is provably an undecidable?—part 1


This one news story has been lying around for about a week on my Desktop:

Lev Reyzin, “Unprovability comes to machine learning,” Nature, vol. 65, pp. 166–167, 10 January 2019 [^]. PDF here: [^]

(I’ve forgotten how I came to know about it though.) The story talks about the following recent research paper:

Ben-David et al., “Learnability can be undecidable,” Nature Machine Intelligence, vol. 1, pp. 44–48, January 2019 [^]. PDF here: [^]

I don’t have the requisite background in the theory of the research paper itself, and so didn’t even try to read through it. However, I did give Reyzin’s news article a try. It was not very successful; I have not yet been able to finish the story. However, here are a few notings which I made as I tried to progress through it. The quotations here all come from Reyzin’s news story.

Before we begin, take a moment to notice that the publisher here is arguably the most reputed one in science, viz., the Nature publishing group. As to the undecidability of learnability, its apparent implications for practical machine learning, artificial intelligence, etc., are too obvious to be pointed out separately.

“During the twentieth century, discoveries in mathematical logic revolutionized our understanding of the very foundations of mathematics. In 1931, the logician Kurt Godel showed that, in any system of axioms that is expressive enough to model arithmetic, some true statements will be unprovable.”

Is it because Godel [^] assumed that any system of axioms (which is expressive enough to model arithmetic) would be based on the standard (i.e. mathematical) set theory? If so, his conclusion would not be all that paradoxical, because the standard set theory carries, from an epistemological angle, certain ill-conceived notions at its core. [BTW, throughout this (short) series of posts, I use Ayn Rand’s epistemological theory; see ITOE, 2e [^][^].]

To understand my position (that the set theory is not epistemologically sound), start with a simple concept like “table”.

According to Ayn Rand’s ITOE, the concept “table” subsumes all possible concrete instances of tables, i.e., all the tables that conceivably exist, might have ever existed, and might ever exist in future, i.e., a potentially infinite number of concrete instances of them. Ditto, for any other concept, e.g., “chair.” Concepts are mental abstractions that stand for an infinite number of concretes of a given kind.

Now, let’s try to run away from philosophy, and thereby come to rest in the arms of, say, a mathematical logician like Kurt Godel [^], or preferably, his predecessors, those who designed the mathematical set theory [^].

The one (utterly obvious) way to capture the fact that there exist tables, but only using the actual terms of the set theory, is to say that there is a set called “tables,” and that its elements consist of all possible tables (i.e., all the tables that might have existed, might conceivably exist, and would ever conceivably exist in future). Thus, the notion again refers to an infinity of concretes. Put into the terms of the set theory, the set of tables is an infinite set.

OK, that seems to work. How about chairs? Once again, you set up a set, now called “chairs,” and proceed to dump within its braces every possible or conceivable chair.

So far, so good. No trouble until now.

The trouble begins when you start applying operators to the sets, say by combining them via unions, or by taking their intersections, and so on—all that Venn’s diagram business [^]. But what is the trouble with the good old Venn diagrams, you ask? Well, the trouble lies not so much with the Venn diagrams as with the basic set theory itself:

The set theory makes the notion of the set so broad that it allows you to combine any sets in any which way you like, and still be able to call the result a meaningful set—meaningful, as seen strictly from within the set theory.

Here is an example. You can not only combine (take the union of) “tables” and “chairs” into a broader set called “furniture,” you are also equally well allowed, by the formalism of the set theory, to absorb into the same set all unemployed but competent programmers, Indian HR managers, and venture capitalists from the San Francisco Bay Area. The set theory does not by itself have anything in its theoretical structure, formalism or even mathematical application repertoire, using which it could possibly so much as raise a finger in such matters. This is a fact. If in doubt, refer to the actual set theory ([^] and links therein), take it strictly on its own terms, and in particular, desist from mixing into it any extra interpretations brought in by you.
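To make the point concrete, here is a toy sketch (in Python, with made-up element names) of how the formalism shrugs at such combinations:

```python
# A toy illustration: every operation below is perfectly legal set theory,
# regardless of how little cognitive sense the resulting collection makes.
tables = {"dining table", "coffee table"}
chairs = {"armchair", "rocking chair"}
vcs = {"a VC from the SF Bay Area"}

furniture = tables | chairs        # a union that does make conceptual sense
absurd = furniture | vcs           # equally legal, by the formalism itself

print(vcs <= absurd)               # True: the VC is now "furniture"
print(tables & vcs)                # set(): nothing forbade even asking
```

The formalism validates both unions identically; the difference between them is visible only to the epistemology that the theory leaves out.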

Epistemology, on the other hand, does have theoretical considerations, including explicitly formulated rules at its core, which together allow us to distinguish between proper and improper formulations of concepts. For example, there is a rule that the concrete instances being subsumed under a concept must themselves be conceptually commensurate, i.e., they must possess the same defining characteristics, even if possibly to differing degrees. Epistemology prevents the venture capitalists from the San Francisco Bay Area from being called pieces of furniture because it clarifies that they are people, whereas pieces of furniture are inanimate objects, and for this crucial reason, the two are conceptually incommensurate—they cannot be integrated together into a common concept.

To come back to the set theory, it, however, easily allows for every abstractly conceivable “combination” of every possible operand set(s). Whether the operation has any cognitive merit to it or not, whether it results in anything meaningful at all or not, is not at all a consideration—not by the design of the set theory itself (which, many people suppose, is more fundamental to every other theory).

So—and get this right—calling the collection of QC scientists as either politicians or scoundrels is not at all an abuse of the mathematical structure, content, and meaning of the set theory. The ability to take an intersection of the set of all mathematicians who publish papers and the set of all morons is not a bug, it is very much a basic and core feature of the set theory. There is absolutely nothing in the theory itself which says that the intersection operator cannot be applied here, or that the resulting set has to be an empty set. None.

The set theory very much neglects considerations of the kind of label a set carries, and of the kind of elements which can be added to it.

More on this, later. (This post has already gone past 1000 words.)

The songs section will come at the conclusion of this (short) series of posts, to be completed soon enough; stay tuned…

Here are a few interesting links I browsed recently, listed in no particular order:

“Mathematicians Tame Turbulence in Flattened Fluids” [^].

The operative word here, of course, is: “flattened.” But even then, it’s an interesting read. Another thing: though the essay is pop-sci, the author gives the Navier-Stokes equations, complete with fairly OK explanatory remarks about each term in the equation.

(But I don’t understand why every pop-sci write-up gives the NS equations only in the Lagrangian form, never Eulerian.)

“A Twisted Path to Equation-Free Prediction” [^]. …

“Empirical dynamic modeling.” Hmmm….

“Machine Learning’s `Amazing’ Ability to Predict Chaos” [^].

Click-bait: They use data science ideas to predict chaos!

8 Lyapunov times is impressive. But ignore the other, usual kind of hype: “…the computer tunes its own formulas in response to data until the formulas replicate the system’s dynamics. ” [italics added.]

“Your Simple (Yes, Simple) Guide to Quantum Entanglement” [^].

Click-bait: “Entanglement is often regarded as a uniquely quantum-mechanical phenomenon, but it is not. In fact, it is enlightening, though somewhat unconventional, to consider a simple non-quantum (or “classical”) version of entanglement first. This enables us to pry the subtlety of entanglement itself apart from the general oddity of quantum theory.”

Don’t dismiss the description in the essay as being too simplistic; the author is Frank Wilczek.

“A theoretical physics FAQ” [^].

Click-bait: Check your answers with those given by an expert! … Do spend some time here…

Tensor product versus Cartesian product.

If you are an engineer and if you get interested in quantum entanglement, beware of these easily confused terms: the tensor product and the Cartesian product.

The tensor product, you might think, is like the Cartesian product. But it is not. See mathematicians’ explanations. Essentially, the basis sets (and the operations) are different. [^] [^].

But what the mathematicians don’t do is to take some simple but non-trivial examples, and actually work everything out in detail. Instead, they just jump from this definition to that definition. For example, see: “How to conquer tensorphobia” [^] and “Tensorphobia and the outer product”[^]. Read any of these last two articles. Any one is sufficient to give you tensorphobia even if you never had it!

You will never run into a mathematician who explains the difference between the two concepts by first directly giving you a feel for it: by giving you a good worked-out example in the context of finite sets (including an enumeration of all the set elements) that illustrates the key difference, i.e., the addition vs. the multiplication of the unit vectors (aka members of the basis sets).
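For what it’s worth, here is exactly that kind of worked-out example (a sketch in Python/NumPy, using the small illustrative spaces $\mathbb{R}^2$ and $\mathbb{R}^3$): the Cartesian (direct-sum) construction makes the basis sizes add, while the tensor product makes them multiply.

```python
import numpy as np

# Basis of R^2: e1, e2; basis of R^3: f1, f2, f3.
e = [np.eye(2)[i] for i in range(2)]
f = [np.eye(3)[j] for j in range(3)]

# Cartesian (direct-sum) product: basis members are (ei, 0) and (0, fj);
# the dimensions ADD: 2 + 3 = 5.
direct_sum_basis = [np.concatenate([ei, np.zeros(3)]) for ei in e] \
                 + [np.concatenate([np.zeros(2), fj]) for fj in f]

# Tensor product: basis members are the outer products ei (x) fj;
# the dimensions MULTIPLY: 2 * 3 = 6.
tensor_basis = [np.outer(ei, fj) for ei in e for fj in f]

print(len(direct_sum_basis))   # 5
print(len(tensor_basis))       # 6
```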

A third-class epistemology when it comes to explaining, mathematicians typically have.

A Song I Like:

(Marathi) “he gard niLe megha…”
Music: Rushiraj
Lyrics: Muralidhar Gode

[As usual, a little streamlining may occur later on.]

# HNY (Marathi). Also, a bit about modern maths.

Happy New (Marathi) Year!

OK.

I will speak in “aaeechee bhaashaa”  (lit.: mother’s language).

“gudhi-paaDawyaachyaa haardik shubhechchhaa.” (lit.: hearty compliments [on the occasion] of “gudhi-paaDawaa” [i.e. the first day of the Marathi new year  [^]].)

I am still writing up my notes on scalars, vectors, tensors, and CFD (cf. my last post). The speed is good. I am making sure that I remain below the RSI [^] detection levels.

BTW, do you know how difficult it can get to explain even the simplest of concepts once mathematicians have had a field day about it? (And especially after Americans have praised them for their efforts?) For instance, even a simple idea like, say, the “dual space”?

Did any one ever give you a hint (or even a hint of a hint) that the idea of “dual space” is nothing but a bloody stupid formalization based on nothing but the idea of taking the transpose of a vector and using it in the dot product? Or the fact that the idea of the transpose of a vector essentially means nothing more than taking the same old three (or $n$ number of) scalar components, but interpreting them to mean a (directed) planar area instead of an arrow (i.e. a directed line segment)? Or the fact that this entire late 19th–early 20th century intellectual enterprise springs from no grounds more complex than the fact that the equation to the line is linear, and so is the equation to the plane?

[Yes, dear American, it’s the equation not an equation, and the equation is not of a line, but to the line. Ditto, for the case of the plane.]

Oh, but no. You go ask any mathematician worth his salt to explain the idea (say of the dual space), and this modern intellectual idiot would immediately launch himself into blabbering endlessly about “fields” (by which he means something other than what either a farmer or an engineer means; he also knows that he means something else; further, he also knows that not knowing this fact, you are getting confused; but, he doesn’t care to even mention this fact to you let alone explain it (and if you catch him, he ignores you and turns his face towards that other modern intellectual idiot aka the theoretical physicist (who is all ears to the mathematician, BTW))), “space” (ditto), “functionals” (by which term he means two different things even while strictly within the context of his own art: one thing in linear algebra and quite another thing in the calculus of variations), “modules,” (neither a software module nor the lunar one of Apollo 11—and generally speaking, most any modern mathematical idiot would have become far too generally incompetent to be able to design either), “ring” (no, he means neither an engagement nor a bell), “linear forms,” (no, neither Picasso nor sticks), “homomorphism” (no, not a gay in the course of adding on or shedding body-weight), etc. etc. etc.

What is more, the idiot would even express surprise at the fact that the way he speaks about his work, it makes you feel as if you are far too incompetent to understand his art and will always be. And that’s what he wants, so that his means of livelihood is protected.

(No jokes. Just search for any of the quoted terms on the Wiki/Google. Or, actually talk to an actual mathematician about it. Just ask him this one question: Essentially speaking, is there something more to the idea of a dual space than transposing—going from an arrow to a plane?)
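In the spirit of that question, a minimal sketch (Python/NumPy, illustrative numbers) of a “dual” vector as nothing more than a transpose put to work in a dot product:

```python
import numpy as np

# A vector in R^3.
v = np.array([1.0, 2.0, 3.0])

# Its "dual": a linear functional w |-> v^T w, i.e., just a dot product.
dual_v = lambda w: v @ w

w = np.array([4.0, 5.0, 6.0])
print(dual_v(w))        # 32.0 -- the same number as np.dot(v, w)
```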

So, it’s not just that no one has written about these ideas before. The trouble is that they have, including the extent to which they have and the way they did.

And therefore, writing about the same ideas but in plain(er) language (but sufficiently accurately) gets tough, extraordinarily tough.

But I am trying. … Don’t keep too high a set of hopes… but well, at least, I am trying…

BTW, talking of fields and all, here are a few interesting stories (starting from today’s ToI, and after a bit of a Google search)[^][^] [^][^].

A Song I Like:

(Marathi) “maajhyaa re preeti phulaa”

# In maths, the boundary is…

In maths, the boundary is a verb, not a noun.

It’s an active something, that, through certain agencies (whose influence, in the usual maths, is wholly captured via differential equations) actually goes on to act [directly or indirectly] over the entirety of a [spatial] region.

Mathematicians have come to forget about this simple physical fact, but by the basic rules of knowledge, that’s how it is.

They love to portray the BV (boundary-value) problems in terms of some dead thing sitting at the boundary, esp. for the Dirichlet variety of problems (esp. for the case when the field variable is zero out there), but that’s not what the basic nature of the abstraction is actually like. You couldn’t possibly build the very abstraction of a boundary unless you first presupposed that what it represented in maths was an active [read: physically active] something!

Keep that in mind; keep on reminding yourself at least $10^n$ times every day, where $n$ is an integer $\ge 1$.

A Song I Like:

[Unlike most other songs, this was an “average” one in my [self-]esteemed teenage opinion, formed after listening to it on a poor-reception-area radio in an odd town at some odd times. … It changed forever to a “surprisingly wonderful one” the moment I saw the movie in my SE (second year engineering) while at COEP. … And, I haven’t gotten out of that impression yet… .]

(Hindi) “main chali main chali, peechhe peeche jahaan…”
Music: Shankar-Jaikishan
Lyrics: Shailendra

[May be an editing pass would be due tomorrow or so?]

# Is something like a re-discovery of the same thing by the same person possible?

Yes, we continue to remain very busy.

However, in spite of all that busy-ness, in whatever spare time I have [in the evenings, sometimes at nights, why, even on early mornings [which is quite unlike me, come to think of it!]], I cannot help but “think” in a bit “relaxed” [actually, abstract] manner [and by “thinking,” I mean: musing, surmising, etc.] about… about what else but: QM!

So, I’ve been doing that. Sort of like, relaxed distant wonderings about QM…

Idle musings like that are very helpful. But they also carry a certain danger: it is easy to begin to believe your own story, even if the story itself is not borne out by well-established equations (i.e. by physic-al evidence).

But keeping that part aside, and thus coming to the title question: Is it possible that the same person makes the same discovery twice?

It may be difficult to believe so, but I… I seem to have managed to pull precisely such a trick.

Of course, the “discovery” in question is, relatively speaking, only a part of the whole story, and not the whole story itself. Still, I do think that I had discovered a certain important part of a conclusion about QM a while ago, and then, later on, had completely forgotten about it, and then, in a slow, patient process, I seem now to have worked inch-by-inch to reach precisely the same old conclusion.

In short, I have re-discovered my own (unpublished) conclusion. The original discovery was maybe in the first half of this calendar year. (I might even have made a hand-written note about it; I need to look up my hand-written notes.)

Now, about the conclusion itself. … I don’t know how to put it best, but I seem to have reached the conclusion that the postulates of quantum mechanics [^], say as stated by Dirac and von Neumann [^], have been conceptualized inconsistently.

Please note the issue and the statement I am making, carefully. As you know, more than 9 interpretations of QM [^][^][^] have been acknowledged right in the mainstream studies of QM [read: University courses] themselves. Yet, none of these interpretations, as far as I know, goes on to actually challenge the quantum mechanical formalism itself. They all do accept the postulates just as presented (say by Dirac and von Neumann, the two “mathematicians” among the physicists).

Coming to my position: I, too, used to say exactly the same thing. I used to say that I agree with the quantum postulates themselves. My position was that the conceptual aspects of the theory—at least many of them—are missing, and so, these need to be supplied, and if the need be, these also need to be expanded.

But, as far as the postulates themselves go, mine used to be the same position as that in the mainstream.

Until this morning.

Then, this morning, I came to realize that I have “re-discovered,” (i.e. independently discovered for the second time), that I actually should not be buying into the quantum postulates just as stated; that I should be saying that there are theoretical/conceptual errors/misconceptions/misrepresentations woven-in right in the very process of formalization which produced these postulates.

Since I think that I should be saying so, consider that, with this blog post, I have said so.

Just one more thing: the above doesn’t mean that I don’t accept Schrodinger’s equation. I do. In fact, I now seem to embrace Schrodinger’s equation with even more enthusiasm than I have ever done before. I think it’s a very ingenious and a very beautiful equation.

A Song I Like:

(Hindi) “tum jo hue mere humsafar”
Music: O. P. Nayyar
Singers: Geeta Dutt and Mohammad Rafi
Lyrics: Majrooh Sultanpuri

Update on 2017.10.14 23:57 IST: Streamlined a bit, as usual.

# Fluxes, scalars, vectors, tensors…. and, running in circles about them!

0. This post is written for those who know something about Thermal Engineering (i.e., fluid dynamics, heat transfer, and transport phenomena) say up to the UG level at least. [A knowledge of Design Engineering, in particular, the tensors as they appear in solid mechanics, would be helpful to have but is not necessary. After all, contrary to what many UGC and AICTE-approved (Full) Professors of Mechanical Engineering teaching ME (Mech – Design Engineering) courses in SPPU and other Indian universities believe, tensors not only appear also in fluid mechanics, but, in fact, the fluids phenomena make it (only so slightly) easier to understand this concept. [But all these cartoon characters, even if they don’t know even this plain and simple a fact, can always be fully relied upon (by anyone) to raise objections about my Metallurgy background, when it comes to my own approval, at any time! [Indians!!]]]

In this post, I write a bit about the following question:

Why is the flux $\vec{J}$ of a scalar $\phi$ a vector quantity, and not a mere number (which is aka a “scalar,” in certain contexts)? Why is it not a tensor—whatever the hell the term means, physically?

And, what is the best way to define a flux vector anyway?

1.

One easy answer is that if the flux is a vector, then we can establish a flux-gradient relationship. Such relationships happen to appear as statements of physical laws in all the disciplines wherever the idea of a continuum was found useful. So the scope of the applicability of the flux-gradient relationships is very vast.

The reason to define the flux as a vector, then, becomes: because the gradient of a scalar field is a vector field, that’s why.

But this answer only tells us about one of the end-purposes of the concept, viz., how it can be used. And then the answer provided is: for the formulation of a physical law. But this answer tells us nothing by way of the very meaning of the concept of flux itself.

2.

Another easy answer is that if it is a vector quantity, then it simplifies the maths involved. Instead of having to remember to take the right $\theta$ and then multiply the relevant scalar quantity by the $\cos$ of this $\theta$, we can more succinctly write:

$q = \vec{J} \cdot \vec{S}$ (Eq. 1)

where $q$ is the amount of $\Phi$, the extensive scalar property being transported by the fluid flowing across a given finite surface $\vec{S}$; $\phi$ is the corresponding intensive quantity; and $\vec{J}$ is the flux of $\Phi$.

However, apart from being a mere convenience of notation—a useful shorthand—this answer once again touches only on the end-purpose, viz., the fact that the idea of flux can be used to calculate the amount $q$ of the transported property $\Phi$.
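A quick numerical check of that shorthand (Python/NumPy, with made-up values for $\vec{J}$ and $\vec{S}$): the dot product reproduces the $|\vec{J}|\,|\vec{S}|\cos\theta$ computation.

```python
import numpy as np

J = np.array([3.0, 0.0, 4.0])    # an illustrative flux vector
S = np.array([0.0, 0.0, 2.0])    # an illustrative area vector

q_short = J @ S                  # Eq. 1: the succinct form

# The long way: magnitudes and the angle between the two vectors.
theta = np.arccos((J @ S) / (np.linalg.norm(J) * np.linalg.norm(S)))
q_long = np.linalg.norm(J) * np.linalg.norm(S) * np.cos(theta)

print(np.isclose(q_short, q_long))   # True; both give 8.0
```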

There also is another problem with this, second, answer.

Notice that in Eq. 1, $\vec{J}$ has not been defined independently of the “dotting” operation.

If you have an equation in which the very quantity to be defined itself has an operator acting on it on one side of an equation, and then, if a suitable anti- or inverse-operator is available, then you can apply the inverse operator on both sides of the equation, and thereby “free-up” the quantity to be defined itself. This way, the quantity to be defined becomes available all by itself, and so, its definition in terms of certain hierarchically preceding other quantities also becomes straight-forward.

OK, the description looks more complex than it is, so let me illustrate it with a concrete example.

Suppose you want to define some vector $\vec{T}$, but the only basic equation available to you is:

$\vec{R} = \int \text{d} x \vec{T}$, (Eq. 2)

assuming that $\vec{T}$ is a function of position $x$.

In Eq. 2, first, the integral operator must operate on $\vec{T}(x)$ so as to produce some other quantity, here, $\vec{R}$. Thus, Eq. 2 can be taken as a definition for $\vec{R}$, but not for $\vec{T}$.

However, fortunately, a suitable inverse operator is available here; the inverse of integration is differentiation. So, what we do is to apply this inverse operator on both sides. On the right hand-side, it acts to let $\vec{T}$ be free of any operator, to give you:

$\dfrac{\text{d}\vec{R}}{\text{d}x} = \vec{T}$ (Eq. 3)

It is the Eq. 3 which can now be used as a definition of $\vec{T}$.

In principle, you don’t have to go to Eq. 3. In principle, you could perhaps venture to use a bit of notation abuse (the way the good folks in the calculus of variations and integral transforms always did), and say that the Eq. 2 itself is fully acceptable as a definition of $\vec{T}$. IMO, despite the appeal to “principles”, it still is an abuse of notation. However, I can see that the argument does have at least some point about it.
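The invert-the-operator idea can be checked symbolically; here is a small sketch (SymPy, with an illustrative $\vec{T}(x)$): integrating and then differentiating recovers the original function.

```python
import sympy as sp

x = sp.symbols('x')
T = sp.Matrix([sp.sin(x), x**2])     # an illustrative vector function T(x)

R = sp.integrate(T, x)               # Eq. 2: R = integral of T dx
T_recovered = sp.diff(R, x)          # Eq. 3: dR/dx frees up T again

print(sp.simplify(T_recovered - T))  # the zero vector: T is recovered
```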

But the real trouble with using Eq. 1 (reproduced below)

$q = \vec{J} \cdot \vec{S}$ (Eq. 1)

as a definition for $\vec{J}$ is that no suitable inverse operator exists when it comes to the dot operator.

3.

Let’s try another way to attempt defining the flux vector, and see what it leads to. This approach goes via the following equation:

$\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n}$ (Eq. 4)

where $\hat{n}$ is the unit normal to the surface $\vec{S}$, defined thus:

$\hat{n} \equiv \dfrac{\vec{S}}{|\vec{S}|}$ (Eq. 5)

Then, as the crucial next step, we introduce one more equation for $q$, one that is independent of $\vec{J}$. For phenomena involving fluid flows, this extra equation is quite simple to find:

$q = \phi \rho \dfrac{\Omega_{\text{traced}}}{\Delta t}$ (Eq. 6)

where $\phi$ is the mass-density of $\Phi$ (the scalar field whose flux we want to define), $\rho$ is the volume-density of mass itself, and $\Omega_{\text{traced}}$ is the volume that is imaginarily traced by that specific portion of fluid which has imaginarily flowed across the surface $\vec{S}$ in an arbitrary but small interval of time $\Delta t$. Notice that $\Phi$ is the extensive scalar property being transported via the fluid flow across the given surface, whereas $\phi$ is the corresponding intensive quantity.

Now express $\Omega_{\text{traced}}$ in terms of the imagined maximum normal distance from the plane $\vec{S}$ up to which the forward moving front is found extended after $\Delta t$. Thus,

$\Omega_{\text{traced}} = \xi |\vec{S}|$ (Eq. 7)

where $\xi$ is the traced distance (measured in a direction normal to $\vec{S}$). Now, using simple projection (the same geometric fact that gives a parallelogram its base-times-normal-height area), we have that:

$\xi = \delta \cos\theta$ (Eq. 8)

where $\delta$ is the traced distance in the direction of the flow, and $\theta$ is the angle between the unit normal to the plane $\hat{n}$ and the flow velocity vector $\vec{U}$. Using vector notation, Eq. 8 can be expressed as:

$\xi = \vec{\delta} \cdot \hat{n}$ (Eq. 9)

Now, by definition of $\vec{U}$:

$\vec{\delta} = \vec{U} \Delta t$, (Eq. 10)

Substituting Eq. 10 into Eq. 9, we get:

$\xi = \vec{U} \Delta t \cdot \hat{n}$ (Eq. 11)

Substituting Eq. 11 into Eq. 7, we get:

$\Omega_{\text{traced}} = \vec{U} \Delta t \cdot \hat{n} |\vec{S}|$ (Eq. 12)

Substituting Eq. 12 into Eq. 6, we get:

$q = \phi \rho \dfrac{\vec{U} \Delta t \cdot \hat{n} |\vec{S}|}{\Delta t}$ (Eq. 13)

Cancelling out the $\Delta t$, Eq. 13 becomes:

$q = \phi \rho \vec{U} \cdot \hat{n} |\vec{S}|$ (Eq. 14)
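Before moving on, a numerical sanity check of the chain Eq. 6 through Eq. 14 (Python/NumPy, all values illustrative): computing $q$ via the traced volume and via Eq. 14 directly must agree.

```python
import numpy as np

phi, rho = 0.8, 1000.0            # intensive density of Phi; mass density
U = np.array([2.0, 1.0, 2.0])     # flow velocity
n = np.array([0.0, 0.0, 1.0])     # unit normal to the surface
S_mag = 0.5                       # |S|, the surface area
dt = 0.01                         # a small time interval

omega_traced = ((U * dt) @ n) * S_mag   # Eq. 12: the traced volume
q_eq6 = phi * rho * omega_traced / dt   # Eq. 6
q_eq14 = phi * rho * (U @ n) * S_mag    # Eq. 14

print(np.isclose(q_eq6, q_eq14))        # True; both give 800.0
```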

Having got an expression for $q$ that is independent of $\vec{J}$, we can now use it in order to define $\vec{J}$. Thus, substituting Eq. 14 into Eq. 4:

$\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} = \dfrac{\phi \rho \vec{U} \cdot \hat{n} |\vec{S}|}{|\vec{S}|} \hat{n}$ (Eq. 16)

Cancelling out the two $|\vec{S}|$s (because it’s a scalar—you can always divide any term by a scalar (or even by a complex number) but not by a vector), we finally get:

$\vec{J} \equiv \phi \rho \vec{U} \cdot \hat{n} \hat{n}$ (Eq. 17)

In Eq. 17, there is this curious sequence: $\hat{n} \hat{n}$.

It’s a sequence of two vectors, but the vectors apparently are not connected by any of the operators that are taught in the Engineering Maths courses on vector algebra and calculus—there is neither the dot ($\cdot$) operator nor the cross ($\times$) operator appearing in between the two $\hat{n}$s.

But, for the time being, let’s not get too much perturbed by the weird-looking sequence. For the time being, you can mentally insert parentheses like these:

$\vec{J} \equiv \left[ \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \right) \right] \hat{n}$ (Eq. 18)

and see that each of the two terms within the parentheses is a vector, and that these two vectors are connected by a dot operator so that the terms within the square brackets all evaluate to a scalar. According to Eq. 18, the scalar magnitude of the flux vector is:

$|\vec{J}| = \left( \phi \rho \vec{U}\right) \cdot \left( \hat{n} \right)$ (Eq. 19)

and its direction is given by: $\hat{n}$ (the second one, i.e., the one which appears in Eq. 18 but not in Eq. 19).
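These groupings are easy to check numerically. Here is a minimal sketch (Python with NumPy; all the field values below are arbitrary made-up numbers, not taken from any particular flow):

```python
import numpy as np

# Arbitrary illustrative values -- not from any particular flow
phi = 2.0                           # scalar field value
rho = 1.2                           # density
U = np.array([3.0, 4.0, 0.0])       # flow velocity vector
n_hat = np.array([1.0, 0.0, 0.0])   # unit normal to the plane

# Eq. 19: the scalar magnitude of the flux vector
J_mag = np.dot(phi * rho * U, n_hat)

# Eq. 18: the flux vector -- the scalar in the square brackets
# times the second (direction-giving) unit normal
J = J_mag * n_hat

print(J_mag)   # -> 7.2
```

The point to notice is that `J` always comes out pointing along `n_hat`; the dot product only sets its length (and sign).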

5.

We explained away our difficulty with Eq. 17 by inserting parentheses at suitable places. But this procedure of merely inserting parentheses looks, by itself, conceptually very attractive, doesn’t it?

If by not changing any of the quantities or the order in which they appear, and if by just inserting parentheses, an equation somehow begins to make perfect sense (i.e., if it seems to acquire a good physical meaning), then we have to wonder:

Since it is possible to insert parentheses in Eq. 17 in some other way, in some other places—to group the quantities in some other way—what physical meaning would such an alternative grouping have?

That’s a delectable possibility, potentially opening new vistas of physico-mathematical reasoning for us. So, let’s pursue it a bit.

What if the parentheses were to be inserted the following way?:

$\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right)$ (Eq. 20)

On the right hand-side, the terms in the second set of parentheses evaluate to a vector, as usual. However, the terms in the first set of parentheses are special.

The fact of the matter is, there is an implicit operator connecting the two vectors, and if it is made explicit, Eq. 20 would rather be written as:

$\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right)$ (Eq. 21)

The $\otimes$ operator, as it so happens, is a binary operator that operates on two vectors (which in general need not be one and the same vector, as is the case here, and whose order with respect to the operator does matter). It produces a new mathematical object called a tensor.

The general form of Eq. 21 is like the following:

$\vec{V} = \vec{\vec{T}} \cdot \vec{U}$ (Eq. 22)

where we have put two arrows on top of the tensor, to bring out the idea that it has something to do with two vectors (in a certain order). Eq. 22 may be read as follows: Begin with an input vector $\vec{U}$. When it is multiplied by the tensor $\vec{\vec{T}}$, we get another vector, the output vector: $\vec{V}$. The tensor quantity $\vec{\vec{T}}$ is thus a mapping between an arbitrary input vector and its uniquely corresponding output vector. It may also be thought of as a unary operator which accepts a vector on its right hand-side as an input, and transforms it into the corresponding output vector.
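The tensor-as-mapping idea of Eqs. 21 and 22 can also be verified in a few lines. A minimal sketch (Python with NumPy; the values are arbitrary made-up numbers, and `np.outer` plays the role of the $\otimes$ operator):

```python
import numpy as np

phi, rho = 2.0, 1.2                 # arbitrary illustrative values
U = np.array([3.0, 4.0, 0.0])
n_hat = np.array([1.0, 0.0, 0.0])

# n ⊗ n: the tensor (dyadic) product, a 3x3 matrix
T = np.outer(n_hat, n_hat)

# Eq. 21: the tensor maps the input vector (phi rho U)
# to the output vector -- the flux vector J
J_via_tensor = T @ (phi * rho * U)

# Eq. 18: the same flux via the scalar-times-direction grouping
J_via_dot = np.dot(phi * rho * U, n_hat) * n_hat

assert np.allclose(J_via_tensor, J_via_dot)
```

Since $\hat{n} \otimes \hat{n}$ happens to be a symmetric matrix, multiplying it by the same vector on the left or on the right gives the same numbers, a fact worth keeping in mind when pondering Q. 6.2 below.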

6. “Where am I?…”

Now is the time to take a pause and ponder about a few things. Let me begin doing that, by raising a few questions for you:

Q. 6.1:

What kind of a bargain have we ended up with? We wanted to show how the flux of a scalar field $\Phi$ must be a vector. However, in the process, we seem to have adopted an approach which says that the only way the flux—a vector—can at all be defined is in reference to a tensor—a more advanced concept.

Instead of simplifying things, we seem to have ended up complicating the matters. … Have we, really? … Can we keep the physical essentials of the approach all the same and yet avoid making any reference to the tensor concept in our definition of the flux vector? Exactly how?

(Hint: Look at the above development very carefully once again!)

Q. 6.2:

In Eq. 20, we put the parentheses in this way:

$\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right)$ (Eq. 20, reproduced)

What would happen if we were to group the same quantities, but alter the order of the operands for the dot operator?  After all, the dot product is commutative, right? So, we could have easily written Eq. 20 rather as:

$\vec{J} \equiv \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \hat{n} \right)$ (Eq. 20′)

What could be the reason why in writing Eq. 20, we might have made the choice we did?

Q. 6.3:

We wanted to define the flux vector for all fluid-mechanical flow phenomena. But in Eq. 21, reproduced below, what we ended up having was the following:

$\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right)$ (Eq. 21, reproduced)

Now, from our knowledge of fluid dynamics, we know that Eq. 21 seemingly stands only for one kind of a flux, namely, the convective flux. But what about the diffusive flux? (To know the difference between the two, consult any good book/course-notes on CFD using FVM, e.g. Jayathi Murthy’s notes at Purdue, or Versteeg and Malalasekera’s text.)

Q. 6.4:

Try to pursue this line of thought a bit:

$q = \vec{J} \cdot \vec{S}$ (Eq. 1, reproduced)

Express $\vec{S}$ as a product of its magnitude and direction:

$q = \vec{J} \cdot |\vec{S}| \hat{n}$ (Eq. 23)

Divide both sides of Eq. 23 by $|\vec{S}|$:

$\dfrac{q}{|\vec{S}|} = \vec{J} \cdot \hat{n}$ (Eq. 24)

“Multiply” both sides of Eq. 24 by $\hat{n}$:

$\dfrac{q} {|\vec{S}|} \hat{n} = \vec{J} \cdot \hat{n} \hat{n}$ (Eq. 25)

We seem to have ended up with a tensor once again! (and more rapidly than in the development in section 4. above).

Now, looking at what kind of a change the left hand-side of Eq. 24 undergoes when we “multiply” it by a vector (which is: $\hat{n}$), can you guess something about what the “multiplication” on the right hand-side by $\hat{n}$ might mean? Here is a hint:

To multiply a scalar by a vector is, really speaking, meaningless. First, you need to have a vector space; then, you are allowed to take any arbitrary vector from that space, and scale it up (without changing its direction) by multiplying it with a number that acts as a scalar. The result at least looks the same as “multiplying” a scalar by a vector.

What then might be happening on the right hand side?

Q.6.5:

Recall your knowledge (i) that vectors can be expressed as single-column or single-row matrices, and (ii) how matrices can be algebraically manipulated, esp. the rules for their multiplications.

Try to put the above developments using an explicit matrix notation.

In particular, pay attention to the matrix-algebraic notation for the dot product between a row- or column-vector and a square matrix, and the effect it has on your answer to question Q.6.2. above. [Hint: Try to use the transpose operator if you reach what looks like a dead-end.]
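As a start on this exercise, here is a minimal sketch in explicit matrix notation (Python with NumPy; illustrative numbers only, and the column/row conventions are my own choice):

```python
import numpy as np

# "Primary" vector: a single-column matrix; illustrative numbers only
n = np.array([[1.0], [0.0], [0.0]])              # unit normal as a 3x1 column
v = 2.0 * 1.2 * np.array([[3.0], [4.0], [0.0]])  # phi*rho*U as a 3x1 column

# n n^T: the tensor n ⊗ n written out as a 3x3 square matrix
T = n @ n.T

# Post-multiplying the matrix by a column vector yields a column (3x1) ...
J_col = T @ v

# ... while pre-multiplying it by the transposed ("dual") row vector
# yields a row (1x3): the same numbers, living in different "spaces"
J_row = v.T @ T

print(J_col.shape, J_row.shape)   # -> (3, 1) (1, 3)
```

Note how the transpose decides whether the result comes out as a row or as a column; that is precisely the dead-end-breaking device the hint points to.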

Q.6.6.

Suppose I introduce the following definitions: All single-column matrices are “primary” vectors (whatever the hell it may mean), and all single-row matrices are “dual” vectors (once again, whatever the hell it may mean).

Given these definitions, you can see that any primary vector can be turned into its corresponding dual vector simply by applying the transpose operator to it. Taking the logic to full generality, the entirety of a given primary vector-space can then be transformed into a certain corresponding vector space, called the dual space.

Now, using these definitions, and in reference to the definition of the flux vector via a tensor (Eq. 21), but with the equation now re-cast into the language of matrices, try to identify the physical meaning of the concept of the “dual” space. [If you fail to, I will surely provide a hint.]

As a part of this exercise, you will also be able to figure out which of the two $\hat{n}$s forms the “primary” vector space and which $\hat{n}$ forms the dual space, if the tensor product $\hat{n}\otimes\hat{n}$ itself appears (i) before the dot operator or (ii) after the dot operator, in the definition of the flux vector. Knowing the physical meaning for the concept of the dual space of a given vector space, you can then see what the physical meaning of the tensor product of the unit normal vectors ($\hat{n}$s) is, here.

Over to you. [And also to the UGC/AICTE-Approved Full Professors of Mechanical Engineering in SPPU and in other similar Indian universities. [Indians!!]]

A Song I Like:

[TBD, after I make sure all LaTeX entries have come out right, which may very well be tomorrow or the day after…]

# Mathematics—Historic, Contemporary, and Its Relation to Physics

The title of this post does look very ambitious, but in fact the post itself isn’t. I mean, I am not going to even attempt to integrate these diverse threads at all. Instead, I am going to either just jot down a few links, or copy-paste my replies (with a bit editing) that I had made at some other blogs.

1. About (not so) ancient mathematics:

1.1 Concerning calculus: It was something of a goose-bumps moment for me to realize that the historic Indians had very definitely gotten to that branch of mathematics which is known as calculus. You have to understand the context behind it.

Some three centuries ago, there were priority battles concerning the invention of calculus (started by Newton, and joined by Leibniz and his supporters). Echoes of these arguments could still be heard in popular science writings as recently as when I was a young man, about three decades ago.

Against this backdrop, it was particularly wonderful that an Indian mathematician as early as some eight centuries ago had gotten to the basic idea of calculus.

The issue was highlighted by Prof. Abinandanan at the blog nanpolitan, here [^]. It was based on an article by Prof. Biman Nath that had appeared in the magazine Frontline [^]. My replies can be found at Abi’s post. I am copy-pasting my replies here. I am also taking the opportunity to rectify a mistake—somehow, I thought that Nath’s article appeared in the Hindu newspaper, and not in the Frontline magazine. My comment (now edited just so slightly):

0. Based on my earlier readings of the subject matter (and I have never been too interested in the topic, and so, it was generally pretty much a casual reading), I used to believe that the Indians had not reached that certain abstract point which would allow us to say that they had got to calculus. They had something of a pre-calculus, I thought.

Based (purely) on Prof. Nath’s article, I have now changed my opinion.

Here are a few points to note:

1. How “jyaa” turned to “sine” makes for a fascinating story. Thanks for its inclusion, Prof. Nath.

2. Aaryabhata didn’t have calculus. Neither did Bramhagupta [my spelling is correct]. But if you wonder why the latter might have laid such an emphasis on the zero about the same time that he tried taking Aaryabhata’s invention further, chances are, there might have been some churning in Bramhagupta’s mind regarding the abstraction of the infinitesimal, though, with the evidence available, he didn’t reach it.

3. Bhaaskara II, if the evidence in the article is correct, clearly did reach calculus. No doubt about it.

Not only did he reach a more abstract level, he even finished the concept by giving it a name: “taatkaalik.” Epistemologically speaking, the concept formation was complete.

I wonder why Prof. Nath, writing for the Frontline, didn’t allocate a separate section to Bhaaskara II. The “giant leap” richly deserved it.

And, he even got to the max-min problem by setting the derivative to zero. IMO, this is a second giant leap. Conceptually, it is so distinctive to calculus that even just a fleeting mention of it would be enough to permanently settle the issue.

You can say that Aaryabhata and Bramhagupta had some definite anticipation of calculus. And you can’t possibly say much more about Archimedes’ method of exhaustion either. But, as a sum total, I think, they still missed calculus per se.

But with this double whammy (or, more accurately, the one-two punch), Bhaaskara II clearly had got the calculus.

Yes, it would have been nice if he could have left for the posterity a mention of the limit. But writing down the process of reaching the invention has always been so unlike the ancient Indians. Philosophically, the atmosphere would generally be antithetical to such an idea; the scientist, esp. the mathematician, may then be excused.

But then, if mathematicians had already been playing with infinite series with ease, and were already performing the calculus of finite differences in the context of these infinite series, even explicitly composing verses about their results, then they can be excused for not having conceptualized limits.

After all, even Newton initially worked only with the fluxion and Leibniz with the infinitesimal. The modern epsilon–delta definition was still some one to two centuries (out of the three to four centuries of modern science) in the coming.

But when you explicitly say “instantaneous,” (i.e. after spelling out the correct thought process leading to it), there is no way one can say that some distance had yet to be travelled to reach calculus. The destination was already there.

And as if to remove any doubt still lingering, when it comes to the min-max condition, no amount of merely geometric thinking would get you there. Reaching that conclusion means not only that the train had left the first station after entering the calculus territory, but also that it had in fact gone past the second or the third station as well. Complete with an application from astronomy—the first branch of physics.

I would like to know if there are any counter-arguments to the new view I now take of this matter, as spelt out above.

4. Maadhava missed it. The 1/4 vs. 1/6 is not hair-splitting. It is a very direct indication of the fact that either Maadhava did a “typo” (not at all possible, considering that these were verses to be by-hearted by repetition by the student body), or, obviously, he missed the idea of the repeated integration (which in turn requires considering a progressively greater domain even if only infinitesimally). Now this latter idea is at the very basis of the modern Taylor series. If Maadhava were to perform that repeated integration (and he would be a capable mathematical technician to be able to do that should the idea have struck him), then he would surely get 1/6. He would get that number, even if he were not to know anything about the factorial idea. And, if he could not get to 1/6, it’s impossible that he would get the idea of the entire infinite series i.e. the Taylor series, right.
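Incidentally, the arithmetic behind the 1/6 can be seen from repeated integration alone, without any prior knowledge of the factorial. A minimal sketch in Python (my own illustration of the point, not Maadhava’s procedure):

```python
from fractions import Fraction

# Integrate the constant 1 repeatedly (from 0 to x):
# integrating c * x^(k-1) gives (c/k) * x^k, so after three
# integrations the coefficient of x^3 is 1/(1*2*3)
coeff = Fraction(1)
for k in range(1, 4):
    coeff /= k

print(coeff)   # -> 1/6  (the x^3 coefficient in the sine series)
```

Whether or not this mirrors the historical route, it shows that the 1/6 falls out of the repetition itself; no separate factorial concept is needed.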

5. Going by the content of the article, Prof. Nath’s conclusion in the last paragraph is, as indicated above, in part a non sequitur.

6. But yes, I, too, very eagerly look forward to what Prof. Nath has to say subsequently on this and related issues.

But as far as issues such as the existence of progress only in fits here and there, and indeed the absence of a generally monotonically increasing build-up of knowledge, are concerned (observe the partial regression in Bramhagupta from Aaryabhat, or in Maadhav from Bhaaskar II), I think that philosophy, as the fundamental factor in the human condition, is relevant.

7. And, oh, BTW, is “Matteo Ricci” a corrupt form of the original “Mahadeva Rishi” [or “Maadhav Rishi”] or some such a thing? … May Internet battles ensue!

1.2 Concerning “vimaan-shaastra” and estimating $\pi$: Once again, this was a comment that I made at Abi’s blog, in response to his post on the claims concerning “vimaan-shaastra” and all, here[^]. Go through that post, to know the context in which I wrote the following comment (reproduced here with a bit of copy-editing):

I tend not to out of hand dismiss claims about the ancient Indian tradition. However, this one about the “Vimaan”s and all does seem to exceed even my limits.

But, still, I do believe that it can also be very easy to dismiss such claims without giving them due consideration. Yes, so many of them are ridiculous. But not all. Indeed, as a less noted fact, some of the defenders themselves do contradict each other, but never do notice this fact.

Let me give you an example. I am unlike some who would accept a claim only if there is direct archaeological evidence for it. IMO, theirs is a materialistic position, and materialism is a false premise; it’s the body-side of the mind-body dichotomy (in Ayn Rand’s sense of the terms). And so, I am willing to consider the astronomical references contained in the ancient verses as evidence. So, in that sense, I don’t dismiss a 10,000+ year-old history of India; I don’t mindlessly accept 600 BC or so as the starting point of civilization and culture, a date so convenient to the missionaries of the Abrahamic traditions. IMO, not every influential commentator to come from the folds of the Western culture can be safely assumed to have attained the levels obtained by the best among the Greek or Enlightenment thinkers.

And, so, I am OK if someone shows, based on the astronomical methods, the existence of the Indian culture, say, 5000+ years ago.

Yet, there are two notable facts here. (i) The findings of different proponents of this astronomical method of dating of the past events (say the dates of events mentioned in RaamaayaNa or Mahaabhaarata) don’t always agree with each other. And, more worrisome is the fact that (ii) despite Internet, they never even notice each other, let alone debate the soundness of their own approaches. All that they—and their supporters—do is to pick out Internet (or TED etc.) battles against the materialists.

A far deeper thinking is required to even just approach these (and such) issues. But the proponents don’t show the required maturity.

It is far too easy to jump to conclusions and blindly assert that there were material “Vimaana”s; that “puShpak” etc. were neither a valid description of a spiritual/psychic phenomenon nor a result of a vivid poetic imagination. It is much more difficult, comparatively speaking, to think of a later date insertion into a text. It is most difficult to be judicious in ascertaining which part of which verse of which book, can be reliably taken as of ancient origin, which one is a later-date interpolation or commentary, and which one is a mischievous recent insertion.

Earlier (i.e. decades earlier, while a school-boy or an undergrad in college, etc.), I tended to regard that very last possibility as not at all possible. Enough people couldn’t possibly have had enough mastery of Sanskrit, practically speaking, to fool enough honest Sanskrit-knowing people, I thought.

Over the decades, I guess, I have become wiser. Not only have I understood the possibilities of human nature better on the up side, but also on the down side. For instance, one of my colleagues, an engineer, an IITian who lived abroad, could himself compose poetry in Sanskrit very easily, I learnt. No, he wouldn’t do a forgery, sure. But could one say the same for everyone who had a mastery of Sanskrit, without being too naive?

And, while on this topic, if someone knows the exact reference from which this verse quoted on Ramesh Raskar’s earlier page comes, and drops a line to me, I would be grateful. http://www.cs.unc.edu/~raskar/ . As usual, when I first read it, I was impressed a great deal. Until, of course, other possibilities struck me later. (It took years for me to think of these other possibilities.)

But, in case you missed it, I do want to highlight my question again: Do you know the reference from which this verse quoted by Ramesh Raskar (now a professor at MIT Media Lab) comes? If yes, please do drop me a line.

2. An inspiring tale of a contemporary mathematician:

Here is an inspiring story of a Chinese-born mathematician who beat all the odds to achieve absolutely first-rank success.

I can’t resist the temptation to insert my trailer: As a boy, Yitang Zhang could not even attend school because he was forced into manual labor on vegetable-growing farms—he lived in Communist China. As a young PhD graduate, he could not get a proper academic job in the USA—even though he had earned his PhD there. He then worked as an accountant of sorts, and still went on to solve one of mathematics’ most difficult problems.

Alec Wilkinson writes insightfully, beautifully, and with an authentic kind of admiration for man the heroic, for The New Yorker, here [^]. (H/T to Prof. Phanish Suryanarayana of GeorgiaTech, who highlighted this article at iMechanica [^].)

3. FQXi Essay Contest 2015:

(Hindi) “Picture abhi baaki nahin hai, dost! Picture to khatam ho gai” … Or, welcome back to the “everyday” reality of the modern day—modern day physics, modern day mathematics, and modern day questions concerning the relation between the two.

In other words, they still don’t get it—the relation between mathematics and physics. That’s why FQXi [^] has got an essay contest about it. They even call it “mysterious.” More details here [^]. (H/T to Roger Schlafly [^].)

Though this last link looks like a Web page of some government lab (American government, not Indian), do check out the second section on that same page: “II Evaluation Criteria.” The main problem description appears in this section. Let me quote the main problem description right in this post:

The theme for this Essay Contest is: “Trick or Truth: the Mysterious Connection Between Physics and Mathematics”.

In many ways, physics has developed hand-in-hand with mathematics. It seems almost impossible to imagine physics without a mathematical framework; at the same time, questions in physics have inspired so many discoveries in mathematics. But does physics simply wear mathematics like a costume, or is math a fundamental part of physical reality?

Why does mathematics seem so “unreasonably” effective in fundamental physics, especially compared to math’s impact in other scientific disciplines? Or does it? How deeply does mathematics inform physics, and physics mathematics? What are the tensions between them — the subtleties, ambiguities, hidden assumptions, or even contradictions and paradoxes at the intersection of formal mathematics and the physics of the real world?

This essay contest will probe the mysterious relationship between physics and mathematics.

Further, this section actually carries a bunch of thought-provocative questions to get you going in your essay writing. … And, yes, the important dates are here [^].

Is this issue interesting enough? Yes.

Will I write an essay? No.

Why? Because I haven’t yet put my thoughts in a sufficiently coherent form.

However, I notice that the contest announcement itself includes so many questions that are worth attempting. And so, I will think of jotting down my answers to these questions, even if in a bit of a hurry.

However, I will neither further forge the answers together in a single coherent essay, nor will I participate in the contest.

And even if I were to participate… Well, let me put it this way. Going by Max Tegmark’s and others’ inclinations, I (sort of) “know” that anyone with my kind of answers would stand a very slim chance of actually landing the prize. … That’s another important reason for me not even to try.

But, yes, at least this time round, many of the detailed questions themselves are both valid and interesting. And so, it should be worth your while addressing them (or at least working out what your own answers to them would be). …

As far as I am concerned, the only issue is time. … Given my habits, writing about such things—the deep and philosophical, and therefore fascinating, things, the things that are interesting by themselves—has a way of totally getting out of control. That is, even if you know you aren’t going to interact with anyone else. And mandatory interaction, incidentally, is another FQXi requirement that discourages me from participating.

So, as the bottom-line: no definitive promises, but let me see if I can write a post or a document by just straight-forwardly jotting down my answers to those detailed questions, without bothering to explain myself much, and without bothering to tie my answers together into a coherent whole.

Ok. Enough is enough. Bye for now.

[May be I will come back and add the “A Song I Like” section or so. Not sure. May be I will; may be I won’t. Bye.]

[E&OE]

# Free books on the nature of mathematics

Just passing along a quick tip, in case you didn’t know about it:

Books by Prof. Morris Kline:

1. Mathematics in Western Culture (1954) [^]
2. Mathematics and the Search for Knowledge (1985) [^]
3. Mathematics and the Physical World (1959) [^] (I began Kline’s books with this one.)

Of course, Kline’s 3-volume book, “Mathematical Thought from Ancient to Modern Times,” is the most comprehensive and detailed one. However, it is not yet available off archive.org. But that hardly matters, because the book is in print, and a pretty inexpensive (Rs. ~1600) paperback is available at Amazon [^]. The Kindle edition is just Rs. 400.

(No, I don’t have Kindle. Neither do I plan to buy one. I will probably not use it even if someone gives it to me for free. I am sure I will find someone else to pass it on for free, again! … I don’t have any use for Kindle. I am old enough to like my books only the old-fashioned way—the fresh smell of the paper and the ink included. Or, the crispiness of the fading pages of an old one. And, I like my books better in the paperback format, not hard-cover. Easy to hold while comfortably reclining in my chair or while lying over a sofa or a bed.)

Anyway, back to archive.org.

Anyway, enjoy! (And let me know if you run into some other interesting books at archive.org.)

* * * * *   * * * * *   * * * * *

A Song I Like:
(Hindi) “chain se hum ko kabhie…”
Music: O. P. Nayyar
Singer: Asha Bhosale
Lyrics: S. H. Bihari

Incidentally, I have often thought that this song was ideally suited for a saxophone, i.e., apart from Asha’s voice. Not just any instrument, but, specifically, only a saxophone. … Today I searched for, and heard for the first time, a sax rendering—the one by Babbu Khan. It’s pretty good, though I had a bit of a feeling that someone could do better, probably, a lot better. Manohari Singh? Did he ever play this song on a sax?

As to the other instruments, though I often do like to listen to a flute (I mean the Indian flute (“baansuri”)), this song simply is not at all suited to one. For instance, just listen to Shridhar Kenkare’s rendering. The entire (Hindi) “dard” gets lost, and then, worse: that sweetness oozing out in its place, is just plain irritating. At least to me. On the other hand, also locate on the ‘net a violin version of this song, and listen to it. It’s pathetic. … Enough for today. I have lost the patience to try out any piano version, though I bet it would sound bad, too.

Sax. This masterpiece is meant for the sax. And, of course, Asha.

[E&OE]