Determinism, Indeterminism, and the nature of the laws of physics…

The laws of physics are causal, but this fact does not imply that they can be used to determine everything you feel should be determinable with them, in every context in which they apply. What matters is the nature of the laws themselves. The laws of physics are not literally boundless; nothing in the universe is. They are logically bounded by the kind of abstractions they are.


Let’s take a concrete example.

Take a bottle, pour a little water and detergent in it, shake well, and have fun watching the Technicolor wonder which results. Bubbles form; they show resplendent colors. Then, some of them shrink, others grow, one or two of them eventually collapse, and the rest of the network of bubbles adjusts itself. The process continues.

Looking at it in an idle way can be fun: those colorful tendrils of water sliding over those thin little surfaces, those fascinating hues and geometric patterns… That dynamics which unfolds at such a leisurely pace. … Just watching it all can make for a neat time-sink—at least for a while.

But merely having fun watching bubbles collapse is not physics. Physics proper begins with a lawful description of the many different aspects of the visually evident spectacle—be it the explanation as to how those unreal-looking colors come about, or be it an explanation of the mechanisms involved in their shrinkage or growth, and eventual collapse, … Or, a prediction of exactly which bubble is going to collapse next.


For now, consider the problem of predicting, given a configuration of some bubbles at a certain time t_0, exactly which bubble is going to collapse next, and why… To solve this problem, we have to study the many different processes involved in the bubbles’ dynamics…


Theories do exist to predict various aspects of the bubble collapse process taken individually. Further, it should also be possible to combine them together. The explanation involves such theories as: the Navier-Stokes equations, which govern the flow of soap water in the thin films, and the motion of the air entrapped within each bubble; the phenomenon of film-breakage, which can involve either particle-based approaches to the modeling of fluids, or, if you insist on a continuum theory, then theories of crack initiation and growth in thin lamellae/shells; the propagation of a film-breakage, and the propagation of the stress-strain waves associated with the process; and also, theories concerning how the collapse process gets preferentially localized to only one (or at most a few) bubbles, which again involve nonlinear theories from the mechanics of materials and materials science.

All these are causal theories. It should also be possible to “throw them together” in a multi-physics simulation.

But even then, they still are not very useful in predicting which bubble in your particular setup is going to collapse next, and when, because not just the combination of these theories but even each individual theory involved is too complex.

The fact of the matter is, we cannot in practice predict precisely which bubble is going to collapse next.


The reason for our inability to predict, in this context, does not have to do just with the precision of the initial conditions. It’s also their vastness.

And it is the known, causal, physical laws themselves which tell us how a sensitive dependence on the smallest changes in the initial conditions deterministically leads to changes in the outcomes so huge that using these laws to actually make a prediction lies squarely outside our capacity to calculate.

Even simple (first- or second-order) variations in the initial conditions specified over a very small part of the network can have repercussions for the entire evolution, and it is this evolution that ultimately determines which bubble is going to collapse next.


I mention this situation because it amply illustrates a special kind of problem which we encounter in physics today. The laws governing the system’s evolution are known. Yet, in practice, they cannot be applied to perform calculations in every given situation which falls under their purview. The reason for this circumstance is that the very paradigm of formulating physical laws falls short. Let me explain very briefly here what I mean.


All physical laws are essentially quantitative in nature, and can be thought of as “functions,” i.e., as mappings from a specific set of inputs to a specific set of outputs. Since the universe is lawful, given a certain set of values for the inputs, and the specific function (the law) which does the mapping, the output is uniquely determined. Such a nature of the physical laws has come to be known as determinism. (At least that’s what the working physicist understands by the term “determinism.”) The initial conditions together with the governing equation completely determine the final outcome.

However, there are situations in which even if the laws themselves are deterministic, they still cannot practically be put to use in order to determine the outcomes. One such situation is the one we discussed above: the problem of predicting the next bubble which will collapse.

Where is the catch? It is in here:

When you say that a physical law performs a mapping from a set of inputs to a set of outputs, this description is actually vastly more general than it appears at first sight.

Consider another example, the law of Newtonian gravity.

If you have only two bodies interacting gravitationally, i.e., if all other bodies in the universe can be ignored (because their influence on the two bodies is negligibly small in the problem as posed), then the set of the required input data is indeed very small. The system itself is simple because there is only one interaction going on—that between two bodies. The simplicity of the problem design lends a certain simplicity to the system behaviour: If you vary the set of input conditions slightly, then the output changes proportionately. In other words, the change in the output is proportionately small. The system configuration itself is simple enough to ensure that such a linear relation exists between the variations in the input, and the variations in the output. Therefore, in practice, even if you specify the input conditions somewhat loosely, your prediction does err, but not too much. Its error too remains bounded well enough that we can say that the description is deterministic. In other words, we can say that the system is deterministic, only because the input–output mapping is robust under minor changes to the input.

However, if you consider the N-body problem in all its generality, then the very size of the input set itself becomes big. Any two bodies from the N bodies form a simple interacting pair. But the number of pairs is large, and worse, they all are coupled to each other through the positions of the bodies. Further, the nonlinearities involved in such a problem statement work to take away the robustness of the solution procedure. Not only is the size of the input set big, the end-solution too varies wildly with even a small variation in the input set. If you fail to specify even a single part of the input set to an adequate precision, then the predicted end-state can deterministically turn out to be wildly different. The input–output mapping is deterministic—but it is not robust under minor changes to the input. A small change in the initial angle can lead to an object ending up either on this side of the Sun or the other. Small changes produce big variations in predictions.
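To make the contrast concrete, here is a tiny numerical sketch of my own. (All the numbers below, i.e. the masses, the initial positions and velocities, and the time-step, are arbitrary, made-up values, with G and the masses set to 1 purely for convenience; only the qualitative contrast matters.) The same tiny perturbation, of the order of 10^{-9}, is applied to a two-body system and to a three-body system, and the separation between the perturbed and the unperturbed runs is compared at the end. For the two-body pair the final separation remains minuscule; for the three-body system it grows by many orders of magnitude. (The exact numbers depend on the arbitrary data chosen here, and a run may even turn numerically rough if two bodies happen to pass very close; the qualitative point still stands.)

    import numpy as np

    def accelerations(pos, masses):
        # Pairwise Newtonian gravitational accelerations (G = 1).
        acc = np.zeros_like(pos)
        for i in range(len(masses)):
            for j in range(len(masses)):
                if i != j:
                    d = pos[j] - pos[i]
                    acc[i] += masses[j] * d / np.linalg.norm(d)**3
        return acc

    def evolve(pos, vel, masses, dt=1e-3, steps=20000):
        # Velocity-Verlet integration; returns the final positions.
        pos, vel = pos.copy(), vel.copy()
        acc = accelerations(pos, masses)
        for _ in range(steps):
            pos += vel * dt + 0.5 * acc * dt**2
            new_acc = accelerations(pos, masses)
            vel += 0.5 * (acc + new_acc) * dt
            acc = new_acc
        return pos

    def divergence(pos, vel, masses, eps=1e-9):
        # Final-state separation produced by perturbing one coordinate by eps.
        p2 = pos.copy()
        p2[0, 0] += eps
        return np.linalg.norm(evolve(pos, vel, masses) - evolve(p2, vel, masses))

    m2 = np.array([1.0, 1.0])
    pos2 = np.array([[-0.5, 0.0], [0.5, 0.0]])
    vel2 = np.array([[0.0, -0.5], [0.0, 0.5]])                # a bound two-body orbit
    m3 = np.array([1.0, 1.0, 1.0])
    pos3 = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.7]])    # arbitrary three-body data
    vel3 = np.array([[0.0, -0.4], [0.0, 0.4], [0.3, 0.0]])

    print("two-body divergence:  ", divergence(pos2, vel2, m2))
    print("three-body divergence:", divergence(pos3, vel3, m3))

Both runs use exactly the same deterministic laws and the same integrator; only the fragility of the configuration differs.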

So, even if the mapping is known and is known to work (deterministically), you still cannot use this “knowledge” to actually perform the mapping from the input to the output, because the mapping is not robust to small variations in the input.

Ditto, for the soap-bubble collapse problem. If you change the initial configuration ever so slightly—e.g., if there was just a small air current in one setup and perfect stillness in another, then the outcome as to which bubble collapses next can be wildly different.

What holds for the N-body problem also holds for the bubble collapse process. The similarity is that these are complex systems. Their parts may be simple, and the physical laws governing such simple parts may be completely deterministic. Yet, there are a great many parts, and they all are coupled together such that a small change in one part—one interaction—gets multiplied and felt in all other parts, making the overall system fragile to small changes in the input specifications.

Let me add: What holds for the N-body problem or the bubble-collapse problems also holds for quantum-mechanical measurement processes. The latter too involves a large number of parts that are nonlinearly coupled to each other, and hence, forms a complex system. It is as futile to expect that you would be able to predict the exact time of the next atomic decay as it is to expect that you will be able to predict which bubble collapses next.

But all the above still does not mean that the laws themselves are indeterministic, or that, therefore, physical theories must be regarded as indeterministic. The complex systems may not be robust. But they still are composed from deterministically operating parts. It’s just that the configuration of these parts is far too complex.


It would be far too naive to think that it should be possible to make exact (non-probabilistic) predictions even in the context of systems that are nonlinear and whose parts are coupled together in a complex manner. It smacks of an irresponsible attitude to take this naive expectation as the standard by which to judge physical theories, and then, when they don’t live up to it, to jump to the conclusion that physical theories are indeterministic in nature. That’s what has happened to QM.

It should have been clear to the critics of science that the truth-hood of an assertion (or a law, or a theory) is not subject to whether every complex manner in which it can be recombined with other theoretical elements leads to robust formulations or not. The truth-hood of an assertion is subject only to whether it, by itself and in its own context, corresponds to reality or not.

The error involved here is similar, in many ways, to expecting that if a substance is good for your health in a certain quantity, then it must be good in every quantity, or that if two medicines are without side-effects when taken individually, they must remain without any harmful effects even when taken in any combination—that there should be no interaction effects. It’s the same error, albeit couched in physicists’ and philosophers’ terms, that’s all.

… Too much emphasis on “math,” and too little an appreciation of the qualitative features, only helps in compounding the error.


A preliminary version of this post appeared as a comment on Roger Schlafly’s blog, here [^]. Schlafly has often wondered about the determinism vs. indeterminism issue on his blog, and often, seems to have taken positions similar to what I expressed here in this post.

The posting of this entry was motivated out of noticing certain remarks in Lee Smolin’s response to The Edge Question, 2013 edition [^], which I recently mentioned at my own blog, here [^].


A song I like:
(Marathi) “kaa re duraavaa, kaa re abolaa…”
Singer: Asha Bhosale
Music: Sudhir Phadke
Lyrics: Ga. Di. Madgulkar


[In the interests of providing better clarity, this post shall undergo further unannounced changes/updates over the due course of time.

Revision history:
2019.04.24 23:05: First published
2019.04.25 14:41: Posted a fully revised and enlarged version.
]


Stay tuned to the NSF on the next evening…

Update on 2019.04.10 18:50 IST: 

Dimitrios Psaltis, University of Arizona in Tucson, EHT project scientist [^]:

“The size and shape of the shadow matches the precise predictions of Einstein’s general theory of relativity, increasing our confidence in this century-old theory. Imaging a black hole is just the beginning of our effort to develop new tools that will enable us to interpret the massively complex data that nature gives us.”

Update over.


Stay tuned to the NSF on the next evening (on 10th April 2019 at 06:30 PM IST) for an announcement of astronomical proportions. Or so I gather. See: “For Media” from NSF [^]. Another media advisory made by NSF roughly 9 days ago, i.e. on April Fools’ Day, here [^]. Their news “report”s [^].


No, I don’t understand the relativity theory. Not even the “special” one (when it’s taken outside of its context of the so-called “classical” electrodynamics)—let alone the “general” one. It’s not one of my fields of knowledge.

But if I had to bet my money, then, based purely on my grasp of the sociological factors operative these days in science as practised in the Western world, I would bet a good amount (even Indian Rs. 1,000/-) that the announcement would be just a further confirmation of Einstein’s theory of general relativity.

That’s how such things go, in the Western world, today.

In other words, I would be very, very, very surprised—I mean to say, about my grasp of the sociology of science in the Western world—if they found something (anything) going even apparently contrary to any one of the implications of any one of Einstein’s theories. Here, emphatically, his theory of general relativity.


That’s all for now, folks! Bye for now. Will update this post in a minor way when the facts are on the table.


TBD: The songs section. Will do that too, within the next 24 hours. That’s a promise. For sure. (Or, may be, right tonight, if a song nice enough to listen to, strikes me within the next half an hour or so… Bye, really, for now.)


A song I like:

(Hindi) “ek haseen shaam ko, dil meraa kho_ gayaa…”
Lyrics: Raajaa Mehdi Ali Khaan
Music: Madan Mohan
Singer: Mohammad Rafi [Some beautiful singing here…]


The self-field, and the objectivity of the classical electrostatic potentials: my analysis

This blog post continues from my last post, and has become overdue by now. I had promised to give my answers to the questions raised last time. Without attempting to explain too much, let me jot down the answers.


1. The rule of omitting the self-field:

This rule arises in electrostatic interactions basically because the Coulombic field has a spherical symmetry. The same rule would also work out in any field that has a spherical symmetry—not just the inverse-separation fields, and not necessarily only the singular potentials, though Coulombic potentials do show both these latter properties too.

It is helpful here to think in terms of not potentials but of forces.

Draw any arbitrary curve giving the potential as a function of the radial distance from the origin. Then, holding one end of the curve fixed at the origin, sweep the curve through all possible angles around it, to get a 3D field. This 3D field has a spherical symmetry, too. Hence, gradients at the same radial distance on opposite sides of the origin are always equal and opposite.

Now you know that the negative gradient of the potential gives you a force. Since for any spherically symmetric potential the gradients at opposite points are equal and opposite, they cancel out. So, the forces cancel out too.

Realize here that in calculating the force exerted by a potential field on a point-particle (say an electron), the force cannot be calculated in reference to just one point. The very definition of the gradient refers to two different points in space, even if they be only infinitesimally separated. So, the proper procedure is to start with a small sphere centered around the given electron, calculate the gradients of the potential field at all points on the surface of this sphere, calculate the sum of the forces thus exerted on the domain contained inside the spherical surface, and then take the sphere to the limit of vanishing size. The sum of the forces thus obtained is the net force acting on that point-particle.

In case of the Coulombic potentials, the forces thus calculated on the surface of any sphere (centered on that particle) sum to zero. This fact holds true for spheres of all radii. It is true that the gradients (and forces) progressively increase as the size of the sphere decreases—in fact they increase without bound for singular potentials. However, the aforementioned cancellation holds true at every stage of the limiting process. Hence, it holds true for the entirety of the self-field.
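If you want to see this cancellation at work numerically, here is a small sketch of my own (the radius R, the charge, and the units are arbitrary choices). It samples antipodal pairs of points on a sphere centered on the charge, computes the force, i.e. the negative gradient of the charge’s own 1/r potential, at each point, and adds everything up. Every pair cancels, and it does so at whatever radius R you pick, which is exactly what the limiting argument above needs.

    import numpy as np

    rng = np.random.default_rng(0)
    R = 1e-3        # radius of the (shrinking) sphere; any value works
    k_q = 1.0       # Coulomb constant times the charge, in arbitrary units

    n_hat = rng.normal(size=(500, 3))
    n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)   # random unit vectors

    def force(point):
        # Negative gradient of V = k_q / r for a charge sitting at the origin:
        # it points radially outward with magnitude k_q / r^2.
        r = np.linalg.norm(point)
        return k_q * point / r**3

    total = sum(force(R * n) + force(-R * n) for n in n_hat)
    print(total)    # ~ [0, 0, 0]: every antipodal pair cancels exactly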

In calculating motions of a given electron, what matters is not whether its self-field exists or not, but whether it exerts a net force on the same electron or not. The self-field does exist (at least in the sense explained later below) and in that sense, yes, it does keep exerting forces at all times, also on the same electron. However, due to the spherical symmetry, the net force that the field exerts on the same electron turns out to be zero.

In short:

Even if you were to include the self-field in the calculations, if the field is spherically symmetric, then the final net force experienced by the same electron would still have no part coming from its own self-field. Hence, to economize calculations without sacrificing exactitude in any way, we discard it from consideration. The rule of omitting the self-field is just a matter of economizing calculations; it is not a fundamental law characterizing what field may be objectively said to exist. If the potential field due to other charges exists, then, in the same sense, the self-field too exists. It’s just that for the motions of the self-field-generating electron, it is as good as non-existent.

However, the question of whether a potential field physically exists or not, turns out to be more subtle than what might be thought.


2. Conditions for the objective existence of electrostatic potentials:

It once again helps to think of forces first, and only then of potentials.

Consider two electrons in an otherwise empty spatial region of an isolated system. Suppose the first electron, e_1, is at a position \vec{r}_1, and the second electron, e_2, is at a position \vec{r}_2. What Coulomb’s law now says is that the two electrons mutually exert equal and opposite forces on each other. The magnitudes of these forces are proportional to the inverse square of the distance which separates the two. For like charges, the force is repulsive, and for unlike charges, it is attractive. The magnitudes of the electrostatic forces thus exerted do not depend on mass; they depend only on the amounts of the respective charges.

The potential energy of the system for this particular configuration is given by (i) arbitrarily assigning a zero potential to infinite separation between the two charges, and (ii) imagining as if both the charges have been brought from infinity to their respective current positions.

It is important to realize that the potential energy for a particular configuration of two electrons does not form a field. It is merely a single number.

However, it is possible to imagine that one of the charges (say e_1) is held fixed at a point, say at \vec{r}_1, and the other charge is successively taken, in any order, at every other point \vec{r}_2 in the infinite domain. A single number is thus generated for each pair of (\vec{r}_1, \vec{r}_2). Thus, we can obtain a mapping from the set of positions for the two charges, to a set of the potential energy numbers. This second set can be regarded as forming a field—in the 3D space.

However, notice that thus defined, the potential energy field is only a device of calculations. It necessarily refers to a second charge—the one which is imagined to be at one point in the domain at a time, with the procedure covering the entire domain. The energy field cannot be regarded as a property of the first charge alone.

Now, if the potential energy field U thus obtained is normalized by dividing it with the electric charge of the second charge, then we get the potential energy for a unit test-charge. Another name for the potential energy obtained when a unit test-charge is used for the second charge is: the electrostatic potential (denoted as V).
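In code, the construction just described would look something like the following sketch (my own illustration; the grid, the constant k, and the charge values are arbitrary choices): hold Q_1 fixed at \vec{r}_1, sweep the test charge Q_2 over a grid of positions \vec{r}_2, record one potential-energy number per position, and then normalize by Q_2 to get the electrostatic potential V of the fixed charge.

    import numpy as np

    k = 1.0                          # 1/(4 pi eps_0), set to 1 for convenience
    Q1, Q2 = 1.0, 1.0
    r1 = np.array([0.0, 0.0, 0.0])   # position of the fixed charge

    # A coarse grid of positions r2 for the second (test) charge.
    axis = np.linspace(-2.0, 2.0, 41)
    X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
    dist = np.sqrt((X - r1[0])**2 + (Y - r1[1])**2 + (Z - r1[2])**2)
    dist[dist < 1e-12] = np.nan      # mask the singular point r2 = r1

    U = k * Q1 * Q2 / dist           # one energy number per position of the test charge
    V = U / Q2                       # the electrostatic potential of the fixed charge

    print(np.nanmin(V), np.nanmax(V))

Note that the array U is generated only by imagining the test charge at each grid point in turn; that is the whole point being made in the text above.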

But still, in classical mechanics, the potential field also is only a device of calculations; it does not exist as a property of the first charge, because the potential energy itself does not exist as a property of that fixed charge alone. What does exist is the physical effect that there are those potential energy numbers for those specific configurations of the fixed charge and the test charge.

This is the reason why the potential energy field, and therefore the electrostatic potential of a single charge in an otherwise empty space does not exist. Mathematically, it is regarded as zero (though it could have been assigned any other arbitrary, constant value.)

Potentials arise only out of interaction of two charges. In classical mechanics, the charges are point-particles. Point-particles exist only at definite locations and nowhere else. Therefore, their interaction also must be seen as happening only at the locations where they do exist, and nowhere else.

If that is so, then in what sense can we at all say that the potential energy (or electrostatic potential) field does physically exist?

Consider a single electron in an isolated system, again. Assume that its position remains fixed.

Suppose there were something else in the isolated system—something—some object—every part of which undergoes an electrostatic interaction with the fixed (first) electron. If this second object were to be spread all over the domain, and if every part of it were able to interact with the fixed charge, then we could say that the potential energy field exists objectively—as an attribute of this second object. Ditto, for the electric potential field.

Note three crucially important points, now.

2.1. The second object is not the usual classical object.

You cannot regard the second (spread-out) object as a mere classical charge distribution. The reason is this.

If the second object were to be actually a classical object, then any given part of it would have to electrostatically interact with every other part of itself too. You couldn’t possibly say that a volume element in this second object interacts only with the “external” electron. But if the second object were also to be self-interacting, then what would come to exist would not be the simple inverse-distance potential energy field, in reference to that single “external” electron. The space would be filled with a very weird field. If we admit motion to the locally present charge in the second object, every such local charge would soon redistribute itself back “to” infinity (if it is negative), or it all would collapse into the origin (if the charge on the second object were to be positive, because the fixed electron’s field is singular). But if we allow no charge redistributions, and the second object were still to be classical (i.e. capable of self-interacting), then its field would have to have singularities everywhere. Very weird. That’s why:

If you want to regard the potential field as objectively existing, you have to also posit (i.e. postulate) that the second object itself is not classical in nature.

Classical electrostatics, if it has to regard a potential field as objectively (i.e. physically) existing, must therefore come to postulate a non-classical background object!

2.2. Assuming you do posit such a (non-classical) second object (one which becomes “just” a background object), then what happens when you introduce a second electron into the system?

You would run into another seeming contradiction. You would find that this second electron has no job left to do, as far as interacting with the first (fixed) electron is concerned.

If the potential field exists objectively, then the second electron would have to just passively register the pre-existing potential in its vicinity (because it is the second object which is doing all the electrostatic interactions—all the mutual forcings—with the first electron). So, the second electron would do nothing of consequence with respect to the first electron. It would just become a receptacle for registering the force being exchanged by the background object in its local neighborhood.

But the seeming contradiction here is that as far as the first electron is concerned, it does feel the potential set up by the second electron! It may be seen to do so once again via the mediation of the background object.

Therefore, both electrons have to be simultaneously regarded as being active and passive with respect to each other. They are active as agents that establish their own potential fields, together with an interaction with the background object. But they also become passive in the sense that they are mere point-masses that only feel the potential field in the background object and experience forces (accelerations) accordingly.

The paradox is thus resolved by having each electron set up a field as a result of an interaction with the background object—but have no interaction with the other electron at all.

2.3. Note carefully what agency is assigned to what object.

The potential field has a singularity at the position of that charge which produces it. But the potential field itself is created either by the second charge (by imagining it to be present at various places), or by a non-classical background object (which, in a way, is nothing but an objectification of the potential field-calculation procedure).

Thus, there arises a duality of a kind—a double-agent nature, so to speak. The potential energy is calculated for the second charge (the one that is passive), in the sense that the potential energy is relevant for calculating the motion of the second charge. That’s because the self-field cancels out for all motions of the first charge. However,

 The potential energy is calculated for the second charge. But the field so calculated has been set up by the first (fixed) charge. Charges do not interact with each other; they interact only with the background object.

2.4. If the charges do not interact with each other, and if they interact only with the background object, then it is worth considering this question:

Can’t the charges be seen as mere conditions—points of singularities—in the background object?

Indeed, this seems to be the most reasonable approach to take. In other words,

All effects due to point charges can be regarded as field conditions within the background object. Thus, paradoxically enough, a non-classical distributed field comes to represent the classical, massive and charged point-particles themselves. (The mass becomes just a parameter of the interactions of singularities within a 3D field.) The charges (like electrons) do not exist as classical massive particles, not even in the classical electrostatics.


3. A partly analogous situation: The stress-strain fields:

If the above situation seems too paradoxical, it might be helpful to think of the stress-strain fields in solids.

Consider a horizontally lying thin plate of steel with two rigid rods welded to it at two different points. Suppose horizontal forces of mutually opposite directions are applied through the rods (either compressive or tensile). As you know, as a consequence, stress-strain fields get set up in the plate.

From an external viewpoint, the two rods are regarded as interacting with each other (exchanging forces with each other) via the medium of the plate. However, in reality, each of them is interacting only with the object that is the plate. The direct interaction, thus, is only between a rod and the plate. A rod is forced, it interacts with the plate, the plate sets up a stress-strain field everywhere, the local stress field near the second rod interacts with it, and the second rod registers a force—which balances out the force applied at its end. Conversely, the force applied at the second rod also can be seen as getting transmitted to the first rod via the stress-strain field in the plate material.

There is no contradiction in this description, because we attribute the stress-strain field to the plate itself, and always treat this stress-strain field as if it came into existence due to both the rods acting simultaneously.

In particular, we do not try to isolate a single-rod attribute out of the stress-strain field, the way we try to ascribe a potential to the first charge alone.

Come to think of it, if we have only one rod and if we apply force to it, no stress-strain field would result (i.e. neglecting inertia effects of the steel plate). Instead, the plate would simply move in the rigid body mode. Now, in solid mechanics, we never try to visualize a stress-strain field associated with a single rod alone.

It is a fallacy of our thinking that when it comes to electrostatics, we try to ascribe the potential to the first charge, and altogether neglect the abstract procedure of placing the test charge at various locations, or the postulate of positing a non-classical background object which carries that potential.

In the interest of completeness, it must be noted that the stress-strain fields are tensor fields (they are based on the gradients of vector fields), whereas the electrostatic force-field is a vector field (it is based on the gradient of the scalar potential field). A more relevant analogy for the electrostatic field, therefore, might be the forces exchanged by two point-vortices existing in an ideal fluid.


4. But why bother with it all?

The reason I went into all this discussion is because all these issues become important in the context of quantum mechanics. Even in quantum mechanics, when you have two charges that are interacting with each other, you do run into these same issues, because the Schrodinger equation does have a potential energy term in it. Consider the following situation.

If an electrostatic potential is regarded as being set up by a single charge (as is done by the proton in the nucleus of the hydrogen atom), but if it is also to be regarded as an actually existing and spread-out entity (as a 3D field, the way Schrodinger’s equation assumes it to be), then a question arises: What is the role of the second charge (e.g., that of the electron in a hydrogen atom)? What happens when the second charge (the electron) is represented quantum mechanically? In particular:

What happens to the potential field if it represents the potential energy of the second charge, but the second charge itself is now being represented only via the complex-valued wavefunction?

And worse: What happens when there are two electrons, both interacting with each other via electrostatic repulsion, and both required to be represented quantum mechanically—as in the case of the electrons in a helium atom?

Can a charge be regarded as having a potential field as well as a wavefunction field? If so, what happens to the point-specific repulsions as are mandated by the Coulomb law? How precisely is the V(\vec{r}_1, \vec{r}_2) term to be interpreted?

I was thinking about these things when these issues occurred to me: the issue of the self-field, and the question of the physical vs. merely mathematical existence of the potential fields of two or more quantum-mechanically interacting charges.

Guess I am inching towards my full answers. In fact, I think I have reached my answers, but I need to have them verified by some physicists.


5. The help I want:

As a part of my answer-finding exercises (to be finished by this month-end), I might be contacting a second set of physicists soon enough. The issue I want to learn from them is the following:

How exactly do they do computational modeling of the helium atom using the finite difference method (FDM), within the context of the standard (mainstream) quantum mechanics?

That is the question. Once I understand this part, I would be done with the development of my new approach to understanding QM.

I do have some ideas regarding the highlighted question. It’s just that I want to have these ideas confirmed from some physicists before (or along-side) implementing the FDM code. So, I might be approaching someone—possibly you!

Please note my question once again. I don’t want to do perturbation theory. I would also like to avoid the variational method.

Yes, I am very comfortable with the finite element method, which is basically based on the variational calculus. So, given a good (detailed enough) account of the variational method for the He atom, it should be possible to translate it into the FEM terms.

However, ideally, what I would like to do is to implement it as an FDM code.
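Just to make the term concrete, here is a minimal FDM sketch of my own; it is emphatically not the helium calculation I am asking about. It solves only the single-particle, radial (l = 0) Schrodinger equation for a hydrogen-like atom in atomic units, using central differences on a uniform grid, with the grid size and extent being arbitrary choices of mine. The helium problem is harder precisely because the two-electron wavefunction lives in six dimensions and contains the inter-electronic V(\vec{r}_1, \vec{r}_2) term, which is the part my question is about.

    import numpy as np

    N, r_max = 1500, 60.0                  # grid points and domain size (arbitrary choices)
    r = np.linspace(r_max / N, r_max, N)   # start slightly off r = 0 to keep -1/r finite
    h = r[1] - r[0]

    # H u = -(1/2) u'' - (1/r) u, with u'' by central differences and u = 0 at both ends.
    main = 1.0 / h**2 - 1.0 / r            # diagonal: kinetic part plus the potential
    off = -0.5 / h**2 * np.ones(N - 1)     # off-diagonals: kinetic coupling to neighbours
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    E = np.linalg.eigvalsh(H)
    print(E[:3])   # should come out close to -0.5, -0.125, -0.0555... hartree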

So there.

Please suggest good references and / or people working on this topic, if you know any. Thanks in advance.


A song I like:

[… Here I thought that there was no song that Salil Chowdhury had composed and I had not listened to. (Well, at least when it comes to his Hindi songs). That’s what I had come to believe, and here trots along this one—and that too, as a part of a collection by someone! … The time-delay between my first listening to this song, and my liking it, was zero. (Or, it was a negative time-delay, if you refer to the instant that the first listening got over). … Also, one of those rare occasions when one is able to say that any linear ordering of the credits could only be random.]

(Hindi) “mada bhari yeh hawaayen”
Music: Salil Chowdhury
Lyrics: Gulzaar
Singer: Lata Mangeshkar


The rule of omitting the self-field in calculations—and whether potentials have an objective existence or not

There was an issue concerning the strictly classical, non-relativistic electricity which I was (once again) confronted with, during my continuing preoccupation with quantum mechanics.

Actually, a small part of this issue had occurred to me earlier too, and I had worked through it back then.

However, the overall issue had never occurred to me with as much of scope, generality and force as it did last evening. And I could not immediately resolve it. So, for a while, especially last night, I unexpectedly found myself to have become very confused, even discouraged.

Then, this morning, after a good night’s rest, everything became clear right while sipping my morning cup of tea. Things came together literally within a span of just a few minutes. I want to share the issue and its resolution with you.

The question in question (!) is the following.


Consider 2 (or N) number of point-charges, say electrons. Each electron sets up an electrostatic (Coulombic) potential everywhere in space, for the other electrons to “feel”.

As you know, the potential set up by the i-th electron is:
V_i(\vec{r}_i, \vec{r}) = \dfrac{1}{4 \pi \epsilon_0} \dfrac{Q_i}{|\vec{r} - \vec{r}_i|}
where \vec{r}_i is the position vector of the i-th electron, \vec{r} is any arbitrary point in space, and Q_i is the charge of the i-th electron.

The potential energy associated with some other (j-th) electron being at the position \vec{r}_j (i.e. the energy that the system acquires in bringing the two electrons from \infty to their respective positions some finite distance apart), is then given as:
U_{ij}(\vec{r}_i, \vec{r}_j) = \dfrac{1}{4 \pi \epsilon_0} \dfrac{Q_i\,Q_j}{|\vec{r}_j - \vec{r}_i|}

The notation followed here is the following: In U_{ij}, the potential field is produced by the i-th electron, and the work is done by the j-th electron against the i-th electron.

Symmetrically, the potential energy for this configuration can also be expressed as:
U_{ji}(\vec{r}_j, \vec{r}_i) = \dfrac{1}{4 \pi \epsilon_0} \dfrac{Q_j\,Q_i}{|\vec{r}_i - \vec{r}_j|}

If a system has only two charges, then its total potential energy U can be expressed either as U_{ji} or as U_{ij}. Thus,
U = U_{ji} = U_{ij}

Similarly, for any pair of charges in an N-particle system, too. Therefore, the total energy of an N-particle system is given as:
U = \sum\limits_{i=1}^{N} \sum\limits_{j = i+1}^{N} U_{ij}
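Written out as code, the double sum is just this (a small sketch of my own; the units with \dfrac{1}{4 \pi \epsilon_0} = 1, and the example positions and charges, are arbitrary):

    import numpy as np

    def total_potential_energy(positions, charges, k=1.0):
        # U = sum over i of sum over j > i of k * Q_i * Q_j / |r_j - r_i|;
        # each unordered pair (i, j) is counted exactly once.
        U = 0.0
        N = len(charges)
        for i in range(N):
            for j in range(i + 1, N):
                U += k * charges[i] * charges[j] / np.linalg.norm(positions[j] - positions[i])
        return U

    # Example: two like charges (say two electrons, charge -1 each) a unit distance apart.
    pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    print(total_potential_energy(pos, np.array([-1.0, -1.0])))   # 1.0: positive, i.e. repulsive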

The issue now is this: Can we say that the total potential energy U has an objective existence in the physical world? Or is it just a device of calculations that we have invented, just a concept from maths that has no meaningful physical counterpart?

(A side remark: Energy may perhaps exist as an attribute or property of something else, and not necessarily as a separate physical object by itself. However, existence as an attribute still is an objective existence.)

The reason to raise this doubt is the following.


When calculating the motion of the i-th charge, we consider only the potentials V_j produced by the other charges, not the potential produced by the given charge V_i itself.

Now, if the potential produced by the given charge (V_i) also exists at every point in space, then why does it not enter the calculations? How does its physical efficacy get evaporated away? And, symmetrically: The motion of the j-th charge occurs as if V_j had physically evaporated away.

The issue generalizes in a straight-forward manner. If there are N number of charges, then for calculating the motion of a given i-th charge, the potential fields of all other charges are considered operative. But not its own field.

How can motion become sensitive to only a part of the total potential energy existing at a point even if the other part also exists at the same point? That is the question.


This circumstance seems to indicate as if there is subjectivity built deep into the very fabric of classical mechanics. It is as if the universe just knows what a subject is going to calculate, and accordingly, it just makes the corresponding field mystically go away. The universe—the physical universe—acts as if it were changing in response to what we choose to do in our mind. Mind you, the universe seems to change in response to not just our observations (as in QM), but even as we merely proceed to do calculations. How does that come to happen?… May be the whole physical universe exists only in our imagination?

Got the point?


No, my confusion was not as pathetic as that in the previous paragraph. But I still found myself being confused about how to account for the fact that an electron’s own field does not enter the calculations.

But that was not all. A lack of clarity on this issue meant that another confusing issue also raised its head. This secondary issue arises out of the fact that the Coulombic potential set up by any point-charge is singular in nature (or at least approximately so).

If the electron is a point-particle and if its own potential “is” \infty at its position, then why does it at all get influenced by the finite potential of any other charge? That is the question.

Notice, the second issue is most acute when the potentials in question are singular in nature. But even if you arbitrarily remove the singularity by declaring (say by fiat) a finite size for the electron, thereby making its own field only finitely large (and not infinite), the above-mentioned issue still remains. So long as its own field is finite but much, much larger than the potential of any other charge, the effects due to the other charges should become comparatively less significant, perhaps even negligibly small. Why does this not happen? Why does the rule instead go exactly the other way around, and makes those much smaller effects due to other charges count, but not the self-field of the very electron in question?


While thinking about QM, there was a certain point where this entire gamut of issues became important—whether the potential has an objective existence or not, the rule of omitting the self-field while calculating motions of particles, the singular potential, etc.

The specific issue I was trying to think through was: two interacting particles (e.g. the two electrons in the helium atom). It was while thinking about this problem that the issue occurred to me. And then, it also led me to wonder: what if some intellectual goon in the guise of a physicist comes along, and says that my proposal isn’t valid because there is this element of subjectivity to it? This thought occurred to me with all its force only last night. (Or so I think.) And I could not recall seeing a ready-made answer in a text-book or so. Nor could I figure it out immediately, at night, after a whole day’s work. And as I failed to resolve the anticipated objection, I progressively got more and more confused last night, even discouraged.

However, this morning, it all got resolved in a jiffy.


Would you like to give it a try? Why is it that while calculating the motion of the i-th charge, you consider the potentials set up by all the rest of the charges, but not its own potential field? Why this rule? Get this part right, and all the philosophical humbug mentioned earlier just evaporates away too.

I would wait for a couple of days or so before coming back and providing you with the answer I found. May be I will write another post about it.


Update on 2019.03.16 20:14 IST: Corrected the statement concerning the total energy of a two-electron system. Also simplified the further discussion by couching it preferably in terms of potentials rather than energies (as in the first published version), because a Coulombic potential always remains anchored in the given charge—it doesn’t additionally depend on the other charges the way energy does. Modified the notation to reflect the emphasis on the potentials rather than energy.


A song I like:

[What else? [… see the songs section in the last post.]]
(Hindi) “woh dil kahaan se laaoon…”
Singer: Lata Mangeshkar
Music: Ravi
Lyrics: Rajinder Kishen


A bit of a conjecture as to why Ravi’s songs tend to be so hummable and of a certain simplicity, almost always based on a very simple rhythm. My conjecture is that it is because Ravi grew up in an atmosphere of “bhajan”-singing.

Observe that it is in the very nature of music that it puts you into an abstract frame of mind. Observe any singer, especially the non-professional ones (or those who are not very experienced in controlling their body language while singing, as happens to singers who participate in college events or talent shows).

When they sing, their eyes seem to roll in a very peculiar manner. It seems random but it isn’t. It’s as if the eyes involuntarily get set in the motions of searching for something definite to be found somewhere, as if the thing to be found would be in the concrete physical space outside, but within a split-second, the eyes again move as if the person has realized that nothing corresponding is to be found in the world out there. That’s why the eyes “roll away.” The same thing goes on repeating, as the singer passes over various words, points of pauses, nuances, or musical phrases.

The involuntary motions of the eyes of the singer provide a window into his experience of music. It’s as if his consciousness was again and again going on registering a sequence of two very fleeting experiences: (i) a search for something in the outside world corresponding to an inner experience felt in the present, and immediately later, (ii) a realization (and therefore the turning away of the eyes from an initially picked up tentative direction) that nothing in the outside world would match what was being searched for.

The experience of music necessarily makes you realize the abstractness of itself. It tends to make you realize that the root-referents of your musical experience lie not in a specific object or phenomenon in the physical world, but in the inner realm, that of your own emotions, judgments, self-reflections, etc.

This nature of music makes it ideally suited to let you turn your attention away from the outside world, and has the capacity or potential to induce a kind of a quiet self-reflection in you.

But the switch from the experience of frustrated searches into the outside world to a quiet self-reflection within oneself is not the only option available here. Music can also induce in you a transitioning from those unfulfilled searches to a frantic kind of an activity: screams, frantic shouting, random gyrations, and what not. In evidence, observe any piece of modern American / Western pop-music.

However, when done right, music can also induce a state of self-reflection, and by evoking certain kind of emotions, it can even lead to a sense of orderliness, peace, serenity. To make this part effective, such a music has to be simple enough, and orderly enough. That’s why devotional music in the refined cultural traditions is, as a rule, of a certain kind of simplicity.

The experience of music isn’t the highest possible spiritual experience. But if done right, it can make your transition from the ordinary experience to a deep, profound spiritual experience easy. And doing it right involves certain orderliness, simplicity in all respects: tune, tone, singing style, rhythm, instrumental sections, transitions between phrases, etc.

If you grow up listening to this kind of a music, your own music in your adult years tends to reflect the same qualities. The simplicity of rhythm. The alluringly simple tunes. The “hummability quotient.” (You don’t want to focus on intricate patterns of melody in devotional music; you want it to be so simple that minimal mental exertion is involved in rendering it, so that your mental energy can quietly transition towards your spiritual quest and experiences.) Etc.

I am not saying that the reason Ravi’s music is so great is that he listened to his father sing “bhajan”s. If that were true, there would be tens of thousands of music composers having talents comparable to Ravi’s. But the fact is that Ravi was a genius—a self-taught genius, in fact. (He never received any formal training in music.) But what I am saying is that if you do have the musical ability, having this kind of a family environment would leave its mark. Definitely.

Of course, this all was just a conjecture. Check it out and see if it holds or not.

… May be I should convert this “note” in a separate post by itself. Would be easier to keep track of it. … Some other time. … I have to work on QM; after all, exactly only half the month remains now. … Bye for now. …


Learnability of machine learning is provably an undecidable?—part 3: closure

Update on 23 January 2019, 17:55 IST:

In this series of posts, which was just a step further from the initial, brain-storming kind of a stage, I had come to the conclusion that based on certain epistemological (and metaphysical) considerations, Ben-David et al.’s conclusion (that learnability can be an undecidable) is logically untenable.

However, now, as explained here [^], I find that this particular conclusion which I drew, was erroneous. I now stand corrected, i.e., I now consider Ben-David et al.’s result to be plausible. Obviously, it merits a further, deeper, study.

However, even as acknowledging the above-mentioned mistake, let me also hasten to clarify that I still stick to my other positions, especially the central theme in this series of posts. The central theme here was that there are certain core features of the set theory which make implications such as Godel’s incompleteness theorems possible. These features (of the set theory) demonstrably carry a glaring epistemological flaw such that applying Godel’s theorem outside of its narrow technical scope in mathematics or computer science is not permissible. In particular, Godel’s incompleteness theorem does not apply to knowledge or its validation in the more general sense of these terms. This theme, I believe, continues to hold as is.

Update over.


Gosh! I gotta get this series out of my hand—and also head! ASAP, really!! … So, I am going to scrap the bits and pieces I had written for it earlier; they would have turned this series into a 4- or 5-part one. Instead, I am going to start entirely afresh, and I am going to approach this topic from an entirely different angle—a somewhat indirect but a faster route, sort of like a short-cut. Let’s get going.


Statements:

Open any article, research paper, book or post, and what do you find? Basically, all of these consist of sentence after sentence. That is, a series of statements, in a way. That’s all. So, let’s get going at the level of statements, from a “logical” (i.e. logic-theoretical) point of view.

Statements are made to propose or to identify (or at least to assert) some (or the other) fact(s) of reality. That’s what their purpose is.


The conceptual-level consciousness as being prone to making errors:

Coming to the consciousness of man, there are broadly two levels of cognition at which it operates: the sensory-perceptual, and the conceptual.

Examples of the sensory-perceptual level consciousness would consist of reaching a mental grasp of such facts of reality as: “This object exists, here and now;” “this object has this property, to this much degree, in reality,” etc. Notice that what we have done here is to take items of perception, and put them into the form of propositions.

Propositions can be true or false. However, at the perceptual level, a consciousness has no choice in regard to the truth-status. If the item is perceived, that’s it! It’s “true” anyway. Rather, perceptions are not subject to a test of truth- or false-hood; they are the very base standards by which truth- and false-hood are decided.

A consciousness—better still, an organism—does have some choice, even at the perceptual level. The choice which it has exists in regard to such things as: what aspect of reality to focus on, with what degree of focus, with what end (or purpose), etc. But we are not talking about such things here. What matters to us here is just the truth-status, that’s all. Thus, keeping only the truth-status in mind, we can say that this very idea itself (of a truth-status) is inapplicable at the purely perceptual level. However, it is very much relevant at the conceptual level. The reason is that at the conceptual level, the consciousness is prone to err.

The conceptual level of consciousness may be said to involve two different abilities:

  • First, the ability to conceive of (i.e. create) the mental units that are the concepts.
  • Second, the ability to connect together the various existing concepts to create propositions which express different aspects of the truths pertaining to them.

It is possible for a consciousness to go wrong in either of the two respects. However, mistakes are much easier to make when it comes to the second respect.

Homework 1: Supply an example of going wrong in the first way, i.e., right at the stage of forming concepts. (Hint: Take a concept that is at least somewhat higher-level so that mistakes are easier in forming it; consider its valid definition; then modify its definition by dropping one of its defining characteristics and substituting a non-essential in it.)

Homework 2: Supply a few examples of going wrong in the second way, i.e., in forming propositions. (Hint: I guess almost any logical fallacy can be taken as a starting point for generating examples here.)


Truth-hood operator for statements:

As seen above, statements (i.e. complete sentences that formally can be treated as propositions) made at the conceptual level can, and do, go wrong.

We therefore define a truth-hood operator which, when it operates on a statement, yields the result as to whether the given statement is true or non-true. (Aside: Without getting into further epistemological complexities, let me note here that I reject the idea of the arbitrary, and thus regard non-true as nothing but a sub-category of the false. Thus, in my view, a proposition is either true or it is false. There is no middle (as Aristotle said), or even an “outside” (like the arbitrary) to its truth-status.)

Here are a few examples of applying the truth-status (or truth-hood) operator to a statement:

  • Truth-hood[ California is not a state in the USA ] = false
  • Truth-hood[ Texas is a state in the USA ] = true
  • Truth-hood[ All reasonable people are leftists ] = false
  • Truth-hood[ All reasonable people are rightists ] = false
  • Truth-hood[ Indians have significantly contributed to mankind’s culture ] = true
  • etc.

For ease in writing and manipulation, we propose to give names to statements. Thus, first declaring

A: California is not a state in the USA

and then applying the Truth-hood operator to “A”, is fully equivalent to applying this operator to the entire sentence appearing after the colon (:) symbol. Thus,

Truth-hood[ A ] <==> Truth-hood[ California is not a state in the USA ] = false
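If it helps, the notation can be mimicked in a few lines of code (a toy sketch of my own, reusing the statements from the list above; the truth-values are of course supplied by hand, not computed, since it is we who have to judge whether a statement corresponds to reality):

    statements = {
        "A": "California is not a state in the USA",
        "B": "Texas is a state in the USA",
    }

    # Hand-supplied verdicts standing in for the output of the Truth-hood operator.
    verdicts = {
        "California is not a state in the USA": False,
        "Texas is a state in the USA": True,
    }

    def truth_hood(name):
        # Applying Truth-hood to a statement's name is the same as applying it to its body.
        return verdicts[statements[name]]

    print(truth_hood("A"))   # False
    print(truth_hood("B"))   # True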


Just a bit of the computer languages theory: terminals and non-terminals:

To take a short-cut through this entire theory, we would like to approach the idea of statements from a little abstract perspective. Accordingly, borrowing some terminology from the area of computer languages, we define and use two types of symbols: terminals and non-terminals. The overall idea is this. We regard any program (i.e. a “write-up”) written in any computer-language as consisting of a sequence of statements. A statement, in turn, consists of a certain well-defined arrangement of words or symbols. Now, we observe that symbols (or words) can be either terminals or non-terminals.

You can think of a non-terminal symbol in different ways: as higher-level or more abstract words, as “potent” symbols. The non-terminal symbols have a “definition”—i.e., an expansion rule. (In CS, it is customary to call an expansion rule a “production” rule.) Here is a simple example of a non-terminal and its expansion:

  • P => S1 S2

where the symbol “=>” is taken to mean things like: “is the same as” or “is fully equivalent to” or “expands to.” What we have here is an example of an abstract statement. We interpret this statement as the following. Wherever you see the symbol “P,” you may substitute it using the train of the two symbols, S1 and S2, written in that order (and without anything else coming in between them).

Now consider the following non-terminals, and their expansion rules:

  • P1 => P2 P S1
  • P2 => S3

The question is: Given the expansion rules for P, P1, and P2, what exactly does P1 mean? What precisely does it stand for?

Answer:

  • P1 => (P2) P S1 => S3 (P) S1 => S3 S1 S2 S1

In the above, we first take the expansion rule for P1. Then, we expand the P2 symbol in it. Finally, we expand the P symbol. When no non-terminal symbol is left to expand, we arrive at our answer that “P1” means the same as “S3 S1 S2 S1.” We could have said the same fact using the colon symbol, because the colon (:) and the “expands to” symbol “=>” mean one and the same thing. Thus, we can say:

  • P1: S3 S1 S2 S1

The left hand-side and the right hand-side are fully equivalent ways of saying the same thing. If you want, you may regard the expression on the right hand-side as a “meaning” of the symbol on the left hand-side.

It is at this point that we are able to understand the terms: terminals and non-terminals.

The symbols which do not have any further expansion for them are called, for obvious reasons, the terminal symbols. In contrast, non-terminal symbols are those which can be expanded in terms of an ordered sequence of non-terminals and/or terminals.
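The expansion procedure itself is mechanical enough to be written down in a few lines (a sketch of my own, using the same symbols as above; any symbol for which no expansion rule has been given is treated as a terminal):

    rules = {
        "P":  ["S1", "S2"],
        "P1": ["P2", "P", "S1"],
        "P2": ["S3"],
    }

    def expand(symbol):
        # Recursively expand a symbol until only terminals remain.
        if symbol not in rules:      # no expansion rule: it is a terminal
            return [symbol]
        out = []
        for s in rules[symbol]:      # expand each symbol of the rule, in order
            out.extend(expand(s))
        return out

    print(expand("P1"))   # ['S3', 'S1', 'S2', 'S1']

Running it reproduces the expansion worked out by hand above.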

We can now connect our present discussion (which is in terms of computer languages) to our prior discussion of statements (which is in terms of symbolic logic), and arrive at the following correspondence:

The name of every named statement is a non-terminal; and the statement body itself is an expansion rule.

This correspondence works also in the reverse direction.

You can always think of a non-terminal (from a computer language) as the name of a named proposition or statement, and you can think of an expansion rule as the body of the statement.

Easy enough, right? … I think that we are now all set to consider the next topic, which is: liar’s paradox.


Liar’s paradox:

The liar paradox is a topic from the theory of logic [^]. It has been resolved by many people in different ways. We would like to treat it from the viewpoint of the elementary computer languages theory (as covered above).

The simplest example of the liar paradox is, using the terminology of the computer languages theory, the following named statement or expansion rule:

  • A: A is false.

Notice, it wouldn’t be a paradox if the same non-terminal symbol, viz. “A” were not to appear on both sides of the expansion rule.

To understand why the above expansion rule (or “definition”) involves a paradox, let’s get into the game.

Our task will be to evaluate the truth-status of the named statement that is “A”. This is the “A” which comes on the left hand-side, i.e., before the colon.

In symbolic logic, a statement is nothing but its expansion; the two are exactly and fully identical, i.e., they are one and the same. Accordingly, to evaluate the truth-status of “A” (the one which comes before the colon), we consider its expansion (which comes after the colon), and get the following:

  • Truth-hood[ A ] = Truth-hood[ A is false ] = false           (equation 1)

Alright. From this point onward, I will drop explicitly writing down the Truth-hood operator. It is still there; it’s just that to simplify typing out the ensuing discussion, I am not going to note it explicitly every time.

Anyway, coming back to the game, what we have got thus far is the truth-hood status of the given statement in this form:

  • A: “A is false”

Now, realizing that the “A” appearing on the right hand-side itself also is a non-terminal, we can substitute for its expansion within the aforementioned expansion. We thus get to the following:

  • A: “(A is false) is false”

We can apply the Truth-hood operator to this expansion, and thereby get the following: The statement which appears within the parentheses, viz., the “A is false” part, itself is false. Accordingly, the Truth-hood operator must now evaluate thus:

  • Truth-hood[ A ] = Truth-hood[ A is false ] = Truth-hood[ (A is false) is false ] = Truth-hood[ A is true ] = true            (equation 2)

Fun, isn’t it? Initially, via equation 1, we got the result that A is false. Now, via equation 2, we get the result that A is true. That is the paradox.

But the fun doesn’t stop there. It can continue. In fact, it can continue indefinitely. Let’s see how.

If we were not to halt the expansions, i.e., if we continued a bit further with the game, we could just as well have made one more expansion, and got to the following:

  • A: ((A is false) is false) is false.

The Truth-hood status of the immediately preceding expansion now is: false. Convince yourself that it is so. Hint: Always expand the inner-most parentheses first.

Homework 3: Convince yourself that what we get here is an indefinitely long, alternating sequence of Truth-hood statuses: A is false, A is true, A is false, A is true, … and so on, without end.
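If you like, here is a tiny Python illustration of Homework 3. (It merely models each further expansion as one more wrapping in “(…) is false,” and the Truth-hood flip as a boolean negation; it is only a toy of mine, nothing more.)

    # Equation 1 gives the starting point: "A is false" evaluates to false.
    statement, truth = "A is false", False

    for step in range(1, 7):
        statement = "(" + statement + ") is false"   # one more expansion of the inner "A"
        truth = not truth                            # the outer "is false" flips the status
        print("expansion", step, ": A is", "true" if truth else "false")

    # The printed statuses alternate: true, false, true, false, ... with no end in sight.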

What can we say by way of a conclusion?

Conclusion: The truth-status of “A” is not uniquely decidable.

The emphasis is on the word “uniquely.”

We have used all the seemingly simple rules of logic, and yet have stumbled on to the result that, apparently, logic does not allow us to decide something uniquely or meaningfully.


Liar’s paradox and the set theory:

The importance of the liar paradox to our present concerns is this:

Godel himself believed, correctly, that the liar paradox was a semantic analogue to his Incompleteness Theorem [^].

Go read the Wiki article (or anything else on the topic) to understand why. For our purposes here, I will simply point out what the connection of the liar paradox is to the set theory, and then (more or less) call it a day. The key observation I want to make is the following:

You can think of every named statement as an instance of an ordered set.

What the above key observation does is to tie the symbolic logic of propositions with the set theory. We thus have three equivalent ways of describing the same idea: symbolic logic (the name of a statement and its body), computer languages theory (non-terminals and their expansions to terminals), and set theory (the label of an ordered set and its enumeration).

As an aside, the set in question may have further properties, or further mathematical or logical structures and attributes embedded in itself. But, at a minimum, we can say that the name of a named statement can be seen as a non-terminal, and the “body” of the statement (or the expansion rule) can be seen as an ordered set of some symbols—an arbitrarily specified sequence of some (zero or more) terminals and (zero or more) non-terminals.

Two clarifications:

  • Yes, in case a production has no sequence on its right hand-side at all, the corresponding set is the empty set.
  • When you have the same non-terminal on both sides of an expansion rule, it is said to form a recursion relation.
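To make the second clarification concrete: if you feed such a rule to the naive expand() sketch from earlier (again, only an illustration of my own), the expansion never bottoms out; Python eventually gives up with a RecursionError.

    # A rule in which the same non-terminal appears on both sides: a recursion relation.
    RULES = {"A": ["A", "is", "false"]}

    def expand(symbol):
        if symbol not in RULES:
            return [symbol]
        expanded = []
        for s in RULES[symbol]:
            expanded.extend(expand(s))
        return expanded

    try:
        expand("A")
    except RecursionError:
        print("the expansion of 'A' never terminates")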

An aside: It might be fun to convince yourself that the liar paradox cannot be posed or discussed in terms of a Venn diagram. The “sheet” on which a Venn diagram is drawn, by the simple intuitive notions we all bring to bear on such diagrams, cannot support a “recursion” relation.

Yes, the set theory itself was always “powerful” enough to allow for recursions. People like Godel merely made this feature explicit, and took full “advantage” of it.


Recursion, the continuum, and epistemological (and metaphysical) validity:

In our discussion above, I had merely asserted, without giving even a hint of a proof, that the three ways (viz., the symbolic logic of statements or propositions, the computer languages theory, and the set theory) were all equivalent ways of expressing the same basic idea (i.e., the one which we are concerned about here).

I will now once again make a few more observations, but without explaining them in detail or supplying even an indication of their proofs. The factoids I must point out are the following:

  • You can start with the natural numbers, and by using simple operations such as addition and its inverse, and multiplication and its inverse (together with a limiting process for the final step), you can reach the real number system. The generalization goes as: Naturals to Wholes to Integers to Rationals to Reals. Another name for the real number system is: the continuum.
  • You can use the computer languages theory to generate a machine representation for the natural numbers. You can also mechanize the addition etc. operations. Thus, you can “in principle” (i.e. with infinite time and infinite memory) represent the continuum in CS terms.
  • Generating a machine representation for the natural numbers requires the use of recursion. (A toy sketch follows this list.)
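By way of a very rough illustration of the last point (a Peano-style toy of my own, not the construction used in any actual machine): the natural numbers can be represented by nothing more than “zero” and “the successor of …,” and even addition then has to be defined recursively.

    # Zero is an empty tuple; the successor of n just wraps n one level deeper.
    ZERO = ()

    def succ(n):
        return (n,)

    def add(m, n):
        # add(m, zero) = m;  add(m, succ(k)) = succ(add(m, k))  -- note the recursion
        return m if n == ZERO else succ(add(m, n[0]))

    def to_int(n):
        # Only for printing: the depth of the nesting is the number represented.
        return 0 if n == ZERO else 1 + to_int(n[0])

    two = succ(succ(ZERO))
    three = succ(two)
    print(to_int(add(two, three)))    # 5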

Finally, a few words about epistemological (and metaphysical) validity.

  • The concepts of numbers (whether natural or real) have a logical precedence, i.e., they come first. The entirety of arithmetic and the calculus must come before the computer-representation of some of their concepts does.
  • A machine-representation (or, equivalently, a set-theoretic representation) is merely a representation. That is to say, it captures only some aspects or attributes of the actual concepts from maths (whether of arithmetic or of the continuum). This issue is exactly like what we saw in the first and second posts in this series: a set is a concrete collection, unlike a concept, which involves a consciously cast unit perspective.
  • If you try to translate the idea of recursion into the usual cognitive terms, you get absurdities such as: you can, literally speaking, be your own child. Not in the sense that, using scientific advances in biology, you could create a clone of yourself and regard that clone as both yourself and your child. No, not that way. (Such a clone is, in any case, your twin, not your child.) The idea here is worse: it is that you can literally father your own self.
  • Aristotle got it right. Look up the distinction between completed processes and the uncompleted ones. Metaphysically, only those objects or attributes can exist which correspond to completed mathematical processes. (Yes, as an extension, you can throw in the finite limiting values, too, provided they otherwise do mean something.)
  • Recursion, by its very definition, involves not just the absence of completion, but the very inability to ever complete the process.

Closure on the “learnability issue”:

Homework 4: Go through the last two posts in this series as well as this one, and figure out that the only reason the set theory allows a “recursive” relation is that a set is, by the design of the set theory, a concrete object whose definition does not have to involve an epistemologically valid process—a unit perspective as in a properly formed concept—and so, its name does not have to stand for an abstract, mentally held unit. Call this happenstance “The Glaring Epistemological Flaw of the Set Theory” (or TGEFST for short).

Homework 5: Convince yourself that any lemma or theorem that makes use of Godel’s Incompleteness Theorem is necessarily based on TGEFST, and for the same reason, its truth-status is: it is not true. (In other words, any lemma or theorem based on Godel’s theorem is an invalid or untenable idea, i.e., essentially, a falsehood.)

Homework 6: Realize that the learnability issue, as discussed in Prof. Lev Reyzin’s news article (discussed in the first part of this series [^]), must be one that makes use of Godel’s Incompleteness Theorem. Then convince yourself that for precisely the same reason, it too must be untenable.

[Yes, Betteridge’s law [^] holds.]


Other remarks:

Remark 1:

As “asymptotical” pointed out at the relevant Reddit thread [^], the authors themselves say, in another paper posted at arXiv [^], that:

While this case may not arise in practical ML applications, it does serve to show that the fundamental definitions of PAC learnability (in this case, their generalization to the EMX setting) is vulnerable in the sense of not being robust to changing the underlying set theoretical model.

What I now remark here is stronger. I am saying that it can be shown, on rigorously theoretical (epistemological) grounds, that the “learnability as undecidable” thesis by itself is, logically speaking, entirely and in principle untenable.

Remark 2:

Another point. My preceding conclusion does not mean that the work reported in the paper itself is, in all its aspects, completely worthless. For instance, it might perhaps come in handy while characterizing some tricky issues related to learnability. I certainly do admit of this possibility. (To give a vague analogy, this is something like running, in a mathematically somewhat novel way, into a known type of mathematical singularity.) Of course, I am not competent enough to judge how valuable the work of the paper(s) might turn out to be, in narrow technical contexts like that.

However, what I can, and will say is this: the result does not—and cannot—bring the very learnability of ANNs itself into doubt.


Phew! First, Panpsychism, and immediately then, Learnability and Godel. … I’ve had to deal with two untenable claims back to back here on this blog!

… My head aches….

… Code! I have to write some code! Or write some neat notes on ML in LaTeX. Only then will, I guess, my head stop aching so much…

Honestly, I just downloaded TensorFlow yesterday, and configured an environment for it in Anaconda. I am excited, and look forward to trying out some tutorials on it…

BTW, I also honestly hope that I don’t run into anything untenable, at least for a few weeks or so…

…BTW, I also feel like taking a break… Maybe I should go visit IIT Bombay or some place in Konkan. … But there are money constraints… Anyway, bye, really, for now…


A song I like:

(Marathi) “hirvyaa hirvyaa rangaachi jhaaDee ghanadaaTa”
Music: Sooraj (the pen-name of “Shankar” from the Shankar-Jaikishan pair)
Lyrics: Ramesh Anavakar
Singers: Jaywant Kulkarni, Sharada


[Any editing would be minimal; guess I will not even note it down separately.] Did an extensive revision by 2019.01.21 23:13 IST. Now I will leave this post in the shape in which it is. Bye for now.