# Expanding on the procedure of expanding: Where is the procedure to do that?

Update on 18th June 2017:

See the update to the last post; I have added three more diagrams depicting the mathematical abstraction of the problem, and also added a sub-question by way of clarifying the problem a bit. Hopefully, the problem is now clearer, and its connection to QM a bit more apparent.

Here I partly expand on the problem mentioned in my last post [^]. … Believe me, it will take more than one more post to properly expand on it.

The expansion of an expanding function refers to and therefore requires simultaneous expansions of the expansions in both the space and frequency domains.

The said expansions may be infinite [in procedure].

In the application of the calculus of variations to such a problem [i.e. like the one mentioned in the last post], the most important consideration is the very first part:

Among all the kinematically admissible configurations…

[You fill in the rest, please!]

A Song I Like:

[I shall expand on this bit a bit later on. Done, right today, within an hour.]

(Hindi) “goonji see hai, saari feezaa, jaise bajatee ho…”
Music: Shankar-Ehsaan-Loy
Lyrics: Javed Akhtar

# How I am simultaneously both: right, and probably, partly wrong…

[A few inline updates added, and some very minor copy-editing done at a few places, on 2014.12.28.]

In my research, I have found that I am simultaneously both right and, probably, partly wrong.

* * * * *   * * * * *   * * * * *

First, the probably partly wrong part. I will write as if I am actually (partly) wrong. (The probability that I am wrong is very high.)

The error concerns my theory of the mechanism of propagation of photons, which I had put forth almost a decade ago and formally published in Dec. 2005. I had had the idea since 1991/1992. The more recent experimental evidence, I guess, shows it to be wrong.

First, on to the mistake, and then, to the experimental evidence. [Here, I will not bother explaining myself. Instead, I will assume that you are well familiar with my published papers and know the field, and thus go straightaway to the heart of the matter.]

Ok. About the mistake. Essentially, it is that the mechanism that I had proposed implied that photons in free space have a diffusive dynamics. My browsing of the experimental evidence indicates that they instead have a ballistic dynamics. The theory, as presented in the papers, therefore, fails.

The experimental evidence comes from the PDC entangled photon pairs—i.e., if I get the evidence right.

Consider two PDC entangled photons. They are found at the intersection of the two cones. Assume that in experiment, the cones can be intercepted by detectors lying in a plane normal to the central axis, at any arbitrary distance. Suppose that the detectors’ plane lies at distance $x_1$ from the crystal and that you detect the entangled pair “simultaneously” at a certain time $t_1$. The question is, if the detectors’ plane is placed at a distance $10 x_1$, what would be the detection time? If $10 t_1$, then it obviously means that the dynamics is ballistic.

I presume that this in fact is the case in the actual experimentation, even though I could not easily find any direct experimental data to verify the same. But there is another reason why the dynamics must be ballistic.
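The distinction can be put in one line: ballistic transport gives an arrival time that grows linearly with distance, while diffusive transport gives a mean arrival time that grows with the square of the distance. A toy sketch of just this scaling argument (all numbers below are made up for illustration; `D` is a hypothetical diffusion coefficient, not a measured value):

```python
# Toy comparison of ballistic vs. diffusive arrival-time scaling.
# All numbers below are illustrative, not experimental data.
c = 3.0e8   # ballistic speed (m/s): photon in free space
D = 1.0     # hypothetical diffusion coefficient (m^2/s)

def t_ballistic(x):
    """Ballistic dynamics: arrival time grows linearly with distance."""
    return x / c

def t_diffusive(x):
    """Diffusive dynamics: mean first-passage time grows as distance squared."""
    return x**2 / (2.0 * D)

x1 = 0.1  # detector-plane distance (m)
for x in (x1, 10 * x1):
    print(f"x = {x:4.1f} m: ballistic t = {t_ballistic(x):.3e} s, "
          f"diffusive t = {t_diffusive(x):.3e} s")

# Moving the detectors 10x farther multiplies the ballistic time by 10,
# but the (mean) diffusive time by 100.
assert abs(t_ballistic(10 * x1) / t_ballistic(x1) - 10.0) < 1e-9
assert abs(t_diffusive(10 * x1) / t_diffusive(x1) - 100.0) < 1e-9
```

So, if detection at $10 x_1$ happens at $10 t_1$ rather than at something like $100 t_1$, the dynamics is ballistic.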

If the dynamics were to be diffusive, then the probability of the simultaneously generated photon pairs ending up on the intersection spots would be so small that no one would think of producing photon pairs this way.

In my theory, the dynamics is diffusive because the photons change their directions randomly. This circumstance is impossible to realize in free space, because photons wouldn’t collide. By Pauli’s exclusion principle, two electrons would; two photons would not. Being bosons, photons can occupy the same region of space without noticing each other.

[Update on 2014.12.28: The mistake isn’t “elementary.” Take two identical billiard balls and let them undergo a perfectly elastic collision. The outcome is, in a way, the same as if the two balls were to pass through each other without noticing the other’s presence. Think a bit more about this example and the mathematical (or “statistical”) indistinguishability. … And also realize, despite Pauli’s principle, the mainstream QM doesn’t have anything to say on the matter: they don’t localize the photon while it is in propagation, in the first place. So, it’s not as if the issue has already been clarified by the physicists in their textbooks and that the engineer is being audacious. BTW, most of the physicists who at all replied to my emails, including an “Objectivist” one, had advised me to first study physics well, presumably before writing papers.]

I detected the error myself; no one else pointed it out. Indeed, no one else (say, a student or a reviewer) even hinted at the possibility. Not a single physicist to whom I had, on my own, sent my paper came up with this objection.

The only person(s) to raise any sort of objection here were, hold your breath, my thesis examiners! They had begun by asking me whether the simulation I performed was for electrons or photons. There had followed a string of questions on their part, and simple, examinee-like assertions (rather than logically most thoroughly sound answers) on my part. They had kept the matter in abeyance before moving on to the next set of questions.

Yes, IMO, they could have granted me a PhD.

The reason is, firstly, that I had worked very hard on it and therefore had ample other material to justify a PhD. (One of the examiners had noted the relatively unusually large volume of work submitted for the degree.) I had other work or observations or results concerning numerical modelling, Huygens-Fresnel principle, and diffusion equation, and some of it still looks very fresh and original to me even now (especially the last; more on it later, right in this post).

Secondly, even for wave-fields, the description I presented could still be used as an abstract model for simulation of other linear wave fields (and I had pointed out some of these applications right in my papers), though, now, not for photons in free space.

Thirdly, many of the non-mainstream assumptions which I explicitly made do not get invalidated by the current experimental evidence, and remain worth taking note of. These include identification of photon (i) as a spatially localized phenomenon or condition  (ii) in aether. To build a new theory is the whole point behind doing theoretical research. As a PhD student one must show that one has learnt if not mastered the art of working with fresh theoretical concepts in a logical, consistent, manner. I did show some ability in this regard, and this part of the work does not get much affected by a re-evaluation of the theory in the light of the fresh experimental evidence.

Fourthly, though now there remains little to my theory if it is taken as an integrated whole, the fact of the matter is, in the process of having worked hard to build a model that later on turns out to be erroneous, I learnt a lot even before the error could be detected. And, thus, presenting a better model would be far easier. Indeed to the experts I can already tell on the fly: For a universe consisting only of electrons as fermions and photons as bosons, a model whereby the fermion is the pollen-grain and the photons are the bumping particles can easily be built, after throwing in a set of rules to get the phases (including spin and chirality) right. [Update on 2014.12.28: And, of course, while writing this post itself, I did know that a clarification for the creation and annihilation operators would be necessary, too. I just forgot to mention it in the original post.] No, I am not going to rush building one immediately. I just wanted to point out how easy it becomes to build a new, consistent, theory. The required integrations are already there.

That’s why, even though I guess I will have to withdraw the theory, I call it only partly wrong.

[Update on 2014.12.28: I think I should still pursue obtaining precise experimental evidence, before formally withdrawing my theory.]

A few other notes:

I didn’t know about the PDC photons or their behavior when I built my theory. No researcher/reviewer pointed it out to me, perhaps because they didn’t make the connection themselves. Realize, in the absence of the knowledge of the ballistic dynamics of the PDC photons, mine is a perfectly sensible theory. The very words “ballistic” vs. “diffusive” are something I read for the first time while casually browsing the home page of the optics group at the Raman Research Institute, a year or so ago. I still don’t know whether the terms technically apply to the issue I outlined above, but I have presumably made the error clear.

* * * * *   * * * * *   * * * * *

Now, on to the part where I am fully right. This, of course, concerns the diffusion equation.

The experiment that any one could perform at home (the one I hinted at, in my last post) consists of this one:

Take a blotting paper and put a small drop of blue ink in the centre. Watch the boundary separating the blue and white portions grow. [Yes, I know I have mentioned this experiment in my blog posts before. But now there is a bit of a new angle, so read on anyway.]

The Fourier theory does not acknowledge the existence of this front. In Fourier theory, the blueness is spread over the entire paper right from the word go.

If possible, make a video of the front, and find out the speed of the propagation of the front.

By arguments rooted in the numerical analysis methods, as well as by Einstein’s stochastic approach, the theory predicts a uniform speed of propagation for the front. (In Einstein’s theory, it’s stochastically uniform.) In Fourier’s theory, the concept is inapplicable and hence, if it at all must be used, then it can be said that the front has an infinite speed. [Since the blotting paper experiment does not fulfil all the requirements of the diffusion equation, your experimentally observed front may not have a uniform outward speed. But the experiment does capture the essential contrast: a solution whose support is limited to a sub-domain, vs. one spread over the full domain.] [Update on 2014.12.28: Drop me a line if you want to perform a more complicated experiment that should conform to the diffusion equation better, at home or at a school/college lab, and I will give you a few ideas.]
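To see the contrast concretely, here is a minimal 1-D caricature (not the blotting-paper experiment itself; the walkers and parameters are my own illustrative choices). In the Einstein-style local picture, unit-step random walkers have strictly compact support, i.e., a real front; the Fourier/heat-kernel solution, by contrast, is nonzero at every point for any $t > 0$, however small:

```python
import math
import random

random.seed(0)

# Local (Einstein-style) picture: unit-step random walkers on a 1-D lattice.
# After n steps, no walker can be farther than n sites from the origin, so
# the "blue front" advances with a strictly finite speed (one site per step).
def walk_positions(n_walkers, n_steps):
    positions = []
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += random.choice((-1, 1))
        positions.append(x)
    return positions

n_steps = 50
pos = walk_positions(2000, n_steps)
assert max(abs(p) for p in pos) <= n_steps  # compact support: a real front

# Fourier/heat-kernel picture: the solution is nonzero at EVERY x for any
# t > 0, however small; the "blueness" is everywhere from the word go.
def heat_kernel(x, t, D=1.0):
    return math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

print(heat_kernel(5.0, 0.01))   # astronomically tiny, but strictly positive
assert heat_kernel(5.0, 0.01) > 0.0
```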

Here is another funny thing. Suppose the initial concentration/temperature profile is in the form of the cosine curve in between $-\pi$ to $+\pi$, and that the domain also runs between the same limits. If you know Fourier’s theory, you know that essentially, we are simplifying the situation to the greatest extent by taking just a single basis function in the Fourier expansion (i.e., apart from the bias, which, for diffusion of mass, would have to be taken as $-1$ here).

Suppose further that the boundary conditions are maintained to be zero field variable (concentration/temperature) at both the endpoints.

Since the profile covers the entirety of the domain, the solution at every point, throughout the diffusion, would consist of only a cosine curve; it’s just that its height would go on dropping with exponential decay, as the diffusing species goes out of the domain.

Now, do a funny thing. Keep the initial profile as in the above example, but let the domain run from $-10 \pi$ to $+10 \pi$.  The boundary conditions continue to remain the same: zero field variable.

If your intuition is like mine, you would expect the solutions in the two cases to remain the same. But, in Fourier’s theory, they do not.

Since the domain size has increased, there is a portion $9 \pi$ long on each side of the initial profile that must be brought into the initial Fourier expansion. This introduces an infinity of basis functions into the initial profile. Naturally, the profile at any future time also is different from the first case.
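The point can be checked numerically. Below is a small sketch in which I assume, purely for illustration, that the initial profile is $1 + \cos x$ on $[-\pi, \pi]$ and zero outside (a single cosine hump plus a bias, compactly supported); the sine eigenfunctions of the zero-Dirichlet problem on $[-L, L]$ serve as the basis, and the trapezoid rule computes the coefficients:

```python
import math

# Sine eigenfunctions of the zero-Dirichlet diffusion problem on [-L, L]:
#   phi_n(x) = sin(n * pi * (x + L) / (2 * L)),  n = 1, 2, 3, ...
# Initial profile (my assumption, for illustration): f(x) = 1 + cos(x) on
# [-pi, pi], zero elsewhere.

def f(x):
    return 1.0 + math.cos(x) if abs(x) <= math.pi else 0.0

def coeff(n, L, m=4000):
    """n-th sine-series coefficient of f on [-L, L], via the trapezoid rule."""
    h = 2 * L / m
    s = 0.0
    for i in range(m + 1):
        x = -L + i * h
        w = 0.5 if i in (0, m) else 1.0
        s += w * f(x) * math.sin(n * math.pi * (x + L) / (2 * L))
    return s * h / L  # normalization: (1/L) * integral over [-L, L]

L = 10 * math.pi
nonzero = sum(1 for n in range(1, 101) if abs(coeff(n, L)) > 1e-6)
print("nonzero coefficients among the first 100:", nonzero)
# On the bigger domain the compact hump needs many basis functions, each
# decaying at its own exponential rate, so the time evolution differs
# from the (essentially single-mode) solution on [-pi, pi].
assert nonzero > 10
```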

[Update on 2014.12.28: I had also thought of mentioning the fact that the Fourier solution would be still different for a domain of total size $\pm 2 \pi$, but forgot to write about it in the original post.]

To a mathematician (and to any modern theoretical physicist—the same ones who can’t detect a mistake in my QM papers), the existence of different solutions for the same set of initial and boundary conditions makes perfect sense. At least, they don’t seem to notice anything amiss here. But what their position implies is the following, for your experiment at home.

If you take a bigger blotting paper, the shape of the solution should be different.

In other words, in Fourier theory, the solution crucially depends on the domain size, too—not just on the local dynamics of the diffusing species.

The experiment which I was going to request, would have made use of this domain-size dependence, for photons propagation. But then, as I said, I caught my own error, and so, I am not going to request performance of that kind of an experiment.

But what about the intuition, you say?

Well, if you take a local theory—Einstein’s stochastic, or those having roots in the numerical methods (and I need to give a name to these, just for convenience in reference)—the solution profile remains identical regardless of the domain size.

[Update on 2014.12.28: Indeed, you could even cut a significant portion of the paper near one of its edges after the diffusion has already begun, so long as the blue front has not reached that place, and it would still not affect the solution. I forgot to mention this point while writing the original version of this post.]

Getting more technical once again: there also is an implication for the instantaneous flux rate and the total outward flux, for the two theories. The local theories predict a zero flux until the front reaches the boundary. But since the diffusing species is conserved (conservation of mass or of heat energy, etc.), what this implies is that the local theories must give a faster flux rate at times after the front reaches the boundary.

Keeping the conservation angle aside, the zero-flux state is something that should be easily verifiable in experiment. If it is experimentally verified, then, the local-physics theories win; else, the Fourier theory does.
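The zero-flux period is easy to exhibit in a local, finite-difference caricature of the diffusion equation. In the explicit FTCS scheme below (a sketch under made-up grid parameters, not a model of any particular experiment), the support of the discrete solution widens by exactly one grid cell per time step, so the boundary flux stays identically zero until the front has marched all the way to the edge:

```python
# Explicit FTCS finite differences for 1-D diffusion: a local scheme.
# In this scheme, information propagates one grid cell per time step.

N = 101
u = [0.0] * N
u[N // 2] = 1.0   # all of the "ink" starts at the centre cell
r = 0.25          # r = D*dt/dx^2, safely within the stability limit r <= 1/2

def step(u, r):
    new = u[:]    # zero Dirichlet BCs: the endpoints stay pinned at 0
    for i in range(1, len(u) - 1):
        new[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
    return new

first_nonzero_flux = None
for n in range(1, 200):
    u = step(u, r)
    boundary_flux = u[1] - u[0]   # discrete gradient at the left edge
    if first_nonzero_flux is None and boundary_flux != 0.0:
        first_nonzero_flux = n

print("boundary flux first becomes nonzero at step:", first_nonzero_flux)
# The centre cell is 50 cells from the edge, and the stencil widens the
# support by one cell per step, so the flux stays exactly zero until the
# front reaches the cell next to the boundary, at step 49.
assert first_nonzero_flux == 49
```

The Fourier solution of the same problem, in contrast, has a (tiny but) nonzero boundary flux for every $t > 0$.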

I bet that the local theories would win. There is no blueness near the edges until the front travels there.

And, as far as I can tell, even if not photons, at least electrons do obey diffusive dynamics in free space. If careful experimentation is conducted, I predict (even ahead of building my new quantum theory) that there should be an experimentally verifiable zero-flux period, in contradiction to mainstream quantum mechanics. And if it is observed, then it means that many mainstream ideas—or their mystifying components, at least—are completely wrong.

For this very reason, I don’t expect them to conduct the necessary experimentation with electrons, or to try out any novel experimentation scheme with PDC photons.

In the meanwhile, I will build my new theory. However, now with another difference. The last time, I was content identifying the photon as a local condition, without specifying anything regarding its structure or the local dynamics. That was because, as a PhD student in engineering, I couldn’t afford to be too speculative. Now, however, I can afford to be a bit more relaxed, and begin considering toy ideas for more detailed models for the quanta. Inasmuch as photons are massless and there is aether, the situation is ripe for me to toy with some fluids-based ideas or models. Since I anyway do CFD, no one in engineering would even notice what I was doing was toying with some QM-related ideas. … Nice, no?

* * * * *   * * * * *   * * * * *

Anyway, best wishes for a merry Christmas, and a very happy and prosperous new year. … No, I don’t think I will be writing any post in the remaining parts of this year. So, there. Wish you a very happy and prosperous new year, too…

* * * * *   * * * * *   * * * * *

A Song I Like:

(Hindi) “jalate hain jis ke liye…”
Singer: Talat Mahmood
Music: S. D. Burman
Lyrics: Majrooh Sultanpuri

[As usual, I may come back and do some minor copy-editing and additions, though the main matter will remain as is. [Update on 2014.12.28: Done. I won’t bother any more with this post. If anything more is to be added, I will simply write a fresh post.]]

[E&OE]


# Yo—5: Giving thanks to the Fourier transform

Every year, at the time of Thanksgiving, the Caltech physicist (and author of popular science books) Sean Carroll picks a technique, principle, or theory of physics (or mathematics) for giving his thanks. Following this tradition (of some 8 years, I gather), Carroll has, for this year, picked the Fourier transform as the recipient of his thanks. [^]

That way, it’s quite a good choice, if you ask me. …

…Though, of course, as soon as I began reading Carroll’s post, a certain thing to immediately cross my mind was what someone had said concerning Fourier’s theory.

Fourier’s is the most widely used theory in the entire history of physics, he had said, as well as the most abused one. … The words may not be exact, but that was the sense of what had been said. Someone respectable had said it, though I can no longer recall exactly who. (Perhaps an engineer, not a physicist.)

The Fourier theory has fascinated me for long; I have published not just a paper on it but also quite a few blog posts.

To cut a long story short, I would pick out (i) the Lagrangian program (including what is known as the Lagrangian mechanics as well as the calculus of variations, the stationarity/minimum/maximum/action etc. principles, the Hamiltonian mechanics, etc.) and (ii) the Fourier theory, as the two basic “pillars” over which every modern quantum-mechanical riddle rests.

Yes, including wave-particle duality, quantum entanglement, EPR, Bell’s inequalities, whatnot….

As I have been pointing out, the biggest good point that both these theories have in common is that they allow us to perform at least some kind of analytical calculation—even if, oftentimes, only in a physically approximate sense—in situations where none would otherwise be possible.

The bad point goes with the good point.

The biggest bad point common to both of them is that they both take some physics that actually occurs only locally (say the classical Newtonian mechanics) and smear it onto a supposedly equivalent “world”—an imaginary non-entity serving as a substitute for the actually existing physical world. And, this non-entity, in both theories (Lagrangian and Fourier’s) is global in nature.

The substitution of the global mathematics in place of the local physics is the sin common to the abuse of both the theories.

Think of the brachistochrone problem, for instance [^]. The original Newtonian approach of working with the local forces using $\vec{F} = d\vec{p}/dt$ (including their reactions) is, in principle, applicable also in this situation. The trouble is, both the gravitational potential field and the constraints are continuous in nature, not discrete. As the bead descends on the curve, it undergoes an infinity of collisions, and so, as far as performing calculations goes, the vector approach can’t be put to use in a direct manner here: you can’t possibly calculate an infinity of forces, or reactions to them, or use them to incrementally calculate the changes in velocities that these come to enforce. Thus, it is the complexity of the constraints (or the “boundary conditions”)—though not the inapplicability of the basic governing physical laws—that makes Newton’s original approach impracticable in situations like the brachistochrone. The Lagrangian approach allows us to approach the same problem in a mathematically far simpler manner. [Newton himself was one of the very first to solve this problem using this alternative approach, which was, later on, formalized by Lagrange. (Look up the “lion’s paws” story.)]
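For a concrete feel of what the variational approach buys you, here is a small sketch comparing descent times: the known closed form for the cycloid solution, $T = \theta_f \sqrt{a/g}$, against a straight frictionless chute to the same endpoint (the parameter values are my own made-up choices):

```python
import math

g = 9.81        # gravitational acceleration (m/s^2)
a = 1.0         # cycloid parameter (a made-up choice)
theta_f = math.pi

# Endpoint of the cycloid x = a*(th - sin th), y = a*(1 - cos th),
# with y measured downward from the release point.
x_f = a * (theta_f - math.sin(theta_f))
y_f = a * (1.0 - math.cos(theta_f))

# Known closed form for the descent time along the brachistochrone (cycloid):
T_cycloid = theta_f * math.sqrt(a / g)

# Descent time along a straight chute of length L to the same endpoint:
# constant acceleration g*(y_f/L) along the chute gives T = L*sqrt(2/(g*y_f)).
L = math.hypot(x_f, y_f)
T_line = L * math.sqrt(2.0 / (g * y_f))

print(f"cycloid: {T_cycloid:.3f} s, straight line: {T_line:.3f} s")
assert T_cycloid < T_line   # the variational optimum beats the straight path
```

The variational machinery hands you the optimal curve in a few lines of algebra, where the direct force-balance bookkeeping would be hopeless.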

Something similar happens also with the Fourier analysis. Even if a phenomenon is decidedly local, like diffusion of the physically distinct material particles (or parcels) from one place to another, the Fourier theory takes these distinct (spatially definite) particles, and then replaces them by positing a global non-entity that is spread everywhere in the universe, but with some peak coinciding with where the actual particles physically are. The so-smeared non-entity is the place-holder [!] for the spatially delimited particles, in Fourier’s theory. The globally spread-out entity is not just an abstraction, but, really speaking, also an approximation—a mathematical approximation. And as far as the inaccuracies in the calculations go, it turns out, this approximation does work out very well in practice. (The reason is not mystical. It is simply that the diffusing particles (atoms/molecules) are so small and so numerous in the physically existing universe.) But if you therefore commit the error of substituting this approximate mathematical abstraction in place of the exact physical reality, you directly end up having the riddles of QM.

If you are interested in pursuing this matter further, you should see my conference paper first. (Drop me a line if you hadn’t already downloaded it when it was available off my Web site, or can’t locate it any other way.) … Though I have also written quite a few posts on the topic, they don’t make for the best material—they are far too informally written (meaning: written completely on the fly, without any previously thought-out structure at all). They are also too lengthy, and often dwell on technical aspects in too much detail.

And, that way, they don’t have much mathematical depth, anyway.

But since I seem to be the only person in the entire world who has ever thought along these lines (and one who continues to care), you may want to have a look at my detailed musings, too: [^] [^] [^] [^].

(… And, no, as far as this issue goes, by no means am I done. I would continue exploring this topic further in my research, also in the future. Though, let me wind it all up for now… This was supposed to be a short and sweet post—a “Yo” post!)

* * * * *   * * * * *   * * * * *

A Song I Like:

(Marathi) “ekaTyaane ekaTe gardeet chaalaave”