Data Science links—1

Okay… My bookmarks library has grown too big. Time to move at least a few of them to a blog post. Here they are. … The last one is not on Data Science, but it happens to be the most important one of them all!



On Bayes’ theorem:

Oscar Bonilla. “Visualizing Bayes’ theorem” [^].

Jayesh Thukarul. “Bayes’ Theorem explained” [^].

Victor Powell. “Conditional probability” [^].


Explanations with visualizations:

Victor Powell. “Explained Visually.” [^]

Christopher Olah. Many topics [^]. For instance, see “Calculus on computational graphs: backpropagation” [^].


Fooling the neural network:

Julia Evans. “How to trick a neural network into thinking a panda is a vulture” [^].

Andrej Karpathy. “Breaking linear classifiers on ImageNet” [^].

A. Nguyen, J. Yosinski, and J. Clune. “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images” [^].

Melanie Mitchell. “Artificial Intelligence hits the barrier of meaning” [^].


The Most Important link!

Ijad Madisch. “Why I hire scientists, and why you should, too” [^].


A song I like:

(Western, pop) “Billie Jean”
Artist: Michael Jackson

[Back in the ’80s, this song used to get played in the restaurants of the Pune camp area, and also in cinema halls like West-End, Rahul, Alka, etc. The camp area was so beautiful back then—also uncrowded, and quiet.

This song would also come floating on the air while sitting in the evening at the Quark cafe, situated in the middle of all the IITM hostels (next to the skating rink). Some or the other guy would be playing it in a nearby hostel room, on one of those stereo systems which came with 1- or 2-foot-tall “hi-fi” speaker-boxes. Each box typically had three stacked speakers. The combination of a separately sitting sub-woofer with a few other small boxes, or a soundbar, so ubiquitous today, had not been invented yet… Back then, Quark was a completely open-air cafe—a small patch of ground surrounded by small trees, and a tiny hexagonal hut, built in RCC, for serving snacks. There were no benches, even, at Quark. People would sit on those small concrete blocks (brought over from the civil department, where they would come for testing). Deer would be roaming around very nearby. A daring one or two could venture to come forward and eat pizza out of your (fully) extended hand!…

…Anyway, coming back to the song itself: I had completely forgotten it, but got reminded of it when @curiouswavefn mentioned it in one of his tweets recently. … When I read the tweet, I couldn’t make out that it was this song (apart from Bach’s variations) that he was referring to. I just idly checked out both of them, and then, while listening to it, I suddenly recognized this song. … You see, unlike so many other guys from the e-schools of our times, I wouldn’t listen to a lot of Western pop songs in those days (and still don’t). Beatles, ABBA and a few other groups/singers, maybe; also the Western instrumentals (a lot), and the Western classical music (some, but definitely). But somehow, I was never too much into the Western pop songs. … Another thing. The way these Western singers sing, it used to be very, very hard for me to figure out the lyrics back then—and the situation continues mostly the same way even today! So, recognizing a song by its name was simply out of the question….

… Anyway, do check out the links (even if some of them appear to be out of your reach on the first reading), and enjoy the song. … Take care, and bye for now…]

 


Non-Interview Questions on Data Science—Part 1

This entry is the first in a series of posts which will note some of the questions that no one will ever ask you during any interview for any position in the Data Science industry.

Naturally, if you ask for my opinion, you should not even consider modifying these questions a bit and posting them as part of your own post on Medium.com, AnalyticsVidhya, KDNuggets, TowardsDataScience, ComingFromDataScience, etc.

No, really! There would be no point in lifting these questions and posting them as if they were yours, because no one in the industry is ever going to be impressed with you just because you raised them. … I am posting them here simply because… because “I am like that only.”

OK, so here is the first installment in this practically useless series. (I should know. I go jobless.)

(Part 1 mostly covers linear and logistic regression, and just a bit of probability.)


Q.1: Consider probability theory. How are the following ideas related to each other?: random phenomenon, random experiment, trial, result, outcome, outcome space, sample space, event, random variable, and probability distribution. In particular, state precisely the difference between a result and an outcome, and between an outcome and an event.

Give a few examples of finite and countably infinite sample spaces. Give one example of a random variable whose domain is not the real number line. (Hint: See the Advice at the end of this post concerning which books to consult.)


Q.2: In set theory, when a set is defined through enumeration, repeated instances are not included in the definition. In the light of this fact, answer the following question: Is an event a set, or is it just a primitive instance subsumed in a set? What precisely is the difference between a trial, a result of a trial, and an event? (Hint: See the Advice at the end of this post concerning which books to consult.)
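
For concreteness, here is a minimal Python sketch (my own construction, not taken from any of the books mentioned below) that keeps these terms apart for the familiar two-dice experiment:

    from itertools import product
    from fractions import Fraction

    # One trial = rolling two distinguishable dice once. A result is what
    # the trial physically yields; the outcome is its representation as an
    # element of the sample space.
    sample_space = set(product(range(1, 7), repeat=2))   # 36 outcomes

    # An event is a subset of the sample space, not a single outcome.
    event_sum_is_7 = {o for o in sample_space if o[0] + o[1] == 7}

    # A random variable is a function defined on the sample space; here its
    # domain is a set of ordered pairs, not the real number line.
    def X(outcome):
        return outcome[0] + outcome[1]

    print(Fraction(len(event_sum_is_7), len(sample_space)))   # 1/6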


Q.3: Select the best alternative: In regression for making predictions with continuous target data, if a model is constructed in reference to the equation y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \beta_3 x_i^3, then:
(a) It is a sub-type of the linear regression model.
(b) It is a polynomial regression model.
(c) It is a nonlinear regression model because powers > 1 of the independent variable x_i are involved.
(d) It is a nonlinear regression model because more than two \beta_m terms are involved.
(e) Both (a) and (b)
(f) Both (b) and (c)
(g) Both (c) and (d)
(h) All of (b), (c), and (d)
(i) None of the above.
(Hint: Don’t rely too much on the textbooks being used by the BE (CS) students in the leading engineering colleges in Pune and Mumbai.)
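
A hedged illustration of what the answer hinges on, viz., linearity in the parameters (the data below are synthetic, made up just for this sketch):

    import numpy as np

    # y = b0 + b1*x + b2*x^2 + b3*x^3 is cubic in x but linear in the
    # parameters b_m, so ordinary linear least squares fits it directly.
    rng = np.random.default_rng(0)
    x = np.linspace(-2.0, 2.0, 50)
    y = 1.0 + 0.5*x - 2.0*x**2 + 0.8*x**3 + rng.normal(0.0, 0.3, x.size)

    # Design matrix with columns [1, x, x^2, x^3]: the nonlinearity in x
    # gets absorbed into the features; the normal equations stay linear.
    X = np.vander(x, N=4, increasing=True)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)   # estimates of (b0, b1, b2, b3)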


Q.4: Consider a data-set consisting of performance of students on a class test. It has three columns: student ID, hours studied, and marks obtained. Suppose you decide to use the simple linear regression technique to make predictions.

Let’s say that you assume that the hours studied are the independent variable (predictor), and the marks obtained are the dependent variable (response). Making this assumption, you make a scatter plot, carry out the regression, and plot the regression line predicted by the model too.

The question now is: If you interchange the designations of the dependent and independent variables (i.e., if you take the marks obtained as the predictor and the hours studied as the response), build a second linear model on this basis, and plot the regression line thus predicted, will it coincide with the line plotted earlier or not? Why or why not?

Repeat the question for the polynomial regression. Repeat the question if you include the simplest interaction term in the linear model.
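
If you want to check your answer numerically before reasoning it out, here is a small sketch for the simple linear case (the data are made up; the slopes come from the usual covariance formulas):

    import numpy as np

    rng = np.random.default_rng(1)
    hours = rng.uniform(0.0, 10.0, 100)
    marks = 8.0*hours + rng.normal(0.0, 10.0, 100)

    # Slope of the marks-on-hours regression line: cov(x, y) / var(x).
    b_yx = np.cov(hours, marks)[0, 1] / np.var(hours, ddof=1)
    # The hours-on-marks line x = a + b*y, re-plotted in the same
    # (hours, marks) plane, has slope 1/b = var(y) / cov(x, y).
    b_xy = np.cov(hours, marks)[0, 1] / np.var(marks, ddof=1)
    print(b_yx, 1.0/b_xy)   # equal only if the correlation is exactly +/-1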


Q.5: Draw a schematic diagram showing circles for nodes and straight lines for connections (as in the ANN diagrams) for a binary logistic regression machine that operates on just one feature. Wonder why your textbook didn’t draw it in the chapter on logistic regression.


Q.6: Suppose that the training input for a classification task consists of r distinct data-points and c features. If logistic regression is to be used for classification of this data, state the number of unknown parameters there would be. Make suitable assumptions as necessary, and state them.


Q.7: Obtain (or write) some simple Python code for implementing from scratch a single-feature binary logistic regression machine that uses the simple (non-stochastic) gradient descent method, computing the gradient for one row at a time (a batch-size of 1).

Modify the code to show a real-time animation of how the model goes on changing as the gradient descent algorithm progresses. The animation should depict a scatter plot of the sample data (y vs. x), and not the parameter space (\beta_0 vs. \beta_1). The animation should highlight the data-point currently being processed in a separate color. It should also show a plot of the logistic function on the same graph.

Can you imagine, right before running (or even building) the animation, what kind of visual changes the animation is going to depict? How?
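
For reference, here is a bare-bones sketch of the machine that Q.7 asks for (animation omitted; the synthetic data and the hyperparameters are my own arbitrary choices):

    import numpy as np

    def sigmoid(z):
        return 1.0/(1.0 + np.exp(-z))

    rng = np.random.default_rng(2)
    x = rng.uniform(-3.0, 3.0, 200)
    y = (rng.uniform(size=200) < sigmoid(2.0*x - 1.0)).astype(float)

    b0, b1, lr = 0.0, 0.0, 0.1
    for epoch in range(100):
        for xi, yi in zip(x, y):   # fixed row order: no stochastic shuffling
            p = sigmoid(b0 + b1*xi)
            # Gradient of the per-row cross-entropy loss w.r.t. (b0, b1):
            b0 -= lr*(p - yi)
            b1 -= lr*(p - yi)*xi

    print(b0, b1)   # should land near (-1, 2) for this synthetic data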


Q.8: What are the important advantages of the stochastic gradient descent method over the simple (non-stochastic) gradient descent?


Q.9: State true or false: (i) The output of the logistic function is continuous. (ii) The minimization of the cost function in logistic regression involves a continuous dependence on the undetermined parameters.

In the light of your answers, explain why logistic regression can at all be used as a classification mechanism (i.e., for targets that are “discrete”, not continuous). State only those axioms of the probability theory which are directly relevant here.


Q.10: Draw diagrams in the parameter space for the Lasso regression and the Ridge regression. The question is to explain precisely what lies inside the square or the circular region. In each case, draw an example path that might get traced during the gradient descent, and clearly explain why the progress occurs the way it does.
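
To get started on the diagrams, here is a minimal matplotlib sketch of the two constraint regions in the (\beta_1, \beta_2) plane (the budget t = 1 is arbitrary); what lies inside them is for you to explain:

    import numpy as np
    import matplotlib.pyplot as plt

    # Ridge region boundary: the circle beta_1^2 + beta_2^2 = t^2.
    theta = np.linspace(0.0, 2.0*np.pi, 400)
    plt.plot(np.cos(theta), np.sin(theta), label="Ridge")

    # Lasso region boundary: the rotated square |beta_1| + |beta_2| = t.
    diamond = np.array([[1, 0], [0, 1], [-1, 0], [0, -1], [1, 0]])
    plt.plot(diamond[:, 0], diamond[:, 1], label="Lasso")

    plt.gca().set_aspect("equal")
    plt.xlabel("beta_1"); plt.ylabel("beta_2")
    plt.legend(); plt.show()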


Q.11: Briefly explain how the idea of logistic regression gets applied in artificial neural networks (ANNs). Suppose that a training data-set has c features, r data-rows, and M output bins (i.e., classification types). Assuming that the neural network does not carry any hidden layers, calculate the number of logistic regressions that would be performed in a single batch. Make suitable assumptions as necessary.

Does your answer change if you consider the multinomial logistic regression?


Q.12: State the most prominent limitation of the gradient descent methods. State the name of any one technique which can overcome this limitation.


Advice: To answer the first two questions, don’t refer to the programming books. In fact, don’t even rely too much on the usual textbooks. Even Wasserman skips over the topic, and Stirzaker is inadequate. Kreyszig is barely OK. A recommended text (more rigorous, but UG-level and brief) for this topic is: Rohatgi and Saleh, “An Introduction to Probability and Statistics,” Wiley (2015).


Awww… Still with me?

If you read this far, chances are very bright that you are really^{really} desperately looking for a job in the data science field. And, as it so happens, I am also a very, very kind-hearted person. I don’t like to disappoint nice, ambitious… err… “aspiring” people. So, let me offer you some real help before you decide to close this page (and this blog) forever.

Here is one question they might actually ask you during an interview—especially if the interviewer is an MBA:

What are the three V’s of big data? Four? Five?

(Yes, MBAs do know arithmetic. At least, it was there on their CAT / GMAT entrance exams. And yes, you can use this question for your posts on Medium.com, AnalyticsVidhya, KDNuggets, TowardsDataScience, ComingFromDataScience, etc.)


A couple of notes:

  1. I might come back and revise the questions to make them less ambiguous or more precise.
  2. Also, please do drop a line if any of the questions is not valid, or shows a poor understanding on my part—this is easily possible.

 


A song I like:

[Credits listed in a random order. Good!]

(Hindi) “mausam kee sargam ko sun…”
Music: Jatin-Lalit
Singer: Kavita Krishnamoorthy
Lyrics: Majrooh Sultanpuri


History:

First written: Friday 14 June 2019 11:50:25 AM IST.
Published online: 2019.06.16 12:45 IST.
The songs section added: 2019.06.16 22:18 IST.

Determinism, Indeterminism, Probability, and the nature of the laws of physics—a second take…

After I wrote the last post [^], several points struck me. Some of the points that were mostly implicit needed to be addressed systematically. So, I began writing a small document containing these after-thoughts, focusing more on the structural side of the argument.

However, I can’t find the time to convert these points and statements into a proper write-up. At the same time, I want to get done with this topic, at least for now, so that I can better focus on some other tasks related to data science. So, let me share the write-up in whatever form it currently is in. Sorry for its uneven tone and all (compared to even my other writing, that is!)


Causality as a concept is very poorly understood by present-day physicists. They typically understand only one sense of the term: evolution in time. But causality is a far broader concept. Here I agree with Ayn Rand / Leonard Peikoff (OPAR). See the Ayn Rand Lexicon entry, here [^]. (However, I wrote the points below without re-reading it, relying instead on whatever understanding I have already come to develop from my studies of the same material.)

The physical universe consists of objects. Objects have identity. Identity is the sum total of all characteristics, attributes, properties, etc., of an object. Objects act in accordance with their identity; they cannot act otherwise. Interactions are not primary; they do not come into being without there being objects that undergo the interactions. Objects do not change their respective identities when they take actions—not even during interactions with other objects. The law of causality is a higher-level view taken of this fact.

In the cause-effect relationship, the cause refers to the nature (identity) of an object, and the effect refers to an action that the object takes (or undergoes). Both refer to one and the same object. TBD: Trace the example of one moving billiard ball undergoing a perfectly elastic collision with another billiard ball. Bring out how the interaction—here, the pair of the contact forces—is a name for each ball undergoing an action in accordance with its nature. An interaction is a pair of actions.


A physical law as a mapping (e.g., a function, or even a functional) from inputs to outputs.

The quantitative laws of physics often use the real number system, i.e., quantification with infinite precision. An infinite precision is a mathematical concept, not a physical one. (Expect physicists to eternally keep on confusing the two kinds of concepts.)

Application of a physical law traces the same conceptual linkages as are involved in the formulation of the law, but in the reverse direction.

In both the formulation of a physical law and in its application, there is always some regime of applicability which is at least implicitly understood, for both the inputs and the outputs. A pertinent idea here is: the range of variations. A further idea is the response of the output to small variations in the input.


Example: Prediction by software of whether a cricket ball would have hit the stumps or not, in an LBW situation.

The input position being used by the software in a certain LBW decision could be off from reality by millimeters, or at least, by a fraction of a millimeter. Still, the law (the mapping) is such that it produces predictions that are within small limits, so that it can be relied on.

Two input values, each theoretically infinitely precise, but differing by a small magnitude from each other, may be taken to define an interval or zone of input variations. As to the zone of the corresponding output, it may be thought of as an oval produced in the plane of the stumps, using the deterministic method used in making predictions.

The nature of the law governing the motion of the ball (even after factoring in aspects like effects of interaction with air and turbulence, etc.) itself is such that the size of the O/P zone remains small enough. (It does not grow exponentially.) Hence, we can use the software confidently.

That is to say, the software can be confidently used for predicting—i.e., determining—the zone of possible landing of the ball in the plane of the stumps.
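
To make the zone-to-zone idea concrete, here is a deliberately toy Python sketch (my own construction, and not any actual ball-tracking algorithm): feed a small zone of input positions through a deterministic law and measure the resulting output zone.

    import numpy as np

    def y_at_stumps(y0, vy, t_flight=0.05):
        # Toy deterministic "law": uniform lateral drift over the short
        # flight from the pad to the stumps; no swing, spin, or turbulence.
        return y0 + vy*t_flight

    rng = np.random.default_rng(3)
    y0 = 0.10 + rng.uniform(-0.0005, 0.0005, 10_000)  # +/- 0.5 mm input zone
    out = y_at_stumps(y0, vy=1.5)
    print(out.max() - out.min())   # the output zone stays about a millimetre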


Overall, here are three elements that must be noted: (i) Each of the input positions lying at the extreme ends of the input zone of variations itself does have an infinite precision. (ii) Further, the mapping (the law) has theoretically infinite precision. (iii) Each of the outputs lying at extreme ends of the output zone also itself has theoretically infinite precision.

Existence of such infinite precision is a given. But it is not at all the relevant issue.

What matters in applications is something more than these three. It is the fact that applications always involve zones of variations in the inputs and outputs.

Such zones are then used in error estimates. (Also for engineering control purposes, say as in automation or robotic applications.) But the fact that quantities being fed to the program as inputs themselves may be in error is not the crux of the issue. If you focus too much on errors, you will simply get into an infinite regress of error bounds for error bounds for error bounds…

Focus, instead, on the infinity of precision of the three kinds mentioned above, and on the fact that, in addition to those infinitely precise quantities, the application procedure does involve having zones of possible variations in the input, and it also involves the problem of estimating how large the corresponding zone of variations in the output is—whether it is sufficiently small for the law and a particular application procedure or situation.

In physics, such details of application procedures are left merely understood. They are hardly, if ever, mentioned and discussed explicitly. Physicists again show their poor epistemology. They discuss such things in terms not of the zones but of “error” bounds. This already inserts the wedge of dichotomy: infinitely precise laws vs. errors in applications. This dichotomy is entirely uncalled for. But, physicists simply aren’t that smart, that’s all.


An “indeterministic mapping,” for the above example (LBW decisions), would be the one in which the ball can be mapped as going anywhere over, and perhaps even beyond, the stadium.

Such a law and the application method (including the software) would be useless as an aid in the LBW decisions.

However, phenomenologically, the very dynamics of the cricket ball’s motion is simple enough that it leads to a causal law whose nature is such that, for a small variation in the input conditions (a small zone of input variations), the predicted zone of the O/P also is small enough. It is for this reason that we say that predictions are possible in this situation. That is to say, this is not an indeterministic situation or law.


Not all physical situations are exactly like the example of predicting the motion of the cricket ball. There are physical situations which show a certain common—and confusing—characteristic.

They involve interactions that are deterministic when occurring between two (or a few) bodies. Thus, the laws governing a simple interaction between one or two bodies are deterministic—in the above sense of the term (i.e., in terms of infinite precision for the mapping, and the existence of zones of variations in the inputs and outputs).

But these physical situations also involve: (i) a nonlinear mapping, (ii) a sufficiently large number of interacting bodies, and further, (iii) coupling of all the interactions.

It is these physical situations which produce an overall system behaviour that can show an exponentially diverging output zone even for a small zone of input variations.

So, a small change in I/P is sufficient to produce a huge change in O/P.

However, note the confusing part. Even if the system behaviour for a large number of bodies does show an exponential increase in the output zone, the mapping itself is such that when it is applied to only one pair of bodies in isolation from all the others, the output zone does remain non-exponential.
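
A standard toy illustration of this contrast is the logistic map (my choice of example, not one used above; iteration here plays the role that the many coupled interactions play in the text): one application of the deterministic rule keeps a tiny input zone tiny, but fifty applications blow it up to the size of the whole interval.

    def step(x, r=4.0):
        # One application of a deterministic nonlinear rule.
        return r*x*(1.0 - x)

    a, b = 0.4, 0.4 + 1e-10        # two inputs defining a tiny input zone
    print(abs(step(a) - step(b)))  # after one step: still of order 1e-10
    for n in range(50):
        a, b = step(a), step(b)
    print(abs(a - b))              # after fifty steps: of order 1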

It is this characteristic which tricks people into forming two camps that go on arguing eternally. One side says that it is deterministic (making reference to a single-pair interaction), the other side says it is indeterministic (making reference to a large number of interactions, based on the same law).

The fallacy arises out of confusing a characteristic of the application method or model (variations in input and output zones) with the precision of the law or the mapping.


Example: N-body problem.

Example: NS equations as capturing a continuum description (a nonlinear one) of a very large number of bodies.

Example: Several other physical laws entering the coupled description, apart from the NS equations, in the bubbles collapse problem.

Example: Quantum mechanics


The Law vs. the System distinction: What is indeterministic is not a law governing a simple interaction taken abstractly (in which context the law was formed), but the behaviour of the system. A law (a governing equation) can be deterministic, but still, the system behavior can become indeterministic.


Even indeterministic models or system designs, when they are described using a different kind of maths (the one which is formulated at a higher level of abstraction, relying on the limiting values of relative frequencies, i.e., probabilities), still do show causality.

Yes, probability is a notion which itself is based on causality—after all, it uses limiting values for the relative frequencies. The ability to use the limiting processes squarely rests on there being some definite features which, by being definite, do help reveal the existence of the identity. If such features (enduring, causal) were not a part of the identity of the objects that are abstractly seen to act probabilistically, then no application of a limiting process would be possible, and so not even a definition of probability or randomness would be possible.

The notion of probability is more fundamental than that of randomness. Randomness is an abstract notion that idealizes the notion of the absence of every form of order. … You can use the axioms of probability even when sequences are known to be not random, can’t you? Also, hierarchically, order comes before randomness. Randomness is defined as the absence of (all applicable forms of) orderliness; orderliness is not defined as the absence of randomness—it is defined via the “some but any” principle, in reference to various more concrete instances that show some or the other definable form of order.
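
A minimal numerical reminder of the limiting relative-frequency idea (simulated coin flips; my own sketch):

    import numpy as np

    rng = np.random.default_rng(4)
    flips = rng.integers(0, 2, 100_000)   # 0 = tails, 1 = heads
    running = np.cumsum(flips) / np.arange(1, flips.size + 1)
    for n in (10, 1_000, 100_000):
        print(n, running[n - 1])   # the running fraction settles towards 0.5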

But expect not just physicists but also mathematicians, computer scientists, and philosophers to eternally keep on confusing the issues involved here, too. They all are dumb.


Summary:

Let me now mention a few important take-aways (though some new points not discussed above also crept in, sorry!):

  • Physical laws are always causal.
  • Physical laws often use the infinite precision of the real number system, and hence, they do show the mathematical character of infinite precision.
  • The solution paradigm used in physics requires specifying some input numbers and calculating the corresponding output numbers. If the physical law is based on the real number system, then all the numbers used, too, are supposed to have infinite precision.
  • Applications always involve a consideration of the zone of variations in the input conditions and the corresponding zone of variations in the output predictions. The relation between the sizes of the two zones is determined by the nature of the physical law itself. If for a small variation in the input zone the law predicts a sufficiently small output zone, people call the law itself deterministic.
  • Complex systems are not always composed from parts that are in themselves complex. Complex systems can be built by arranging essentially very simple parts in complex configurations.
  • Each of the simpler parts may be governed by a deterministic law. However, when the input-output zones are considered for the complex system taken as a whole, the system behaviour may show an exponential increase in the size of the output zone. In such a case, the system must be described as indeterministic.
  • Indeterministic systems still are based on causal laws. Hence, with appropriate methods and abstractions (including mathematical ones), they can be made to reveal the underlying causality. One useful theory is that of probability. The theory turns the supposed disadvantage (a large number of interacting bodies) on its head, and uses limiting values of relative frequencies, i.e., probability. The probability theory itself is based on causality, and so are indeterministic systems.
  • Systems may be deterministic or indeterministic, and in the latter case, they may be described using the maths of probability theory. Physical laws are always causal. However, if they have to be described using the terms of determinism or indeterminism, then we will have to say that they are always deterministic. After all, if the physical laws showed an exponentially large output zone even when simpler systems were considered, they could not be formulated or regarded as laws.

In conclusion: Physical laws are always causal. They may also always be regarded as being deterministic. However, if systems are complex, then even if the laws governing their simpler parts were all deterministic, the system behaviour itself may turn out to be indeterministic. Some indeterministic systems can be well described using the theory of probability. The theory of probability itself is based on the idea of causality, albeit with measures defined over a large number of instances, thereby exploiting the fact that there are far too many objects interacting in a complex manner.


A song I like:

(Hindi) “ho re ghungaroo kaa bole…”
Singer: Lata Mangeshkar
Music: R. D. Burman
Lyrics: Anand Bakshi