Fluxes, scalars, vectors, tensors…. and, running in circles about them!

0. This post is written for those who know something about Thermal Engineering (i.e., fluid dynamics, heat transfer, and transport phenomena), say up to the UG level at least. [A knowledge of Design Engineering, in particular of the tensors as they appear in solid mechanics, would be helpful to have, but is not necessary. After all, contrary to what many UGC- and AICTE-approved (Full) Professors of Mechanical Engineering teaching ME (Mech – Design Engineering) courses in SPPU and other Indian universities believe, tensors appear in fluid mechanics too, and, in fact, the fluids phenomena make the concept (if only slightly) easier to understand. [But all these cartoon characters, even if they don’t know even this plain and simple a fact, can always be fully relied upon (by anyone) to raise objections about my Metallurgy background, when it comes to my own approval, at any time! [Indians!!]]]

In this post, I write a bit about the following question:

Why is the flux \vec{J} of a scalar \phi a vector quantity, and not a mere number (which is aka a “scalar,” in certain contexts)? Why is it not a tensor—whatever the hell the term means, physically?

And, what is the best way to define a flux vector anyway?


1.

One easy answer is that if the flux is a vector, then we can establish a flux-gradient relationship. Such relationships happen to appear as statements of physical laws in all the disciplines in which the idea of a continuum has been found useful. So the scope of applicability of the flux-gradient relationships is very wide.

The reason to define the flux as a vector, then, becomes: because the gradient of a scalar field is a vector field, that’s why.

But this answer only tells us about one of the end-purposes of the concept, viz., how it can be used. And the answer it provides is: for the formulation of a physical law. It tells us nothing by way of the very meaning of the concept of flux itself.
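
As a quick aside, here is a minimal numerical sketch (in Python/numpy) of what a flux-gradient law of the Fourier/Fick type looks like for a 1D scalar field. The field \phi(x), the transport coefficient \Gamma, and the grid are all made-up assumptions, purely for illustration.

```python
import numpy as np

# A made-up 1D scalar field phi(x), sampled on a uniform grid.
x = np.linspace(0.0, 1.0, 101)
phi = 300.0 + 50.0 * np.sin(2.0 * np.pi * x)   # e.g. a temperature-like field

# A flux-gradient law of the Fourier/Fick type: J = -Gamma * d(phi)/dx.
Gamma = 0.5                                    # an assumed transport coefficient
J = -Gamma * np.gradient(phi, x)               # the flux follows the (negative) gradient

# The gradient of the scalar field is a vector field (here, 1D), and so is the flux.
print(J[:5])
```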


2.

Another easy answer is that if it is a vector quantity, then it simplifies the maths involved. Instead of having to remember to take the right \theta and to multiply the relevant scalar quantity by the \cos of this \theta, we can more succinctly write:

q = \vec{J} \cdot \vec{S} (Eq. 1)

where q is the amount of the property \Phi transported across a given finite surface \vec{S}, \vec{J} is the flux of \Phi, and \Phi is the extensive quantity corresponding to \phi, the intensive scalar property of the flowing fluid.
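
(For concreteness, here is a small Python/numpy check, with made-up numbers for |\vec{J}|, |\vec{S}| and \theta, that the shorthand of Eq. 1 and the longhand \cos\theta rule give one and the same q:)

```python
import numpy as np

# Made-up magnitudes and a made-up angle between J and the surface normal.
J_mag, S_mag, theta = 2.5, 4.0, np.deg2rad(30.0)

# Concrete vectors consistent with those numbers (in 2D, for simplicity).
n_hat = np.array([1.0, 0.0])                          # unit normal of the surface
J = J_mag * np.array([np.cos(theta), np.sin(theta)])  # flux at angle theta to n_hat
S = S_mag * n_hat                                     # area vector

# The longhand rule: q = |J| |S| cos(theta) ...
q_long = J_mag * S_mag * np.cos(theta)

# ... and the shorthand of Eq. 1: q = J . S
q_dot = np.dot(J, S)

print(q_long, q_dot)   # identical, up to floating point
```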

However, apart from being a mere convenience of notation—a useful shorthand—this answer once again touches only on the end-purpose, viz., the fact that the idea of flux can be used to calculate the amount q of the transported property \Phi.

There is also another problem with this second answer.

Notice that in Eq. 1, \vec{J} has not been defined independently of the “dotting” operation.

If the very quantity to be defined appears on one side of an equation with an operator acting on it, and if a suitable anti- or inverse-operator is available, then you can apply the inverse operator on both sides of the equation, and thereby “free up” the quantity to be defined. This way, the quantity to be defined becomes available all by itself, and so its definition in terms of certain hierarchically preceding other quantities also becomes straightforward.

OK, the description looks more complex than it is, so let me illustrate it with a concrete example.

Suppose you want to define some vector \vec{T}, but the only basic equation available to you is:

\vec{R} = \int \text{d} x \vec{T}, (Eq. 2)

assuming that \vec{T} is a function of position x.

In Eq. 2, first, the integral operator must operate on \vec{T}(x) so as to produce some other quantity, here, \vec{R}. Thus, Eq. 2 can be taken as a definition for \vec{R}, but not for \vec{T}.

However, fortunately, a suitable inverse operator is available here; the inverse of integration is differentiation. So, what we do is to apply this inverse operator on both sides. On the right hand-side, it acts to let \vec{T} be free of any operator, to give you:

\dfrac{\text{d}\vec{R}}{\text{d}x} = \vec{T} (Eq. 3)

It is the Eq. 3 which can now be used as a definition of \vec{T}.
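
(The same inverse-operator idea can be seen in a toy Python/numpy sketch; the particular \vec{T}(x) below is just an assumed example, and the integral of Eq. 2 is taken as the cumulative integral starting from x = 0:)

```python
import numpy as np

# A made-up vector-valued function T(x), sampled on a fine grid.
x = np.linspace(0.0, 2.0, 2001)
T = np.stack([np.cos(x), 2.0 * x], axis=1)        # rows are (Tx, Ty) at each x

# Eq. 2: R(x) = integral of T dx  (here, cumulatively from x = 0).
dx = x[1] - x[0]
R = np.cumsum(T, axis=0) * dx

# Eq. 3: applying the inverse operator (differentiation) frees T again.
T_recovered = np.gradient(R, x, axis=0)

# Only a small discretisation error remains in the interior.
print(np.max(np.abs(T_recovered[1:-1] - T[1:-1])))
```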

In principle, you don’t have to go to Eq. 3. In principle, you could perhaps venture to use a bit of notation abuse (the way the good folks in the calculus of variations and integral transforms always did), and say that the Eq. 2 itself is fully acceptable as a definition of \vec{T}. IMO, despite the appeal to “principles”, it still is an abuse of notation. However, I can see that the argument does have at least some point about it.

But the real trouble with using Eq. 1 (reproduced below)

q = \vec{J} \cdot \vec{S} (Eq. 1)

as a definition for \vec{J} is that no suitable inverse operator exists when it comes to the dot operator.
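
(A small numerical illustration of that point, with assumed numbers: many different \vec{J}s produce exactly the same q for a given \vec{S}, so knowing q and \vec{S} alone cannot hand you back a unique \vec{J}.)

```python
import numpy as np

S = np.array([0.0, 0.0, 2.0])        # a made-up area vector

# Many different flux vectors give exactly the same q = J . S,
# as long as their components along S are identical ...
J1 = np.array([1.0, 0.0, 3.0])
J2 = np.array([0.0, -7.5, 3.0])
J3 = np.array([42.0, 1.0, 3.0])

print(np.dot(J1, S), np.dot(J2, S), np.dot(J3, S))   # all 6.0

# ... so, given only q and S, there is no way to "divide by S"
# and recover a unique J: the dot operator has no inverse.
```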


3.

Let’s try another way to attempt defining the flux vector, and see what it leads to. This approach goes via the following equation:

\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} (Eq. 4)

where \hat{n} is the unit normal to the surface \vec{S}, defined thus:

\hat{n} \equiv \dfrac{\vec{S}}{|\vec{S}|} (Eq. 5)

Then, as the crucial next step, we introduce one more equation for q, one that is independent of \vec{J}. For phenomena involving fluid flows, this extra equation is quite simple to find:

q = \phi \rho \dfrac{\Omega_{\text{traced}}}{\Delta t} (Eq. 6)

where \phi is the mass-density of \Phi (the scalar field whose flux we want to define), \rho is the volume-density of mass itself, and \Omega_{\text{traced}} is the volume that is imaginarily traced by that specific portion of fluid which has imaginarily flowed across the surface \vec{S} in an arbitrary but small interval of time \Delta t. Notice that \Phi is the extensive scalar property being transported via the fluid flow across the given surface, whereas \phi is the corresponding intensive quantity.

Now express \Omega_{\text{traced}} in terms of the imagined maximum normal distance from the plane \vec{S} up to which the forward-moving front extends after \Delta t. Thus,

\Omega_{\text{traced}} = \xi |\vec{S}| (Eq. 7)

where \xi is the traced distance (measured in the direction normal to \vec{S}). Now, using the geometric property of parallelograms (their area depends only on the base and the perpendicular height), we have:

\xi = \delta \cos\theta (Eq. 8)

where \delta is the traced distance in the direction of the flow, and \theta is the angle between the unit normal to the plane \hat{n} and the flow velocity vector \vec{U}. Using vector notation, Eq. 8 can be expressed as:

\xi = \vec{\delta} \cdot \hat{n} (Eq. 9)

Now, by definition of \vec{U}:

\vec{\delta} = \vec{U} \Delta t, (Eq. 10)

Substituting Eq. 10 into Eq. 9, we get:

\xi = \vec{U} \Delta t \cdot \hat{n} (Eq. 11)

Substituting Eq. 11 into Eq. 7, we get:

\Omega_{\text{traced}} = \vec{U} \Delta t \cdot \hat{n} |\vec{S}| (Eq. 12)

Substituting Eq. 12 into Eq. 6, we get:

q = \phi \rho \dfrac{\vec{U} \Delta t \cdot \hat{n} |\vec{S}|}{\Delta t} (Eq. 13)

Cancelling out the \Delta t, Eq. 13 becomes:

q = \phi \rho \vec{U} \cdot \hat{n} |\vec{S}| (Eq. 14)

Having got an expression for q that is independent of \vec{J}, we can now use it in order to define \vec{J}. Thus, substituting Eq. 14 into Eq. 4:

\vec{J} \equiv \dfrac{q}{|\vec{S}|} \hat{n} = \dfrac{\phi \rho \vec{U} \cdot \hat{n} |\vec{S}|}{|\vec{S}|} \hat{n} (Eq. 16)

Cancelling out the two |\vec{S}|s (because it’s a scalar—you can always divide any term by a scalar (or even by a complex number) but not by a vector), we finally get:

\vec{J} \equiv \phi \rho \vec{U} \cdot \hat{n} \hat{n} (Eq. 17)
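
(If you wish to check the arithmetic of this development, here is a small Python/numpy sketch. All the inputs, viz. \phi, \rho, \vec{U}, |\vec{S}|, \hat{n} and \Delta t, are made-up numbers, purely for illustration:)

```python
import numpy as np

# Made-up data: an intensive property, a mass density, a flow velocity,
# and an arbitrarily oriented flat surface.
phi, rho = 0.3, 1000.0
U = np.array([2.0, 1.0, 0.5])
S_mag = 0.04
n_hat = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
dt = 1e-3

# Eq. 6 with Eq. 12: q from the volume swept across the surface in dt.
Omega_traced = np.dot(U * dt, n_hat) * S_mag
q = phi * rho * Omega_traced / dt

# Eq. 17: J = (phi rho U . n_hat) n_hat, and then, since J is along n_hat,
# the amount transported is |J| |S|.
J = np.dot(phi * rho * U, n_hat) * n_hat
q_from_J = np.linalg.norm(J) * S_mag

print(q, q_from_J)    # the two agree
```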


4. Comments on Eq. 17

In Eq. 17, there is this curious sequence: \hat{n} \hat{n}.

It’s a sequence of two vectors, but the vectors apparently are not connected by any of the operators that are taught in the Engineering Maths courses on vector algebra and calculus—there is neither the dot (\cdot) operator nor the cross (\times) operator appearing in between the two \hat{n}s.

But let’s not get too perturbed by the weird-looking sequence. For the time being, you can mentally insert parentheses like these:

\vec{J} \equiv \left[ \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \right) \right] \hat{n} (Eq. 18)

and see that each of the two terms within the parentheses is a vector, and that these two vectors are connected by a dot operator so that the terms within the square brackets all evaluate to a scalar. According to Eq. 18, the scalar magnitude of the flux vector is:

|\vec{J}| = \left( \phi \rho \vec{U}\right) \cdot \left( \hat{n} \right) (Eq. 19)

and its direction is given by: \hat{n} (the second one, i.e., the one which appears in Eq. 18 but not in Eq. 19).


5.

We explained away our difficulty about Eq. 17 by inserting parentheses at suitable places. But this procedure of inserting mere parentheses looks, by itself, conceptually very attractive, doesn’t it?

If, without changing any of the quantities or the order in which they appear, merely inserting parentheses makes an equation begin to make perfect sense (i.e., makes it seem to acquire a good physical meaning), then we have to wonder:

Since it is possible to insert parentheses in Eq. 17 in some other way, in some other places—to group the quantities in some other way—what physical meaning would such an alternative grouping have?

That’s a delectable possibility, potentially opening new vistas of physico-mathematical reasoning for us. So, let’s pursue it a bit.

What if the parentheses were to be inserted the following way?:

\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 20)

On the right hand-side, the terms in the second set of parentheses evaluate to a vector, as usual. However, the terms in the first set of parentheses are special.

The fact of the matter is, there is an implicit operator connecting the two vectors, and if it is made explicit, Eq. 20 would instead be written as:

\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 21)

The \otimes operator, as it so happens, is a binary operator that operates on two vectors (which in general need not be one and the same vector, as happens to be the case here, and whose order with respect to the operator does matter). It produces a new mathematical object called a tensor.
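
(In Python/numpy terms, a small sketch with made-up numbers: the \otimes of Eq. 21 is just the outer product of the two vectors, and the resulting 3-by-3 array, dotted into \phi \rho \vec{U}, gives back the same flux vector as the grouping of Eq. 18:)

```python
import numpy as np

# Made-up numbers for the property, the density, the velocity, and the unit normal.
phi, rho = 0.3, 1000.0
U = np.array([2.0, 1.0, 0.5])
n_hat = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)

# The tensor (outer) product n (x) n is a 3x3 array ...
N = np.outer(n_hat, n_hat)

# ... and Eq. 21: J = (n (x) n) . (phi rho U)
J_tensor = N @ (phi * rho * U)

# ... which is the same vector as the grouping of Eq. 18: [(phi rho U) . n] n
J_grouped = np.dot(phi * rho * U, n_hat) * n_hat

print(np.allclose(J_tensor, J_grouped))   # True
```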

The general form of Eq. 21 is like the following:

\vec{V} = \vec{\vec{T}} \cdot \vec{U} (Eq. 22)

where we have put two arrows on the top of the tensor, to bring out the idea that it has something to do with two vectors (in a certain order). Eq. 22 may be read as follows: Begin with an input vector \vec{U}. When it is multiplied by the tensor \vec{\vec{T}}, we get another vector, the output vector: \vec{V}. The tensor quantity \vec{\vec{T}} is thus a mapping between an arbitrary input vector and its uniquely corresponding output vector. It may also be thought of as a unary operator which accepts a vector on its right hand-side as an input, and transforms it into the corresponding output vector.
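
(A small numerical aside, with an assumed and deliberately non-symmetric tensor: a second-order tensor is simply a linear map taking input vectors to output vectors, and, unlike the symmetric \hat{n}\hat{n} above, the side on which the vector is dotted into it generally matters; keep this in mind for Q. 6.2 below.)

```python
import numpy as np

# A made-up, non-symmetric second-order tensor, built from two different vectors.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0])
T = np.outer(a, b)              # a (x) b, with a and b in that order

U_in = np.array([1.0, 1.0, 1.0])

# Eq. 22: V = T . U  -- the tensor maps the input vector to an output vector.
V_right = T @ U_in              # dotting U in from the right: a (b . U)
V_left = U_in @ T               # dotting U in from the left:  (U . a) b

print(V_right, V_left)          # two different vectors: here, the order matters
```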


6. “Where am I?…”

Now is the time to take a pause and ponder over a few things. Let me begin doing that by raising a few questions for you:

Q. 6.1:

What kind of a bargain have we ended up with? We wanted to show how the flux of a scalar field \Phi must be a vector. However, in the process, we seem to have adopted an approach which says that the only way the flux—a vector—can at all be defined is in reference to a tensor—a more advanced concept.

Instead of simplifying things, we seem to have ended up complicating matters. … Have we, really? … Can we keep the physical essentials of the approach all the same and yet not have to make a reference to the tensor concept in our definition of the flux vector? Exactly how?

(Hint: Look at the above development very carefully once again!)

Q. 6.2:

In Eq. 20, we put the parentheses in this way:

\vec{J} \equiv \left( \hat{n} \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 20, reproduced)

What would happen if we were to group the same quantities, but alter the order of the operands for the dot operator? After all, the dot product is commutative, right? So, we could have easily written Eq. 20 rather as:

\vec{J} \equiv \left( \phi \rho \vec{U} \right) \cdot \left( \hat{n} \hat{n} \right) (Eq. 20′)

What could be the reason why in writing Eq. 20, we might have made the choice we did?

Q. 6.3:

We wanted to define the flux vector for all fluid-mechanical flow phenomena. But in Eq. 21, reproduced below, what we ended up having was the following:

\vec{J} \equiv \left( \hat{n} \otimes \hat{n} \right) \cdot \left( \phi \rho \vec{U} \right) (Eq. 21, reproduced)

Now, from our knowledge of fluid dynamics, we know that Eq. 21 seemingly stands only for one kind of flux, namely, the convective flux. But what about the diffusive flux? (To know the difference between the two, consult any good book/course-notes on CFD using FVM, e.g. Jayathi Murthy’s notes at Purdue, or Versteeg and Malalasekera’s text.)

Q. 6.4:

Try to pursue this line of thought a bit:

Start with Eq. 1 again:

q = \vec{J} \cdot \vec{S} (Eq. 1, reproduced)

Express \vec{S} as a product of its magnitude and direction:

q = \vec{J} \cdot |\vec{S}| \hat{n} (Eq. 23)

Divide both sides of Eq. 23 by |\vec{S}|:

\dfrac{q}{|\vec{S}|} = \vec{J} \cdot \hat{n} (Eq. 24)

“Multiply” both sides of Eq. 24 by \hat{n}:

\dfrac{q} {|\vec{S}|} \hat{n} = \vec{J} \cdot \hat{n} \hat{n} (Eq. 25)

We seem to have ended up with a tensor once again! (and more rapidly than in the development in section 4. above).
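
(Here is a small Python/numpy check of Eq. 25, with a made-up, arbitrarily oriented \vec{J} and an assumed \hat{n} and |\vec{S}|; both sides do come out to be one and the same vector:)

```python
import numpy as np

# A made-up flux vector (not necessarily normal to the surface) and a unit normal.
J = np.array([3.0, -1.0, 2.0])
n_hat = np.array([0.0, 0.0, 1.0])
S_mag = 5.0

# Left hand-side of Eq. 25: (q / |S|) n_hat, with q taken from Eq. 1.
q = np.dot(J, S_mag * n_hat)
lhs = (q / S_mag) * n_hat

# Right hand-side of Eq. 25: J . (n n) -- note that only the component of J
# along n_hat survives the operation.
rhs = J @ np.outer(n_hat, n_hat)

print(lhs, rhs, np.allclose(lhs, rhs))
```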

Now, looking at what kind of a change the left hand-side of Eq. 24 undergoes when we “multiply” it by a vector (which is: \hat{n}), can you guess something about what the “multiplication” on the right hand-side by \hat{n} might mean? Here is a hint:

To multiply a scalar by a vector is meaningless, really speaking. First, you need to have a vector space, and then, you are allowed to take any arbitrary vector from that space, and scale it up (without changing its direction) by multiplying it with a number that acts as a scalar. The result at least looks the same as “multiplying” a scalar by a vector.

What then might be happening on the right hand side?

Q.6.5:

Recall your knowledge (i) that vectors can be expressed as single-column or single-row matrices, and (ii) how matrices can be algebraically manipulated, esp. the rules for their multiplications.

Try to put the above developments using an explicit matrix notation.

In particular, pay attention to the matrix-algebraic notation for the dot product between a row- or column-vector and a square matrix, and the effect it has on your answer to question Q.6.2 above. [Hint: Try to use the transpose operator if you reach what looks like a dead-end.]
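
(To get you started on this exercise, here is a minimal Python/numpy sketch with assumed numbers, writing the vectors of the above development as explicit column matrices:)

```python
import numpy as np

# The vectors of the above development, written as explicit column matrices.
n = np.array([[0.0], [0.0], [1.0]])            # a 3x1 column matrix ("column vector")
F = np.array([[3.0], [-1.0], [2.0]])           # phi * rho * U, also as a 3x1 column

# The dyad n n^T is then an ordinary matrix product: (3x1) times (1x3) gives 3x3.
N = n @ n.T

# "Dotting" from the left vs. from the right now becomes a matter of where
# the transpose sits: N @ F is a column, F.T @ N is a row.
J_col = N @ F                                   # (3x3)(3x1) -> a 3x1 column
J_row = F.T @ N                                 # (1x3)(3x3) -> a 1x3 row

print(J_col.shape, J_row.shape)                 # (3, 1) and (1, 3)
print(np.allclose(J_col, J_row.T))              # True here, because N is symmetric
```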

Q.6.6.

Suppose I introduce the following definitions: All single-column matrices are “primary” vectors (whatever the hell it may mean), and all single-row matrices are “dual” vectors (once again, whatever the hell it may mean).

Given these definitions, you can see that any primary vector can be turned into its corresponding dual vector simply by applying the transpose operator to it. Taking the logic to full generality, the entirety of a given primary vector-space can then be transformed into a certain corresponding vector space, called the dual space.

Now, using these definitions, and in reference to the definition of the flux vector via a tensor (Eq. 21), but with the equation now re-cast into the language of matrices, try to identify the physical meaning of the concept of the “dual” space. [If you fail to, I will be sure to provide a hint.]

As a part of this exercise, you will also be able to figure out which of the two \hat{n}s forms the “primary” vector space and which \hat{n} forms the dual space, depending on whether the tensor product \hat{n}\otimes\hat{n} itself appears (i) before the dot operator or (ii) after the dot operator in the definition of the flux vector. Knowing the physical meaning of the concept of the dual space of a given vector space, you can then see what the physical meaning of the tensor product of the unit normal vectors (\hat{n}s) is, here.

Over to you. [And also to the UGC/AICTE-Approved Full Professors of Mechanical Engineering in SPPU and in other similar Indian universities. [Indians!!]]

A Song I Like:

[TBD, after I make sure all LaTeX entries have come out right, which may very well be tomorrow or the day after…]


4 thoughts on “Fluxes, scalars, vectors, tensors…. and, running in circles about them!”

  1. In your original equation you defined q in terms of the magnitude of the component of vector J which is normal to the surface, i.e., parallel to the unit normal of the surface, but in so doing you lose information on the direction of J. You later redefined J as being normal to the surface, which is not generally true. The J you came up with in terms of the surface normal vector and the magnitude of q is not the same J as you started with. Of course there is no inverse to the dot operator – it is a lossy operator that eliminates at least one dimension of the input parameters. Many inputs can produce the same output because there is a loss in the total number of independent parameters in executing the operation. In this case, many different magnitudes and directions of J can result in the same q, as long as their components parallel to n are all the same. So, what exactly are you trying to show?

    • 1. Could you please provide equation numbers, so that there is no ambiguity about what precisely it is that we are talking about.

      2. In the meanwhile, if it helps, let me add:

      Yes, you are an astute reader. In section 3., e.g. in Eq. 4, I do in fact redefine \vec{J} as being normal to the reference surface, whereas in Eq. 1, I had taken it as oriented at an arbitrary angle w.r.t. the surface. … It would take a careful set of eyes to spot it right on the first reading, and congrats are due to you for that.

      But, no, I don’t thereby lose the information on the direction of \vec{J} this way; it is very clearly being given by the unit surface normal \hat{n}, and the normal is explicitly present right from Eq. 4.

      An aside: In Eq. 6, \Omega_{\text{traced}} is a function of \theta, the angle between the fluid velocity vector \vec{U} and the unit normal \hat{n} to the reference surface \vec{S}. Thus, instead of having a flux be arbitrarily oriented, I have the surface be arbitrarily oriented. Yes, this is a redefinition of sorts of the terms, but that’s precisely a part of the fun!

      3. Could you please completely redo the development in terms of an arbitrarily oriented \vec{J}? I would be happy to have a look at it.

      4. What exactly am I trying to show?

      That a tensor can come up very unsuspectingly even in an apparently simple definition of a simpler quantity, a vector quantity. That’s quite unexpected. The running around in circles refers to this part—that a seemingly reasonable definition of a simpler quantity (vector) turns out to be making reference to a more abstract/more advanced quantity (tensor). That is one part.

      Another part is to show that tensors arise very naturally in fluids. Personally, this aspect had come as a pleasant surprise to me during my studies. I myself had thought that the best way to approach tensors was via solid mechanics, via the stress and strain tensors. Today I think that an arguably easier route goes through fluid mechanics. I wanted to just highlight it.

      I also wanted to have some related fun by trying to confuse the reader. … But this part was minor.

      After all, as a further part, I here also drive the reader almost to the point where he could easily identify the physical roots of the rather abstruse concept of a “dual” vector space. Show me a single text/notes/paper where any author does that—identifying the physical roots of “dual” spaces. At least I haven’t found any. …

      So, it was a multi-purpose post.

      5. But, before closing, let me repeat: Could you please show me how a development similar to that in section 3. might proceed, but completely on your own terms—keeping \vec{J} arbitrarily oriented? In particular, would you be able to escape the “a tensor–dotted to–a vector” structure in such a development? I would be happy to see how such a development might proceed.

      Thanks, again, for a very thoughtful (and quick and alert) comment though, and bye for now!

      Best,

      –Ajit

      • Hello, Ajit.
        It’s been over 30 years since I had to do any of that kind of math, so I will not attempt it here (and the current embedded system I am working on is screaming for my attention, so no respite there… ah well, no rest for the wicked!)

        My point with regard to information loss, however, is simply that the dot operator is what one might call a compression function of a sort, from the information perspective, in that it “compresses” multiple pieces of data into just one. Each vector requires multiple values to define it, that is inherently their nature, but a dot (or inner, as some prefer to call it) product combines all those separate pieces of information in a certain way to produce a single-dimensional value. No matter how one tries, one cannot get two independent pieces of information from just one without another one as input, that other one having some (at least partial) independence from the first.

        Your redefinition of the surface in order to align its unit normal with the original J works, but it assumes you already have information on the vector field defining that flux in the first place, and it requires that the surface over which you integrate that flux (if you desired to know total flux) is closed. Any flow both entering and leaving the volume would be unaccounted for, since it necessarily could not be normal to the surface at every point IF the entire surface was embedded within the flow stream and therefore was subject to flow (assuming the surface is finite). But if the surface was defined in such a way that it could be partitioned into one area with total flux going in (say, negative total flux), another with total flux going out (positive total flux), and an arbitrarily small separation between the two where the surface has net zero flux, then the shape of the surface could be kept manageable (such as an imaginary “flow tube” in laminar flow – that cylinder you get by defining a tube along flow lines and with ends normal to the flow stream) and would not end up with difficult-to-manage characteristics (such as points or regions of undefinable curvature – a differential geometry nightmare). I suspect that one may require reasonably laminar flow to make such a thing computationally manageable. The assumption is, I guess, that one already has information on the flow structure and then defines the surface with respect to that structure – and that is that extra information: knowing the flow structure a priori.

        However, I take your point regarding the view of tensors from the fluid-mechanical perspective. I vaguely recall my first encounter with basic tensor math when studying the stress/strain relationships in solids, and I think the problem some people had was that they had difficulty visualising the non-parallel effects of a stress, that is, how it is that a longitudinal stress creates both a longitudinal strain AND transverse strains which all then result in stresses in other directions and so forth, and how much worse it gets with non-homogeneous materials, and how such a confusing “mess” can simply be managed with a matrix (or matrices) of coefficients that relate each direction’s stress to another direction’s strain, and then how that matrix is turned into a simple mathematical symbol with sub/super scripts and manipulated with other similarly perverse-looking symbols and so forth. I recall some similar confusion when dealing with the mathematics of products of inertia in rotating systems… it all gets quite confusing. Perhaps there was just not enough in-depth study of matrices, and the mathematical tools for handling them, in the first place… or at least that is my perspective all these years later. From the fluids point of view, though, it might be a bit much to ask of most freshmen. Then again, I have not been in school for over 30 years so I really cannot say. If one can visualise what a tensor’s components represent, it makes understanding them all that much easier. It just gets very confusing when dealing with abstract tensors such as those relating different co-ordinate systems to a given phenomenon (such as you find in general relativity theory). All too hairy. Leave it for folk who have the time.

        Must get back to this embedded thing.

        Cheers.

      • I am “happy” that both the text in the main post and my reply above came out to be really confusing. As you know, that was a part of the whole game.

        Yes, the argument in the main post is erroneous. (And my reply above continues the game!)

        The reason that, despite being erroneous, the argument in the main post looks convincing or “apparently solid” is not because I possess great skills in confusing people (I actually don’t). It’s because of outright bad pedagogy concerning tensors. Which, in turn, is because of bad working epistemology concerning anything related to higher/abstract maths. In particular, they (the mathematicians and physicists) always take pains to emphasize the rotational invariance of tensors. So, when a part of the argument seems to involve a rotational invariance, they just tend to think that the argument must be solid—even if it actually is wrong. If only they began identifying the physical roots of the idea of tensors (esp. the asymmetrical tensors)…. Once you do that, it is easy to see that the rotational invariance is a consequence—an inevitable, but, fundamentally speaking, a non-essential aspect of what makes a tensor, a tensor.

        Anyway, no point continuing to play the game with you; you hit the nail on the head right on the first read.

        Guess I will at least drop a few hints for the correct answers, before I close this thread.

        The flux vector (in the context of flow phenomena) is defined as \vec{J} \equiv \rho \phi \vec{U}, full stop. No further dot or tensor products are involved in the very definition of the flux itself! To nail the error further, one just has to realize that the flux vector remains the same regardless of the plane across which it acts, whereas the traction vector involved in the definition of the stress tensor changes with the orientation of the plane of the cut. …
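
        To make that contrast concrete, here is a small Python/numpy sketch (the flow data and the stress tensor below are made-up numbers, purely for illustration): the flux vector \rho \phi \vec{U} stays one and the same no matter which plane you consider, whereas the traction vector \vec{t} = \vec{\vec{\sigma}} \cdot \hat{n} changes with the orientation of the cut.

```python
import numpy as np

# Made-up flow data: the flux vector is rho * phi * U, full stop.
phi, rho = 0.3, 1000.0
U = np.array([2.0, 1.0, 0.5])
J = rho * phi * U

# A made-up (symmetric) stress tensor, for contrast.
sigma = np.array([[10.0,  2.0, 0.0],
                  [ 2.0, -5.0, 1.0],
                  [ 0.0,  1.0, 3.0]])

# Two differently oriented cutting planes (unit normals).
n1 = np.array([1.0, 0.0, 0.0])
n2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)

# The flux vector itself does not change with the plane; only the amount
# transported across each plane (J . n) does.
print(J, np.dot(J, n1), np.dot(J, n2))

# The traction vector, in contrast, changes with the orientation of the cut.
print(sigma @ n1, sigma @ n2)
```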

        As to the diffusive vs. convective (or even advective) fluxes, well, though no textbook or notes or research paper says it, the fact of the matter is, the definition for the diffusive flux also remains (or at least can remain) “exactly” the same—except for the “little” difference that the \vec{U} appearing in the definition now refers to an imaginary velocity of the advancement of a certain front via only the diffusive flow. This velocity is in general different from what we usually mean by the velocity field of fluid flow. The velocity which we usually mean by the velocity field of a fluid flow is what appears in the definition of the convective flux. The velocity vector used for the diffusive flux would be different. That’s the only difference—when it comes to defining the very flux itself, even for a diffusive flow. Textbooks and notes typically completely gloss over this detail, and rather than pointing out the existence of two different flow velocities, they directly come to state the diffusive flux only in terms of its flux-to-gradient law, in terms of the corresponding gradient field (\vec{J}^{\text{D}} = - \Gamma \nabla \phi). [Maybe they are afraid to ascribe anything even just seemingly local to any aspect of the phenomenon of diffusion, that’s why.]

        As to the dual space, I will write about it still some other time. Enough to say right away that if by a vector space you mean “an arrow from a point/along a line“, then the simplest example of its dual space is the arrow of the normal identifying a plane. The difference between a line and a plane is illustrative of the difference between a vector space and its dual. The equation to a line is linear, and refers to two points through which it goes (or, just one point if you shift the origin to the other point); the equation to a plane also is linear, but it refers to the reciprocals of the intercepts made to the three Cartesian axes. Keep that “intuition” (i.e. geometrico-physical basis) alive at the back of your mind, and the whole difficulty about dual spaces crumbles down in no time.
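
        And if it helps, here is a tiny numerical check of that geometric picture (the intercepts a, b, c below are made-up numbers): the coefficients of the plane’s linear equation, i.e. the reciprocals of the intercepts, are exactly the components of a (non-unit) normal to the plane.

```python
import numpy as np

# A made-up plane, given via its intercepts on the three Cartesian axes:
#     x/a + y/b + z/c = 1
a, b, c = 2.0, 3.0, 6.0

# The coefficients of that linear equation -- the reciprocals of the intercepts --
# serve as the components of a (non-unit) normal to the plane.
n = np.array([1.0 / a, 1.0 / b, 1.0 / c])

# Check: the three intercept points all satisfy n . r = 1, and any vector lying
# in the plane (a difference of two such points) is orthogonal to n.
P1, P2, P3 = np.array([a, 0.0, 0.0]), np.array([0.0, b, 0.0]), np.array([0.0, 0.0, c])
print(np.dot(n, P1), np.dot(n, P2), np.dot(n, P3))   # all equal to 1.0
print(np.dot(n, P2 - P1), np.dot(n, P3 - P1))        # both equal to 0.0
```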

        OK, bye for now, and best,

        –Ajit
