Taking square roots with geometry and being stalked by a number

René Descartes' La Géométrie (french / english) is a historically important work, not only because it marked one of the early attempts at uniting geometry and algebra, but also because it's an instructive read on how one can take a different perspective on the nature of numbers.

In the actual text he starts out by characterizing addition, multiplication, and taking square roots as geometric operations on lines, and it is from this last operation that I will derive this post. To find (construct) the square root (radical) of a number, represented as a length proportional to some unit length, you can follow a rather simple procedure. I have sketched it here:

ConstructionOfSquareRoot
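The reason the construction works (assuming my sketch shows the standard construction, with the unit segment and the segment of length a laid end to end as the diameter of a half-circle and a perpendicular of height h raised where they meet) is the classic similar-triangles argument: the triangle with the diameter as hypotenuse and its top vertex on the circle has a right angle at that vertex (Thales), and the perpendicular splits it into two similar triangles, so

\cfrac{h}{1} = \cfrac{a}{h} \quad \Longrightarrow \quad h^2 = a

which is exactly what is needed for h to be the square root of a.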

 

It's not an original construction of course; it can be found in Euclid (Book VI, Prop. 13), but it's a nice concrete representation to have if you somehow doubt that you can truly take square roots of any (positive) real number, and not just of those which occur in certain right triangles. Still, what I was really looking for when I was working on this was a way of providing a visual construction which relates proportions of lengths to proportions of areas in a transparent way. Since I can take the square root in this construction as the side of a square with the corresponding area, I get the following illustrations:

PropotionalSquaresToLengths

Proportions of length next to equal proportions of area

 

 

From this we might observe a slight curiosity: for 1 and 2 the square is not entirely contained in the half-circle, but for some value between 2 and 3 it 'slips into the circle'. Now, for what number does this happen? I.e. for what proportion do we get the following case?

SquareInHalfCircle

Square inside a half-circle

 

 

If x is the side of the square then from the construction we know that x^2 = x + 1, and dammit, that's the equation for the golden ratio \phi. Explicitly,

\phi = \cfrac{1}{2}(1 + \sqrt{5})
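For the record, here is a sketch of where x^2 = x + 1 comes from, done with coordinates rather than purely geometrically (and assuming, as in my sketches, that the square of side x stands on the diameter to the right of the perpendicular). Place the diameter on the x-axis with the unit segment running from 0 to 1 and the segment of length x^2 from 1 to 1 + x^2, so the half-circle has its center at (1 + x^2)/2 and radius (1 + x^2)/2. The square then occupies the interval from 1 to 1 + x with height x, and its far upper corner (1 + x, x) lies on the circle precisely when

\left(1 + x - \cfrac{1 + x^2}{2}\right)^2 + x^2 = \left(\cfrac{1 + x^2}{2}\right)^2

which, after expanding, cancelling the squared radius, and dividing by x, reduces to x^2 = x + 1.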

Might as well highlight the golden rectangle.

GoldenRectangle

Golden rectangle highlighted. The bigger rectangle is also golden

Of course I should have seen this coming since this is indeed one of the known ways you can construct golden rectangles and ratios but still, it was kind of funny.

 

Playing around with Euclid and constructions

I was playing with Euclid's Elements perhaps two months back when I had some spare time (or rather when I was bored with whatever it was I was supposed to do). I had never actually read it before, and I guess there are two reasons for that. Firstly, there aren't any modern translations of it into Swedish, and the older ones read like an equivalent of Shakespearean English with hard-to-read spelling, so coming into contact with it as a kid is not impossible but nevertheless hard. Also, in school geometry isn't taught in the spirit of Euclid; the emphasis is more on using geometric formulas to teach algebra, and once trigonometry enters the picture Euclid's presentation seems somewhat irrelevant.

Having read the first book and skimmed through some parts which interested me, I'm not yet in the camp that yearns for the return of 'axiomatic geometry' in schools. Even though I find the presentation appealing, I still agree more with the camp that thinks education is supposed to be organized around developing useful skills, and yeah, I'm not convinced that this is what axiomatic geometry does. But it is interesting, and I can perhaps appreciate the structure more now that I have a better grasp of how mathematics works in general.

Still, what I've appreciated the most looking into this area are the constructions (arguably the least relevant aspect of it). I have a special place in my heart for 'proof by construction' and of course the Elements is rife with them. Even if you discard the proof aspect of geometry, just sitting and playing with the (virtual) straightedge and compass turns out to be a ridiculous amount of fun. You start with the question 'Can I do it?' and then you play around with the tools to see what you end up with. Just two tools end up yielding absurd levels of complexity, and compared with the 5+ operations of algebra it's arguably simple.

This phase of trying out straightedge constructions has also coincided with me wishing to learn how to draw graphs in TikZ (QTikZ), so I get to practice it by outlining construction schemes.

For example, one of the first questions I asked was how to find, with compass and straightedge, the circle which passes through three arbitrary non-collinear points in the plane. The reason that was the first question I asked was really that it's the first computer lab you do at KTH (my school) using linear algebra. That lab has a follow-up question where you take an arbitrary number of points and are supposed to find an approximate circle with the least squares method, but let's ignore that.

What I've found so lovely working with these tools is simply that the constructions almost always seem to practically write themselves once you get the hang of the rules of the game. At least to me.

In this case we of course start by developing the basic operation of finding a perpendicular to / bisecting a line segment, which follows from the first proposition in the book.

Constructperpendicular

Bisecting a line segment or drawing a perpendicular

Now this is of course obvious to everyone (especially Americans, who I'm told actually learn in school who Euclid was/might have been), but these things aren't in the Swedish curriculum anymore, so to me it's been like discovering something new and beautiful. You get an almost childish joy from it when you look at it as puzzles rather than formal mathematics.

Now, for finding the circle which passes through three points I really started from a simpler question: how could one recover the center of a circle in case it had been lost? For me the fundamental property of a circle is not really its definition but that it is a perfectly symmetric shape, so in cutting a circle (disc) in half I should always end up cutting it along a diameter. From this principle the following procedure for finding the circle's center, using a chord, is to me very natural:

Image

Finding the center of a circle

A proof that it works can be made, but it's intuitively obvious that it works even without one.

From this, the idea that the perpendicular bisector of a chord passes through the center of the circle, it is immediately obvious that the perpendicular bisectors of two different chords should intersect at the center. This simple idea is all you need to at least guess that the following procedure, using two virtual chords, should construct the circle.

Image

Finding a circle which passes through three non-collinear points

Of course at heart, or at least in the way I thought about it while doing it, this was really not a proof that you can draw a circle through any three points, but rather that you can reconstruct a circle from three points that were taken to lie on an 'existing' circle. But as I said, these intuitive constructions always seem to come with the lines necessary for forming a formal proof, and by just drawing 3 extra lines I can complete it to a formal existence proof with a simple argument.

Image

Verifying that the circle passes through all three points

The uniqueness part is also simple to see, as the two perpendicular bisectors can only meet at one point in our geometry. (All this is probably in Euclid, but I've been more into solving stuff for myself than reading through it.)

I’ve done this with a tonne of different constructions now and the overall feeling I’ve gotten is that this really should be quite easy to turn into a puzzle game. Maybe I’m wrong but so long as the constructions are put in a certain order (the way I discovered them in this case) it could be a pretty entertaining game. Maybe you need to be able to see the nature of the corresponding proofs for it to work but I wouldn’t think so. I think it should work just fine.

Let me just round off by outlining the algebraic solution to the question of the existence, uniqueness, and construction of a circle from three points. Let (a_1,a_2), (b_1, b_2), (c_1, c_2) be the coordinates of the three points. Then any center (x,y) of a circle with radius R intersecting all three points must satisfy the three equations

(x - a_1)^2 + (y - a_2)^2 = R^2

(x - b_1)^2 + (y - b_2)^2 = R^2

(x - c_1)^2 + (y - c_2)^2 = R^2

which may be cast into linear form by subtracting the second from the first and the third from the second, giving us

2(b_1 - a_1)x + 2(b_2 - a_2)y = b_1^2 + b_2^2 - a_1^2 - a_2^2

2(c_1 - b_1)x + 2(c_2 - b_2)y = c_1^2 + c_2^2 - b_1^2 - b_2^2

(the third possible difference being redundant). Then we have a linear system which has a unique solution precisely when the points are not collinear, as can be seen either from the determinant or from the fact that the rows of the corresponding matrix are (twice) the direction vectors between consecutive points and hence linearly independent exactly in that case. The radius can then be recovered as the distance between (x,y) and any of the three points.
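Just to tie the two approaches together, here is a minimal sketch in Python (using NumPy) of this linear-system solution; the function name circumcircle and the test points are my own choices, not anything from the lab.

import numpy as np

def circumcircle(a, b, c):
    """Center and radius of the circle through three non-collinear points.

    Solves the 2x2 linear system obtained by subtracting the circle
    equations pairwise, as in the text above."""
    a, b, c = map(np.asarray, (a, b, c))
    M = 2.0 * np.array([b - a, c - b])              # rows: 2(B - A), 2(C - B)
    rhs = np.array([b @ b - a @ a, c @ c - b @ b])  # differences of squared norms
    center = np.linalg.solve(M, rhs)                # singular exactly when the points are collinear
    radius = np.linalg.norm(center - a)             # distance to any of the three points
    return center, radius

# Quick check against a circle we already know: the unit circle
print(circumcircle((1, 0), (0, 1), (-1, 0)))        # ~ (array([0., 0.]), 1.0)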

This is arguably the simplest solution, since all the issues of construction, existence and uniqueness are handled by one general procedure, but you also need much more advanced machinery to do it. The other one takes next to none.

Learning about Olinde Rodrigues

When I first encountered the Dirac equation in a particle physics course a few months ago I decided that I needed to delve more deeply into the mathematical areas which lie at its core and surround it, which means a return to algebra, group theory and, to some extent, topology. The physical principles behind many of the equations and the reasoning in the genuine physics literature are fairly sound, but I can't help feeling as if I'm standing at the bottom of a valley, unable to make out the shape of the world outside it.

I've picked up some books which I decided would at least be interesting regardless of whether they end up helping me resolve the questions that arose, with the intent of reading them over the summer. One of them was Simon L. Altmann's Rotations, Quaternions, and Double Groups, and it is not often that I find a book where the foreword and introduction act as more than simple filler. The content in this case was a historical exposé on two men and their work: William Rowan Hamilton, the father of quaternions, and another mathematician called Olinde Rodrigues. Now who is Rodrigues, I asked? I knew of only one formula which bears such a name, and he was indeed the first to give, in 1816, the so-called Rodrigues formula for the Legendre polynomials, known to every physics student who ever wanted to solve the Schrödinger equation for hydrogen or any other partial differential equation with spherical symmetry.

Apparently Olinde Rodrigues, who lived in France all his life, was responsible for many other things, as I learned reading more about this man whom I had previously only known as a disembodied name attached to a formula. And though I found his story interesting in itself, it was compounded by the fact that time and time again, reading an online article on him, I would find Altmann attached to it in some way, chronicling his life and adding new details upon each retelling. I find some joy in the thought that every man, in a way, has another dedicated to his memory. Whether this dedication is just I will have to decide upon actually reading what came of his work directly, but apparently, besides working on Legendre polynomials, he also made contributions to combinatorics, an odd essay on continued fractions, and banking, which was his actual profession (though he was a trained mathematician). Altmann's primary interest, however, is in Rodrigues' work on rotations (the rotation group) and the connection to quaternions, algebra and physics.

One of the simpler questions, for example (and I'm being ironic about that), is how to construct from two subsequent rotations a single rotation which acts in the same way, and furthermore how to describe a general rotation in a transparent and efficient way. One alternative is Euler angles, which I remember vaguely from my course in rigid body mechanics, and Rodrigues provided his own take on how rotations could be represented, with his own parameters and representation.

The question of how to compose two rotations formally is, by the way, fundamental to understanding how a rigid body such as a sphere can move through space. A rather simple construction shows that no matter how a rigid body has been moved about its center of mass, it can be returned to its original configuration with two simple rotations.

For this we can use the analogy of a solid sphere with a thin transparent shell encapsulating it, on which we consider two pairs of points which are initially superimposed. A and B are two points on the inner sphere, and A' and B' are two points on the shell which initially lie on top of the two points on the inner sphere. Then the outer shell is moved around in some way, and the claim is that the two points A' and B' can again be superimposed on the points A and B by two simple rotations. The first rotation returns A' to A and is done by rotating the outer shell around the normal to the plane AOA', where O is the common center of the sphere and the shell. Then, keeping A' fixed, we rotate the shell around the axis formed by OA (or OA'), which superimposes B' onto B (which can be done since the distance AB is the same as A'B'). And we're back where we started.

Furthermore, two rotations composed are equivalent to a single rotation, a fact which to our modern sensibilities could be proved with matrix arguments. But the question still remains how to get from the two rotations to the third in a good way, and what the algebra associated with such a composition is.
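As a modern-sensibilities illustration of that matrix argument, here is a small Python sketch (my own, and not Rodrigues' original parametrization) which builds rotation matrices from an axis and an angle via the rotation formula that nowadays carries his name, composes two of them, and reads off the axis and angle of the single equivalent rotation.

import numpy as np

def rotation_matrix(axis, angle):
    """Rotation by `angle` (radians) about the unit vector `axis`,
    via R = I + sin(t) K + (1 - cos(t)) K^2 (Rodrigues' rotation formula)."""
    x, y, z = axis / np.linalg.norm(axis)
    K = np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])   # cross-product matrix of the axis
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def axis_angle(R):
    """Recover axis and angle of a rotation matrix (generic case, angle not 0 or pi)."""
    angle = np.arccos((np.trace(R) - 1) / 2)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return axis / (2 * np.sin(angle)), angle

# Compose a rotation about x with one about z; the product is again a single rotation
R = rotation_matrix(np.array([1.0, 0.0, 0.0]), 0.7) @ rotation_matrix(np.array([0.0, 0.0, 1.0]), 1.1)
print(axis_angle(R))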

Of course the happy accident is that studying these fairly concrete things actually gives us insight into quantum field theory but I’m not going to complain.

Now back to Rodrigues. What makes him interesting, besides his proximity to interesting things, are some of the circumstances of his life. For one, he was of Jewish descent living in the early 19th century, a time when some of the restrictions placed on Jews in terms of educational opportunity were being lifted, or at least relaxed, paving the way for the unarguable contributions of Jewish mathematicians. But this issue was still far from resolved and may have influenced his choice to favor banking over academics. To me, however, the most entertaining part is seeing him follow in that proud tradition of French mathematicians and scientists of getting embroiled in rather non-mathematical enterprises.

This seems almost a condition of the French around this time in history, and Rodrigues was no different. Of course this is at heart a consequence of the fact that 'political' and 'French' might be argued to have been inseparable terms from the revolution of 1789 and for close to a century after it. Though that's arguably stretching things a bit.

Nevertheless we may often find a political connection. I am reminded of how Fourier acted as prefect under Napoleon and was supposedly rather wacky in his final years. Évariste Galois was a vocal opponent of Charles X's ambitions for an absolute monarchy and was arrested after allegedly making a toast to his demise. Binet, on the other hand, was perhaps too supportive of Charles X and had to resign his position in 1830 when the latter was deposed, the same event which also drove Cauchy out of France until 1838.

Olinde Rodrigues, on the other hand, had his political work connected not to the ruling forces but instead to an obscure political and religious movement called Saint-Simonism, originally headed by (you guessed it) the then comte de Saint-Simon (Claude Henri de Rouvroy), usually simply referred to as Saint-Simon. His ideas have their place in the 19th-century development of utopian socialist thought. However, as with so many thinkers, he drove himself close to financial ruin and, depressed and disappointed, attempted to take his life in 1823.

He then placed his watch on his desk and wrote his last thoughts until the allotted time arrived. His aim, alas, was so poor that of the seven shots to his head only one penetrated his cranium, causing him to lose one eye but not his life. (Frank Manuel, as quoted by Altmann)

Poor Saint-Simon, but had he not lived our narrative of Rodrigues would have been ever so much duller, as Olinde Rodrigues, in the process of supporting Saint-Simon's recovery and writing over the following years, grew to regard his work as prophetic and immensely important. Though my interest was originally in Rodrigues, I took the time to read one of Saint-Simon's last works, New Christianity (original french), which exemplifies the religious character of his later writings and which I would summarize as emphasizing the social agenda of Christianity in working to improve the condition of the poorer classes.

After Saint-Simon's death in 1825, Rodrigues, along with others, formed what was essentially a sect devoted to Saint-Simonism, holding rites where he and others would be referred to as Père by the brothers of the movement. It had its ups and downs as I interpret it, culminating in a more charismatic follower, Enfantin, taking charge of it and turning it towards sexuality and the morality of incest and adultery (because of course that's what was going to happen). Rodrigues eventually broke with the movement, which died out shortly afterwards, but it's an entertaining episode.

Besides the aforementioned foreword, some of the texts by Altmann I read on the subject of Olinde Rodrigues were this shorter article along with a longer preview chapter of an AMS book, and furthermore yet another Altmann article focusing more on the history of quaternions.

Ranting about division in education and stumbling into an old friend while doing it

I hang out a lot on study aid forums these days, which I enjoy both as a means of continuously refreshing what I already know and as a way of keeping track of how the curriculum has been changing since I moved on to university studies. One of the hot topics these days in Swedish media is the state of our schools and how we are falling behind in international rankings and so on, and every year since 2008 there has been some half-assed attempt to 'fix' it by changing up the curriculum or moving obligations to and from teachers. The perhaps most major reform took place in 2011 when the gymnasium (high school equivalent) curriculum was revamped, something which mostly entailed ditching much of the classical geometry for some reason, making up the difference with more (elementary) calculus, and making some areas of discrete math and number theory mandatory to 'seek to smoothen the transition to higher education'.

I don't particularly agree with all of the priorities, but at least it has meant that I get to help gymnasium students with some new types of problems as the confused students bring them to the forums. Among the things which have entered the standard curriculum are modular arithmetic and prime factorization, and the students are just awful at them. Now, I too had problems understanding residues and working with modular arithmetic at that stage, but having moved on, that difficulty really makes no sense at all. I mean, you work with residues in elementary school, for Christ's sake. The first thing you do when learning to divide whole numbers is to separate a number into a quotient and a residue, like 13 = 4 \cdot 3 + 1, although you tend to write it like 13/3 = 4 + 1/3, often just discarding the 'indivisible part' as an uncomfortable absurdity to be dealt with later. I think this desire to be able to divide everything is the principal reason why kids are often very willing to accept rational and, by extension, real numbers: they solve this problem for them. But in the process everyone seems to forget that integers aren't just real numbers but form their own domain of logic and have their own special rules.

This rush in education to move on to real numbers before really developing the integers properly is understandable, and it works out more or less fine, but it is kind of pathetic that we end up with 16 to 18 year olds who can't conceptualize Euclidean division anymore, even though it should have been among the first things they learned once upon a time. The main culprit is of course memory decay, but I also suspect that calculators play a part in this corruption of the concept of division: from a certain point onward students aren't asked to do division by hand anymore, and so division itself as an operation fades into obscurity.

Now, I don't actually think that a person needs to be able to perform difficult arithmetic by hand or in their head (I for one am terrible at it), but these things are at heart algorithms, and the ability to understand and develop algorithms is priceless. Long division, for example, is a beautiful algorithm, and understanding it requires that one apply it often enough throughout one's education that one is given the opportunity to discuss the logic behind it.
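Just to make concrete what I mean by long division being an algorithm, here is a minimal sketch of the pencil-and-paper procedure in Python (the function name and the digit-string trick are my own choices):

def long_division(dividend, divisor):
    """Digit-by-digit long division of a non-negative integer by a positive one.
    Returns (quotient, remainder), mirroring the pencil-and-paper procedure."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)    # 'bring down' the next digit
        quotient_digits.append(remainder // divisor)
        remainder = remainder % divisor            # what is left carries over
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, remainder

print(long_division(13, 3))   # (4, 1), i.e. 13 = 4*3 + 1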

That rounds off my little rant, but consequently I felt the need to just play around with division and try to find more varied and hopefully good examples of problems which force a student to work out residues and actively work with decompositions of the form p = nq + r. Now, I have never actually done any work with continued fractions, since as far as I know they are completely ignored by physics and I'm far from a good mathematician who has studied every mathematical topic there is in detail, but I had been looking at them lately since I have been interested in representations of irrational numbers. Anyhow, I found them to be an excellent example of a situation where one naturally finds the need to perform Euclidean division.

So, just to be consistent, I should say that a (finite) continued fraction is a rational number written in the particular form

a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{\ddots + \cfrac{1}{a_n}}}}= [a_0;a_1,a_2,...,a_n]

For example

\cfrac{13}{17} = 0 + \cfrac{1}{\cfrac{17}{13}} = 0 + \cfrac{1}{1 + \cfrac{4}{13}} = 0 + \cfrac{1}{1 + \cfrac{1}{3 + \cfrac{1}{4}}} = [0;1,3,4]

A rational number can be rewritten in this way algorithmically, and the relevant part is simply that in each step you're doing Euclidean division of the denominator by the numerator, as illustrated by:

\cfrac{a}{b} = \cfrac{a}{na + r} = \cfrac{1}{n + \cfrac{r}{a}}
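Since the whole point of the rant was the algorithm, here is a minimal Python sketch of it (the function names are my own); each loop iteration performs exactly one Euclidean division via divmod:

from fractions import Fraction

def continued_fraction(p, q):
    """Continued fraction expansion [a0; a1, ..., an] of p/q (q > 0)."""
    terms = []
    while q:
        a, r = divmod(p, q)   # p = a*q + r with 0 <= r < q
        terms.append(a)
        p, q = q, r           # the tail is 1/(q/r), so continue with q/r
    return terms

def evaluate(terms):
    """Fold an expansion back into a single fraction, innermost term first."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

print(continued_fraction(13, 17))   # [0, 1, 3, 4]
print(evaluate([0, 1, 3, 4]))       # 13/17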

Now that closes the book on this theme; the rest of this post should just outline my embarrassment and joy at accidentally choosing a particular fraction. So, I mentioned I don't work with continued fractions much? In fact never. Well, instead of reading anything about them I decided to just play with them and see if I could figure out anything interesting on my own. (It is a weekend after all.) I just wanted to see what some infinite continued fraction would turn out to be, so I picked the most obvious infinite continued fraction I could imagine and set out to work it out. I chose [1;1,1,1,...] because that just seemed reasonable somehow. I got to 8 ones ([1;1,1,1,1,1,1,1]) before I realized what it was turning out to be.

It's kind of awkward to stumble into the one number I find to be the most misrepresented and overrated number in mathematics education on my first go at evaluating an infinite continued fraction. Makes me look like an idiot for calling it so.

How Walter Rudin broke apart my mind 2 years after his death

Every single day of working with Principles of Mathematical Analysis (Baby Rudin) I hated the book. I hated my teacher for making this obviously non-introductory text the mandatory text for the course. I hated myself for reading through the pages at the speed a 6 year old reads prose. I hated the way a proof would just appear as if out of thin air with no motivating paragraphs as to what was to be achieved or why this or that route was taken. I hated it all — but most of all I hated Rudin himself.

At that point I had already taken 2 years of physics and maths and I thought of myself as having a fairly good grip on things. Sure, the stuff I had read wasn't what I would now call rigorous, but it was intuitive. Most of it had been about applying methods and stating theorems rather than really proving why they worked beyond outlining the 'idea' or intuitive principles. Sure, there were theorems to be proven on tests from time to time, but even then it boiled down to memorizing them and not really thinking much about how a proof is properly structured. That's totally fine, but it had in no way prepared me to think critically about math and to differentiate between what I knew I could prove and what I knew because I had been told. To make the jump from this background to working with chains of 12+ sequential proofs, where every statement is built on another statement running back to the simplest of notions, completely overwhelmed me. I felt like my mind simply wasn't built to keep track of it all.

It took me half a semester to recover my senses and figure out the structure of the book and how to work with it, but in the end it was the closest thing to a transformative experience I have had since going to university. I was both humbled and more curious once the course was done, and I don't think I would have been if it hadn't been for Rudin's desire for the book to be (essentially) self-contained and lacking in embellishments as to the supposed meaning of it all, forcing you to think for yourself.

The book still has too many problems for it to be good enough to warrant its widespread usage, but to me it was still one of the most important reads I've made in the past 3 years. To actually have become aware of what I knew, didn't know, and can know about mathematics is priceless.

There are only 3 real unital algebras of dimension 2. Awesome

This is such a trivial result that it hardly merits a post but it’s quite elegant and I was really happy to have come across yet another application of the involution-idempotent-relation which I mentioned briefly at the end of my second blog post as an example of the sort of general things and techniques which I always strive to find.

So, just a brief summary of concepts to start us off: an algebra is just a natural extension of the concept of a vector space; the algebra itself is a vector space which you have endowed with some form of multiplication. Think matrix algebras, where you can add and multiply but not necessarily divide, and you have essentially captured the whole concept, as all algebras can in a sense be represented as matrix algebras. An algebra is said to be real if the corresponding vector space is real, and finally it is said to be unital if it has a multiplicative unit, an element 1 which leaves everything invariant: 1a = a. (Mathematical purists might stress the properties of the multiplication operation, but as I said, it's just supposed to have the same properties as matrix multiplication, where we essentially only need to remember that it need not be commutative: ab \neq ba.)

The prototypical real algebra is the set of complex numbers \mathbf C, where the natural basis elements are 1 and i and where every element is written as a linear combination of the two: a + ib. The complex numbers form a real 2-dimensional unital algebra and, as it turns out, one of only 3 possible such algebras. The classification is fairly simple and can be done in terms of matrices, but that's not really necessary. Here goes:

Essentially, if A is such an algebra then the fact that it is 2-dimensional means every element can be written as a (real) linear combination of 2 basis elements, a \cdot 1 + b x = a + bx, where we may take 1 and x as the basis. Now, inspired by the complex numbers, we consider the square of x, x^2, which naturally is also an element of A, and so there are real numbers a, b such that

x^2 = a + bx

But similarly to completing the square when solving a quadratic equation, we may construct from x a new element y whose square is a multiple of 1, that is, 'real':

y^2 = (x - b/2)^2 = a + b^2/4

If need be we can further rescale y so that its square is either -1, 0, or 1; let's call this normalized element i. The case where i^2 = -1 is already familiar so we need not dwell on it, but we are nevertheless left with two other potential algebras. Let's see what they are.

Let's begin with the case where i^2 = 1. It turns out that this is also a very trivial and/or familiar kind of algebra. Now what is i? It's an involution! So from it and the unit 1 we can construct two idempotent elements

p_1 = \frac{1}{2}(1 + i), \qquad p_2 = \frac{1}{2}(1 - i)

which have the properties

p_1^2 = p_1, \qquad p_2^2 = p_2, \qquad p_1 p_2 = p_2 p_1 = 0

And since they are linearly independent we may take them as a basis for the algebra and we immediately identify the algebraic structure

(a p_1 + bp_2)(cp_1 + dp_2) = ac p_1 + bd p_2

which is just 'term-wise' multiplication, the trivial algebra on \mathbf{R}^2. Another name one can use for this algebra is the 'Double numbers'.

The third type of algebra, the one with i^2 = 0, is also interesting and you have probably seen it before, but it does not permit an illuminating rewrite; it simply is what it is. Thanks to a friend at the forums I can attribute to them the name the Dual numbers: http://en.wikipedia.org/wiki/Dual_number
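Since I claimed above that algebras can in a sense be represented as matrix algebras, here is a small Python sketch of one such representation of the three cases (the particular matrices are my own choice, with 1 as the identity), verifying that i^2 is -1, +1 and 0 times the unit respectively and checking the idempotents of the double numbers:

import numpy as np

I2 = np.eye(2)                                # the unit 1 in all three algebras
i_complex = np.array([[0., -1.], [1., 0.]])   # i^2 = -1  (complex numbers)
i_double  = np.array([[0.,  1.], [1., 0.]])   # i^2 = +1  (double numbers)
i_dual    = np.array([[0.,  1.], [0., 0.]])   # i^2 =  0  (dual numbers)

for name, i in [("complex", i_complex), ("double", i_double), ("dual", i_dual)]:
    print(name, "\n", i @ i)                  # compare with -I2, I2 and the zero matrix

# The idempotents of the double numbers, as in the text
p1, p2 = (I2 + i_double) / 2, (I2 - i_double) / 2
print(np.allclose(p1 @ p1, p1), np.allclose(p2 @ p2, p2), np.allclose(p1 @ p2, 0))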

So there we are: there are the Complex numbers, the Double numbers and the Dual numbers. Funnily enough, and I might return to this in a future post, each of these systems links to one of the 3 types of rotations and geometries: trigonometric, hyperbolic and parabolic. A summary full of color can be found in this arXiv article: http://arxiv.org/abs/0707.4024

Economy from behind an armchair

Of all the subjects or things in the world, I think nothing confuses people like economics. We all get the gist of it. We know what money is, what we can use it for, and how this complicated flux of transactions flows parallel to everything that happens in the world, small or large. But the minute you try to stop and think about how it all works, it feels like you'll get a brain hemorrhage, and I have to question whether I'm really sentient, because no principle I design actually seems to work at all.

Maybe this is not universal, but for me I think this experience stems from the sense that an understanding of economics should be attainable from simply sitting in an armchair, smoking a proverbial cigar, while theorizing about reasonable scenarios. I mean, it's just a man-made thing, completely theoretical, and we're all a part of it every day, so a small list of guiding principles and axioms and we should get this whole thing straightened out, right? But no, it just doesn't work that way, and of course that's because economics is actually an empirical science (or at least discipline) which cannot be derived from philosophy. It's much too complex. Still, it's annoying, because as a layman my primary tools are reflection and 'deduction', and when those tools consistently yield absurd conclusions one is left without much to go on.

Now, what I was really discussing today, pertaining to economics, was why house prices are increasing so much more rapidly in Stockholm and Sweden than you might expect. There are a number of theories of course, most of them quite easily summarized to a layman (and therefore probably horribly incorrect), such as it having to do with housing shortages (demand), low interest rates, a good economy and optimism, plain stupidity, a combination of those, or none of those. There are many consequences of these trends, but the one I was discussing over this evening's tea session was how taxation affects people's desire to sell their properties. The prototypical example is an old couple which bought their apartment decades ago for what is today next to nothing, but who now can't be motivated to move out since they would have to pay huge amounts of tax on the nominal profit that is inevitably made from selling the apartment, even if they move to one of equivalent market value. It's a profit in the eyes of the state, but it obviously has a lock-in effect. Our discussion was essentially centered on the soundness and fairness of such taxation rather than on the nature of the price increases, with me taking the side that such profits are real and must be taxed like all other profits, and my conversational partner arguing points such as that it would improve mobility if such profits could be offset or deferred according to some scheme.

This was just standard political debate, but it did inspire me to think of a particular thought experiment which does yield one of those seemingly absurd conclusions. The concept was simply how a loss-averse individual selling an object of 'constant value' should behave in a system with both inflation and a profit tax which does not correct for inflation (i.e. every real system).

Assume the commodity was bought for 1 unit of currency and that its intrinsic value, be there such a thing, does not change over a period of time during which there has been some inflation, so that the equivalent value of the commodity is now (seemingly) increased to i (for example 1.20). If the commodity is sold simply for its 'true value' i, then the seller will not recover the true value, since the nominal profit i - 1 will be taxed at some rate r (for example 0.2). If the seller is loss-averse, then he should sell it for more than its 'true value', at some selling price p. The principle should be that what remains after taxation is what one was 'entitled to': the nominal increase in value which was only due to inflation. This relation is provided by the equation

(p - 1)(1 - r) = i - 1

which is solved by

p = \frac{i - r}{1 - r} \geq i
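With the example numbers i = 1.20 and r = 0.2, this gives p = (1.20 - 0.2)/(1 - 0.2) = 1.25: selling at 1.25 yields a nominal profit of 0.25, of which 0.05 is paid in tax, leaving exactly the 1.20 the seller was 'entitled to'.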

What is funny is that under the hypothesis of a constant 'true value' the loss-averse seller in this system is forced to over-price his commodity in order not to make a loss, but since it has a true value no buyer will accept the price, for it will be perceived as their loss if they do. Of course there are resolutions to this apparent stationary state if we abandon any one axiom, such as loss aversion or the notion of a true price, or simply allow the two parties to meet in the middle with the taxing party redistributing the taxed money in a fair way. But truly the funniest correction is to let buyers be less loss-averse than sellers (which is sometimes reasonable), in which case naive recursion will artificially drive up the price upon every reselling.

Thoughts on the needs associated with understanding something

Despite being the subjects I've actually studied the least, education, teaching and learning remain three of my favorite (interconnected) subjects to think about. Of course, it would be strange if I didn't think about those things almost daily. Looking at my current situation and my life so far, I have spent all of it within some institution which has actively sought to instill me with knowledge and morality, and as such it is almost hard to separate my own person from my thoughts on those institutions and on the process of learning itself.

The thought that has bugged me today has simply been how, at different stages of studying something, we have different needs with regard to how we approach it, essentially what we need in order to obtain a sense of understanding it. Understanding something, as I see it, stems from being able to work with a set of related concepts and models in a qualitative way: to be able, within a certain framework, to answer a question in a way that is socially acceptable in the given setting. In this I incorporate the admittedly pretty nonsensical way of looking at the self as a society of one, which is just supposed to reflect that my own sense of understanding is something that develops from essentially talking to myself, providing a narrative that makes sense when spoken back to myself. I don't need a real person to agree with me to feel I understand something, but I need to feel like I could explain it to one if I had to. To imagine a dialogue through the monologue.

And this idea that understanding arises from a form of internal dialogue shapes what I need in order to feel like I understand something, because as I grow I become able to ask myself more questions challenging my own ability to form an appropriate narrative. For example, lately I've been trying to quantify, on my own, the length of the day as it relates to the orbit of the earth around the sun. I feel that I definitely understand why the length of the day varies over the course of the year as the direction of the earth's tilt relative to the sun varies. By simply imagining a model of the earth with various circles corresponding to latitudes, and the shadow the sun casts on the earth dividing it in half, as it would at midsummer and midwinter, I definitely feel like I have understood the essence of the phenomenon. I feel like I could manage explaining the idea to someone who was simply asking that one question without much follow-up. However, it still gnaws at my soul somehow that I can't quantify the rate of change of the length of the day over the year. Essentially: during what part of the year is the change in the length of the day the greatest?

I think the reason I feel like asking that question, or think that it's an important question, is that I ought to be able to answer it, and therein lies what I think is a core drive in seeking understanding. You have to be confident that you can understand something. If you aren't, then you will not try. That is why instilling intellectual confidence in a student is so important, and probably what we too often fail to do.

Silly rubber band. Why can’t you be more like a spring.

So last Tuesday evening I was lazing around on the forums as usual, giving my two cents on how to solve simple math and physics problems. In particular there was this one gymnasium student who had done a physics lab in school with a rubber band and who was outlining how he wanted to interpret it. He didn't post anything more, so there was no discussion to be had, and I do suspect that what I saw as a fundamental flaw or issue was really just a misrepresentation. Nevertheless, the description itself still spurred me to do a little mini experiment that evening.

 

He had suspended a small weight from the rubber band and simply measured the elongation of the rubber band as a result of the gravitational force it induced. Then he postulated that the rubber band would follow Hooke's law

F = k x

determined the expected 'spring constant' k from this single measurement, and used this linear extrapolation to find the weight of another, heavier object suspended in the same way from the elongation it caused. The problem is of course that rubber bands don't elongate linearly when you apply a force to them, as you would see if you took more than a single measurement. His error was a natural one, but it gave me a bit of a laugh when it reminded me of the famous Coulomb torsion experiment for studying the electrostatic force, where he apparently disclosed only 3 measurements (data points) and argued on their basis alone. There is naturally a bit of historical debate as to how he did his measurements and why, but the point I take from it is that it's really rather silly not to make lots of measurements when the apparatus is right in front of you. It's so much easier to catch your errors when you do.
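For what it's worth, here is roughly what I would have liked the student to do instead, sketched in Python; the numbers in the arrays are placeholders standing in for actual measurements, not data from his lab or mine:

import numpy as np

# Placeholder (load, elongation) pairs; replace with real measurements.
loads = np.array([0.1, 0.2, 0.3, 0.4, 0.5])                  # N
elongations = np.array([0.010, 0.025, 0.050, 0.090, 0.150])  # m

# Single-point 'calibration' a la the student: k from the first point only
k_single = loads[0] / elongations[0]

# Least-squares fit of F = k x over all points (line forced through the origin)
k_fit = (loads @ elongations) / (elongations @ elongations)

# Compare what each k predicts for the largest elongation with the load actually applied there
print(k_single * elongations[-1], k_fit * elongations[-1], loads[-1])

With more than one point you immediately see that the two estimates of k disagree, which is exactly the non-linearity announcing itself.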

So back to me. I decided to spend a couple of hours setting up a simple experiment with household objects to loosely characterize a rubber band's elongation as a function of applied weight (load). I attached a rubber band to some string along a ruler, and at the other end of the string attached to the rubber band I hung a small cardboard cup which I filled up with standard weights in the form of some iron balls I had lying around as part of an old toy set. As I loaded up the cup with balls, one by one, I measured the elongation relative to the length with a single ball in the cup, and doing this twice, a day apart, gave the elongation response displayed in the graph below.

Image

It only took me 3-4 hours total to work on, and I was pretty happy with at least capturing the non-linearity with things I only had to stretch for. I do recognize a number of flaws though, the primary one being that it was difficult to increase the load in a continuous way, since I effectively had to drop the balls into the cup; the impact, and the decrease in effective weight once a ball has stopped moving, causes some oscillations, and once elongated the band won't quite return to the same length as before the load, making the graph a little misrepresentative. I also probably should have been in less of a rush to do the measurements, as it sometimes took time for the band to properly settle. In the last 5 measurements I observed that it would not actually settle into its proper elongation until after 45 seconds, which I had failed to account for. Essentially all measurements in the second run past the jump in the red graph are probably too flawed to be reliable, but the measurements from the two different days otherwise agree pretty well, which makes them look reasonably reliable.

The natural length of the rubber band was about 6 cm and it finally snapped when it was extended to 26 cm. Pictures of the actual setup at: http://imgur.com/a/KmTyF