That thing which used to hang above the stairway was a so-called Goethe barometer. For a long time I had maintained an image in my mind that the bulbous shape of the device was mostly for aesthetic purposes, or to store excess fluid to limit the effect of evaporation, and while those are probably part of the reason, they're not the most interesting one.

In thinking about this design we should think of it as a variation of a U-shaped tube with one end capped. One can imagine calibrating the device by originally having the tube open at both ends, at which point the surface levels in the two columns are equal, and this is retained when one end of the tube is capped so long as the enclosing process doesn't compress the volume. If the atmospheric pressure then decreases, a difference in pressure between the trapped gas pocket at one end and the atmosphere is established, and the water level at the open end is raised as it is pushed up by this difference in pressure. Pascal's law tells us the difference in pressure between the trapped gas and the atmosphere is related to the height difference of the two levels according to

$latex p_g - p_{atm} = \rho g \, \Delta h$

Thus such a device can be used as a form of barometer. One of the problems though is that it's difficult to extract absolute pressure information from the system. Ideally we'd like to be able to derive the pressure difference relative to the pressure when the barometer was closed, but unfortunately the pressure in the trapped volume is not constant: a water level change can only occur if the volume of the enclosed gas changes, which necessitates a change in its pressure.

Thus while qualitative information about whether the pressure is higher or lower than the pressure at sealing can be gathered, the relation isn't straightforward.

What we’ll do now is to try to arrive at a formula relating the height difference in an ideal barometer and see if we can gather some insights as to why the Goethe barometer looks the way it does in the process. We’ll use this diagram for the principal quantities during calibration and after a pressure change.

We have two kinds of relations from which to mathematically extract the quantities: geometric and physical.

**Geometric constraints.** Under the condition that the liquid is incompressible, a constraint exists relating the height changes. If the surface bordering the trapped gas (area $latex A$) moves by $latex \Delta h_A$ while the surface in the open tube (area $latex a$) moves by $latex \Delta h_a$, then

$latex A \, \Delta h_A = a \, \Delta h_a$

(the volume of the liquid is constant).

**Physical constraints.** We neglect temperature variations, capillary forces, vapor pressure and other presumably secondary phenomena, and assume the principal physics are Pascal's law and the pressure in the trapped gas being governed by the ideal gas law. With $latex p_0, V_0$ the pressure and volume of the trapped gas at sealing,

$latex p_g V_g = p_0 V_0$

(ideal gas law at constant temperature)

$latex p_g - p_{atm} = \rho g \, \Delta h$

(Pascal's law)

**Solution:** Our goal will be to relate the exterior pressure $latex p_{atm}$ to the height difference $latex \Delta h$. From the geometric constraints we get

$latex \Delta h = \Delta h_a + \Delta h_A = \left(1 + \frac{a}{A}\right) \Delta h_a$

and, for the volume of the trapped gas, $latex V_g = V_0 + A \, \Delta h_A = V_0 + a \, \Delta h_a$, so that altogether

$latex p_{atm} = \frac{p_0 V_0}{V_0 + a \, \Delta h_a} - \rho g \, \Delta h$

Now let us think about what characterizes the Goethe barometer: the vessel containing the trapped gas is much wider than the tubular part open to the air. Eyeballing it, $latex a/A$ is in the grand scheme of things so small that it can be neglected, which besides implying $latex \Delta h \approx \Delta h_a$ also simplifies the expression to

$latex p_{atm} = \frac{p_0 V_0}{V_0 + a \, \Delta h} - \rho g \, \Delta h$

In effect one will still have to include the change in volume of the trapped gas, since the relative volume change $latex a \, \Delta h / V_0$ is of the same order of magnitude as the relative pressure variations of normal open air in a real device, and this equation doesn't simplify further. Also, if the surface bordering the trapped gas changes in area as it moves vertically, this equation fails and you need to compute directly with the volume changes instead of going via the areas.

One could imagine using a very, very large $latex V_0$, in which case the trapped gas would remain close to its original pressure.

Finally, since many classrooms contain a genuine U-tube where the areas of the two openings are the same, I'll write down the equation for that case:

$latex p_{atm} = \frac{p_0 V_0}{V_0 + a \, \Delta h / 2} - \rho g \, \Delta h$
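As a sanity check of the relations above, here is a small numerical sketch in plain Python (the function and variable names are my own; the setup — sealing pressure $latex p_0$, trapped gas volume $latex V_0$, closed-side area $latex A$, open-tube area $latex a$ — follows the constraints above). It solves the combined ideal-gas/Pascal equation for the level changes by bisection.

```python
RHO_G = 1000.0 * 9.81  # density of water (kg/m^3) times g (m/s^2)

def level_change(p, p0, V0, A, a):
    """Drop x of the closed-side surface when the exterior pressure is p.

    Sealed at pressure p0 with trapped gas volume V0; closed-side
    surface area A, open-tube area a.  Uses:
      volume conservation:    A*x = a*y   (y = rise of the open surface)
      ideal gas (isothermal): p_g*(V0 + A*x) = p0*V0
      Pascal's law:           p_g - p = RHO_G*(x + y)
    """
    def f(x):
        y = A * x / a                     # open-side rise
        p_g = p0 * V0 / (V0 + A * x)      # gas pressure after expansion
        return p_g - p - RHO_G * (x + y)  # zero at equilibrium

    # f is strictly decreasing in x; bracket the root and bisect.
    lo, hi = -0.999 * V0 / A, 10.0        # keep the gas volume positive
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For a wide vessel (large $latex A$) the returned drop $latex x$ is tiny while the open-side rise $latex y = Ax/a$ carries essentially the whole height difference, which is exactly the point of the bulb.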

Okay, I think that's all for now, though the model could use some development, principally with respect to temperature corrections, both at equilibrium and changes induced by the work involved in compressing or expanding the gas.

]]>

In reading the material this week, however, I came across a pretty neat problem which I'd like to present some solutions for. In Skott et al's Matematik för lärare Didaktik (Mathematics for teachers), page 270, we find a suggested seminar activity which mentions the following (meta) problem.

**“A math book contains the following exercise: “Consider the right triangle with hypotenuse 8, and altitude relative to the hypotenuse 5. What is the area?” This exercise contains an error. Find it.”**

Superficially this is about the concept of computing the area of a triangle using base (foot) and height (altitude), and the student will be reminded of the important idea that a triangle has more than one height depending on what is considered the base. In actuality, though, it is about first reaching the conclusion that the hypotenuse and the altitude to the hypotenuse are not independent quantities but are in fact connected by mathematical relations.

Since the reader might want to solve or engage with the problem themselves I’ve inserted the read-more breaker here to act as a loose spoiler-marker but the point here will be to investigate some different ways of presenting the solution.

**Posing the problem in formal terms:**

As the exercise contains only two numerical quantities and only one piece of structural information (the triangle being right), one should quickly suspect that the two quantities must somehow be incompatible. How you arrive at this conclusion is personal, but the basic conclusion is that the altitude can be at most half the hypotenuse, $latex h \le c/2$.

Since 5 is more than half of 8, the triangle described in the original problem cannot be real. Why that is the case, or why one would suspect it, becomes linked to one's representation of the problem.

**Approach 1 (Algebraic)**

One approach is to try and find the values of the legs of the right triangle and identify an algebraic hurdle to finding them which exposes the inconsistency of the problem. Using the Pythagorean theorem and the definition of area (computed both from the legs and from the hypotenuse and its altitude) we get two equations

$latex a^2 + b^2 = c^2, \qquad ab = ch$

which I then rewrite as

$latex (a+b)^2 = c^2 + 2ch, \qquad ab = ch$

Now this is a system of equations of a sum-product type which can be solved in various ways, but most directly by observing that $latex a$ and $latex b$ must be solutions to the quadratic equation

$latex t^2 - \sqrt{c^2 + 2ch} \, t + ch = 0$

For this equation to have real solutions its discriminant must be non-negative,

$latex (c^2 + 2ch) - 4ch \ge 0$

but this simplifies to

$latex c^2 - 2ch \ge 0$

and thus

$latex h \le \frac{c}{2}$

This is fairly direct and transparent, and could have been approached using the explicit quantities c = 8, h = 5 had one not been comfortable with generalities.
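The algebraic approach is easy to check numerically. A minimal sketch in plain Python (the function name is mine): it returns the legs when the discriminant allows them, and `None` when, as for c = 8, h = 5, no real triangle exists.

```python
import math

def legs(c, h):
    """Recover the legs a, b of a right triangle with hypotenuse c and
    altitude-to-hypotenuse h.  a and b are the roots of
    t^2 - s*t + c*h = 0 with s = sqrt(c^2 + 2*c*h); returns None when
    the discriminant c^2 - 2*c*h is negative (no such triangle)."""
    disc = c * c - 2 * c * h
    if disc < 0:
        return None
    s = math.sqrt(c * c + 2 * c * h)   # s = a + b
    a = (s + math.sqrt(disc)) / 2
    b = (s - math.sqrt(disc)) / 2
    return a, b
```

For c = 8, h = 3 the legs come out real and satisfy both $latex a^2 + b^2 = 64$ and $latex ab = ch = 24$; for c = 8, h = 5 the function returns `None`.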

**Approach 2 (Using the circumcircle)**

If one is aware that every triangle has a circumcircle, and that the circumcircle of a right triangle has the hypotenuse as its diameter, one could draw the following diagram which exposes the inequality fairly directly: the altitude cannot exceed the circle's radius $latex c/2$.

**Approach 3 (Using the midpoint of the hypotenuse)**

A property of right triangles is that the line segment connecting the midpoint of the hypotenuse to the point at the right angle has the same length as half the hypotenuse.

If one draws such a line segment it will expose a small right triangle for which the original altitude is a leg and this new segment is the hypotenuse.

The statement now follows from the hypotenuse being the longest side in a right triangle.

Okay, that's it for now.

]]>

So the model of teaching quadratics in most of the books I’ve seen typically runs something like this.

- You start out with some preliminaries about factoring techniques, like the patterns $latex (x+a)(x-a) = x^2 - a^2$ and $latex (x+a)^2 = x^2 + 2ax + a^2$.
- You learn to solve the simplest quadratic $latex x^2 = a$ by essentially explaining the pattern $latex x = \pm\sqrt{a}$.
- You state the zero-product property (ZPP), that $latex ab = 0$ implies $latex a = 0$ or $latex b = 0$, and solve some quadratics that have already been factored, like $latex (x-1)(x+2) = 0$.
- You learn the factoring method for solving quadratics without a constant term, $latex x^2 + bx = 0$, by factoring to $latex x(x+b) = 0$ and then applying the ZPP.
- You (might) learn how to complete the square
- You quickly use the method of completing the square to run through the proof of the pq-formula, $latex x = -\frac{p}{2} \pm \sqrt{\left(\frac{p}{2}\right)^2 - q}$ for $latex x^2 + px + q = 0$. (This uses neither the ZPP nor the conjugate rule.)
- You (possibly) spend a couple of hours drilling the aforementioned methods.

Now functionally this route of progression definitely works for conveying how to solve these kinds of equations, but there are still some things I don't quite like about it. First of all, the methods reduce to three separate cases which superficially seem to have nothing to do with each other:

$latex x^2 = a, \qquad x^2 + bx = 0, \qquad x^2 + px + q = 0$

So whenever one solves a problem like this one first has to identify the type, which is actually a lot harder to do than people who can already do it think it is. Also, pedagogically, the arrow of progression isn't clear. Step 1 as presented has nothing to do with step 2. The knowledge of step 3 is not used beyond step 4, and if the only initial use of completing the square is to prove the pq-formula, the student will forget about this super important pattern and will have to relearn it at uni, if ever.

I’ve thus been thinking if the scheme for learning how to work with quadratics might instead be replaced with:

- Preliminaries on factoring, difference of squares, square of sum, distributive property.
- Zero product rule and applying it to factored equations (This should be complemented by graphing)
- Solving by factoring with the difference of squares, $latex x^2 - a^2 = (x+a)(x-a) = 0$, and applying the zero product rule, sneaking in problems of the type $latex x^2 = a^2$.
- Solving by factoring with the distributive property
- Learning to complete the square
- Solving general quadratics by completing the square and then factoring with the conjugate rule

- Derive the pq-formula from the factoring. (Now this is admittedly a lot messier than the normal way of completing the square and taking roots directly.)

Now there are many downsides to this scheme. For example, step 3 is harder than the normal way, on account of the normal way of applying operations to both sides being more analogous to solving linear equations, which is probably why this isn't how it's usually done, but it would be interesting to try it (or hear about it if there is a book or school system which approaches it this way). The final step, the method of factoring a general quadratic directly with the difference of squares, is a technique which I find would be a useful addition to the scheme, as I think it neatly exposes the connection between the three different forms in which you can write a quadratic expression in an algorithmic way, instead of in a more formally logical way (invoking the general factor theorem).

Finally I'd just also like to summarize the method outlined above as a full algorithm, because it more or less supersedes all of those messy "6 different ways, factoring by grouping" methods which I get when I google factoring.

Full algorithm for factoring a quadratic expression over the reals

- Rearrange all the terms so that you have expression = zero
- Complete the square of the expression (ignore if there is no linear term)
- Rewrite the constant term as a square (possibly by writing $latex d = (\sqrt{d})^2$ when $latex d \ge 0$)
- Apply the conjugate rule to the difference of two squares
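The steps above can be sketched in a few lines of plain Python (the function name and the choice to return the roots are mine): complete the square of $latex x^2 + px + q$, rewrite the constant as a square, and apply the conjugate rule.

```python
import math

def factor_quadratic(p, q):
    """Factor x^2 + p*x + q over the reals by completing the square
    and the conjugate rule.  Returns the roots (r1, r2) so that
    x^2 + p*x + q = (x - r1)*(x - r2), or None if irreducible."""
    # Complete the square: x^2 + p*x + q = (x + p/2)^2 - (p*p/4 - q)
    d = p * p / 4 - q
    if d < 0:
        return None  # the constant cannot be rewritten as a real square
    # Rewrite the constant as a square and apply the conjugate rule:
    # (x + p/2)^2 - (sqrt(d))^2 = (x + p/2 - sqrt(d)) * (x + p/2 + sqrt(d))
    return (-p / 2 + math.sqrt(d), -p / 2 - math.sqrt(d))
```

For instance $latex x^2 - 5x + 6$ gives the roots 3 and 2, i.e. the factorization $latex (x-3)(x-2)$, while $latex x^2 + 1$ comes back as irreducible.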

]]>

This is not sufficient to describe circuits with more complicated topologies, such as bridges like the Wheatstone bridge, but it is sufficient for at least engaging in some basic design.

The problem which came to mind as I was reviewing the material was how one would set out to build a circuit element with a specified resistance from a given set of elements. Usually it's the other way around: you get the component and you compute the resistance. But the inverse problem, getting a specified resistance and then designing the circuit from some pieces, is of course a lot more interesting, and as we shall see, in this special case surprisingly straightforward.

Let us study this particular problem: **“Given a set of resistors with resistance $latex R$, how does one combine these to form an element with a specified fractional resistance $latex \frac{p}{q}R$?”** If we want an integer multiple or integer fraction of a specific resistance then things are straightforward: just chain $latex n$ together to get a resistance $latex nR$, or put $latex n$ in parallel to get $latex \frac{R}{n}$, and this is the smallest number of equal components you need.

For fractional resistances of a more general form however there are clearly very different ways you can go about designing the circuit and there are different benefits to them

**Solution one:** Construct $latex q$ chains with $latex p$ resistors each and put those chains in parallel.

The chains have resistance $latex pR$, and putting $latex q$ of them in parallel reduces the total resistance by a $latex q$th to $latex \frac{p}{q}R$.

**Solution two:** Another way is to first create $latex p$ blocks of $latex q$ resistors in parallel and then put those blocks in series. **However**, both of these solutions have the disadvantage that they require a total of $latex pq$ resistors, which for most fractions becomes very impractical.

My question was therefore if there was a simple way to get a component with this resistance but which wouldn't require quite so many resistors. Let us just start off by making clear that it is possible. These two components you see below both have the same resistance $latex \frac{3}{2}R$

but the right one was built using only 3 resistors instead of 6, which (if you count by means of manufacturing cost) is a more effective solution. And the process of computing the resistance of the latter composition betrays the coming idea.

(I will henceforth just set $latex R = 1$.) Usually one would have used $latex {}^{-1}$-notation for the parallel coupling, but not doing that reveals the continued fraction expression associated with this construction.

The central idea can be laid out as follows, **inductively**. If you want to design a component with resistance $latex p/q$ (where $latex p/q$ is a reduced fraction with $latex p < q$), perform Euclidean division $latex q = np + r$ and rearrange it according to

$latex \frac{p}{q} = \frac{1}{n + \frac{r}{p}} = \frac{1}{\frac{1}{1/n} + \frac{1}{p/r}}$

We now recognize the right hand side as the expression for the resistance of two resistors with resistances $latex 1/n$ and $latex p/r$ in parallel. The idea can diagrammatically be represented by

The resistor $latex 1/n$ is constructed by putting $latex n$ resistors in parallel, and the resistor $latex p/r$ is constructed by repeating the induction step (first splitting off the integer part of $latex p/r > 1$ as resistors in series).

This can be streamlined by first computing the continued fraction of the fraction and then moving backwards to form these construction steps. Take for example the continued fraction for $latex 7/24$:

$latex \frac{7}{24} = [0;3,2,3] = \cfrac{1}{3 + \cfrac{1}{2 + \cfrac{1}{3}}}$

which now contains a recipe for designing the composite resistor

where the top 3-parallel component corresponds to [0;**3**,2,3], the 2 in series to [0;3,**2**,3], and the final 3-parallel block at the bottom to [0;3,2,**3**].

Not all continued fractions, however, have such small totals in their expansions, and especially when the numerator and denominator are consecutive Fibonacci numbers (whose continued fraction is all 1s) we'll still end up having to use a very large number of resistors.

What is kind of nice though is that irrational numbers which have a quickly convergent continued fraction can be approximated pretty accurately. For example, $latex \pi$ has the continued fraction [3;7,15,1,292,1,1,…], which means you can use the approximation $latex [3;7,15,1] = \frac{355}{113}$, which using only 26 resistors gives 6 digits of pi, which is pretty neat.
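The bookkeeping is easy to automate. A sketch in plain Python (names are mine): repeated Euclidean division gives the continued fraction, and the sum of its terms counts the unit resistors the ladder construction uses. Note that this algorithm produces the ending […, n] rather than the equivalent […, n−1, 1]; the value and the term sum are the same.

```python
def continued_fraction(p, q):
    """Continued fraction [a0; a1, a2, ...] of p/q by Euclidean division."""
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

def resistors_needed(p, q):
    """Unit resistors used by the ladder construction: one alternating
    series/parallel block per continued-fraction term."""
    return sum(continued_fraction(p, q))
```

For the example above, `continued_fraction(7, 24)` gives `[0, 3, 2, 3]`, so 7/24 needs only 8 resistors instead of the 168 of the naive constructions, and 355/113 indeed sums to 26.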

I’ve got half a mind to build this circuit to see if it checks out but unfortunately cheap hobby resistors only have about 1% accuracy to them so you’d only really end up with 3 digits tops.

EDIT: Many of the diagrams in the original publications had errors in them which made them not agree with the text and which were corrected around one hour later.

]]>

I had gone primarily to get some stimulus and inspiration for mechanical configurations and mechanisms which are simple enough to turn into elementary physics problems but which aren't completely disjointed from applications. Like the hand-crank wheel-turning mechanism on a quaint car-like thing, where the crank moves a rack back and forth which is linked to an elongated arm connected to the wheel joint.

The principal novelty, however, which I would like to point to a little, was one I found in a machine that wasn't even working. It was another crank mechanism where, by turning a crank, you set in motion a sequence of chains and gears which lifted some balls up to some slides of different curvature, illustrating the principles of the brachistochrone problem. Everything about this machine was delightfully improvised, which is perhaps why the electronics had broken down, rendering the thing inoperable, but what I want to talk about instead is a curious element in the crank mechanism.

It is admittedly hard to make out in this image, but the odd parts were the two rubber elements which are in contact with each other in the middle.

When the crank is pushed forward, nothing of note happens. The whole axle, rubber parts and all, moves together, turns the chains, and everything works.

It is when you try to push the crank in the reverse direction that something novel happens. Since the machine isn't supposed to be run in reverse, there are reasonably two possible things that might happen: either there is a ratchet somewhere which prevents reverse motion entirely, or something similar to a freewheel is put in, so that the crank becomes disengaged and you can move it backwards with zero resistance.

Neither of those things happened here, however. At first, if you pull it in reverse, there is a full stop; seemingly there is a ratchet somewhere down the line. But if you try to turn the crank back harder, the right rubber element somewhat dramatically begins to slip, and you can turn back the crank almost freely.

It feels really, really weird when this happens, but also oddly familiar: a snapping you shouldn't encounter in a purely mechanical system. The reason these rubber elements stuck together when the torque on the crank was low, such as in forward motion and gentle reversed motion, is that they weren't sticking together by friction at all. There was a pair of magnets embedded in the rubber keeping them together.

When the torque on these rubber elements becomes too high the magnets are pulled apart and the crank can move backward freely.

This was just so bafflingly weird that it must have been an improvised fix to some problem they realized too late was present in the overall design. My guess is that the ratchet preventing backwards motion of the system (not specifically of the crank) is located too far away from the crank, somewhere down the line, in such a way that reverse torque puts strain on something in the fixture, wood or plastic, which risks breaking if pulled back too hard. This is for children after all, and if they can break it, they will. The magnets guarantee that the torque on the axle has a manageable maximum, at which the magnetic link snaps instead of something else in the machine.

]]>

The problem which I'm going to review here was inspired by the pretty well known result that two consecutive Fibonacci numbers are relatively prime, for which a proof by the Euclidean algorithm is delightfully elegant. But once you've shown that one in an educational setting, I think it only fair to have available some problems showcasing the same method which are of equivalent difficulty.

Finding the greatest common divisor of two consecutive triangle numbers applies the Euclidean algorithm in a superficially similar way, and its division into two cases makes it necessary to identify the two cases first.

The two distinctive cases are when the index of the larger of the two consecutive triangle numbers is even and when it is odd. A triangle number is alternately defined by

$latex T_n = 1 + 2 + \cdots + n = \frac{n(n+1)}{2}$.

**Case 1. The larger one has an odd index**, $latex 2k+1$. Applying the Euclidean algorithm we have

$latex \gcd(T_{2k+1}, T_{2k}) = \gcd(T_{2k} + (2k+1), T_{2k}) = \gcd(2k+1, T_{2k}) = \gcd(2k+1, k(2k+1)) = 2k+1$

After two steps we get that the greatest common divisor is $latex 2k+1$, and we summarize:

$latex \gcd(T_{2k}, T_{2k+1}) = 2k+1$

**Case 2: The larger one has an even index**, $latex 2k$. Applying the Euclidean algorithm again we have

$latex \gcd(T_{2k}, T_{2k-1}) = \gcd(T_{2k-1} + 2k, T_{2k-1}) = \gcd(2k, T_{2k-1}) = \gcd(2k, k(2k-1)) = k$

and we have thus found that if the larger triangle number has an even index $latex 2k$, then the greatest common divisor is $latex k$, half the even index.
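Both closed forms are easy to check by brute force. A minimal sketch in plain Python (`T` is my name for the triangle-number function):

```python
from math import gcd

def T(n):
    """The n-th triangle number, n*(n+1)/2."""
    return n * (n + 1) // 2

# Check both cases against the closed forms:
for k in range(1, 200):
    # larger index odd, 2k+1: the gcd is the odd index itself
    assert gcd(T(2 * k), T(2 * k + 1)) == 2 * k + 1
    # larger index even, 2k: the gcd is half the even index
    assert gcd(T(2 * k - 1), T(2 * k)) == k
```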

]]>

I recently bought a cheap digital microscope and have been randomly looking up close at things, to verify some of my previous explanations for various phenomena. One being how small droplets of liquid produce pretty vivid dots when they lie on the surface of the screen of your phone. The basic physics is that the droplet forms a roughly hemispherical lens which magnifies the individual pixels of the screen, making them partially visible even from afar, as illustrated by this image of a ~1 mm droplet on a Samsung Galaxy S3 screen. And here slightly more magnified.

As I was applying the droplets, however, by dipping various small thin things like needles, I realized that dropping the droplets from needles was no good, because a droplet hangs on the tip of a needle in a rather curious way.

Or rather, it doesn't hang from the tip at all. This is to be compared with how droplets hang from things which aren't sharp at the tip, like a pencil lead

and when sharpening the pencil lead at the tip with a knife, the same effect is produced.

In a way this isn't that strange, given that one of the effects which retains the droplet is that the tip isn't inside the droplet: if it were, there would be no (discontinuous) barrier against decreasing the surface energy by simply sliding off, while with a blunt tip, releasing to form a spherical droplet requires a discontinuous increase in surface area, and I suppose the probability of it sticking at all would go down.

(The droplets, by the way, are formed by dipping the needle pretty deep and having an originally even coating slide down and accumulate at the tip.)

Nevertheless, I can't quite make out the details of where the force pulling the droplet upwards comes from, whether it is driven principally by surface tension or by adhesion. For if the needle is shaken a bit and the droplet pushed down a fraction of a millimeter, it will climb back up.

]]>

So you start with the 7 basic tetris blocks, as featured in the image below. These pieces you can stick together into chunks in an amazing number of ways, and each new figure naturally has some circumference. Assuming all the pieces fit together in one chunk, the maximal circumference such a figure can have is 56 (think about it), but the question quickly becomes: **What is the least circumference such a composite piece can have?**

Here are some of the initial trial and error solutions which I played around with

but there nevertheless is a packing with even smaller circumference and it is proving the existence and minimality of this packing that will be the theme of this post.

So after some fiddling I arrived at the following close packing of tetris blocks which has a circumference of **22 units.**

It seems pretty suggestive that this is the smallest circumference you can arrive at, and arguing it is pretty straightforward. For this I chose to use the concept of a bounding rectangle. Any packing shape can be enclosed in a rectangle whose sides run parallel to the sides of the shape.

The circumference of the rectangle is always equal to or smaller than that of the tetris shape, which is, I suppose, easiest to prove by sending horizontal and vertical rays across the shape and counting the number of times each crosses the boundary, which is at minimum 2.

We now turn our attention to the areas of the tetris blocks. Each of the seven blocks has area 4, which sums up to a total area of 28. What rectangles can contain such an area, and do any of them have circumference smaller than 22?

Let $latex n$ and $latex k$ be the supposed sides of a rectangle which has circumference smaller than 22 and still has area at least 28. This gives us the following conditions:

$latex 2(n+k) < 22, \qquad nk \ge 28$

There are multiple ways of showing that there are no solutions to this problem, but I'll just do a proof by case checking, running through all the relevant cases. First of all, $latex nk \ge 28$ constrains at least one of $latex n$ and $latex k$ to be greater than 5 (otherwise $latex nk \le 25$), so we run through $latex n \ge 6$ with corresponding $latex k$:s such that $latex n + k < 11$.

(n,k): n·k

(6,1): 6, (6,2): 12, (6,3): 18, (6,4): 24

(7,1): 7, (7,2): 14, (7,3): 21

(8,1): 8, (8,2): 16

(9,1): 9

None of these cases reaches an area of 28, so the conditions are incompatible, and 22 indeed is the minimal circumference.
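The case check can also be verified exhaustively in a couple of lines of plain Python: among all rectangles with circumference strictly less than 22, the largest attainable area is 25, short of the 28 cells the seven pieces cover.

```python
# Among all n-by-k rectangles with circumference 2*(n+k) < 22,
# i.e. n + k <= 10, find the largest attainable area.
best = max(n * k for n in range(1, 11) for k in range(1, 11) if n + k <= 10)
print(best)  # 25, which falls short of the required area 28
```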

]]>

Looking up the wiki article on shapes of constant width, this is apparently a thing, proven in 1860: *all curves of constant width have a perimeter which is $latex \pi$ times their width*. It's called Barbier's theorem.

I also get the feeling that this should be possible to rework into a pro-pi argument in the tau-vs-pi debate. Of course, by the equivalence between pi and tau it could be made into a pro-tau argument as well, but since I've never seen it come up in one of those debates, I would like to see what happens when a tauist brings up that defining the circle constant via the diameter of a circle was a mistake, since the radius is a more fundamental property, and then have the piist counter that at least there is a whole class of objects with diameters, and they all have perimeters which are $latex \pi$ times the diameter : P

]]>

Instead I'm rather reviewing the 'legitimacy' of the folding schemes, trying to figure out what relations actually govern whether something fits together or not. The prototypical example is the tetrahedron, the polyhedron with the fewest number of faces. The explicit construction of the regular tetrahedron (all sides equilateral triangles) is easy to show directly and is found in the Elements and elsewhere, but you can fold generic tetrahedra by just making 4 triangles such that every triangle has common sides with the other triangles in a natural way. Made from one piece, they are then folded together to form a possibly irregular tetrahedron.

Empirically and intuitively this is basically the definition of natural, a result in no need of further explanation. Nevertheless, mathematically it almost seems coincidental. Just because two faces can be glued pairwise doesn't immediately mean all the faces should fit together collectively, even though it's about as natural a thing to postulate as the equality of the two base angles in an isosceles triangle.

The novel thing is that you can turn this 3-dimensional problem into a 2-dimensional one and use the vast machinery available to you in that space. You can note that folding an individual side is equivalent to rotating it about its common side with the bottom triangle, where its outer corner traces out a circle, with radius the face's altitude, lying in a plane perpendicular to the common side.

If the three faces are to come together, their corresponding circles must intersect at a single point, which means 1. the planes of rotation intersect along a common line (which is perpendicular to the base plane and along which the height of the tetrahedron is measured), and 2. the altitudes of the faces must satisfy relations guaranteeing a consistent height along that line.

Especially 1. is interesting, because it forms a more 'physical' explanation for why the extended altitudes dropped in the original net intersect at a common point, which can be taken as a purely classical 2-dimensional geometric result.

The thing I find neat is that this result seems kind of arbitrary in 2D, but as part of a 3-dimensional picture it makes perfect sense.

I’ll have to see what’s the best way of extending this to folding schemes with a general polygonal base.

]]>