Golden ratio and Fibonacci


I’m currently doing a course on linear algebra, and while working on a hand-in, I discovered this neat proof that the Fibonacci sequence grows exponentially.

Definition: Define two sequences:

  1. a_1 = 1, a_2 = 3, \text{ and } a_n = a_{n-1} + a_{n-2} \text{ for } n>2
  2. b_1 = 1, b_2 = 1, \text{ and } b_n = b_{n-1} + b_{n-2} \text{ for } n>2

Notice that b is the Fibonacci sequence, and a is the Lucas numbers with the first term (2) dropped.

Lemma 1: For all k>1

  1. a_k = \frac{1}{2} \left ( a_{k-1} + 5\cdot b_{k-1} \right )
  2. b_k = \frac{1}{2} \left ( a_{k-1} + b_{k-1} \right )

Proof. We show only part 1; the proof of part 2 is similar. We use strong induction on k.

  • Base-cases: 
    For k=2, notice that \frac{1}{2} \left ( a_{1} + 5\cdot b_{1} \right ) = \frac{1}{2} \left ( 1 + 5\cdot 1 \right ) = \frac{1}{2} \left ( 6 \right ) = 3 = a_2, and for k=3, \frac{1}{2} \left ( a_{2} + 5\cdot b_{2} \right ) = \frac{1}{2} \left ( 3 + 5\cdot 1 \right ) = 4 = a_3.
    Which is what we wanted to show.
  • Induction step (for k \geq 4): 
    We see a_k = a_{k-1} + a_{k-2}, by definition.
    We apply the induction hypothesis on k-1 and k-2 to get: a_k = \frac{1}{2} \left ( a_{k-2} + 5\cdot b_{k-2} \right ) + \frac{1}{2} \left ( a_{k-3} + 5\cdot b_{k-3} \right ). Collecting terms and using the recurrences a_{k-2} + a_{k-3} = a_{k-1} and b_{k-2} + b_{k-3} = b_{k-1}, we get
    a_k = \frac{1}{2} \left ( a_{k-1} + 5\cdot b_{k-1} \right ), which is what we wanted to show. \blacksquare

Lemma 2: For all k>0, \phi^k = \frac{1}{2} a_k + \frac{1}{2} b_k \cdot \sqrt{5}, where \phi = \frac{1}{2}\left(1 + \sqrt{5}\right) is the golden ratio.

Proof. We prove this lemma using induction on k.

  • Base-case:
    We see that \frac{1}{2} a_1 + \frac{1}{2} b_1 \cdot \sqrt{5} = \frac{1}{2}\cdot 1 + \frac{1}{2}\cdot 1 \cdot \sqrt{5} = \frac{1}{2} \left(1 + \sqrt{5} \right) = \phi. Which is what we wanted to show.
  • Induction step:
    We get \phi^k = \phi \cdot \phi^{k-1} = \frac{1}{2}(1+\sqrt{5})\cdot \left( \frac{1}{2} a_{k-1} + \frac{1}{2} b_{k-1} \cdot \sqrt{5} \right), using the induction hypothesis. Multiplying out, we get \phi^k = \frac{1}{2} \left ( \frac{1}{2} \left ( a_{k-1} + 5 b_{k-1} \right ) \right ) + \frac{1}{2} \left ( \frac{1}{2} \left ( a_{k-1} + b_{k-1} \right ) \right ) \cdot \sqrt{5}. Using Lemma 1, we find \phi^k = \frac{1}{2} a_k + \frac{1}{2} b_k \cdot \sqrt{5}, which is what we wanted to show. \blacksquare

Now for the interesting part of this article:

Theorem: b_n = \Theta(\phi^n).

Proof. We split the proof into two parts; first proving that b_n = \Omega(\phi^n), then proving b_n = O(\phi^n).

  • b_n = \Omega(\phi^n).
    By Lemma 2, \phi^n = \frac{1}{2} a_n + \frac{1}{2} b_n \cdot \sqrt{5} < \frac{1}{2} a_n \cdot \sqrt{5} + \frac{1}{2} b_n \cdot \sqrt{5}. Because of Lemma 1.2, we have b_{n+1} = \frac{1}{2} \left( a_n + b_n \right), so we get \phi^n < b_{n+1} \cdot \sqrt{5}. Since b_{n+1} = b_n + b_{n-1} \leq 2 b_n, this gives \phi^n < 2\sqrt{5} \cdot b_n, which implies that b_n = \Omega(\phi^n).
  • b_n = O(\phi^n).
    By Lemma 2, \phi^n = \frac{1}{2} a_n + \frac{1}{2} b_n \cdot \sqrt{5} > \frac{1}{2} b_n, so b_n < 2\phi^n, which implies that b_n = O(\phi^n).

Because b_n = O(\phi^n), and b_n = \Omega(\phi^n), we have b_n = \Theta(\phi^n). That is, the Fibonacci sequence grows exponentially, with base \phi. \blacksquare.
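
As a quick sanity check (not part of the hand-in), here is a small, self-contained Java sketch that computes b_n and prints the ratio b_n / \phi^n; the ratio settling near a constant (about 1/\sqrt{5} \approx 0.447) illustrates the \Theta bound numerically. The class name and loop bound are arbitrary choices for this example.

public class FibonacciGrowth {
    public static void main(String[] args) {
        double phi = (1 + Math.sqrt(5)) / 2;   // the golden ratio
        long bPrev = 1, b = 1;                 // b_1 = 1, b_2 = 1
        for (int n = 2; n <= 40; n++) {
            // If b_n = Theta(phi^n), this ratio stays between two positive constants.
            System.out.printf("n=%2d  b_n=%10d  b_n/phi^n=%.6f%n", n, b, b / Math.pow(phi, n));
            long next = b + bPrev;             // b_{n+1} = b_n + b_{n-1}
            bPrev = b;
            b = next;
        }
    }
}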

Calkin-Wilf Fractals


Suppose we have two natural numbers i,j. Define a sequence a_0 = i, a_1 = j and a_n = |a_{n-1} - a_{n-2}|. That is, we start with two initial values, and repeatedly subtract the smaller of the two previous values from the larger. Since i,j \geq 0, we know that a_n \geq 0 for all n.

Eventually, we will reach the cycle \ldots g, g, 0, g, g, 0 \ldots where g = \gcd(i,j), no matter our choice of i,j; for coprime i,j this is the cycle \ldots 1, 1, 0 \ldots (to see why, note that each step preserves the \gcd of two consecutive terms, and that the sum of two consecutive terms strictly decreases at least every two steps for as long as both terms are positive).

Define

\text{chainLength}(i,j) = \text{largest } n \text{ such that } a_n \notin \{1, 0\}

We can now make pretty pictures. For each pixel (x,y), we color it by \text{chainLength}(x,y), which can be computed using a simple recursive procedure:

int chainLength(int x, int y) {
    // We have hit the cycle: both values are equal, or one of them is zero.
    if (x == y || x == 0 || y == 0)
        return 0;
    // Otherwise subtract the smaller value from the larger one and count the step.
    if (x > y)
        return 1 + chainLength(x - y, y);
    else
        return 1 + chainLength(x, y - x);
}

[Image: Calkin-Wilf fractal]

Generated using the DELTA library.
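
For reference, here is a minimal, self-contained Java sketch of the rendering loop using only the standard library (it is not the DELTA code that produced the image above; the image size, hue scaling and file name are arbitrary choices for this example):

import javax.imageio.ImageIO;
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;

public class CalkinWilfImage {
    public static void main(String[] args) throws Exception {
        int size = 512;
        BufferedImage img = new BufferedImage(size, size, BufferedImage.TYPE_INT_RGB);
        for (int x = 1; x <= size; x++)
            for (int y = 1; y <= size; y++) {
                int n = chainLength(x, y);
                // Map the chain length to a hue (black where the chain length is 0).
                int rgb = Color.HSBtoRGB(n / 64.0f, 1.0f, n == 0 ? 0.0f : 1.0f);
                img.setRGB(x - 1, y - 1, rgb);
            }
        ImageIO.write(img, "png", new File("calkinwilf.png"));
    }

    static int chainLength(int x, int y) {
        if (x == y || x == 0 || y == 0) return 0;
        return x > y ? 1 + chainLength(x - y, y) : 1 + chainLength(x, y - x);
    }
}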

Connection to Calkin-Wilf trees

The Calkin-Wilf tree is an infinite binary tree, where the root node is defined as 1/1, and every node a/b has a left child \dfrac{a+b}{b} and right child \dfrac{a}{a+b}.

 

[Image: Calkin-Wilf tree]

The Calkin-Wilf tree is useful because it allows us to define a bijection between the natural numbers and the positive rational numbers, which can be used to show that the rational numbers are a countably infinite set.

Notice that if a node x/y is a left child, then x>y, and if it’s a right child then y > x. This allows us to determine the parent node of a given node.

If x > y for a given node x/y, then we know it was the left child of its parent, so its parent is (x-y)/y. Similarly, if y > x, then its parent is x/(y-x). Notice that at each step we subtract the smaller number from the larger, and we terminate the procedure when we reach the root 1/1.

Going back to a_n, we can interpret a pixel (x,y) as the rational number x/y. The value \text{chainLength}(x,y) is then exactly the depth of the node x/y in the Calkin-Wilf tree.

Keep in mind that the Calkin-Wilf tree only contains rational numbers in lowest terms, so the node 2/2 does not exist in the Calkin-Wilf tree. To circumvent this issue, we convert our input to the equivalent rational number in lowest terms, i.e. we divide x and y by \gcd(x,y).
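
A minimal sketch of that reduction step (reusing the chainLength procedure from above; the helper names are made up for this example):

// Depth in the Calkin-Wilf tree of the node x/y, after reducing the fraction to lowest terms.
static int treeDepth(int x, int y) {
    int g = gcd(x, y);
    return chainLength(x / g, y / g);
}

static int gcd(int a, int b) {
    return b == 0 ? a : gcd(b, a % b);
}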

Incendium – HTML 5 App


[Image: Incendium screenshot]

I just realized I have had this app up for a while on my github.io without posting it here. Anyway, I made an HTML5 app which implements the algorithm detailed in my previous post on fractal animations.

Moving the mouse around on the screen changes two different parameters, with each point P corresponding to a unique configuration.

http://ssodelta.github.io/incendium/

It is written for Google Chrome, and is known to act weird on Firefox sometimes. It may also crash; if it does, just reload the page (sorry).

AIArena – Bot Tournament


AIArena is a tournament where contestants write scripts for bots which battle each other in a virtual arena. I have written a dynamic web application which allows users to submit their own bots, written in a custom imperative scripting language dubbed BOTScript. The server itself makes the bots battle each other, and ranks them on the website by how many wins each bot gets.

AIArena website: http://dank.business.com/bots/

GitHub: https://github.com/SSODelta/BotBrawl

If you have ideas, suggestions, feedback, criticism or whatever, drop me a line at admin@ssodelta.com.

Sample of the DELTA Library


The DELTA library

If you look through my blog, you'll find posts where I detail an algorithm for making fractals, or other visually appealing animations/images. To aid myself in the future, I have written a library for Java 8 which contains some nice classes and operations for manipulating colors and images.

You can take a look at it at its GitHub page (which contains a README.md), though I will cover the library in more detail on this blog at some point in the future (studying is keeping me busy at the moment):

https://github.com/SSODelta/DELTA

Often when making single images, there is some parameter you can vary to produce an animation. I thought two previous posts were especially suitable for this, so I have made two animations I am quite happy with:

Newton Fractals

The biggest problem with animating this fractal was that the information about which root was assigned which color was lost when slightly altering the parameters of the polynomial. Instead, using DELTA, I have colored each root r_0 using HSV colors, with hue proportional to the argument \mathrm{arg}(r_0) and to the number of iterations it took for the point to converge, and saturation/value decreasing with increasing magnitude |r_0|.
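
A minimal sketch of that coloring rule in plain Java (this is not the actual DELTA code; the scaling constants are made up for illustration):

import java.awt.Color;

// Color for a point whose Newton iteration converged to the root r0 = re + im*i after `iterations` steps.
static int rootColor(double re, double im, int iterations) {
    double arg = Math.atan2(im, re);                                 // argument arg(r0), in (-pi, pi]
    double mag = Math.hypot(re, im);                                 // magnitude |r0|
    float hue = (float) (arg / (2 * Math.PI) + 0.01 * iterations);   // hue from arg(r0) and the iteration count
    float satVal = (float) (1.0 / (1.0 + mag));                      // saturation/value drop with increasing |r0|
    return Color.HSBtoRGB(hue, satVal, satVal);
}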

Regex Fractals

Here, the hue is proportional to the edit distance and the time t, and saturation/value inversely proportional to only the edit distance.

LoGiX v1.1.1


LoGiX has been updated to a new build and includes some nice new features and small bug-fixes. Here is the changelog:

Added support for {...} statements and line-breaks.
Added support for in-line comments.
Added support for custom messages if the output is false.
Improved loading system.

Added support for {...} statements and line-breaks.

The nicest new feature is the curly brackets. I used to have this implemented, but I lost the feature when I rewrote the engine entirely. It's really cool: basically, you type a list {a,b,c}, and the LoGiX engine will replace the curly brackets with each element inside them. So we can type

:> def {P,Q}

Which evaluates as:

:> def P
:> def Q

This saves a lot of space in programs. A nice example is modus ponens, which previously had to be written as:

:> !def P
:> !def P implies Q
:> Q?
true

Using the new notation, we can write:

:> !def P {,implies Q};Q?
true

Which is very cool! I think this is the shortest way to write modus ponens in LoGiX. It is of course possible to use curly brackets in multiple places in a command, so we get:

:> def {P,Q,R} implies {A,B}
def P implies A
def Q implies A
def R implies A
def P implies B
def Q implies B
def R implies B

Added support for in-line comments.

Another new feature is the ability to do inline comments. Before, the compiler would crash if a #-symbol appeared anywhere other than at the beginning of a line. Now, it treats everything after the #-symbol as a comment.

:> def P #Now we've just defined P and made a comment
def P

Added support for custom messages if the output is false.

It is now also possible to use custom messages when the output is false. Previously, we could already do:

:> def {P,Q} implies R
:> def P or Q
:> R?its true
its true

It interprets whatever is after the question mark as the message it prints if the statement is true. However, it was not possible to do the same if the statement was false; it would always print 'false'. But now you can do this too:

:> def P and Q implies R
:> def P or Q
:> R?its true|definitely not true
definitely not true

An Optimal Scoring Algorithm for Avoiding Balls


A simulation is available at: https://dl.dropboxusercontent.com/u/19633784/score_test.html (check the bottom of the article for more info)

In my last posts, the main topic has been how to get balls to bounce around in a labyrinth of right-angled walls. However, there hasn't been much playability yet, only an animation of the balls bouncing. So I've now introduced a player entity, which has a position (x, y) and a radius r (the player is also a sphere for now). In the game, the point is to avoid the balls, and you get points for doing dangerous manoeuvres, i.e. being close to the balls. We're going to make some assumptions:

  • Being close to the ball is harder than being further away from the ball.
  • It’s harder when the ball is heading towards the player, rather than heading away from it.
  • It’s harder to avoid the ball when it has a huge velocity than when it’s going slow.
  • Let S be the score, and \textrm{Diff} the difficulty. Then \frac{d}{dt} S \propto \textrm{Diff}.

We're going to break this down into three parts. We have a scoring function \sigma; in practice, however, we'll work with its derivative, because we want to know how much to increase the score at each game tick. We write this as \frac{d}{dt} \sigma(d,a,b,v) = D(d) \cdot \theta(a,b) \cdot V(v), where D is the distance function, \theta is the direction function, and V is the velocity function. We're going to look at the distance function D(d) first.

The distance function has a maximum value D_{max} when the player is as close to the ball as possible without dying. If we let w_b be the width of the ball, and w_p be the width of the player, then obviously the closest possible centre-to-centre distance is \frac{w_b + w_p}{2}. We measure d from this closest possible separation, so d = 0 means the player is touching the ball, and therefore D(0) = D_{max}. We want the distance function to decay exponentially, so that moving a constant distance further away scales the score down by a constant factor. In other words, we can express the distance function as D(d) = D_{max} \cdot e^{-k \cdot d}, where k is some positive constant.

Next up is the direction function \theta(a,b). The range of the function is [0, 1], where 1 corresponds to the situation where the ball is going exactly towards the player, and 0 is the situation where the ball is going in the exact opposite direction from the player. How do we calculate \theta?

Well, notice that we have two vectors as the arguments to the function. This is because we only need two pieces of information: the vector a from the player to the ball, whose horizontal component is the difference between the x-coordinates of the ball and the player (and likewise for the vertical component), and the vector b, which is the movement vector of the ball.

Okay, so now we need to calculate the dot product of a and b. If we first normalize both vectors to unit length, then a \cdot b = 0 when the two vectors are orthogonal, a \cdot b = -1 when they point in opposite directions (the ball heading straight for the player), and a \cdot b = 1 when they point in the same direction (the ball heading straight away from the player). The range of the dot product is then [-1, 1], and we need to map this onto [0, 1], so we define the \theta-function as \theta(a,b) = \frac{1 - a \cdot b}{2}. This function gives 1 if the ball is going towards the player and 0 if it is going away from the player.

And finally, we'll go through the velocity function V(v). We're going to assume it's twice as hard to avoid a ball that's travelling twice as fast as another ball. In other words, the rate at which you earn score should be proportional to the ball's velocity, so V(v) = C \cdot v, where C is some positive constant.

We can now assemble the scoring function \sigma. We see that \frac{d}{dt} \sigma(d,a,b,v) = D(d) \cdot \theta(a,b) \cdot V(v) =D_{max}\cdot e^{-k \cdot d} \cdot \frac{(1 - a \cdot b)}{2} \cdot C \cdot v.

In practice, we choose the constants D_{max} = 100, k = -\log(0.75) (so the distance term becomes 0.75^d, i.e. each unit of distance scales the score by a factor 0.75), and C = 1. So here is the final scoring function:

\frac{d}{dt} \sigma(d,a,b,v) =100 \cdot e^{-(-\log(0.75)) \cdot d} \cdot \frac{(1 - a \cdot b)}{2} \cdot v
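
Here is a minimal Java sketch of the score increment for a single game tick under these choices (it is not tied to any particular engine, and the parameter names are made up for this example):

// Score added in one game tick, given the player and ball positions, the ball's
// velocity vector (vx, vy), and the widths of the ball and the player.
static double scoreDelta(double px, double py, double bx, double by,
                         double vx, double vy, double ballWidth, double playerWidth) {
    double dMax = 100.0, k = -Math.log(0.75), c = 1.0;

    // Distance term: d is measured from the closest possible separation (d = 0 when touching).
    double centerDist = Math.hypot(bx - px, by - py);
    double d = Math.max(0, centerDist - (ballWidth + playerWidth) / 2);
    double distance = dMax * Math.exp(-k * d);          // 100 * 0.75^d

    // Direction term: dot product of the unit vector from player to ball and the
    // unit vector along the ball's velocity, mapped from [-1, 1] onto [0, 1].
    double speed = Math.hypot(vx, vy);
    double dot = 0;
    if (centerDist > 0 && speed > 0)
        dot = ((bx - px) * vx + (by - py) * vy) / (centerDist * speed);
    double direction = (1 - dot) / 2;                   // 1 = heading straight for the player

    // Velocity term: linear in the ball's speed.
    double velocity = c * speed;

    return distance * direction * velocity;
}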

If you want to, you can test it out at the following link: https://dl.dropboxusercontent.com/u/19633784/score_test.html

You are the red ball, and the game freezes when you get hit by a ball. Refresh the window to try again (sorry about this solution).