Storing the Images of Surveillance Cameras in Constant Space Using Infinite Geometric Series

I was recently in London – it’s a fantastic city and I loved my time there. However, I was completely baffled by the sheer number of surveillance cameras in the city. Almost everywhere you go, the Queen’s watching you.

While I won’t go into a debate on whether or not surveillance is a good thing, I do want to discuss the sheer amount of data these cameras must pick up on a daily basis. I mean, several cameras on every street recording video non-stop – that must be super expensive in storage. Is there any way we can circumvent this?

The Naïve Solution

An often-used solution to the storage problem is to delete all images older than X days. This keeps the storage space needed constant and ensures that the most recent (and most relevant) photo information is readily available if needed. The most obvious disadvantage here is that the old images get deleted completely, making forensics difficult if the crime was committed so early that the camera has deleted all the relevant images. In this post we’ll detail an algorithm which keeps track of all images the surveillance camera has saved, yet accomplishes this in constant space.

Storing the Images of Surveillance Cameras in Constant Space using Infinite Series

For the sake of this post, let’s assume the surveillance cameras take still photos every X minutes. The reason for this is that this algorithm probably doesn’t work well with existing video codecs (or at least, I wouldn’t know where to start). Now, obviously we can’t just store an indefinite number of high-quality images on a drive with limited storage space, so what we’re going to do is let the old images degrade over time. That is, the older an image is, the worse its quality.

This makes sense – we assume that the most relevant images are the most recent images. After all, why would the police need to look at 3 year old surveillance photos of a random gas station? (There certainly could be incentive to look at old images, but we assume it’s the recent photos that we need to have in the best quality).

The algorithm

We’ll let the images degrade by a constant factor r \in [0, 1) for every constant batch of images. That is, the first n images are saved in quality Q = 1, where Q represents the fraction of the image data we keep, so Q=0.5 means that we have reduced the space used by 50%. The next n images are saved in quality Q = r, and the next n images after these are saved in quality Q=r^2.

We’ll call the first n images for the priority 1 images, the next n images for the priority 2 images etc. If we assume all images require a maximum disk space of S, then obviously the total disk space D for all priority 1 images is D_1 = n \cdot S. Likewise, the total disk space for all priority 2 images is D_2 = r \cdot n \cdot S. And in general for priority p images, we get:

D_p = r^{p-1} \cdot n \cdot S

And for the first k priorities we get the total disk space T:

T_k = D_1 + D_2 + D_3 + ... + D_k

Which can be written using sigma-notation:

T_k = \displaystyle \sum_{i=1}^{k} D_i

Expanding this, we get:

T_k = \displaystyle \sum_{i=1}^{k} r^{i-1} \cdot n \cdot S

Factoring out the constants and shifting the summation index, we get:

T_k = n \cdot S \cdot \displaystyle \sum_{i=0}^{k-1} r^i

To show that the total disk space is constant no matter how long we let our surveillance cameras run, we need to show that no matter how many priorities we sum up we’ll always remain bounded below some finite constant. That is, we look at the limit as we increase k:

T_\infty = n \cdot S \cdot \displaystyle \sum_{i=0}^\infty r^i

And since this is a geometric series, we get the following expression:

T_\infty = \dfrac{n \cdot S}{1 - r}

Which is finite for any n, S \in \mathbb{R} and r \in [0, 1), proving that the disk space required is bounded by a constant with this approach.

An example

Let’s say each surveillance camera has a total disk space of T = 2 ~ \textrm{GB} (2,147,483,648 bytes), and that each image is stored in the JPEG file format with size 480p (720 x 480). According to Wikipedia, a Q=1 image requires about 8.25 bits per color pixel, which means that each photo takes up S = 348 ~ \textrm{KB}.

Assuming we take one image every minute from our surveillance cameras, and that we want the images to degrade on a daily basis (all images taken the same day are of the same quality), then n = 1440 images.

Finally, we can easily derive:

r = 1 - \dfrac{n \cdot S}{T}

Which gives r \approx 0.761, meaning the images degrade by roughly 24% in quality every day with this setup (obviously the parameters can vary; this was just one example). Notice that this formula gives r \notin [0,1) if n \cdot S > T. This makes sense, because it would mean the priority 1 images alone take up more disk space than the camera has in total (which is of course impossible).
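The whole sizing calculation can be put into code as a rough sketch (a toy calculation only; the class and method names are mine, not from any real camera software):

```java
public class DegradeFactor {

    // Compute the per-priority quality factor r so that the infinite series
    // n*S*(1 + r + r^2 + ...) = n*S/(1-r) exactly fills disk space T,
    // i.e. r = 1 - n*S/T as derived above.
    public static double degradeFactor(double totalBytes, double imageBytes, long imagesPerPriority) {
        double r = 1.0 - (imagesPerPriority * imageBytes) / totalBytes;
        if (r < 0) {
            throw new IllegalArgumentException("priority 1 images alone exceed total disk space");
        }
        return r;
    }

    public static void main(String[] args) {
        double T = 2147483648.0;   // 2 GB total disk space
        double S = 348 * 1024.0;   // ~348 KB per JPEG frame
        long n = 1440;             // one image per minute for a day
        System.out.println(degradeFactor(T, S, n)); // ~0.761
    }
}
```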

I’m not sure how practical this would be to implement, but I found the thought interesting at least.

The Quest for Near-Integers

Recently I’ve gotten interested in experimental mathematics – finding mathematical relations using computational methods rather than a mathematician’s insight. While I won’t discuss the philosophical ramifications of this, I do want to detail a method for finding near-integers using experimental mathematics.


A near-integer is a number which is “close” to being an integer, within some \epsilon. Finding near-integers is easy – the number 1 + 10^{-20} is almost equal to 1 – but that’s not interesting. The fun part about near-integers is when a seemingly random relation somehow almost equals an integer.

For instance, \sin(11) = - 0.9999990206..., which is arguably close to being -1. While there is no rigorous definition of a near-integer, we will be satisfied with finding the expression which is “closest” to being an integer (more on this later).

Finding near-integers

In the first part of this post we’ll deal with the basic principles of the algorithm, and further on we’ll get more technical.

So, the basic principle is that we’ll generate a random expression and check whether or not it’s close to being an integer. Then we’ll repeat this process for 100,000 random expressions and choose the expression which was closest to being an integer. Put in bullet points:

  1. Set iterator i = 0, and initialize the best expression so far, e_{best}.
  2. Generate random expression e.
  3. If e is closer to being an integer than e_{best}, set e_{best} = e.
  4. Increment i by one.
  5. If i < 100000 go to step 2.
  6. Display expression e_{best}.

Generating a random expression sounds simple, but it is in fact not trivial, as we’ll discuss later on.
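The loop above is easy to sketch in Java. Here integerDistance measures how far a value is from the nearest integer, and the random expression generator is abstracted behind a Supplier (a toy stand-in; the post’s real Expression classes come later):

```java
import java.util.Random;
import java.util.function.Supplier;

public class NearIntegerSearch {

    // Distance from v to the nearest integer.
    public static double integerDistance(double v) {
        return Math.abs(v - Math.round(v));
    }

    // Steps 1-6: keep the candidate whose value is closest to an integer.
    public static double search(Supplier<Double> randomExpression, int iterations) {
        double best = Double.NaN;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int i = 0; i < iterations; i++) {   // steps 2-5
            double e = randomExpression.get();
            double d = integerDistance(e);
            if (d < bestDist) {                  // step 3
                best = e;
                bestDist = d;
            }
        }
        return best;                             // step 6
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        // Toy stand-in for a random expression generator: sin of a random integer.
        double best = search(() -> Math.sin(1 + rng.nextInt(100)), 100000);
        System.out.println(best);
    }
}
```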

One of the more prominent results from my program is the following:

2 \cdot \pi \cdot (e + \pi) \cdot (e + \pi^e) \approx 927

In fact, this expression is within \epsilon = 2.1 \cdot 10^{-5} of an integer.

Another interesting result is:

-3 \cdot \pi \cdot (1 + e - 5\cdot \pi) \approx 113

Where \epsilon = 8.6 \cdot 10^{-5}.

For more results check the bottom of this post.

Modelling the expressions

To be able to use our clever new algorithm, we need to be able to generate expressions. But what is an expression? An expression is a mathematical statement which evaluates to some value. In practice, we’ll model an expression as an abstract class in Java. An abstract class is a class containing abstract methods whose implementations are supplied by its subclasses.

This is useful, because it allows us to model an abstract class called Expression and have subclasses of it which perform specific functions. Their common function is getValue(), so all subclasses of Expression need to be able to return a numeric value. The most basic expression is Constant, which returns a single number. An example of Constant could be the number 1, the number 2, or \pi.

Next up we could have a class called Addition, which takes two other Expressions a, b and returns their sum when getValue() is called. Similarly, we have Subtraction, Multiplication, Division, and Exponentation. It’s also easy to implement other functions, but as of now they won’t be necessary.

The reason we use abstract classes is so we can have more complicated, nested expressions inside each other. So, an instance of Addition contains two expressions, which might also be other Addition classes.

To make things more explicit, let’s take a look at the structure of the expression \pi^2 + 1. This expression is an instance of Addition, containing the two other expressions \pi^2 (an instance of Exponentation) and 1 (an instance of Constant).

We can also implement other functions than getValue(). In my implementation, I have also made getLaTeX(), which returns the LaTeX code for the expression, and getString(), which returns a more readable string representing the expression.
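A minimal sketch of this hierarchy might look as follows (the class names match the post; the exact fields and constructors are my guess at the implementation):

```java
// Common base: every expression can evaluate itself and print itself.
abstract class Expression {
    public abstract double getValue();
    public abstract String getString(); // getLaTeX() would be analogous
}

// A single number, e.g. 1, 2 or pi.
class Constant extends Expression {
    private final double value;
    private final String name;
    Constant(double value, String name) { this.value = value; this.name = name; }
    public double getValue() { return value; }
    public String getString() { return name; }
}

// Sum of two nested expressions.
class Addition extends Expression {
    private final Expression a, b;
    Addition(Expression a, Expression b) { this.a = a; this.b = b; }
    public double getValue() { return a.getValue() + b.getValue(); }
    public String getString() { return "(" + a.getString() + " + " + b.getString() + ")"; }
}

// base ^ exponent.
class Exponentation extends Expression {
    private final Expression base, exponent;
    Exponentation(Expression base, Expression exponent) { this.base = base; this.exponent = exponent; }
    public double getValue() { return Math.pow(base.getValue(), exponent.getValue()); }
    public String getString() { return "(" + base.getString() + "^" + exponent.getString() + ")"; }
}
```

The example from the text, \pi^2 + 1, then becomes new Addition(new Exponentation(pi, two), one).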

Generating random expressions

To make a random expression, we’ll use recursion. We have a function randomExpression(int depth), which returns a new random expression. This function has two special cases:

Depth = 0

When the depth is 0, we’ll simply return a new Constant from some alphabet \Sigma of constants. In practice we’ll use the alphabet \Sigma = \{ \pi, e, 1,2,3,4,5,6,7,8,9 \}.

Depth > 0

When the depth is larger than zero, we’ll return a new random binary expression (for instance Addition, which contains two other expressions) with the arguments randomExpression(depth-1), randomExpression(depth-1). This ensures that we’ll have a nested expression. A special clause is added for Exponentation, which is given the arguments randomExpression(0), randomExpression(0) to avoid ugly towers of exponents.

This is the basic gist of it. This will generate a random expression of some depth.
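A sketch of the generator, collapsed to returning values directly instead of building Expression trees (the case layout and the uniform operator choice are my assumptions):

```java
import java.util.Random;

// randomExpression(depth) as described: depth 0 returns a Constant from the
// alphabet {pi, e, 1..9}; otherwise a random binary node whose children have
// depth-1 (exponents always get depth-0 children, per the special clause).
public class ExpressionGenerator {
    private static final double[] ALPHABET = {Math.PI, Math.E, 1, 2, 3, 4, 5, 6, 7, 8, 9};
    private final Random rng;

    public ExpressionGenerator(long seed) { this.rng = new Random(seed); }

    public double randomExpression(int depth) {
        if (depth == 0) {
            return ALPHABET[rng.nextInt(ALPHABET.length)];
        }
        switch (rng.nextInt(5)) {
            case 0: return randomExpression(depth - 1) + randomExpression(depth - 1);
            case 1: return randomExpression(depth - 1) - randomExpression(depth - 1);
            case 2: return randomExpression(depth - 1) * randomExpression(depth - 1);
            case 3: return randomExpression(depth - 1) / randomExpression(depth - 1);
            // special clause: exponent towers are kept flat with depth-0 arguments
            default: return Math.pow(randomExpression(0), randomExpression(0));
        }
    }
}
```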

Closing remarks

While this method proves successful in finding interesting near-integers, it most often finds rather tedious ones. For instance, one of the near-integers I often encounter is the following:

3 - \dfrac{\pi}{6^7}

The program claims this is a near-integer close to 3 (obviously). This is unfortunate, but there is little to be done about it without some really sophisticated methods.

Gallery of near-integers

Here is a list of all the most interesting near-integers I’ve found using this method. Some of the expressions have been simplified using Wolfram|Alpha.

Expression: 4 \cdot e^{\pi} \cdot \pi ^{1 + e} \approx 6531

Error: \epsilon = 1.4 \cdot 10^{-5}

Expression: 2 \cdot \pi \cdot (e + \pi) \cdot (e + \pi^e) \approx 927

Error: \epsilon = 2.1 \cdot 10^{-5}

Expression: -3 \cdot \pi \cdot (1 + e - 5\cdot \pi) \approx 113

Error: \epsilon = 8.6 \cdot 10^{-5}

Expression: (6 + \pi) \cdot (6 - e) \approx 30

Error: \epsilon = 1.3 \cdot 10^{-4}

Expression: 3 \cdot e \cdot (4 - \pi) \approx 7

Error: \epsilon = 1.8 \cdot 10^{-4}

Expression: e + \pi + e^{\pi} \approx 29

Error: \epsilon = 5.6 \cdot 10^{-4}


Visualizing Polynomials in the Complex Plane

Generating the Images

The source code for this project can be found at my GitHub:

There are many ways of visualizing numbers in the complex plane, as seen in my previous post on Newton’s Fractals. This post will detail another method of getting pretty pictures from equations.

The basic idea of this method is to visualize where each complex number is mapped to by some polynomial. The color will depend on the direction the complex number goes after having been through the polynomial, and the distance it travels. Alright, let’s get formal!

Like the post on Newton’s Fractals we start off with a polynomial of some degree. Call this q.

To generate a fractal, we first specify the size of the image; call this [w, h], where w is the width in pixels and h is the height (also in pixels). Next up, we have to specify which numbers we’ll use, so we decide on a resolution r – we’ll only consider complex numbers a + bi, where a,b \in [-r,r].

Now, we’ll iterate over every single pixel p(x, y) in the image. The associated complex number is C = \dfrac{2 \cdot r \cdot x}{w} - r + \left(\dfrac{2 \cdot r \cdot y}{h} - r\right) \cdot i. To make it clearer why this is the case, take a look at the real and imaginary parts individually:

  • Re:  -r + 2 \cdot r \cdot \dfrac{x}{w}
  • Im: -r + 2 \cdot r \cdot \dfrac{y}{h}

Anyway, we now have a complex number C. Now, we’ll calculate D = q(C), where q was the polynomial we chose. Next up, we’ll construct a vector \vec{V} = <\mathbf{re}[C] - \mathbf{re}[D], \mathbf{im}[C] - \mathbf{im}[D]>.

From this vector, we’ll calculate the angle between it and a vector pointing to the right of the screen – this is just the \mathbf{atan2} function of the vector. So, we get an angle \theta = \mathbf{atan2}(\vec{V}).

For the next part, we’ll start by calculating a value \zeta = \dfrac{|\vec{V}|}{\sqrt{w^2 + h^2}}, and two values a = \sin(\zeta), and b = \cos(\zeta).

We’ll treat these three values \theta, a, and b as the three color components of the HSV-color type. We’ll now use the same conversion process as described in my post on psychedelic animations to convert HSV to RGB.

Then we’ll paint the pixel at p(x, y) with the color information.
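The whole per-pixel pipeline can be sketched in Java. The polynomial q is hard-coded to z^3 - 1 here (my example choice), the HSV-to-RGB step is delegated to java.awt.Color.HSBtoRGB, and clamping the saturation/value channels into [0, 1] is my own assumption:

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class PolynomialPainter {

    // q(z) = z^3 - 1, on (re, im) pairs
    static double[] q(double re, double im) {
        double re2 = re * re - im * im, im2 = 2 * re * im;        // z^2
        double re3 = re2 * re - im2 * im, im3 = re2 * im + im2 * re; // z^3
        return new double[]{re3 - 1, im3};
    }

    static int colorAt(int x, int y, int w, int h, double r) {
        double cRe = 2.0 * r * x / w - r, cIm = 2.0 * r * y / h - r; // pixel -> C
        double[] d = q(cRe, cIm);                                   // D = q(C)
        double vx = cRe - d[0], vy = cIm - d[1];                    // V
        double theta = Math.atan2(vy, vx);                          // angle vs +x axis
        double zeta = Math.hypot(vx, vy) / Math.hypot(w, h);
        float hue = (float) ((theta + Math.PI) / (2 * Math.PI));    // normalize to [0,1]
        float s = (float) Math.min(1, Math.max(0, Math.sin(zeta)));
        float v = (float) Math.min(1, Math.max(0, Math.cos(zeta)));
        return Color.HSBtoRGB(hue, s, v);
    }

    public static BufferedImage render(int w, int h, double r) {
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                img.setRGB(x, y, colorAt(x, y, w, h, r));
        return img;
    }
}
```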

If we do this process for all pixels, we get impressive images:





To see an album with 78 of these images, check out my Imgur-album here:

Analyzing the images

One of the first things I noticed about these images is obviously the yellow shape in the middle of the layers of circles. What I think is interesting is that if we reflect the yellow pixels in the central shape over the central x-axis of the image, they don’t land on another yellow pixel.

Put another way: if you cut the image horizontally and overlay the two halves, the yellow pixels in the central shape don’t touch – they are disjoint.

We can also state this (somewhat) formally. If a pixel p(x,y) is yellow (and in the central blob), then p(x, h-y) won’t be yellow.

The sad thing is that I don’t have the slightest clue why this happens. If someone smarter than I knows why the images look like they do, feel free to use the comment section down below to enlighten me!

Making the images even cooler

If we have an image I_{pre} of size [w, h] (one of the images we’ve generated), we construct a new image I_{post} of size [w, \dfrac{h}{2}], and for each pixel p(x, y) in I_{post}, we paint it with the color I_{pre}(x, y) \oplus I_{pre}(x, h-y), where I(x,y) represents the color information in image I at position (x,y).
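A sketch of this fold (XOR-ing the packed RGB values; using h-1-y rather than h-y so the index stays in bounds, which is my adjustment):

```java
import java.awt.image.BufferedImage;

public class MirrorXor {

    // I_post(x, y) = I_pre(x, y) XOR I_pre(x, h-1-y), channel-wise on packed RGB.
    public static BufferedImage fold(BufferedImage pre) {
        int w = pre.getWidth(), h = pre.getHeight();
        BufferedImage post = new BufferedImage(w, h / 2, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h / 2; y++)
            for (int x = 0; x < w; x++)
                post.setRGB(x, y, (pre.getRGB(x, y) ^ pre.getRGB(x, h - 1 - y)) & 0xFFFFFF);
        return post;
    }
}
```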

My original idea was that this would make it evident that the yellow parts are disjoint when reflected over the central x axis, but it turns out it doesn’t.

Original Image I_{pre}:


Processed Image I_{post}:


Clearly, this does not convey much information about the central yellow blobs, but it looks really, really cool.

Here are some more images that have been processed:

Original Image I_{pre}:


Processed Image I_{post}:


Original Image I_{pre}:


Processed Image I_{post}:


Unfortunately, I don’t have an album with lots of processed images, but if you want you can try running the code yourself.

Newton’s Fractals – Generating Images from Roots


Fractals have always amazed me. Every time I see a fractal, I get just as excited as the first time I saw the Mandelbrot set. In this post, we’ll go through a method that can generate beautiful structures from simple functions.

Link to the source code for this project:

For a real-valued function f(x), a root is any x which satisfies f(x) = 0. A trivial case might be x=0 for the function f(x) = x^2. Roots are also what is found with the famous quadratic formula \frac{-b \pm \sqrt{D}}{2 \cdot a}, where D = b^2 - 4 \cdot a \cdot c, for the quadratic equation f(x) = a \cdot x^2 + b \cdot x + c = 0.

Sometimes, however, a second-degree equation does not have any real solutions – that is, when D < 0. A quadratic equation will, however, always have two solutions (counted with multiplicity) in the complex plane.

The complex plane is an extension of the real number line to a plane. The major new addition is the introduction of the imaginary unit i, which is the unique number satisfying i^2 = -1. This allows us to find solutions to equations such as x^2 = -1, which otherwise wouldn’t have any real solutions. In the complex plane every single point has a unique complex number associated with it. A complex number consists of two parts, a real part and an imaginary part. This is usually stylized a + b \cdot i, where a is the real part and b is the imaginary part.

What does all this have to do with fractals?

To make a fractal, we need to assign a color code to each point in a plane, in this case the complex plane as discussed above. We decide on an initial size (w, h) and a resolution r. The size is the width w and height h of an image measured in pixels. The resolution r is the “zoom-factor”, if you will. If r=1, then we’ll deal with complex numbers a + b \cdot i in the range a,b \in [-1, 1]. And in general a,b \in [-r, r]. To make it clear how to combine roots and points in the complex plane to make fractals, we need to introduce a very important method.

Introducing Newton’s method for finding roots of a real-valued function (which fortunately extends nicely to the complex plane). For a function f(x) and an initial “guess” x_0, a better guess will be x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}, where f'(x) is the derivative of f(x). In fact, this generalizes to the recursive formula x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}. So how do we make fractals?

  • Have a function f(x), a size (w, h) and a resolution r.
  • Assign a unique color to each root to f(x).
  • For every point P(x, y) in the image, make an initial guess x_0 = \left(-r + \frac{x}{w} \cdot 2r\right) + \left(-r + \frac{y}{h} \cdot 2r\right) i.
  • Use Newton’s approximation method to approximate a root as long as |x_{n+1} - x_n| > \epsilon.
  • Color the pixel P(x, y) the color which is assigned to the root which has just been approximated.

In practice, we’ll only deal with polynomials, though, because they’re easy to evaluate and to differentiate.
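The root-coloring core can be sketched for the example polynomial f(z) = z^3 - 1 (my choice; the real program assigns a unique color per root index):

```java
// Newton's method on f(z) = z^3 - 1; each pixel's initial guess is iterated
// and classified by the cube root of unity it converges to.
public class NewtonFractal {
    static final double[][] ROOTS = {
        {1, 0},
        {-0.5,  Math.sqrt(3) / 2},
        {-0.5, -Math.sqrt(3) / 2},
    };

    // One Newton step: z - (z^3 - 1) / (3 z^2), via complex arithmetic on doubles.
    static double[] step(double re, double im) {
        double re2 = re * re - im * im, im2 = 2 * re * im;           // z^2
        double re3 = re2 * re - im2 * im, im3 = re2 * im + im2 * re; // z^3
        double nRe = re3 - 1, nIm = im3;                             // f(z)
        double dRe = 3 * re2, dIm = 3 * im2;                         // f'(z)
        double den = dRe * dRe + dIm * dIm;
        return new double[]{ re - (nRe * dRe + nIm * dIm) / den,
                             im - (nIm * dRe - nRe * dIm) / den };
    }

    // Index of the root the guess converges to, or -1 if it doesn't settle.
    public static int rootIndex(double re, double im) {
        for (int i = 0; i < 200; i++) {
            double[] next = step(re, im);
            re = next[0]; im = next[1];
        }
        for (int k = 0; k < ROOTS.length; k++)
            if (Math.hypot(re - ROOTS[k][0], im - ROOTS[k][1]) < 1e-6) return k;
        return -1;
    }
}
```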

Using this method gives us some stunning images:


Link to an album with more than 100 HD fractals:

As a little extra feature, I have also colored the points which converge faster darker, so as to create some variation in the images.

Inspiration for this post:

Generating Images from Regular Expressions

There is an update to this post, which includes colors:

Source code complete with Javadoc can be found at:

This post will detail a method for using regular expressions to generate images.

A regular expression is a sequence of characters representing a series of instructions which, for any input string, either match it or don’t. For a simple example, the regular expression


Only matches the string ssodelta. This makes sense, but we can do pretty nice things with regular expressions. If we for example write


Then the regular expression matches either the string ssodelta or the string delta. This is because the vertical bar | means or. We also have so-called quantifiers, the most important of which is the Kleene star *.


The parentheses group strings together, like regular parentheses in mathematical expressions such as (2+4)*3. The Kleene star * means zero or more, meaning that delta can occur zero or more times and the string will still match. A few matches include ssodelta, ssodeltadelta, ssodeltadeltadelta, and so on…

I won’t completely detail regular expressions, but if you want to read up on them, I suggest close-reading the Wikipedia article on the matter. The bottom line is that regular expressions are really cool. But how can they be used to generate images?

Consider a completely square image, which is divided into 4 quadrants:


We can refer to each of these four quadrants by their numbers 0,1,2,3, and we can further divide these quadrants into four numbers, and label them accordingly.


But what does this have to do with regular expressions? Well, we can have a regular expression which only uses the characters 0,1,2,3, and then every string addresses an area of the image. For example, the string 022 represents the area marked with red:


We can decide on a resolution n, generate all strings of length n, run them through a regular expression, and paint each matching cell black. For example, the regular expression (0|2)(0+1+2+3)* generates the following image (for any resolution >= 2):


This is because the regular expression only matches if the cell begins with a 0 or a 2, which is why the 0th and the 2nd quadrants are painted black, and the rest aren’t.
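This generator can be sketched as follows. The digit-to-quadrant layout (0 top-left, 1 top-right, 2 bottom-left, 3 bottom-right) is my assumption, and Java’s | stands in for the post’s + union operator:

```java
import java.util.regex.Pattern;

// Every length-n string over {0,1,2,3} names one cell of a 2^n x 2^n grid,
// most significant quadrant digit first; the cell is painted iff the string
// matches the regex.
public class RegexImage {
    public static boolean[][] generate(String regex, int n) {
        Pattern p = Pattern.compile(regex);
        int size = 1 << n;
        boolean[][] cells = new boolean[size][size];
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                StringBuilder addr = new StringBuilder();
                for (int bit = n - 1; bit >= 0; bit--) {
                    int qx = (x >> bit) & 1, qy = (y >> bit) & 1;
                    addr.append(2 * qy + qx);   // digit at this subdivision level
                }
                cells[y][x] = p.matcher(addr).matches();
            }
        }
        return cells;
    }
}
```

For example, generate("(0|2)(0|1|2|3)*", n) paints exactly the cells whose address starts with 0 or 2, reproducing the checkerboard above.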

We aren’t limited to boring checkers though, if we feed the generator the regular expression (1+2+3)*0(1+3)*0(0+1+2+3)*:


Notice the self-similarity in the image, that’s because this image is actually a fractal.

Here is a gallery of other images generated using this technique:


Notice how often triangles appear in these images.

Although the .java file has an included Javadoc, here’s how you use it to generate images:

Generator g = new Generator(RESOLUTION, IMAGE_SIZE);
g.generateAndExport("REGULAR_EXPRESSION", "EXPORTED_FILE_PATH.png"); //Must be a .png file

If you prefer Python code, check out Jack Morris’ post on the same subject, where he implements it in Python:

Generating Mozart from Markov Chains

Source code: 

This post will detail a method of generating music from sampling MIDI-files. We’ll introduce an algebraic structure called a Markov chain, and use it to make our own MIDI-files.

A Markov chain is a system of nodes connected by edges. Each edge has an associated probability, which describes the probability that the system progresses to that node in the next step. Consider this graphical example:


This is a Markov chain composed of two nodes, E and A, and four edges: E to E, E to A, A to A, and A to E. If we start on node E, there is a 0.7 probability that we will progress to node A, and a 0.3 probability that we stay on node E. Sounds simple, right?

The basic concept really is simple, and that’s all we need to utilize this system in our program. There is in fact a lot of advanced mathematics regarding Markov chains, but I won’t detail it here.

Now, consider a system with twelve nodes, one for each musical note, and 144 edges connecting all the nodes to each other. But where do we get these probabilities from?

Introducing sampling. The program contains a module called midcrawler, which goes to a website, crawls it for .mid files, and constructs a large Markov chain representing the MIDI files it has sampled. I have also included the feature of loading/saving these samples, so it’s not necessary to re-sample the entire webpage for every generated piece. From this, the program starts on a random note and traverses the system, generating the song step by step.

The rhythm, however, is not generated – upon using the program, you need to supply it with some rhythm information. There are some obvious problems with this technique: it is not limited to a scale, so it may sometimes use a tritone or be very dissonant, making it sound less than good.
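The note chain itself can be sketched like this (the counts stand in for what midcrawler extracts from the MIDI files; the normalization and the uniform fallback for unseen notes are my assumptions):

```java
import java.util.Random;

// A 12x12 note-to-note Markov chain: transition counts are normalized into
// probabilities, then traversed to emit a note sequence.
public class NoteChain {
    private final double[][] prob = new double[12][12];
    private final Random rng;

    public NoteChain(int[][] counts, long seed) {
        this.rng = new Random(seed);
        for (int i = 0; i < 12; i++) {
            int total = 0;
            for (int c : counts[i]) total += c;
            for (int j = 0; j < 12; j++)
                prob[i][j] = total == 0 ? 1.0 / 12 : (double) counts[i][j] / total;
        }
    }

    // Roll the dice on row 'note' and return the successor note.
    public int next(int note) {
        double roll = rng.nextDouble(), acc = 0;
        for (int j = 0; j < 12; j++) {
            acc += prob[note][j];
            if (roll < acc) return j;
        }
        return 11; // guard against floating-point rounding
    }

    public int[] generate(int start, int steps) {
        int[] song = new int[steps];
        int note = start;
        for (int i = 0; i < steps; i++) {
            note = next(note);
            song[i] = note;
        }
        return song;
    }
}
```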

I used the program to sample the following page containing a bunch of Mozart .mid files: and made the following .mid file:

Personally however, I’m not able to play this MIDI file, so I converted it to an MP3 file, which everyone can play:

It doesn’t sound very Mozart-ish, but it’s very interesting to listen to, and I’m surprised that it doesn’t sound worse.

Here is an explanation of how to use the program, given the source code:

Crawler c = new Crawler();
Parser p = new Parser(c.getContents());
Generator g = new Generator(p.getNoteMatrix());

Dichromacy, or How Dogs View the World

The original source code for this post was wrong, as pointed out by a friendly soul in the comments section. I have since updated the images in this post to reflect that fact. /ssodelta

This post was updated July 1st, 2016. The new source is based on the DELTA Colour Library. The full code for the protanopia-filter is:

public class ProtanopiaFilter implements DImageFilter {

    public Colour filter(int x, int y, Colour[][] raster) {
        Colour c = raster[x][y];
        double[] lms = c.getData();
        double Lp = 2.02344*lms[1] - 2.52581*lms[2],
               Mp = lms[1],
               Sp = lms[2];
        return new Colour(new double[]{Lp, Mp, Sp}, ColourSpace.LMS);
    }
}

Most humans see a whole bunch of colors – red, green, blue, violet, maroon etc. – but some people are unfortunately born color-blind and thus have trouble differentiating hues which regular people find easy to tell apart. One example is red-green color blindness, which I’ll assume most people have heard of. Basically, if you’re red-green color blind, this image looks uniform:

This is because the human eye interprets color using three color receptors – red, green and blue. However, in some people, one or more of these receptors don’t work as well, making it harder to tell certain colors apart. When you only have two color receptors, you have a condition called dichromacy. Some animals, however, are permanent dichromats – this includes our canine friends – dogs are dichromats. There are several kinds of dichromacy, protanopia is the most common amongst humans, and there is a general consensus that this is how dogs view the world. Consider this rainbow:


If you have protanopia (or are a dog), this rainbow would look something like this instead:


Notice how similar the blue and green colors look to a dog – no wonder it can’t spot a blue ball in green grass as easily as we can. Anyway, the above protanopia rainbow was rendered using a program I wrote myself. I’ll account for how to convert regular RGB color codes to their protanopia representation in the following section.

To be able to perform this conversion, we first need to convert from RGB color space into LMS color space. LMS color space is another way to represent colors, based on the eye’s responses at long, medium and short wavelengths. Remember how the human eye has three receptors? Each color channel in LMS represents one of these receptors:

Human color vision and the LMS color space.

To do this, we use the following equation: (reference)


This is just simple matrix multiplication, and we now have another triplet LMS representing our pixel. Next up, we’ll convert this to protanopia vision using the following formula; L_p, M_p, S_p all represent the protanopia version of the channel:


Notice how the M and S channels remain the same. For other types of color blindness, other channels would change instead. Finally, we’ll convert back from LMS to RGB using the following formula:


I have made two classes in Java that do this exact conversion, which I have included in the link at the top of this post – plus a wrapper class for BufferedImage, which eases the conversion. Using this formula, we can imagine how dogs view different scenarios:
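The full round trip can be sketched in code. The matrices below are the ones commonly used for this simulation (after Viénot, Brettel and Mollon; note the L' row matches the 2.02344 and 2.52581 coefficients in the filter class above), and gamma correction is ignored for simplicity:

```java
// RGB -> LMS -> protanopia -> RGB, on channel values in [0, 1].
public class Protanopia {
    static final double[][] RGB2LMS = {
        {17.8824,    43.5161,    4.11935},
        { 3.45565,   27.1554,    3.86714},
        { 0.0299566,  0.184309,  1.46709}
    };
    static final double[][] LMS2RGB = {
        { 0.0809444479,   -0.130504409,    0.116721066},
        {-0.0102485335,    0.0540193266,  -0.113614708},
        {-0.000365296938, -0.00412161469,  0.693511405}
    };

    static double[] mul(double[][] m, double[] v) {
        return new double[]{
            m[0][0] * v[0] + m[0][1] * v[1] + m[0][2] * v[2],
            m[1][0] * v[0] + m[1][1] * v[1] + m[1][2] * v[2],
            m[2][0] * v[0] + m[2][1] * v[1] + m[2][2] * v[2]
        };
    }

    public static double[] simulate(double r, double g, double b) {
        double[] lms = mul(RGB2LMS, new double[]{r, g, b});
        // The protanope's missing L channel is reconstructed from M and S:
        double[] lmsP = {2.02344 * lms[1] - 2.52581 * lms[2], lms[1], lms[2]};
        return mul(LMS2RGB, lmsP);
    }
}
```

A nice property of these coefficients is that achromatic colors are preserved: white in, (approximately) white out.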

A colorful meadow (credit):


Beautiful tropical birds (credit):birdout

Personally, I’m glad I have full color vision, as the world is much more beautiful with a wide range of colors. Sometimes, however, I’m bummed out by the fact that I’m not a mantis shrimp – they have 16 color receptors, completely dwarfing our 3. I don’t have the brain power to even imagine 4 color receptors, so 16 seems completely mind-boggling to me.

Better Animations using Trigonometry

Note: This is a follow-up post to my post on creating psychedelic animations using trigonometry.

In my last post, I accounted for a method for creating psychedelic animations based on simple trigonometry. In this post, I’ll expand upon that method using the HSV color scheme rather than the traditional RGB color scheme.

The problem with the old RGB-based method was that the color channels were too easily distinguishable from each other. If you recall some of the old gifs:


Although this gif can easily be called psychedelic, it’s far too easy to distinguish the red stripes – the colors don’t blend very well – so I thought to use HSV (hue, saturation, value) to determine the colors. I’ll use the same method of having three functions H(x,y,t), S(x,y,t), V(x,y,t), but using HSV instead of RGB; my hypothesis is that this’ll create far better-looking gifs.

I’m using the conversion algorithm found on the wikipedia page for HSV to convert HSV to RGB, which allows me to place the pixels on an image in Java:
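As a sketch of that step, Java’s built-in java.awt.Color.HSBtoRGB implements the same conversion, with all three channels expected in [0, 1]:

```java
import java.awt.Color;

// Thin wrapper around Java's HSB-to-RGB conversion, standing in for the
// Wikipedia algorithm; returns a packed ARGB int ready for BufferedImage.setRGB.
public class HsvPixel {
    public static int toRGB(double h, double s, double v) {
        return Color.HSBtoRGB((float) h, (float) s, (float) v);
    }
}
```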


In the last post, I had to manually discover fun-looking functions; this time around, I decided to implement the random cosine waves used in my post on smooth curves and games, because I theorized that random cosine waves were better at creating fun-looking functions than I am.

The first .gif I created with this method was the following (which happens to be a perfect loop):


Modelling the Motion of a Spring using Differential Equations

This is a translated excerpt from a highschool project (SRO) on this very topic.

In this post, I’ll account for a method for modelling the motion of a weight hanging freely from a spring. We’ll only consider the case where the damping factor of the spring is 0, because that’s easier to model (in a future post, we’ll consider the case where the damping factor is greater than 0).

Looking at a weight hanging in a spring, we get the following diagram:


Since the weight is in equilibrium, the upward force from the spring must be equal to the force from the gravity. The force of gravity acting on the weight is just F = m \cdot g. According to Hooke’s Law, the force of a spring is given as F = k \cdot s, where k is some constant and s is the distance the spring has been stretched. Using this knowledge we get the identity m \cdot g = k \cdot s, which can also be expressed as m \cdot g - k \cdot s = 0.

We observe that at any given time t, the upward force of the spring is k \cdot (s + x), where s is the distance to the equilibrium point, and x is the extra distance the weight has moved relative to s. This means the net force on the weight at any given time is m \cdot g - k \cdot (s + x), which is the same as m \cdot g - k \cdot s - k \cdot x, and using our previous identity we simply get -k \cdot x.

We know the force on an object at any given time is F = m \cdot a. This can also be expressed as F = m \cdot \frac{d^2 x}{dt^2}. This allows us to derive that m \cdot \frac{d^2 x}{dt^2} = -k \cdot x. Simplifying this expression, we get \frac{d^2 x}{dt^2} + \frac{k}{m} \cdot x = 0.

This is just a linear second-order differential equation, which we can solve to get the equation describing the motion of the spring. We get s(t) = c_1 \cdot \cos(\sqrt{\frac{k}{m}} \cdot t) + c_2 \cdot \sin(\sqrt{\frac{k}{m}} \cdot t).

Great, we now have a method for describing the weight’s motion, but what about those nasty constants? We observe that s(0) = x, since the weight has been stretched a distance of x from its equilibrium when released. This means that c_1 \cdot \cos(\sqrt{\frac{k}{m}} \cdot 0) + c_2 \cdot \sin(\sqrt{\frac{k}{m}} \cdot 0) = x. Because \sin(0) = 0, and because \cos(0) = 1, we get c_1 = x.

Excellent, now we’ve found c_1, but what about c_2? To find it, we look at the derivative of the function. We get \frac{d}{dt} s(t) = - c_1 \cdot \sqrt{\frac{k}{m}} \cdot \sin(\sqrt{\frac{k}{m}} \cdot t) + c_2 \cdot \sqrt{\frac{k}{m}} \cdot \cos(\sqrt{\frac{k}{m}} \cdot t). Let’s look at the initial velocity of the weight: \frac{d}{dt} s(0) = - c_1 \cdot \sqrt{\frac{k}{m}} \cdot \sin(\sqrt{\frac{k}{m}} \cdot 0) + c_2 \cdot \sqrt{\frac{k}{m}} \cdot \cos(\sqrt{\frac{k}{m}} \cdot 0). Simplifying this expression, we get \frac{d}{dt} s(0) = c_2 \cdot \sqrt{\frac{k}{m}}, so c_2 = \frac{\frac{d}{dt} s(0)}{\sqrt{\frac{k}{m}}}.

We now have a complete equation describing the motion of the weight:

s(t) = x \cdot \cos(\sqrt{\frac{k}{m}} \cdot t) + \frac{\frac{d}{dt} s(0)}{\sqrt{\frac{k}{m}}} \cdot \sin(\sqrt{\frac{k}{m}} \cdot t)
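As a sanity check, the closed-form solution can be coded up and tested against its initial conditions (v0 stands for \frac{d}{dt} s(0); the class name is mine):

```java
// s(t) = x*cos(w*t) + (v0/w)*sin(w*t) with w = sqrt(k/m),
// which satisfies s(0) = x and s'(0) = v0.
public class Spring {
    public static double s(double t, double x, double v0, double k, double m) {
        double w = Math.sqrt(k / m);   // angular frequency
        return x * Math.cos(w * t) + (v0 / w) * Math.sin(w * t);
    }
}
```

The motion is periodic with period 2\pi / \sqrt{k/m}, so the weight returns to its starting position after each full period.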

This is part of a high school project, which can be read here (in Danish):

Now, time for the fun part. We model this in Flash, and get the following simulation:

Here is a screenshot from the simulation:


Psychedelic Animations using Trigonometry

We will in this post showcase a method for generating psychedelic animations using the basic trigonometric functions sin(x), cos(x) and tan(x).

By animation, we mean a smoothly blending series of images of a given size. Suppose we just want to create a single image; we could define a function of two variables f(x, y), where x and y are the coordinates of a given point on the image. The function’s value then varies with the x and y coordinates, giving a smooth variation across the image.

Here is the image representing f(x, y) = x + y:


The brightness of a pixel is 256 - f(x, y) mod 256

This is nice. It’s a smooth image, however, it’s not very “psychedelic”. To accomplish this, we construct not one, but three functions R(x, y), G(x, y) and B(x, y) – one for each color channel. This produces the following image instead:


R(x, y) = sin(x+y)   G(x, y) = cos(x+y)  B(x, y) = sin(x) + cos(y)
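Both per-pixel rules can be sketched in code: the grayscale brightness formula and the three-channel version from the caption (mapping the trig outputs from [-1, 1] to [0, 255] is my assumption about the scaling):

```java
public class TrippyPixels {

    // Grayscale rule: brightness = 256 - f(x, y) mod 256, with f(x, y) = x + y.
    public static int gray(int x, int y) {
        int b = (256 - Math.floorMod(x + y, 256)) % 256;
        return (b << 16) | (b << 8) | b;   // equal R, G, B
    }

    static int channel(double v) {          // rescale [-1, 1] -> [0, 255]
        return (int) Math.round((v + 1) / 2 * 255);
    }

    // Three-channel rule: R = sin(x+y), G = cos(x+y), B = sin(x) + cos(y),
    // with the B sum halved so it also lands in [-1, 1] before rescaling.
    public static int rgb(double x, double y) {
        int r = channel(Math.sin(x + y));
        int g = channel(Math.cos(x + y));
        int b = channel((Math.sin(x) + Math.cos(y)) / 2);
        return (r << 16) | (g << 8) | b;
    }
}
```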

This is perfect! This is trippy enough for our purposes; however, it’s not very “animated” – it’s a single image, so it doesn’t really count as an animation. To make it one, we introduce another independent variable t (for time), so we get three functions R, G, B dependent on three variables x, y, t. Introducing a time variable produces animations like the following:


Fewer colors here than in the actual version, due to the .GIF file format

This is perfect! We can now construct animated psychedelic animations using math!

Here’s a link to a YouTube channel I made, which showcases a bunch of these:

Future ideas: Create the same program, but using HSB instead of RGB. My hypothesis is that this will create better, more trippy images.