# Modelling randomness with Markov chains

## Chain of states

A Markov chain is a set of states, with transitions between states governed by a transition matrix. The defining property is that the probability of moving to the next state depends only on the current state, not on the history of how you got there.

In this way it's possible to bring some structure into randomness, so to speak. Watch this:

## Simple example: Heads or tails

A simple Markov chain is the old coin flip. There are two states, *heads* and *tails*, and a matrix

```
[
[1/2, 1/2],
[1/2, 1/2]
]
```

in which the entry `m_ij` is the probability that, if you are in state `i`, you'll go to state `j`.

People tend to believe that after heads, tails is more likely (the gambler's fallacy). That belief can be modelled with

```
[
[1/3, 2/3],
[2/3, 1/3]
]
```

## Simulate something. A dog for instance

The coin flip doesn't really need Markov, but a simulated dog may benefit. Picture a dog that either sleeps, eats, barks, or pees. It never barks right after eating, but very likely barks just before. The matrix could be something like:

```
[
[0.7, 0.1, 0.1, 0.1],
[0.5, 0, 0, 0.5],
[0.5, 0.3, 0.1, 0.1],
[0, 0, 0, 1]
]
```

Note that if the dog sleeps, it's likely to keep sleeping: sleep is a relatively stable state.
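The dog can be run as a sketch in Python; the `simulate` helper and the state ordering (sleeps, eats, barks, pees) are my assumptions, with the matrix copied from the text:

```python
import random

# State order is assumed: rows and columns follow this list.
STATES = ["sleeps", "eats", "barks", "pees"]
MATRIX = [
    [0.7, 0.1, 0.1, 0.1],  # sleeping is sticky: 0.7 chance to keep sleeping
    [0.5, 0.0, 0.0, 0.5],  # after eating, the dog never barks
    [0.5, 0.3, 0.1, 0.1],  # barking often leads into eating
    [0.0, 0.0, 0.0, 1.0],  # as written in the text, peeing is an absorbing state
]

def simulate(start, steps):
    """Return the sequence of state names visited, starting from `start`."""
    i = STATES.index(start)
    path = [start]
    for _ in range(steps):
        i = random.choices(range(len(STATES)), weights=MATRIX[i])[0]
        path.append(STATES[i])
    return path

print(simulate("sleeps", 8))
```

Because row 2 gives "barks" a probability of 0 after "eats", the rule "never barks after eating" holds in every simulated run, not just on average. Note also that the last row makes "pees" absorbing: once there, the chain never leaves.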

## Simulate movement

Another simple use case is movement. While moving an object over a canvas, the states could be {north, east, south, west, rest}. A robust character follows the rule 'never turn 180 degrees, and keep going', while a whimsical character follows 'do not walk in long straight lines'.
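The 'never turn 180 degrees, and keep going' rule translates directly into zeros and a dominant diagonal. A hypothetical matrix for the robust character (the numbers are illustrative, not from the original):

```python
STATES = ["north", "east", "south", "west", "rest"]

# Each row is the current heading; the opposite direction gets
# probability 0 (never turn 180 degrees) and continuing straight
# gets the largest weight (keep going).
MATRIX = [
    # N     E     S     W     rest
    [0.6,  0.15, 0.0,  0.15, 0.1],  # from north: never south
    [0.15, 0.6,  0.15, 0.0,  0.1],  # from east: never west
    [0.0,  0.15, 0.6,  0.15, 0.1],  # from south: never north
    [0.15, 0.0,  0.15, 0.6,  0.1],  # from west: never east
    [0.25, 0.25, 0.25, 0.25, 0.0],  # from rest: pick any direction
]

# Sanity check: every row is a probability distribution.
for row in MATRIX:
    assert abs(sum(row) - 1.0) < 1e-9
```

The whimsical character would do the opposite: shrink the diagonal so the chain rarely stays on the same heading for long.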

## Ring animation

In this animation I want to move the rings outward in an irregular way.

Excellent! This is right up my alley. Been experimenting with procedural everythings lately, and this is simple yet powerful, as is ideal.

Very cool idea, thanks for sharing!