- Second year
- Individual project
The 'Tetris' project is part of the Computing 2 module taught in the second year, which focuses on Python implementation as well as general computing techniques such as recursion, asymptotic running times, greedy algorithms, graph traversal and more. The aim of the project was for each student to design an algorithm that could quickly and accurately place Tetris pieces in a given matrix.
The problem presented was to write an algorithm that fills a randomly generated n × n region (the target) using the specific Tetris pieces provided. The density of the target region was also varied, from 20% (0.2) up to 80% (0.8).
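As a rough illustration of the setup, a target region can be represented as a 2D list where '1' marks a cell to be filled and '0' an empty cell. This is a minimal sketch with a hypothetical `make_target` helper; the coursework supplied its own generator.

```python
import random

def make_target(n, density, seed=0):
    """Build an n x n target matrix: 1 marks a cell to fill, 0 an empty cell.

    Hypothetical helper for illustration only; each cell is independently
    filled with probability `density`.
    """
    rng = random.Random(seed)
    return [[1 if rng.random() < density else 0 for _ in range(n)]
            for _ in range(n)]

target = make_target(10, 0.2)
```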
I chose to use a 'greedy' algorithm primarily because its runtime is simple to analyse.
A 'greedy' algorithm works by finding the local optimum at each step, in the hope that these local choices eventually combine into a globally optimal solution to the problem.
Example of a 10x10 target matrix and the corresponding 'perfect' solution
First, my algorithm scans through the target matrix and records the coordinates of every cell where a single cube is present, marking each such cell as a '1'.
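The scanning step can be sketched as a simple pass over the matrix that collects the coordinates of every '1'. The function name `find_filled_cells` is my own choice for illustration, not necessarily the one used in the project.

```python
def find_filled_cells(target):
    """Return the (row, col) coordinates of every cell marked '1'."""
    return [(r, c)
            for r, row in enumerate(target)
            for c, value in enumerate(row)
            if value == 1]

grid = [[0, 1, 0],
        [1, 1, 0],
        [0, 0, 1]]
# find_filled_cells(grid) -> [(0, 1), (1, 0), (1, 1), (2, 2)]
```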
Simplified example of coordinate system
Once the locations of all the '1's have been established, the algorithm decides which piece is appropriate for each specific location. This is the 'greedy' aspect of the algorithm: it does not consider how a placement may affect surrounding solutions, it simply finds the best local solution.
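One way to sketch this greedy choice, under the scoring described later (+1 for a correct block, −1 for an excess block), is to try each piece anchored at a location and keep the one whose four blocks cover the most target cells. The piece set and offset encoding here are illustrative assumptions, not the project's actual representation.

```python
# Each piece as four (dr, dc) offsets from an anchor cell
# (a small subset of tetromino shapes, for illustration).
PIECES = {
    "O": [(0, 0), (0, 1), (1, 0), (1, 1)],
    "I": [(0, 0), (0, 1), (0, 2), (0, 3)],
    "L": [(0, 0), (1, 0), (2, 0), (2, 1)],
}

def best_piece(target, r, c):
    """Greedily pick the piece whose four blocks best cover
    target cells ('1's) when anchored at (r, c)."""
    n = len(target)
    best, best_score = None, None
    for name, offsets in PIECES.items():
        cells = [(r + dr, c + dc) for dr, dc in offsets]
        if not all(0 <= rr < n and 0 <= cc < n for rr, cc in cells):
            continue  # piece would fall outside the matrix
        # +1 for covering a '1', -1 for covering anything else
        score = sum(1 if target[rr][cc] == 1 else -1 for rr, cc in cells)
        if best_score is None or score > best_score:
            best, best_score = name, score
    return best, best_score
```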
Single piece placement for a specific location
To further improve the accuracy of the algorithm, I implemented an optimisation loop. Coordinates where pieces have already been placed are changed from '1' to '2' to indicate that the space is full; all remaining '1's indicate empty spaces, which could not be covered on the first pass because there was no room for a whole piece. However, by placing a piece so that three of its blocks land on '1's and one excess block lands on a '0' (every piece is made up of four blocks), accuracy increases by a net two blocks each time: every correct block placed scores +1 and every excess block scores −1, so +3 − 1 = +2. The algorithm continues to optimise until there are no pieces or places remaining.
Single piece placement for a specific location. Yellow blocks represent +1, red -1.
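The scoring arithmetic behind the optimisation step can be made concrete with a small sketch; the function name `net_score` is illustrative, but the +1/−1 scheme matches the one described above.

```python
def net_score(covered_values):
    """Score a placement: +1 for each correct block (covers a '1'),
    -1 for each excess block (covers a '0')."""
    return sum(1 if v == 1 else -1 for v in covered_values)

# A piece covering three remaining '1's and one empty '0':
net_score([1, 1, 1, 0])  # +3 - 1 = +2, a net gain of two correct blocks
```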
The algorithm's performance is tested using a variety of matrix sizes and densities, from a 10x10 with 20% density to a 1000x1000 with 80% density. The image below shows the effect of changing densities on the matrix. To give an idea of the scale of the problem, a 1000x1000 matrix with 80% density means the algorithm must fill 800,000 individual cells.
To understand whether it was density or size that was affecting performance, two tests were run: one keeping size constant while varying density, and vice versa.
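The two experiments can be sketched as a small benchmarking harness that sweeps one variable while holding the other fixed. `solver` and `make_target` are placeholders standing in for the project's actual functions.

```python
import time

def benchmark(solver, make_target, sizes, densities):
    """Time `solver` over every (size, density) pair.

    `solver(target)` and `make_target(n, density)` are assumed
    interfaces; the real project code may differ.
    """
    results = []
    for n in sizes:
        for d in densities:
            target = make_target(n, d)
            start = time.perf_counter()
            solver(target)
            elapsed = time.perf_counter() - start
            results.append((n, d, elapsed))
    return results

# Hold size constant, vary density:
#   benchmark(solver, make_target, [100], [0.2, 0.4, 0.6, 0.8])
# Hold density constant, vary size:
#   benchmark(solver, make_target, [10, 100, 1000], [0.5])
```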
Constant size, varying density
Constant density, varying size
The graphs above show that, at constant size, the algorithm performs with high accuracy (86% avg.) and low runtime (as little as 0.05 seconds) at both especially low and especially high densities. At constant density, the algorithm's runtime increases almost linearly with size, while accuracy stays consistent at around 84.5 ± 1%. The maximum overall accuracy achieved was 90%, and the minimum runtime was ≈ 0 s.
The full code can be found on my GitHub.