The purpose of this project was to get acquainted with the process of reformatting data and algorithms for execution on graphics processing units. Compared to a standard CPU, a modern GPU has on the order of 200x as many cores, making it far better suited to highly parallel tasks. One such task is Conway's Game of Life, a cellular automaton. In this "game", creatures (represented by tiles in a grid) live, die, and are born based on how many neighbors they have. Because every cell can be updated independently, the simulation parallelizes fully, making it a better fit for GPU processing than CPU processing. It is also a simple algorithm, which makes it ideal as an initial exercise.
Translating the Game of Life to the GPU is actually quite simple. We take the standard game board (an array) and turn it into a texture, which we feed as input to a shader that outputs the next game state. The game logic lives in that shader: for each cell, we sample the board texture at the 8 neighboring pixels (the creature's 8 neighbors), convert each sampled value to a boolean (alive/dead), and count how many are true to get the cell's living-neighbor count. From that count, the standard rules decide whether the cell lives in the next state. Each creature can be processed in parallel by a different shader core, allowing hundreds of cells to be simulated simultaneously.
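As a rough CPU reference for what each shader invocation computes, here is a sketch in Python. The function name is mine, and the toroidal wrap-around at the edges is an assumption; the actual shader might clamp or ignore border cells instead.

```python
def life_step(grid):
    """Compute one Game of Life step on a 2D grid of booleans.

    Mirrors what each shader invocation does for one cell: sample the
    8 neighbors, count the live ones, and apply the birth/survival rules.
    Assumption: edges wrap around (toroidal board).
    """
    rows, cols = len(grid), len(grid[0])
    nxt = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live cells among the 8 surrounding positions.
            live = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # Standard rules: a live cell survives with 2 or 3 live
            # neighbors; a dead cell is born with exactly 3.
            nxt[r][c] = live in (2, 3) if grid[r][c] else live == 3
    return nxt
```

On the GPU, the two nested loops disappear: each (r, c) pair becomes one shader invocation, which is exactly why the algorithm parallelizes so well.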
To make this a bit prettier, I had my 3D visualization render each cell as a cube: red if no creature lived in that space, green if it contained a living creature. While it's not fancy, it definitely looks more interesting than the simple 2D visualization. (I would have loved to make it flashier, but I didn't have enough time; too much work for other classes.) The cubes were drawn with glDrawArraysInstanced, so all of them could be submitted in a single draw call, and the visualization's vertex shader positioned each cube based on its instance ID. (The instance ID is also how the shader looked up whether the creature at that cell was alive or dead.)
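The instance-ID bookkeeping can be sketched like this in Python. The function name, the row-major layout, and the RGB tuples are my assumptions; in the real program this arithmetic lives in the vertex shader, driven by gl_InstanceID.

```python
def cube_appearance(instance_id, grid_width, alive):
    """Map a flat instance index to a cube's grid position and color.

    Mirrors what the vertex shader does with gl_InstanceID: derive the
    (col, row) cell from the flat index, then pick green for a living
    creature and red for an empty cell. Row-major layout is assumed.
    """
    col = instance_id % grid_width
    row = instance_id // grid_width
    color = (0.0, 1.0, 0.0) if alive[row][col] else (1.0, 0.0, 0.0)
    return (col, row), color
```

Because every cube derives its own position and color this way, the draw call needs no per-cube vertex data beyond the shared cube mesh and the board state.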