This project aimed to master the new tessellation shaders in GLSL 4.0+, as well as to develop at least one interesting post-process shader. Videos are at the bottom.
First, we create an icosahedron. This gets sent to the GPU in patches (20 patches total, one per face, of 3 verts each). GL_PATCHES is the only primitive type that tessellation shaders can take in. Then a tessellation shader works its magic. If you want more information, go here, because it does a far better job of explaining it than I would. After the mesh has been tessellated, another shader takes the newly constructed vertices and decides where they should go (otherwise they just sit in the same plane as their original patch). In this case, I did it with two different methods. The first was to do it via a texture:
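The two stages above can be sketched roughly as follows. This is my reconstruction, not the project's actual source: the uniform names, the equal-spacing mode, the spherical UV mapping, and the 0.2 displacement scale are all assumptions. The app side would issue `glPatchParameteri(GL_PATCH_VERTICES, 3)` and draw the 20 faces with `glDrawArrays(GL_PATCHES, 0, 60)`.

```glsl
// --- tessellation control shader: choose how finely to subdivide each face ---
#version 400
layout(vertices = 3) out;
uniform float uTessLevel;   // assumed uniform

void main() {
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    if (gl_InvocationID == 0) {
        gl_TessLevelInner[0] = uTessLevel;
        gl_TessLevelOuter[0] = uTessLevel;
        gl_TessLevelOuter[1] = uTessLevel;
        gl_TessLevelOuter[2] = uTessLevel;
    }
}

// --- tessellation evaluation shader: place the newly constructed vertices ---
#version 400
layout(triangles, equal_spacing, ccw) in;
uniform mat4 uMVP;              // assumed
uniform sampler2D uHeightMap;   // the displacement texture

void main() {
    // Barycentric blend of the patch corners...
    vec3 p = gl_TessCoord.x * gl_in[0].gl_Position.xyz
           + gl_TessCoord.y * gl_in[1].gl_Position.xyz
           + gl_TessCoord.z * gl_in[2].gl_Position.xyz;
    // ...pushed onto the unit sphere so the icosahedron rounds out,
    p = normalize(p);
    // ...then displaced along the spherical normal by the texture value.
    vec2 uv = vec2(atan(p.z, p.x) / (2.0 * 3.14159265) + 0.5,
                   asin(clamp(p.y, -1.0, 1.0)) / 3.14159265 + 0.5);
    float h = texture(uHeightMap, uv).r;
    gl_Position = uMVP * vec4(p * (1.0 + 0.2 * h), 1.0);
}
```

Without that evaluation step, the generated vertices would all sit in the plane of their original face, which is why the second shader stage is where the interesting work happens.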
There were a number of ways of visualizing the computed data, but I made only a single material. It's actually an environment map used as part of the ambient component, done with spherical harmonics lighting. If you look at the images above, you'll notice that all faces with the same normal look the same (all my normals are per-face; I didn't want to do a second pass for per-vertex normals). That's because the only lighting component is the SH term, which gives a very nice, though unrealistic, visualization.
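For reference, a minimal sketch of that SH ambient term, assuming the nine irradiance coefficients were projected from the environment map ahead of time (the uniform and varying names are mine). Because the result depends only on the normal, every face with the same per-face normal shades identically, exactly as described above:

```glsl
#version 400
uniform vec3 uSH[9];    // env-map SH coefficients: L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22
in vec3 vFaceNormal;    // per-face normal from the mesh pass
out vec4 fragColor;

void main() {
    vec3 n = normalize(vFaceNormal);
    // Standard 9-coefficient irradiance reconstruction constants.
    const float c1 = 0.429043, c2 = 0.511664,
                c3 = 0.743125, c4 = 0.886227, c5 = 0.247708;
    vec3 e = c4 * uSH[0]
           + 2.0 * c2 * (uSH[3] * n.x + uSH[1] * n.y + uSH[2] * n.z)
           + 2.0 * c1 * (uSH[4] * n.x * n.y + uSH[5] * n.y * n.z + uSH[7] * n.x * n.z)
           + c3 * uSH[6] * n.z * n.z - c5 * uSH[6]
           + c1 * uSH[8] * (n.x * n.x - n.y * n.y);
    fragColor = vec4(e, 1.0);
}
```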
The gem of this project, however, is VSAO/SSAO (a combination of View Space Ambient Occlusion and Screen Space Ambient Occlusion). Here's an example of the difference with it applied (mouse-over to change image):
And one of the checker ball, which is easier to see:
Now, for how the AO works, and why it's not simply SSAO. Normal SSAO uniformly samples in a "sphere" around every pixel by sampling from the depth buffer. This version uses a position buffer in addition to a normal buffer: instead of sampling the depth buffer and attempting to reconstruct the correct position, we sample the position buffer and use the real view-space distance between the two points. Otherwise the process is essentially the same as normal SSAO: we shoot rays out in random directions, jittered by a random normal map we sample from:
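A sketch of that pass (buffer and uniform names assumed, and the screen-space offset here is a crude shortcut; a real shader would project the ray through the camera matrix). The key difference from depth-buffer SSAO is the last few lines: the occlusion falloff uses the actual view-space distance between the two fetched positions rather than a reconstructed one:

```glsl
#version 400
uniform sampler2D uPositionBuf;   // view-space positions
uniform sampler2D uNormalBuf;     // view-space normals
uniform sampler2D uRandomNormals; // small tiling random-normal texture
uniform vec2  uNoiseScale;        // screen size / noise texture size
uniform float uRadius;            // sampling radius
in  vec2 vUV;
out vec4 fragColor;

const int  KERNEL = 8;
const vec3 kDirs[8] = vec3[](     // fixed directions, jittered per pixel
    vec3( 1, 1, 1), vec3(-1,-1, 1), vec3(-1, 1,-1), vec3( 1,-1,-1),
    vec3( 1, 0, 0), vec3(-1, 0, 1), vec3( 0, 1, 0), vec3( 0,-1, 1));

void main() {
    vec3 p   = texture(uPositionBuf, vUV).xyz;
    vec3 n   = normalize(texture(uNormalBuf, vUV).xyz);
    vec3 rnd = normalize(texture(uRandomNormals, vUV * uNoiseScale).xyz * 2.0 - 1.0);

    float occlusion = 0.0;
    for (int i = 0; i < KERNEL; ++i) {
        // Randomize the ray, then flip it into the hemisphere around the normal.
        vec3 ray = reflect(normalize(kDirs[i]), rnd);
        ray = dot(ray, n) < 0.0 ? -ray : ray;
        // Fetch the neighbor's *real* view-space position from the buffer.
        vec3 q = texture(uPositionBuf, vUV + ray.xy * uRadius).xyz;
        // Occlude by how much the neighbor faces into our hemisphere,
        // attenuated by the true view-space distance between the points.
        vec3  d      = q - p;
        float dist   = length(d);
        float facing = max(dot(n, normalize(d)), 0.0);
        occlusion   += facing * (1.0 / (1.0 + dist));
    }
    fragColor = vec4(vec3(1.0 - occlusion / float(KERNEL)), 1.0);
}
```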
There's also dynamic LOD: the further away a triangle is from the eye point, the less tessellated it is. It was by comparison so unimportant (and so trivial to implement) that I'm not showing anything from it. I also may not have taken screenshots because it's difficult to tell that it's happening (I made sure you're far enough away that the tessellation change isn't noticeable).
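The LOD really is a one-liner in spirit: scale the tessellation level by the patch's distance from the eye, in the control shader. A sketch, with the falloff constants and uniform names being my guesses:

```glsl
uniform mat4  uModelView;
uniform float uMaxTess;   // e.g. 16.0

// Returns the tessellation level for the current patch: full detail up
// close, smoothly dropping toward 1 as the patch recedes from the eye.
float tessForPatch() {
    vec3 center = (gl_in[0].gl_Position.xyz
                 + gl_in[1].gl_Position.xyz
                 + gl_in[2].gl_Position.xyz) / 3.0;
    float d = length((uModelView * vec4(center, 1.0)).xyz); // eye distance
    return clamp(uMaxTess / (1.0 + 0.5 * d), 1.0, uMaxTess);
}
```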