The following is my progress writing a raytracer in C++, using OpenGL to display the output.
Most of the time in the beginning was spent building vector, math, and matrix libraries to cover the basic operations I'd be using constantly, such as cross products and dot products. Then I started on two shape classes, a plane and a sphere. To simplify things, I used only the sphere and left the plane out until I knew the rays cast from the camera were actually hitting anything. So I set up a scene with a single sphere at the origin, placed the camera back a few units in Z, and ran a hit test for each pixel from 0,0 (bottom left) to the top right of the image.
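To make the hit test concrete, here is a minimal sketch of a ray-sphere intersection. The Vec3, Ray, and Sphere types here are stand-ins for illustration; my actual math library has more to it than this.

#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

struct Ray { Vec3 origin, dir; };           // dir is assumed normalized
struct Sphere { Vec3 center; float radius; };

// Solve |origin + t*dir - center|^2 = radius^2 for t; a hit means a real,
// positive root. Returns true and the nearer distance t if the ray hits.
bool hit(const Ray& ray, const Sphere& s, float& t) {
    Vec3 oc = ray.origin - s.center;
    float b = oc.dot(ray.dir);              // half of the quadratic's b term
    float c = oc.dot(oc) - s.radius * s.radius;
    float disc = b * b - c;                 // discriminant
    if (disc < 0.0f) return false;          // the ray misses entirely
    t = -b - std::sqrt(disc);               // nearer of the two roots
    return t > 0.0f;                        // must be in front of the camera
}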
After several iterations I finally got an actual visual. There were a couple of problems, though: first, there is supposed to be only one sphere. The other issue looks something like surface acne, which is usually caused by a floating-point or rounding error somewhere in the code. In my case, I had an int where I needed a float.
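As an illustration of how easily that kind of bug sneaks in (the variable names here are hypothetical, not my actual code): mapping a pixel coordinate into the 0 to 1 range with integer division silently truncates to zero.

int x = 317, width = 640;
float u_bad  = x / width;                   // integer division: 0 for every x < width
float u_good = float(x) / float(width);     // ~0.495, the intended value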
About 80% of the time I spent coding the raytracer went into getting to this point. The result reminds me of HAL 9000. Now that I had the basics down, I needed to press on, so I made three light classes: a point light, a spotlight, and a directional/parallel light. While debugging I found that directional lights were the easiest to work with, so I put a few in the scene and decided to worry about composition and “good lighting” (using a key, fill, and back light) later.
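As a rough sketch of what I mean by a directional light (the base class and member names are assumptions for illustration, reusing the Vec3 from the earlier sketch):

// Common interface: every light has a color and can report the direction
// from a shaded point toward the light.
struct Light {
    Vec3 color;
    virtual Vec3 dirToLight(const Vec3& point) const = 0;
    virtual ~Light() = default;
};

// A directional/parallel light shines the same way everywhere, like the
// sun -- no position and no falloff, which is what makes it easy to debug.
struct DirectionalLight : Light {
    Vec3 direction;                         // travel direction, normalized
    Vec3 dirToLight(const Vec3&) const override {
        return {-direction.x, -direction.y, -direction.z};
    }
};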
Next I needed to create two shader classes, a diffuse shader and a Phong shader, to handle diffuse light, specularity, and reflectance across a surface. The absence of a light source and a shader is why my first successful hit test only returned yes/no results, and why that image looks like a flat circle rather than a sphere.
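For reference, the standard formulas those shaders are built on look something like this (a hedged sketch of Lambert diffuse plus Phong specular, not necessarily how my classes are factored; Vec3 is from the earlier sketch):

#include <algorithm>
#include <cmath>

// n: surface normal, l: direction to the light, v: direction to the viewer,
// all normalized. Returns the combined diffuse + specular intensity.
float shade(const Vec3& n, const Vec3& l, const Vec3& v,
            float kd, float ks, float shininess) {
    float ndotl = n.dot(l);
    if (ndotl <= 0.0f) return 0.0f;         // surface faces away from the light
    float diffuse = ndotl;                  // Lambert diffuse term
    // Phong specular: reflect l about n, r = 2(n.l)n - l, then compare to v.
    Vec3 r = { 2.0f * ndotl * n.x - l.x,
               2.0f * ndotl * n.y - l.y,
               2.0f * ndotl * n.z - l.z };
    float specular = std::pow(std::max(0.0f, r.dot(v)), shininess);
    return kd * diffuse + ks * specular;
}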
My latest render has a plane, four spheres, too much reflectivity, poor lighting, and some very awkward composition. The spheres look a bit like Pokéballs.
I’ll post updates as I go. Once I get something nice and cleanly written, I’ll take a stab at Physically Based Rendering.