OpenGL Graphics Pipeline

The graphics pipeline is an abstract model that describes the sequence of steps needed to render a 3D scene.

0. Core Concepts and Vocabulary 

rendering - generating two-dimensional images of 3D scenes

shading - the variation in darkness across an object's surfaces; parts of an object not facing a light source directly appear darker

shadows - the silhouette of one object's shape cast on the surface of another object

frustum - the region contained within a truncated pyramid shape, representing the space visible to the virtual camera

pixel (picture element) - a single point of color in the rendered image; colors are specified as triples of floating-point numbers between 0 and 1 representing the amounts of red, green, and blue light present: a value of 0 means none of that color is present, while a value of 1 means that color at full intensity

  • raster - the 2D grid (array) of pixels representing the rendered scene, which will be displayed on a screen
  • resolution - the number of pixels in the raster; more pixels yield a higher-quality image
  • precision - the number of bits used to store each pixel's data; since each bit has two possible values (0 or 1), the precision determines how many distinct colors can be displayed
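As a quick check on what precision means in practice, here is a small sketch of how bits per pixel translate into the number of displayable colors:

```python
# Number of distinct colors representable at a given per-pixel bit precision.
# Each additional bit doubles the number of representable values.
def color_count(bits_per_pixel):
    return 2 ** bits_per_pixel

# Common "true color" format: 8 bits per channel for red, green, and blue.
print(color_count(24))  # 16777216 distinct colors
print(color_count(1))   # 2 colors (e.g. monochrome black and white)
```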

buffer (data buffer/buffer memory) is a part of a computer's memory that serves as temporary storage for data while it is being moved from one location to another.

  • frame buffer - the region of memory where pixel data is stored. A frame buffer may contain multiple buffers that store different types of data for each pixel.
    • color buffer - part of the frame buffer that stores RGB values; this is the minimum required. An alpha (transparency) value can also be stored.
    • depth buffer - part of the frame buffer that stores the distances from points on scene objects to the virtual camera. Depth values are used to determine whether the various points on each object are in front of or behind other objects (from the camera's perspective), and thus whether they will be visible when the scene is rendered.
    • stencil buffer - stores values used in generating advanced effects, such as shadows, reflections, or portal rendering.
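The depth buffer's role can be sketched in a few lines of plain Python (the `draw_fragment` helper is hypothetical, not an OpenGL API): a fragment only overwrites the color buffer if it is closer to the camera than whatever was drawn at that pixel before.

```python
# Minimal sketch of depth-buffer visibility testing.
def draw_fragment(x, y, depth, color, color_buffer, depth_buffer):
    if depth < depth_buffer[y][x]:      # closer than the stored depth?
        depth_buffer[y][x] = depth      # record the new nearest depth
        color_buffer[y][x] = color      # and keep this fragment's color

# A 1x1 "image": depth starts at infinity, color starts at black.
depth_buffer = [[float("inf")]]
color_buffer = [[(0.0, 0.0, 0.0)]]

draw_fragment(0, 0, 5.0, (1.0, 0.0, 0.0), color_buffer, depth_buffer)  # red, near
draw_fragment(0, 0, 9.0, (0.0, 1.0, 0.0), color_buffer, depth_buffer)  # green, far
print(color_buffer[0][0])  # (1.0, 0.0, 0.0): the nearer red fragment wins
```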

1. Application Stage

Initializing the window where the rendered graphics will be displayed.

  • Reading the data required for the rendering process and sending it to the GPU, such as
    • vertex attributes, which describe the appearance of the geometric shapes being rendered, stored in vertex buffer objects (VBOs)
    • images to be applied to surfaces, stored in texture buffers
    • source code for the vertex shader and fragment shader programs, which is sent to the GPU to be compiled and loaded
  • Running a loop that re-renders the scene repeatedly, typically at 60 frames per second
  • Monitoring hardware for user input, which is handled by the CPU
  • Creating vertex array objects (VAOs), which manage the associations between attribute data stored in VBOs and attribute variables in the vertex shader program, and whether each association is enabled or disabled
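The application-stage responsibilities above can be outlined as a render loop. This is only a schematic sketch: `handle_input` and `render` are hypothetical stand-ins for real windowing and OpenGL calls.

```python
import time

def handle_input():
    pass  # poll keyboard/mouse events (handled on the CPU)

def render(frame):
    pass  # issue draw calls that run the GPU pipeline

def run(frame_count=3, fps=60):
    frame_duration = 1.0 / fps              # target time per frame (~0.0167 s)
    for frame in range(frame_count):
        start = time.time()
        handle_input()
        render(frame)
        # sleep off any time left in this frame to approximate the target FPS
        elapsed = time.time() - start
        time.sleep(max(0.0, frame_duration - elapsed))
    return frame_count

run()
```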

2. Geometry Processing

Determining the position of each vertex of the geometric shapes to be rendered, implemented by a program called the vertex shader.

mesh - a collection of points (vertices) grouped into lines or triangles to form the shape of a geometric object

  • vertex - a point together with a data structure holding properties (attributes) relevant to rendering:
    • 3D position of the corresponding point. Mandatory
    • color to be used when rendering the point. Optional 
    • texture coordinates (or UV coordinates) - indicates a point in an image that is mapped to the vertex. Optional
    • normal vector - indicates the direction perpendicular to a surface, used for lighting calculations. Optional 
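The vertex attributes listed above can be modeled as a simple data structure (field names here are illustrative, not an OpenGL requirement); only the position is mandatory.

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    position: tuple                      # (x, y, z) - mandatory
    color: tuple = (1.0, 1.0, 1.0)       # optional; defaults to white
    uv: tuple = None                     # optional texture (UV) coordinates
    normal: tuple = None                 # optional; used for lighting

v = Vertex(position=(0.0, 1.0, 0.0), normal=(0.0, 0.0, 1.0))
print(v.color)  # (1.0, 1.0, 1.0) - the default white
```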

The vertex shader is applied to each vertex to determine the final position of each point being rendered, which is typically calculated from a series of transformations:

  • model transformation - the collection of points defining the intrinsic shape of an object may be translated, rotated, and scaled so that the object has a particular location, orientation, and size with respect to the 3D world.
    • world space - coordinates expressed from this shared frame of reference are said to be in world space
  • view transformation - to render the world from the point of view of a virtual camera (which has its own position and orientation in the virtual world), the coordinates of each object in the world must be converted to a frame of reference relative to the camera itself.
    • view space (camera/eye space) - coordinates after the view transformation
  • projection transformation - maps the region of view space visible to the camera (the frustum) into a standard cube; points outside this region are clipped (discarded).
    • clip space - coordinates after the projection transformation
    • perspective projection - distant objects appear smaller, as with a physical camera
    • orthographic projection - an object's rendered size does not depend on its distance from the camera
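Each of these transformations is conventionally expressed as a 4x4 matrix applied to vertices in homogeneous coordinates. A minimal pure-Python sketch of a model transformation (here, a translation):

```python
# Multiply a 4x4 matrix by a 4-component vector (homogeneous coordinates).
def mat_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

# 4x4 translation matrix moving points by (tx, ty, tz).
def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Move a vertex at the origin to world position (2, 0, -5).
model_matrix = translation(2, 0, -5)
vertex = (0, 0, 0, 1)   # w = 1 marks this as a point, not a direction
print(mat_vec(model_matrix, vertex))  # (2, 0, -5, 1)
```

Rotation, scaling, view, and projection matrices are applied the same way, so an entire chain of transformations can be combined into a single matrix product.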

In addition to these transformation calculations, the vertex shader may perform additional calculations and send additional information to the fragment shader as needed.

3. Rasterization

Determining which pixels correspond to the geometric shapes being rendered.
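One common way to decide which pixels a triangle covers is to test each pixel center against the triangle's three edges using 2D cross products (edge functions). A toy sketch, with points exactly on an edge counted as covered:

```python
# Signed area test: which side of edge a->b does point p lie on?
def edge(a, b, p):
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covered_pixels(tri, width, height):
    a, b, c = tri
    pixels = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)      # sample at the pixel center
            w0, w1, w2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
            # inside if the point is on the same side of all three edges
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                pixels.append((x, y))
    return pixels

# Right triangle covering the lower-left half of a 4x4 raster.
print(covered_pixels([(0, 0), (4, 0), (0, 4)], 4, 4))
```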

4. Pixel Processing

Determining the color of each pixel in the rendered image, implemented by a program called the fragment shader.
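A typical fragment-shader calculation is Lambertian (diffuse) shading: a surface facing a light source head-on is brightest, and brightness falls off with the cosine of the angle between the surface normal and the light direction. A pure-Python sketch of the per-fragment math:

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def diffuse_color(base_color, normal, light_dir):
    n = normalize(normal)
    l = normalize(light_dir)
    # cosine of the angle between normal and light, clamped to [0, 1]
    intensity = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * intensity for c in base_color)

# Surface facing the light head-on: full brightness.
print(diffuse_color((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # (1.0, 0.0, 0.0)
# Light behind the surface: no diffuse contribution.
print(diffuse_color((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))  # (0.0, 0.0, 0.0)
```

In a real pipeline this calculation runs in GLSL on the GPU, once per fragment, using interpolated normals and any extra data passed along from the vertex shader.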