Getting started with WebGL – Part 1

Introduction

 

This introduction requires no prior knowledge of geometry or maths, but it still requires a good 15 minutes of your time, so treat yourself to a cup of coffee and let's get started!

WebGL enables us to perform 3D rendering in an HTML page without the use of plug-ins.

A WebGL program consists of a JavaScript code part and a “shader code” part written in GLSL (OpenGL Shading Language). GLSL is a dedicated language executed directly on the GPU/graphics card, instead of on the CPU like JavaScript.
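
As a minimal sketch of this split (the canvas lookup is illustrative and assumes a <canvas> element is on the page), the JavaScript side obtains a WebGL context and hands GLSL source to the graphics card as a plain string:

    // JavaScript (CPU side): get a WebGL context from a <canvas> element.
    const canvas = document.querySelector('canvas');
    const gl = canvas.getContext('webgl'); // null when WebGL is unsupported

    // GLSL (GPU side) reaches the graphics card as a plain JS string.
    const source = 'void main() { gl_Position = vec4(0.0); }';
    const shader = gl.createShader(gl.VERTEX_SHADER);
    gl.shaderSource(shader, source);
    gl.compileShader(shader); // compiled for the GPU, not run by the JS engine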

GLSL is a bit dated compared to more modern languages, so this introduction is here to familiarize you with the concepts. We will describe all the terms with a simple example: an orange square on a black background.

[Figure: an orange square on a black background]

 

Drawing Pipeline in WebGL

To start with, let's explain what the drawing pipeline in WebGL is.

WebGL renders in several steps. Each step is fed with the result of the step before, hence the term pipeline:

 

[Figure: the WebGL drawing pipeline]

To draw the orange square in WebGL, we have to:

  • Describe the square in JS.

A square is composed of 4 vertices. A vertex (singular of vertices) is an angular point of a polygon. A vertex in 3D has three coordinates (x, y, z) in space.
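
In JS, this could look like the following sketch (the coordinates and their ordering are our own choice, not imposed by WebGL):

    // Four vertices of a square centred on the origin, each an (x, y, z)
    // triplet, flattened into one typed array ready to upload to the GPU.
    const squareVertices = new Float32Array([
      -0.5,  0.5, 0.0, // top-left
       0.5,  0.5, 0.0, // top-right
      -0.5, -0.5, 0.0, // bottom-left
       0.5, -0.5, 0.0, // bottom-right
    ]);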

  • Turn the square into triangles using a vertex shader.

A vertex shader is written in GLSL; it deals with each vertex (x, y, z) provided by JS and gives it a position (a, b) on the screen.

Out of the vertex shader, our image is described only with triangles, flattened into a vector format. In other terms, it turns something complex in 3D into a 2D vector representation, akin to an SVG, made only of triangles.

In our example, the square is transformed into two triangles in screen space.

[Figure: the square split into two triangles in screen space]

The vertex shader has to be written by the programmer; there is no vertex shader by default.
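
As a sketch of what such a shader can look like (the attribute name aPosition is illustrative; we will write the real one in the next article), a minimal pass-through vertex shader in GLSL is:

    // GLSL: a minimal pass-through vertex shader. It receives one vertex
    // (x, y, z) from JS and outputs its position, unchanged here.
    attribute vec3 aPosition;

    void main() {
      gl_Position = vec4(aPosition, 1.0); // w = 1.0 for a plain position
    }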

  • Turn the triangles into pixels

Rasterization is the task of taking an image described in a vector graphics format (shapes) and converting it into a rasterized image (pixels or dots) for output on a screen, a VR headset or, in our case, a canvas. An analogy would be converting an SVG image into a PNG image.

The rasterization creates, for each pixel, a “fragment”. A fragment is a funky term for a single potential pixel, meaning it isn't an on-screen pixel yet. Compared to a real pixel, it carries a bit of extra information, in particular a depth. To actually end up as an on-screen pixel, a fragment still has to pass some predefined tests (depth, stencil) that are out of scope for this introduction, but that you can read about if you are interested. In WebGL, the rasterization process is predefined, and we cannot write custom code for it.
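
Even though the rasterizer itself is fixed, we still choose, from JS, how the vertices are assembled into the triangles it consumes. A sketch, assuming the four vertices of our square are already uploaded to the GPU:

    // The draw call triggers the predefined rasterization stage; the mode
    // tells WebGL how to assemble the vertices into triangles.
    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); // 4 vertices -> 2 triangles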

  • Give a color to each pixel

A fragment shader is code written in GLSL that deals with each fragment and gives it a color. It takes a single potential pixel, i.e. a fragment (x, y), as input and produces a single fragment with a color as output. Fragment shaders are also referred to as pixel shaders.

Like the vertex shader, the fragment shader has to be written by the developer.
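
As a sketch (the exact orange value is our own pick; we will write the real shader in the next article), a minimal fragment shader in GLSL that gives every fragment the same color is:

    // GLSL: a minimal fragment shader. Every fragment produced by the
    // rasterizer receives the same constant color (an orange, as RGBA).
    precision mediump float;

    void main() {
      gl_FragColor = vec4(1.0, 0.6, 0.0, 1.0);
    }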

 

Conclusion

We have defined the WebGL pipeline:

  • First, we make a projection of the 3D scene into a 2D vector-based picture. This is the vertex shading stage, which we have to code.
  • Then this image is rasterized, i.e. turned into individual pixels.
  • Finally, the fragment shader determines the color of each pixel.

Now that we have explained the WebGL drawing process without any math, in our next article we will write the vertex shader and the fragment shader that render our square.