# The Forest - three.js/WebGL versions

I made a demonstration of part of The Forest using the 3D library three.js, which can take advantage of the GPU (Graphics Processing Unit) hardware now present in most devices, even phones.

The demonstration can be seen here. It may take a while to load because it has to fetch the three.js library. I have made it so that the scene and map are both displayed; you may have to zoom your browser display out (Ctrl-minus) to see them side by side.

I don't consider this to be the way forward for The Forest, mainly because there is a bottleneck in feeding new terrain data to the GPU whenever the observer moves; that is why only a fixed area is available in the demonstration. Performance is also very little different from the plain Javascript version, even though I have spent some time studying how to use the three.js library most efficiently.

There may be a better way in future by using the GPU not for graphics at all but for making the terrain generator process multiple ground points in parallel...

## Update 23/12/23 - WebGL or vanilla Javascript?

I have recently been experimenting with using WebGL to implement my terrain generator. That means using the GPU for non-graphical calculations.

I am making progress and my demonstrator basically works. There have been several hurdles to overcome, but I like a challenge. I will list some of the puzzles I have solved along the way. None of the various documents I studied while learning WebGL were at all clear about these points; in fact most did not even mention them.

• ### GPUs do parallel processing

I did know this and I guess most developers do, but it is barely mentioned in tutorials. Yet it is absolutely fundamental for understanding why some data must be in buffers and other data must not.

In my case I want to calculate the height at every position on an 800 x 600 map (pixels, 1 per metre). Each of those positions is fed in via an attribute buffer; these are my 2D vertex positions. Of course the GPU has nothing like that number of processors: if it has N cores I expect it to read N positions from the buffer at a time and process them in parallel.

I am just considering height for now. If the test is successful I will go on to cover the other aspects of my terrain.
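
The positions can be generated once in JS and uploaded as the attribute buffer. A minimal sketch (`aPosition` matches my shader below; `buildPositions` and `uploadPositions` are illustrative names):

```javascript
// Build one 2D vertex per map position (800 x 600, 1 per metre).
function buildPositions(width, height) {
  const positions = new Float32Array(width * height * 2);
  let p = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      positions[p++] = x;
      positions[p++] = y;
    }
  }
  return positions;
}

// Hand the positions to the GPU as the aPosition attribute.
// gl is a WebGLRenderingContext, program the linked shader program.
function uploadPositions(gl, program, positions) {
  const buf = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buf);
  gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
  const loc = gl.getAttribLocation(program, "aPosition");
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
}
```

The buffer is uploaded once; only the ground centre changes between frames.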

• ### My parameter arrays of length 5 cannot be set as uniform data

Some arrays are needed at every vertex, so they are not set in buffers and passed as attributes. Arrays which are not the right length to be one of the built-in `uniform` types (vec2, mat4, etc.) can instead be declared and set within the shader source code:

```
float AH [5];
float BH [5];
AH [0] = 0.0;  AH [1] = 13.0; AH [2] = 21.0; AH [3] = 22.0; AH [4] = 29.0;
BH [0] = 27.0; BH [1] = 26.0; BH [2] = 21.0; BH [3] = 11.0; BH [4] = 1.0;
```

Unlike JS, the GLSL ES 1.00 compiler does not accept array literals, so every element has to be assigned individually, as in the example here.
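
(For completeness: WebGL does also provide `gl.uniform1fv`, which can fill an array declared as `uniform float AH[5];` in the shader, so in-shader initialisation is not the only route. A sketch of that alternative, which I have not used here:)

```javascript
// The same parameter arrays kept on the JS side...
const AH = [0.0, 13.0, 21.0, 22.0, 29.0];
const BH = [27.0, 26.0, 21.0, 11.0, 1.0];

// ...sent to shader declarations `uniform float AH[5];` etc.
// gl is a WebGLRenderingContext, program the linked shader program.
function setHeightParams(gl, program) {
  gl.uniform1fv(gl.getUniformLocation(program, "AH"), AH);
  gl.uniform1fv(gl.getUniformLocation(program, "BH"), BH);
}
```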

• ### Array indices have to be constants

This was a surprise. I did not see it mentioned in tutorials. I needed the following code in my vertex shader. AH and BH are constant parameter arrays as described above. PROFILE is a constant array of 256 values.

```
float ht = 0.0;
for (int i = 0; i < 5; i++)
{
float j = (AH [i] * xg + BH [i] * yg) / 128.0;
int k = int (mod (j, 256.0));
ht += PROFILE [k];
}
```

(xg and yg are ground coordinates, offset from pixel positions by passing ground centre as `uniform vec2 uCentre`.)
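
For comparison, the same height calculation in plain JS looks like this (a sketch; `PROFILE` stands for any 256-entry lookup table, and `glslMod` mimics GLSL's `mod`, which unlike JS's `%` never goes negative):

```javascript
// Parameter arrays as in the shader.
const AH = [0, 13, 21, 22, 29];
const BH = [27, 26, 21, 11, 1];

// GLSL-style mod: x - m * floor(x / m), always in [0, m) for m > 0.
function glslMod(x, m) {
  return x - m * Math.floor(x / m);
}

// Sum five profile lookups along different directions through (xg, yg).
function heightAt(xg, yg, PROFILE) {
  let ht = 0.0;
  for (let i = 0; i < 5; i++) {
    const j = (AH[i] * xg + BH[i] * yg) / 128.0;
    const k = Math.floor(glslMod(j, 256.0)); // index 0..255
    ht += PROFILE[k];
  }
  return ht;
}
```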

Now, the shader compiler can cope with the `[i]` for the AH and BH arrays because it can transform the loop into a sequence of 5 sets of statements with fixed `i` values. But the `[k]` cannot be dealt with like that.

When I discovered this it felt like a final brick wall for my project. Then, after a while, I thought of using a texture to provide the PROFILE array. It worked. I processed my PROFILE array in JS to create an `ImageData` object (256 x 4) as part of the set-up before running WebGL, and passed that in to the shader as a `uniform sampler2D uTexture`. Then the loop became

```
float ht = 0.0;
for (int i = 0; i < 5; i ++)
{
float j = (AH [i] * xg + BH [i] * yg) * recip128;
float k = mod (j, 256.0) / 256.0;
vec4 profK = texture2D(uTexture, vec2(k, 1.0));//rgba
ht += profK.r;
}
```
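
On the JS side, building and uploading that texture might look like this (a sketch under my assumptions: byte data in the red channel, NEAREST filtering so each lookup reads an exact table entry; note that `texture2D` returns channels normalised to [0,1], so the shader may need to scale by 255.0):

```javascript
// Pack a 256-entry profile into a 256 x 1 RGBA byte array.
function packProfile(profile) {
  const data = new Uint8Array(256 * 4); // RGBA per texel
  for (let i = 0; i < 256; i++) {
    data[i * 4] = profile[i] & 0xff; // R = profile value (0..255)
    data[i * 4 + 3] = 255;           // A = opaque
  }
  return data;
}

// gl is a WebGLRenderingContext; creates the uTexture source.
function uploadProfileTexture(gl, profile) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 256, 1, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, packProfile(profile));
  // NEAREST filtering: no interpolation between table entries.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return tex;
}
```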

Heights can be greater than 255, so to get the result out I had to calculate the MSB and LSB (most and least significant bytes). I put those in two components of the RGBA colour passed as a `varying` to the fragment shader, which then only had to set it as `gl_FragColor`. I had WebGL writing to an off-screen canvas, and from there I used the WebGL context's `readPixels()` to get a JS-readable array before reading the time.
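
Unpacking the heights from the `readPixels()` buffer back into numbers is then straightforward (a sketch, assuming one RGBA pixel per map position with R = MSB and G = LSB):

```javascript
// pixels is the Uint8Array filled by
// gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, pixels).
function decodeHeights(pixels, count) {
  const heights = new Float32Array(count);
  for (let i = 0; i < count; i++) {
    heights[i] = pixels[i * 4] * 256 + pixels[i * 4 + 1]; // MSB*256 + LSB
  }
  return heights;
}
```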

## Results: comparing WebGL times with plain Javascript

I timed 1,000 loops of the WebGL code and then the same for a plain JS version (as used in The Forest). The timings ran only to the point where the height values are stored in an array accessible from JS. I moved the ground position between loops so that I got an average performance. After each time measurement I displayed the results, just to see progress.
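
The timing harness amounts to something like this (my reconstruction, not the original code):

```javascript
// fn computes the full height array once for a given loop index;
// the index is used to move the ground centre each time.
function averageMs(fn, loops) {
  const t0 = performance.now();
  for (let i = 0; i < loops; i++) {
    fn(i); // move the ground centre each loop for a fair average
  }
  return (performance.now() - t0) / loops;
}
```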

On my laptop the average times (to get the whole 800 x 600 array of heights) were 36.5ms for WebGL and 32.5ms for JS. The laptop is a Samsung Galaxy 360 with a 13th-gen Intel Core i7-1360P and an Intel Iris GPU. I was using the Firefox browser.

I uploaded the code to my web site so I could repeat the test on my Android phone. The results were really surprising: 30.8ms for WebGL, 12.1ms for JS. The phone is a Samsung Galaxy S22. I was using Samsung's own browser, and this result suggests it is significantly more efficient than Firefox. My phone is much faster than my laptop at processing vanilla Javascript!

My conclusion is that it is not worth trying to use WebGL for my terrain generator. I expect non-graphical processing to be more effective with WebGPU, but I will wait until it is actually available in most of the browsers that MDN lists as WebGPU-capable.

## Reference documents

I found these and a few others online. All of them are of course very much about 3D graphics and do not describe using the GPU for non-graphical computations.

• The best tutorial material I have found for WebGL is webgl.brown37.net but it does not cover all of the points I have described above.

• WebGL Fundamentals has a lot of material, but it is less well organised and not so easy to use for reference.

• I did use the Khronos WebGL 1.0 Reference Card but that is extremely concise, as is the rest of the WebGL material on the Khronos site.

• Not forgetting MDN which is always useful both for tutorials and reference.

For programmers, here are the shaders I wrote for this experiment. If you can see where they might be made more efficient I'd be glad to know.

```
const VERTEX_SOURCE = `
precision highp float;
precision mediump int;
uniform vec2 uCentre;
uniform vec2 uHalfSize;// of canvas
uniform sampler2D uTexture;
attribute vec2 aPosition;
varying vec4 vTerrain;
void main()
{
float AH [5];
float BH [5];
AH [0] = 0.0;  AH [1] = 13.0; AH [2] = 21.0; AH [3] = 22.0; AH [4] = 29.0;
BH [0] = 27.0; BH [1] = 26.0; BH [2] = 21.0; BH [3] = 11.0; BH [4] = 1.0;
float x = aPosition.x;
float y = aPosition.y;
float xg = x + uCentre.x;
float yg = y + uCentre.y;
float recip128 = 1.0 / 128.0;
float ht = 0.0;
for (int i = 0; i < 5; i ++)
{
float j = (AH [i] * xg + BH [i] * yg) * recip128;
float k = mod (j, 256.0) / 256.0;
vec4 profK = texture2D(uTexture, vec2(k, 1.0));//rgba
ht += profK.r;
}
float lsb = mod (ht, 256.0);
float msb = floor (ht / 256.0);
vTerrain = vec4(msb, lsb, 0.0, 255.0); // for now - .b will be veg type
gl_Position = vec4(x / uHalfSize.x - 1.0, y / uHalfSize.y - 1.0, 0.0, 1.0);
gl_PointSize = 1.0;
}`;

const FRAGMENT_SOURCE = `
precision highp float;
varying vec4 vTerrain;
void main()
{
gl_FragColor = vTerrain;
}`;
```