emblemparade.com

OpenGL ES 3.0 vs. OpenGL ES 2.0

OpenGL ES 3.0 is a terrific update to the standard that brings many important desktop OpenGL features to embedded devices.

Unfortunately, it’s not yet widely available, so you definitely want your mobile apps to be able to fall back to 2.0. Even more unfortunately, 2.0 remains a very quirky standard, and I’ve encountered various pitfalls in supporting both versions in the same application.

Matrices

Unlike OpenGL ES 3.0, OpenGL ES 2.0 doesn’t support row-major matrices, only column-major matrices: the transpose argument (the 3rd) of “glUniformMatrix4fv” must be GL_FALSE. One solution is to go column-major throughout your code. The other is to transpose your matrices, just for OpenGL ES 2.0:

GLfloat *v = ...  /* row-major source matrix */

/* Transpose to column-major for OpenGL ES 2.0 */
GLfloat t[] = {
    v[0], v[4], v[8], v[12],
    v[1], v[5], v[9], v[13],
    v[2], v[6], v[10], v[14],
    v[3], v[7], v[11], v[15]
};
glUniformMatrix4fv(location, 1, GL_FALSE, t);
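If you need this conversion in more than one place, the same transpose can live in a small helper. A minimal sketch (the function name is my own):

```c
/* Transpose a 4x4 matrix from row-major to column-major layout
 * (or vice versa -- the operation is its own inverse). */
static void transpose4(float dst[16], const float src[16])
{
    for (int row = 0; row < 4; row++)
        for (int col = 0; col < 4; col++)
            dst[col * 4 + row] = src[row * 4 + col];
}
```

Pass the result to glUniformMatrix4fv with the transpose argument still GL_FALSE.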

GLSL

Different GLSL version support is always a pain: rather than just extending the language, Khronos keeps making breaking changes to the standard. Adding support for ES brings in even more quirks: OpenGL ES 3.0 uses “300 es”, which is mostly equivalent to desktop OpenGL’s “330”. OpenGL ES 2.0 uses “100”, which is mostly equivalent to desktop OpenGL’s “110”. Confused yet? Was it really so hard for the standard to be consistent?
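One way to keep a single shader source tree is to prepend the right “#version” directive at load time. A sketch of the mapping above (the helper name is my own; adapt the version checks to however you detect your context):

```c
/* Return the #version directive to prepend to shader source,
 * following the mapping: ES 3.0 -> "300 es", ES 2.0 -> "100",
 * desktop 3.3+ -> "330", older desktop -> "110". */
static const char *glsl_version_directive(int is_es, int gl_major)
{
    if (is_es)
        return gl_major >= 3 ? "#version 300 es\n" : "#version 100\n";
    return gl_major >= 3 ? "#version 330\n" : "#version 110\n";
}
```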

For the most part, you can share shader code between desktop and embedded GL as long as you get these versions coordinated. But ES has two oddities:

It doesn’t define a default precision for float in fragment shaders. So, make sure to add something like this, or you’ll get a GLSL compilation error:

precision mediump float;

If you’re not doing MVP matrix math (for example, when doing 2D rendering), make sure that “w” is initialized to 1.0 in your vertex shader:

gl_Position.w = 1.0;

I found out the hard way that the GPU always performs a perspective division on your vertex shader output: clip-space x, y and z are divided by “w” before rasterization, even if you haven’t set up any projection yourself. Bottom line, if “w” isn’t 1.0 (or is left uninitialized), you’re going to get some really crazy, hard-to-debug results. It’s probably a good idea to initialize it explicitly for desktop OpenGL, too. Who knows what some crazy driver somewhere expects?
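For a plain 2D pass, the safest pattern is to build the full vec4 in one go rather than assigning “w” separately. A minimal GLSL ES 1.00 vertex shader along those lines (the attribute and varying names are my own):

```glsl
attribute vec2 a_position;   // 2D position already in clip space
attribute vec2 a_texCoord;
varying vec2 v_texCoord;

void main()
{
    v_texCoord = a_texCoord;
    // z = 0.0, w = 1.0: no perspective-divide surprises
    gl_Position = vec4(a_position, 0.0, 1.0);
}
```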

GL_TEXTURE_2D_ARRAY

This important feature simply doesn’t exist in OpenGL ES 2.0. So, how can you create a texture atlas?

The only real alternative is to use one big regular ol’ 2D texture and pack the internal textures inside it. A simple scheme is to stack the internal textures vertically, so that the final big texture’s width is the largest width among the internal textures, and its height is the sum of their heights. There are, of course, more efficient packing techniques. In the shaders, you’ll have to transform your texture coordinates accordingly, and you can’t use “texture2DArray”, only “texture2D”. This means supporting an entirely different texture-loading code path, as well as a different shader. But there’s simply no choice.
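The vertical-stacking scheme is easy to sketch in code. Assuming each sub-texture’s size is known up front, the atlas dimensions and each sub-texture’s UV offset/scale fall out directly (the struct and function names are my own):

```c
typedef struct {
    int x, y, w, h;          /* pixel rectangle inside the atlas */
    float u_off, v_off;      /* UV transform: uv' = off + uv * scale */
    float u_scale, v_scale;
} AtlasEntry;

/* Stack n sub-textures vertically: atlas width = max width,
 * atlas height = sum of heights. Fills entries and returns
 * the atlas size through out parameters. */
static void pack_vertical(const int *widths, const int *heights, int n,
                          AtlasEntry *entries, int *atlas_w, int *atlas_h)
{
    int w = 0, y = 0;
    for (int i = 0; i < n; i++)
        if (widths[i] > w)
            w = widths[i];
    for (int i = 0; i < n; i++) {
        entries[i].x = 0;
        entries[i].y = y;
        entries[i].w = widths[i];
        entries[i].h = heights[i];
        y += heights[i];
    }
    *atlas_w = w;
    *atlas_h = y;
    /* Second pass: UVs need the final atlas size. */
    for (int i = 0; i < n; i++) {
        entries[i].u_off = 0.0f;
        entries[i].v_off = (float)entries[i].y / (float)y;
        entries[i].u_scale = (float)entries[i].w / (float)w;
        entries[i].v_scale = (float)entries[i].h / (float)y;
    }
}
```

In the shader, sampling then becomes texture2D(sampler, v_off_scale.xy + uv * v_off_scale.zw) with the entry’s transform passed in as a uniform.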

If you use this approach, note that OpenGL ES 2.0 devices have rather constraining upper limits on texture size; you can query the limit with glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...). If you exceed it, you will get a GL_INVALID_VALUE when calling “glGetError” after “glTexImage2D”. The solution is either to use multiple textures, or to reduce the resolution of your textures.
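If the summed height exceeds the device limit, one option is to split the content across several atlases. A greedy sketch that just counts how many would be needed (the helper is hypothetical; a real implementation would also record which sub-texture lands in which atlas):

```c
/* Count how many vertically-stacked atlases are needed when no
 * atlas may exceed max_size in height. Returns 0 if any single
 * sub-texture is too tall (meaning it must be downscaled instead). */
static int count_atlases(const int *heights, int n, int max_size)
{
    int count = n > 0 ? 1 : 0;
    int used = 0;
    for (int i = 0; i < n; i++) {
        if (heights[i] > max_size)
            return 0;
        if (used + heights[i] > max_size) {
            count++;       /* start a new atlas */
            used = 0;
        }
        used += heights[i];
    }
    return count;
}
```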
