I’ve been reading the OpenGL Shading Language (hereafter GLSL) specifications for the last few hours. As we’re starting to integrate vertex and fragment shaders into Empyrean, I need to make sure I understand all of the concepts so I can fit them into our rendering architecture in a thought-out way. One of my previous concerns was that GLSL would be supported on significantly fewer GPUs than ARB_vertex_program and ARB_fragment_program (hereafter ARBvp and ARBfp, respectively). After some initial research at the OpenGL Hardware Registry, I have discovered that my concern is mostly unwarranted. GLSL fragment shaders are supported on pretty much the same cards as ARBfp: Radeon 9500+ and GeForce 5200+. GLSL vertex shaders, on the other hand, are slightly more restrictive than ARBvp. ARBvp requires a Radeon 9000+ or a GF2 MX or better; GLSL vertex shaders need a Radeon 9500 or GF2 GTS. That’s really not too bad, though. The convenience and consistency of GLSL is worth the loss of a few older video cards. (The fact that older Radeons don’t support GLSL at all concerns me some, but maybe newer drivers will fix that. And few people have older Radeons anyway.)
Plus, now I only have to support one type of shading language. :) For a student developer, that’s a godsend. Only companies like id Software have the resources to write four different rendering paths to maximize hardware support.
So far, everything I’ve talked about applies to Windows… GLSL is totally hosed on Mac (as is OpenGL as a whole), and I’ve been hearing about problems with the various Linux drivers too. In fact, my Linux box should in theory support it, but doesn’t. Doesn’t expose the extension strings, at any rate. But drivers will get better. It’s not like there is a fundamental problem here.
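For what it’s worth, here’s a minimal sketch of how I’d detect GLSL support at startup by scanning the string from glGetString(GL_EXTENSIONS) for the relevant tokens (GL_ARB_shading_language_100, GL_ARB_vertex_shader, GL_ARB_fragment_shader). The has_extension helper is my own hypothetical code, not anything from Empyrean; the one subtlety it handles is that a naive strstr match isn’t enough, because one extension name can appear as a prefix of another.

```c
#include <string.h>

/* Returns 1 if 'name' appears as a complete space-delimited token
 * in the extension string 'exts', 0 otherwise. A bare strstr()
 * check would be wrong: "GL_ARB_vertex_shader" is a prefix of
 * longer hypothetical names, so we verify the token boundaries. */
static int has_extension(const char *exts, const char *name)
{
    size_t len = strlen(name);
    const char *p = exts;
    while ((p = strstr(p, name)) != NULL) {
        int starts_token = (p == exts) || (p[-1] == ' ');
        int ends_token   = (p[len] == ' ') || (p[len] == '\0');
        if (starts_token && ends_token)
            return 1;
        p += len;
    }
    return 0;
}
```

In a real init path you’d call it like `has_extension((const char*)glGetString(GL_EXTENSIONS), "GL_ARB_shading_language_100")` and fall back to ARBvp/ARBfp (or fixed function) when it returns 0 — which is exactly the check my Linux box is currently failing.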