Well, 10 months have passed. WebGL is catching on, Native Client has been launched, Unreal Engine 3 targets Flash 11, and Crytek has announced they might target Flash 11 too. Exciting times!
On the GPU front, we're in a good place. With WebGL, iOS, and Flash 11 all roughly exposing shader model 2.0, it's not a ton of work to target all of the above. Even on the desktop you can't assume higher than shader model 2.0: the Intel GMA 950 is still at the top.
However, shader model 2.0 isn't general enough to offload all of your compute-intensive workloads to the GPU. With only 16 vertex attributes and no vertex texture fetch, you simply can't get enough data into your vertex shaders to do everything you need, e.g. blending morph targets.
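For concreteness, here's a minimal sketch (my own illustration, not code from any particular engine) of the per-vertex work morph-target blending involves. Under shader model 2.0, each target's position delta would need its own vertex attribute, so beyond a handful of targets the blend has to stay on the CPU:

```cpp
#include <cstddef>

// Blend morph-target deltas into a base mesh, one float3 position per vertex.
// Illustrative layout: deltas[t] points at vertexCount*3 floats of position
// deltas for target t; weights[t] is that target's blend weight.
void blendMorphTargets(
    float* out,                  // vertexCount * 3 floats
    const float* base,           // vertexCount * 3 floats
    const float* const* deltas,  // targetCount arrays of vertexCount * 3 floats
    const float* weights,        // targetCount weights
    std::size_t vertexCount,
    std::size_t targetCount)
{
    for (std::size_t i = 0; i < vertexCount * 3; ++i) {
        float v = base[i];
        for (std::size_t t = 0; t < targetCount; ++t) {
            v += weights[t] * deltas[t][i];
        }
        out[i] = v;
    }
}
```

With dozens of targets active, that inner loop is exactly the kind of bandwidth-bound arithmetic you'd love to push to the GPU, and exactly what shader model 2.0's attribute limits prevent.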
How can we target native platforms, the web, and Flash at the same time?
Native platforms are easy: C++ is well-supported on Windows, Mac, iOS, and Android. SSE2 is ubiquitous on x86, ARM NEON is widely available, and both have high-quality intrinsics-based implementations.
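As an illustration of what those intrinsics buy you, here's a hedged SSE2 sketch (again my own, assuming unaligned data and a length that's a multiple of four) of the hot inner loop from the morph blend above, processing four floats per instruction:

```cpp
#include <emmintrin.h>  // SSE2 intrinsics
#include <cstddef>

// Accumulate weight * delta into a position stream, four floats at a time.
// Assumes n is a multiple of 4; uses unaligned loads/stores
// (_mm_loadu_ps / _mm_storeu_ps) so the pointers need no special alignment.
void addWeightedDeltaSSE2(float* pos, const float* delta,
                          float weight, std::size_t n)
{
    __m128 w = _mm_set1_ps(weight);  // broadcast weight into all four lanes
    for (std::size_t i = 0; i < n; i += 4) {
        __m128 p = _mm_loadu_ps(pos + i);
        __m128 d = _mm_loadu_ps(delta + i);
        p = _mm_add_ps(p, _mm_mul_ps(w, d));
        _mm_storeu_ps(pos + i, p);
    }
}
```

The NEON version is nearly line-for-line the same shape with `vld1q_f32`/`vmlaq_f32`/`vst1q_f32`, which is why intrinsics-based SIMD ports between x86 and ARM so cheaply.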
As for Flash... I'm just counting on Adobe Alchemy to ship.
If you'd like, take a look at the raw results.
- It's bizarre that Chrome and Firefox disagree on whether typed arrays are faster than ordinary arrays.
- Firefox 9 clearly has performance issues that need to be worked out; I had wanted to benchmark its new type inference capabilities.
- Emscripten on Firefox 7 and 9 still has issues, but Alon Zakai informs me that the trunk version of SpiderMonkey is much faster.
In the future, I'd love to run the same test on Flash 11 / Alchemy and Native Client, but the former hasn't shipped and the latter remains a small market.
One final note: it's very possible my test methodology is screwed up, my benchmarks are wrong, or I suck at copy/pasting numbers. Science should be reproducible: please try to reproduce these results yourself!