Hi
@Calgon, great question.
As far as I know, there is a lot that the WebGL (Shadertoy) platform
can do that our OpenGL platform cannot:
- We are limited to using one mounting texture to render a shader onto
- We cannot process Cube textures
- We cannot display anything beyond a single, final gl_FragColor output
- So far I have not been able to use framebuffers to run the same
shader over several textures simultaneously using texture2D syntax
- In addition to all this, even if it were possible to run the extra
channels needed, the amount of processing required on our OpenGL
platform would cause so much lag and stuttering of model movement
as to render it unusable.
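To make the limitations above concrete, here is a minimal sketch of roughly all that seems possible on the platform: sampling the one mounting texture and writing a single final gl_FragColor. The uniform names (iChannel0, iResolution, iTime) follow Shadertoy conventions and are an assumption about how the platform binds them.

```glsl
// Assumed uniform bindings (Shadertoy-style names, not confirmed):
uniform sampler2D iChannel0;  // the single "mounting" texture
uniform vec2 iResolution;     // viewport size in pixels
uniform float iTime;          // seconds since the shader started

void main() {
    // Normalise the fragment position into 0..1 texture coordinates
    vec2 uv = gl_FragCoord.xy / iResolution.xy;
    // Only one texture is available to sample
    vec4 tex = texture2D(iChannel0, uv);
    // Everything must funnel into this one final output
    gl_FragColor = tex;
}
```

Multi-pass Shadertoy effects (Buffer A feeding Buffer B, cube maps, etc.) have no equivalent in this single-pass, single-texture setup.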
@TheEmu may be able to demonstrate examples which show otherwise.
The problem seems to be that a shader's render position on screen is
fixed from the get-go. Unlike a sprite, which you can move or make
transparent via animation clauses in the scene (.scn) coding, any
rendering the shader does is limited to the gl_FragCoord position and
iTime settings established when it is first set up.
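Because the scene coding cannot move the shader's quad, any apparent motion has to be faked inside the shader itself, typically by offsetting the sampling coordinates with iTime. A small sketch, again assuming Shadertoy-style uniform names:

```glsl
// Assumed uniforms (names not confirmed for this platform):
uniform sampler2D iChannel0;
uniform vec2 iResolution;
uniform float iTime;

void main() {
    vec2 uv = gl_FragCoord.xy / iResolution.xy;
    // Scroll the texture horizontally over time; fract() wraps the
    // coordinate so the image repeats instead of sliding off-screen.
    uv.x = fract(uv.x + 0.1 * iTime);
    gl_FragColor = texture2D(iChannel0, uv);
}
```

The quad itself never moves; only the content drawn within it changes, which is why .scn animation clauses have no effect on it.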