Hi, I started thinking recently about animated textures and how much they're actually used in games, more specifically in SR4. I'm curious how these different cases are handled technically in the engine and what the limitations are, if any. I'm working on a little project unrelated to SR4, trying out various engines, but it got me curious how these animated surfaces and textures are done in games in the first place.
For example, in SR4 there are monitors on the ship in the real world that are animated in a loop at a low frame rate. Are these actual video files, or are they perhaps a sequence of individual images, a sprite sheet, or something else? Drastic compression artifacts are clearly visible, but those could come from either video or image compression, so it could be either.
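Just so it's clear what I mean by a sprite sheet: this is roughly how I imagine a flipbook animation would be sampled, written out as plain C++ rather than shader code. All the names and parameters here are my own invention, not anything taken from SR4.

    // Sketch of sprite-sheet ("flipbook") sampling: pick one cell of a grid
    // of frames based on the current time and remap the quad's UVs into that
    // cell. Purely illustrative; everything here is made up.
    struct Float2 { float x, y; };

    Float2 FlipbookUV(Float2 uv, float timeSeconds,
                      int columns, int rows, float framesPerSecond)
    {
        int frameCount   = columns * rows;
        int currentFrame = static_cast<int>(timeSeconds * framesPerSecond) % frameCount;

        int col = currentFrame % columns;   // cell index across the sheet
        int row = currentFrame / columns;   // cell index down the sheet

        // Scale the quad's 0..1 UVs into one cell, then offset to that cell.
        return { (uv.x + col) / columns,
                 (uv.y + row) / rows };
    }

If it works anything like that, the low frame rate would just be whatever framesPerSecond is set to, and the artifacts would be ordinary texture compression rather than video compression. But again, that's only my guess.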
Then there's the "glitchy", "upward stream" effect on all the walls and surfaces in the virtual city, appearing randomly and periodically, and a technically similar effect every time you collect a data cluster and the whole city is "splashed" by a wave of a blue-squares texture. I suppose this is just a second material/texture applied to every surface and then scrolled along an axis? I gather this is the most common way of animating textures (scrolling layers of sky, water, etc.), but I'm not really sure, so I'm looking for some confirmation.
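To make my assumption concrete, here's the kind of UV scrolling I have in mind for that overlay layer (again just my own sketch, not anything I know about Volition's shaders):

    // Scroll a second texture's UVs over time so the pattern appears to
    // stream upward across the surface. Hypothetical, illustration only.
    #include <cmath>

    struct Float2 { float x, y; };

    Float2 ScrollUV(Float2 uv, float timeSeconds, Float2 scrollSpeed)
    {
        float u = uv.x + timeSeconds * scrollSpeed.x;
        float v = uv.y + timeSeconds * scrollSpeed.y;
        // Wrap into [0,1) so the overlay tiles forever instead of sliding off.
        return { u - std::floor(u), v - std::floor(v) };
    }
    // The sampled overlay would then be blended (e.g. additively) on top of
    // the surface's normal material in the pixel shader.

Is that roughly the idea, or is there more to it?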
And there's also a texture projection on the surfaces where the data clusters stand (a blue "light" projected onto the floor or wall, with darker lines slowly flowing outwards). I suppose this is, again, a different effect from the previous two, and I'm just wondering what the basic technical idea behind it is.
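My guess is that it's something like a projected decal: a texture mapped onto nearby geometry from a point in the world rather than through the mesh's own UVs. Something along these lines, purely as my own illustration:

    // Crude top-down projector: map a world-space position into the UV space
    // of a circular "glow" texture centered on the data cluster. The names
    // and the simple planar projection are my assumptions.
    struct Float2 { float x, y; };
    struct Float3 { float x, y, z; };

    Float2 ProjectorUV(Float3 worldPos, Float3 clusterPos, float radius)
    {
        // Map the XZ offset from the cluster into 0..1 UVs; anything outside
        // that range simply wouldn't receive the decal.
        float u = (worldPos.x - clusterPos.x) / (2.0f * radius) + 0.5f;
        float v = (worldPos.z - clusterPos.z) / (2.0f * radius) + 0.5f;
        return { u, v };
    }
    // The "lines flowing outwards" could then just be the glow texture's UVs
    // being scaled or scrolled over time, like in the previous sketch.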
As for limitations, regarding the first case with the monitors: whether it's a video or an image sequence, how does positioning and applying it to the actual polygons work? More precisely, can you place this video or image sequence anywhere on a mesh, or is it more of a "one object, one sequence" or "one polygon, one sequence" restriction? In some engines I've mostly seen cases where you can change the texture, or display another texture or video on a single polygon or object, but only its complete texture (for example, if in SR4 a whole monitor mesh has one texture, you would have to swap the complete texture to make an animation, rather than just the "frontal monitor display" polygons of the mesh).
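The way I currently picture it, most engines get around this by splitting a mesh into sub-meshes with separate materials, so only the screen polygons would carry the animated material. That's just my assumption about how it's structured, sketched below:

    // Hypothetical mesh layout: each sub-mesh references its own material,
    // so an animated "screen" material can be limited to the screen polygons
    // without touching the rest of the monitor. Not SR4-specific.
    #include <string>
    #include <vector>

    struct SubMesh {
        std::string materialName;         // e.g. "monitor_casing", "monitor_screen"
        std::vector<int> triangleIndices; // triangles that use this material
    };

    struct Mesh {
        std::vector<SubMesh> subMeshes;   // only the "monitor_screen" sub-mesh
                                          // would need the animated texture
    };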
I've read different things about these points, so I guess I'm just wondering how you guys at Volition handle them, since that would be more direct input from people in the industry.
Hope I didn't pick the wrong forum to post this in. Thanks!