Shader-Based Parallel Rendering
The screenshots on this page were produced with FlowVR Render, a set of modules and filters built on top of FlowVR for distributed rendering. Some of the shaders used here are described on the Softwares - Shaders page.
Coupling VTK and FlowVR for parallel iso-surface extraction
A VTK-based parallel iso-surface extraction program processes the results of a time-varying fluid simulation in real time. FlowVR then combines and redistributes the results to the rendering nodes driving the display wall. Shaders help in this case because the scene can be described by a handful of shaders instead of hundreds of fixed-function pipeline parameters (material properties, lights, texture combining, ...).
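To illustrate the two ingredients mentioned above, here is a minimal sketch in plain Python: a static slab partitioning of the volume's z-slices across worker nodes, and the per-edge linear interpolation at the heart of marching-cubes-style iso-surface extraction. The function names are illustrative, not the FlowVR or VTK API.

```python
def partition_slabs(n_slices, n_workers):
    """Split a grid of n_slices z-slices into contiguous slabs,
    one (start, end) range per worker, so each node extracts its
    share of the iso-surface independently."""
    base, extra = divmod(n_slices, n_workers)
    slabs, start = [], 0
    for w in range(n_workers):
        size = base + (1 if w < extra else 0)
        slabs.append((start, start + size))
        start += size
    return slabs

def isosurface_crossing(p0, p1, v0, v1, iso):
    """Linearly interpolate the point on the cell edge (p0, p1)
    where the scalar field (sampled as v0, v1) crosses iso."""
    t = (iso - v0) / (v1 - v0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))
```

Each worker would run the crossing test over the cells in its slab; the resulting triangle lists are then what FlowVR redistributes to the rendering nodes.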
Volume rendering on a display wall
Using the per-pixel loops, multiple dependent texture accesses, and 32-bit floating-point computations available on recent graphics cards, it is possible to implement high-quality raycasting-based volume rendering on the GPU. Using 16 GeForce 6800 Ultra cards, we can interactively render datasets up to 512x512x512 with pre-integrated transfer functions on the GrImage display wall.
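The per-pixel loop in such a raycaster amounts to marching along each viewing ray, mapping scalar samples through a transfer function, and compositing front to back with early ray termination. A minimal CPU sketch (toy transfer function and names of my own invention; the real implementation runs in a fragment shader with a pre-integrated lookup table):

```python
def transfer(v):
    """Toy transfer function: map a scalar in [0, 1] to (r, g, b, alpha)."""
    return (v, v * 0.5, 1.0 - v, v * 0.1)

def raycast(sample, origin, direction, steps, dt):
    """Front-to-back compositing along one ray; sample(p) returns the
    scalar field value at 3D point p."""
    r = g = b = 0.0
    alpha = 0.0
    x, y, z = origin
    dx, dy, dz = direction
    for i in range(steps):
        v = sample((x + i * dx * dt, y + i * dy * dt, z + i * dz * dt))
        sr, sg, sb, sa = transfer(v)
        w = (1.0 - alpha) * sa      # remaining transparency times sample opacity
        r += w * sr
        g += w * sg
        b += w * sb
        alpha += w
        if alpha > 0.99:            # early ray termination
            break
    return (r, g, b, alpha)
```

Pre-integration replaces the point-sampled transfer() call with a 2D table lookup indexed by the scalar values at both ends of each ray segment, which removes slicing artifacts at the same step count.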
Real-time reconstructed model texturing
This model is reconstructed in real time using the algorithm presented in J. Allard, E. Boyer, J-S. Franco, C. Ménier, B. Raffin: Marker-less Real Time 3D Modeling for Virtual Reality. IPT 2004, May 2004, Ames, USA.
A shader combines the video images from all cameras to texture the rendered model. The blending coefficient of each camera depends on the angle between the smoothed surface normal and the direction of that camera. This computation is done per pixel, allowing seamless image compositing. The first image shows the contribution of each camera.
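The angle-based weighting can be sketched as follows: each camera is weighted by the cosine of the angle between the surface normal and the (unit) direction to that camera, cameras facing away from the surface get zero weight, and the weights are normalized so they sum to one. This is a plain-Python illustration of the idea, not the actual shader code.

```python
def blend_weights(normal, camera_dirs):
    """Per-pixel blending coefficients: cosine-weighted contribution
    of each camera, clamped at zero for back-facing cameras and
    normalized to sum to 1. All vectors are assumed unit length."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    raw = [max(0.0, dot(normal, d)) for d in camera_dirs]
    total = sum(raw)
    if total == 0.0:
        return [0.0] * len(raw)   # no camera sees this point
    return [w / total for w in raw]
```

In the shader the same computation runs per fragment, using the interpolated smoothed normal, which is what makes the transitions between camera images seamless.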
The data sequence used is from the MOVI/IMAG Project.