WARNING: This website has moved to http://codrt.fr/allardj/. Please see that address for the most recent version of this page.
Gallery - Visualization Shaders
flowvrrender.png

Shader-Based Parallel Rendering

The screenshots on this page were produced using FlowVR Render, a set of modules and filters built on top of FlowVR for distributed rendering. Some of the shaders used here are described on the Softwares - Shaders page.

Coupling VTK and FlowVR for parallel iso-surface extraction

A VTK-based parallel iso-surface extraction program processes, in real time, the results of a time-varying fluid simulation. FlowVR then combines and redistributes the results to the rendering nodes driving the display wall. Shaders help here because the scene can be described by a handful of shaders instead of hundreds of fixed-function pipeline parameters (material properties, lights, texture combining, ...).
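As a rough illustration of what per-cell iso-surface extraction involves, here is a minimal marching-squares-style sketch in Python; the actual pipeline uses VTK's parallel iso-surface filters, and the function and field names below are purely illustrative:

```python
# Minimal sketch of iso-contour cell classification, in the spirit of
# marching squares: a cell contributes geometry only if the iso-value
# falls between its smallest and largest corner values.

def crossing_cells(field, iso):
    """Return (i, j) indices of grid cells whose corner values straddle `iso`."""
    cells = []
    for i in range(len(field) - 1):
        for j in range(len(field[0]) - 1):
            corners = (field[i][j], field[i][j + 1],
                       field[i + 1][j], field[i + 1][j + 1])
            if min(corners) < iso <= max(corners):
                cells.append((i, j))
    return cells

# A tiny 3x3 scalar field: the iso-value 0.5 is crossed around the
# central sample, so every 2x2 cell touching it contributes.
field = [[0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0]]
print(crossing_cells(field, 0.5))  # -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

In the real application this classification (and the subsequent triangle generation) runs in parallel over sub-blocks of the volume, and FlowVR merges the per-block results.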

example1-vtk.jpg
example2-vtk.jpg
fluid3d-vtk-f000.png
fluid3d-vtk-f300.png
fluid3d-vtk-f600.png
fluid3d-vtk-f900.png
fluid3d-vtk-grimage1.jpg
fluid3d-vtk-grimage2.jpg

Volume rendering on a display wall

Using per-pixel loops, multiple dependent texture accesses, and 32-bit floating-point computations available on recent graphics cards, it is possible to implement high-quality raycasting-based volume rendering on the GPU. Using 16 GeForce 6800 Ultra cards, we can interactively render datasets of up to 512x512x512 voxels with pre-integrated transfer functions on the GrImage display wall.
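The core of GPU raycasting is a front-to-back compositing loop executed per pixel. A toy Python sketch of that loop, with a stand-in transfer function (the real renderer uses dependent texture lookups and a pre-integrated transfer function on the GPU; all names here are illustrative):

```python
# Toy front-to-back ray marching: each sample along the ray is mapped
# through a transfer function to (colour, opacity) and composited until
# the accumulated opacity saturates.

def transfer(density):
    """Map a scalar density to (grey colour, opacity) -- a stand-in TF."""
    return density, density * 0.5

def march_ray(samples):
    """Composite samples front to back, with early ray termination."""
    color, alpha = 0.0, 0.0
    for d in samples:
        c, a = transfer(d)
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:  # early ray termination: ray is nearly opaque
            break
    return color, alpha

print(march_ray([1.0, 1.0]))  # -> (0.75, 0.75)
```

The early-termination test is what the per-pixel loop construct on the GPU makes practical: rays through dense regions stop after a few samples instead of traversing the whole volume.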

volren-cube.png
volren-fire.png
volren-vtktext.jpg
volren-shader.jpg
volren-preint.jpg
volren-grimage.jpg

Real-time reconstructed model texturing

This model is reconstructed in real-time using the algorithm presented in J. Allard, E. Boyer, J.-S. Franco, C. Ménier, B. Raffin: Marker-less Real Time 3D Modeling for Virtual Reality. IPT 2004, May 2004, Ames, USA.

A shader is used to combine the video images from all cameras to texture the rendered model. The blending coefficients depend on the angle between the smoothed surface normal and the direction of each camera. This computation is done per pixel, allowing for seamless image compositing. The first image shows the contribution of each camera.
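The per-pixel blending weight described above boils down to a clamped dot product per camera, followed by normalisation. A small Python sketch of that computation (this mirrors the idea, not the actual shader code; all names are illustrative):

```python
# Per-pixel camera blending sketch: each camera's weight grows with the
# alignment between the surface normal and the camera's viewing direction
# (clamped dot product), and the weights are normalised to sum to 1.

def blend_weights(normal, cam_dirs):
    """Normalised per-camera weights from max(0, dot(normal, cam_dir))."""
    raw = [max(0.0, sum(n * c for n, c in zip(normal, d))) for d in cam_dirs]
    total = sum(raw)
    if total == 0.0:
        return [0.0] * len(raw)  # no camera sees this point
    return [w / total for w in raw]

# Surface facing +z, two cameras: one looking along +z (frontal) and one
# along +x (grazing): the frontal camera gets all the weight.
print(blend_weights((0, 0, 1), [(0, 0, 1), (1, 0, 0)]))  # -> [1.0, 0.0]
```

Evaluating this per pixel, rather than per vertex or per camera region, is what avoids visible seams where the contributions of two cameras meet.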

vtexture1.png
vtexture2.png
vtexture3.png
vtexture5.png

The data sequence used is from the MOVI/IMAG Project.



Last modification: 2005-Jun-08 00:06