Visualization Technique

N-body simulations contain three main ingredients: dark matter, gas, and stars. More advanced simulations add several layers of boundary particles, and the fancier ones also include black holes.

What do we want to see? Probably most important are the distribution of particles, the density field, the relative positions of satellites in host haloes, and the gas distribution: cool/hot cores and flows. The gas distribution in the filaments can be very complex. With standard methods such as projected density maps we lose depth perception, and the flow features are contaminated by far-away projected objects. The solution is to use a perspective projection of the 3D space. Another problem comes from the choice of how to color and blend the particles when we want to see the internal structure of objects.

We have developed a special visualization algorithm that allows us to explore the inner structure of objects formed in N-body simulations.

Visualizing the particle data.

It is easy to visualize point distributions. We have a vector of 3D points describing the particle positions and a 1D vector of local densities. The density can be estimated by one of several methods: CIC, TSC, KD-tree, SPH, or even AMR. Before starting the visualization we definitely suggest calculating an SPH density, since it provides both a density and a size per particle.
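As an illustration of the kernel-based approach, here is a minimal sketch of an SPH density estimate with the standard 3D cubic-spline kernel. This is not PMViewer's actual estimator: the brute-force O(N^2) neighbor loop and the fixed smoothing length are simplifications (a real code would use a KD-tree and adaptive HSML).

```cpp
#include <cmath>
#include <vector>

// Illustrative SPH density estimate: rho_i = sum_j m_j * W(|r_i - r_j|, h).
// The smoothing length hsml doubles as the particle's on-screen size.
struct Particle { float x, y, z, mass, rho, hsml; };

// Standard 3D cubic-spline (M4) kernel with support radius 2h.
float W(float r, float h)
{
    const float PI = 3.14159265f;
    float q = r / h;
    float norm = 1.0f / (PI * h * h * h);
    if (q < 1.0f)
        return norm * (1.0f - 1.5f * q * q + 0.75f * q * q * q);
    else if (q < 2.0f)
        return norm * 0.25f * (2.0f - q) * (2.0f - q) * (2.0f - q);
    return 0.0f;
}

void ComputeDensity(std::vector<Particle>& p)
{
    for (size_t i = 0; i < p.size(); ++i) {
        p[i].rho = 0.0f;
        for (size_t j = 0; j < p.size(); ++j) {  // O(N^2); a KD-tree cuts this down
            float dx = p[i].x - p[j].x;
            float dy = p[i].y - p[j].y;
            float dz = p[i].z - p[j].z;
            float r = std::sqrt(dx * dx + dy * dy + dz * dz);
            p[i].rho += p[j].mass * W(r, p[i].hsml);
        }
    }
}
```

A particle with a close neighbor ends up denser than an isolated one, which is exactly the contrast the color mapping below will exploit.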

The density, or the logarithm of the density, can be mapped to RGBA by transfer functions. For each channel we can use a different function. This allows us to highlight the full information hidden in the distribution.
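A minimal sketch of such a per-channel mapping is shown below. The particular channel functions are hypothetical, chosen only to illustrate the idea that each channel gets its own function of the normalized log-density; PMViewer's real color tables may look quite different.

```cpp
#include <algorithm>

struct RGBA { float r, g, b, a; };

static float clamp01(float x) { return std::min(1.0f, std::max(0.0f, x)); }

// One transfer function per channel, all driven by the normalized
// log-density t in [0,1]. Together the four functions define a color table.
RGBA TransferFunction(float logRho, float logRhoMin, float logRhoMax)
{
    float t = clamp01((logRho - logRhoMin) / (logRhoMax - logRhoMin));
    RGBA c;
    c.r = t;            // red ramps up with density
    c.g = t * t;        // green emphasizes only the densest regions
    c.b = 1.0f - t;     // blue highlights the diffuse component
    c.a = t;            // opacity follows density
    return c;
}
```

With this choice, diffuse gas comes out blue and transparent while the densest knots come out bright and opaque.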

Of course, we would like to see the internal structure of the density field. To this end, we sort the particles by density before drawing, so that the densest particles are drawn last.
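The sorting step can be sketched as an index sort, so the particle arrays themselves stay untouched and only the draw order changes (an illustrative sketch, not PMViewer's actual code):

```cpp
#include <algorithm>
#include <vector>

// Return draw order: least dense first, densest last (drawn on top).
std::vector<int> SortByDensity(const std::vector<float>& rho)
{
    std::vector<int> order(rho.size());
    for (size_t i = 0; i < order.size(); ++i) order[i] = (int)i;
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return rho[a] < rho[b]; });
    return order;
}
```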

In OpenGL it is possible to disable the depth test while keeping the perspective projection. During 3D transformations this lets us see the inner parts of objects, as in a projected slice, while the perception of depth still lets us separate distant structures from nearby ones. With different combinations of depth test and transparency we can highlight all the features in the simulated data.

The current version of PMViewer can handle one vector variable and two scalar fields. The vector variable is usually the position (x, y, z), which is mapped to the point coordinates. The scalar fields are mapped to the RGBA color space of the points by transfer functions, one function per color channel. The given set of transfer functions defines the color table. The RGBA color space consists of the RED, GREEN, BLUE (RGB) channels and the ALPHA (transparency) channel. The alpha channel is usually mapped to the particle density; the color channels can be used for the second scalar field.
If we use the density for both the RGB and the A channels, we obtain density maps. If we use the alpha channel for the density and the RGB channels for some other scalar field, the method produces a density-weighted scalar field. The latter is very informative for density-weighted temperature fields, where we can visually trace the cool flows and cool cores in the galaxies.

Visualizing the grid-based data.

AMR and grid-based simulations can be visualized by converting each grid cell into an SPH particle. For AMR simulations, for example, a quick visualization is obtained by treating each AMR cell as a particle whose HSML is half the cell size. The scalar quantities owned by the cell can be used for the colors, and RHO = mass/(2 HSML)^3 is a good approximation for visualization purposes. If the results are not satisfying, one can use a more accurate particle assignment based on kernel interpolation.

These methods are on PMViewer's TODO list.

The latest version of PMViewer supports the following rendering modes:

Particle density mode:

In this mode each particle is rendered as a point colored by density: all RGBA channels follow the density.

Particle temperature weighted density mode:

In this mode each particle is rendered as a point colored by temperature, with the transparency mapped to density: the RGB channels follow the temperature and the alpha channel follows the density.

SPH rendering mode:

SPH and other kernel-based density estimators also provide the size of the kernel. This size is used to render each particle as a colored sprite.

Based on the smoothing kernel we generate the sprite:

[Image: smoothing kernel profile]

The generated unit texture then looks as follows:

[Images: alpha channel (left) and colors (right)]

On the left is the alpha channel, on the right the RGB channel. For the RGB channel we use the transfer functions that generate the RED color table.
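The alpha side of such a unit texture can be generated from any radially symmetric kernel. Below is a sketch with a simple smoothstep-like cubic falloff; the exact kernel shape is an assumption, PMViewer's own sprite table may differ.

```cpp
#include <cmath>
#include <vector>

// Fill an n x n alpha texture from a radial kernel: bright at the sprite
// center, falling to zero at the edge of the unit circle.
std::vector<float> MakeKernelSprite(int n)
{
    std::vector<float> tex(n * n);
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            // texel center mapped to [-1, 1] x [-1, 1]
            float x = (i + 0.5f) / n * 2.0f - 1.0f;
            float y = (j + 0.5f) / n * 2.0f - 1.0f;
            float q = std::sqrt(x * x + y * y);
            // cubic falloff, zero outside the unit circle
            tex[j * n + i] =
                q < 1.0f ? (1.0f - q) * (1.0f - q) * (1.0f + 2.0f * q) : 0.0f;
        }
    return tex;
}
```

The resulting array would be uploaded once with glTexImage2D and reused for every particle sprite.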

Each sprite is rendered so that it faces the scene camera, as shown in the next figure:

[Image: billboarding]

This is the so-called billboarding of the texture.

In the SPH mode the particles are sorted by density, and the densest particles are rendered last. When rendering the particles we disable the OpenGL depth test and enable alpha blending:

        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClearDepth(1.0f);
        glDisable(GL_DEPTH_TEST);          // keep perspective, but no occlusion
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.0f);     // skip fully transparent fragments
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, m_texture[0]);
         ....
        DrawPoints();
         ....

GPU-based billboarding.

Billboarding is a technique that adjusts an object's orientation so that it "faces" some target, usually the camera. For more information, see the Billboarding Tutorial on the www.lighthouse3d.com website.

Billboarding is a computationally expensive procedure.

In PMViewer we have implemented two methods: CPU-based and GPU-based billboarding.
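The CPU-based variant can be sketched as follows: extract the camera's right and up vectors from the modelview rotation and build a camera-facing quad around each particle. This per-particle work is exactly what the GPU point-sprite path below avoids (the function and struct names are illustrative, not PMViewer's).

```cpp
struct Vec3 { float x, y, z; };

// Build the four corners of a camera-facing quad from a column-major
// OpenGL modelview matrix. The camera's right and up directions in world
// space are the first two rows of the upper-left 3x3 rotation.
void BillboardCorners(const float mv[16], Vec3 center, float halfSize,
                      Vec3 out[4])
{
    Vec3 right = { mv[0], mv[4], mv[8] };
    Vec3 up    = { mv[1], mv[5], mv[9] };
    float s = halfSize;
    out[0] = { center.x - s*right.x - s*up.x, center.y - s*right.y - s*up.y, center.z - s*right.z - s*up.z };
    out[1] = { center.x + s*right.x - s*up.x, center.y + s*right.y - s*up.y, center.z + s*right.z - s*up.z };
    out[2] = { center.x + s*right.x + s*up.x, center.y + s*right.y + s*up.y, center.z + s*right.z + s*up.z };
    out[3] = { center.x - s*right.x + s*up.x, center.y - s*right.y + s*up.y, center.z - s*right.z + s*up.z };
}
```

With an identity modelview matrix the quad simply spans the xy-plane around the particle.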

The OpenGL extension GL_POINT_SPRITE_ARB allows us to use a custom-drawn texture for each particle rather than the traditional round antialiased OpenGL points, for which every fragment of a point shares the same texture coordinates. Another extension, GL_VERTEX_PROGRAM_POINT_SIZE_NV from NVIDIA, allows us to assign an individual size to each point. If GL_VERTEX_PROGRAM_POINT_SIZE_NV is disabled, the point size is determined by the glPointSize state; if enabled, it is determined per vertex by the clamped value of the vertex result PSIZ register.

In the GLSL shading language it looks like this:

vertexShader
 
        // with ATI hardware, a uniform variable MUST be used by an output
        // variable; that is why win_height is used for the point size output
        attribute float a_hsml1;
        uniform float win_height;
        uniform vec4 cameralocin;
        void main()
        {
            vec4 position = gl_ModelViewMatrix * gl_Vertex;
            vec4 cameraloc = gl_ModelViewMatrix * cameralocin;
            float d = distance(vec3(cameraloc), vec3(position));
            float a_hsml = gl_Normal.x;
            float pointSize = win_height * a_hsml / d;  // point diameter in
                                                        // pixels (falls off as 1/d)
            gl_PointSize = pointSize;
            gl_TexCoord[0] = gl_MultiTexCoord0;
            gl_Position = ftransform();
            gl_FrontColor = vec4(gl_Color.r, gl_Color.g, gl_Color.b, gl_Color.a);
        }
pixelShader
 
        uniform sampler2D splatTexture;
        void main()
        {
            vec4 color = gl_Color * texture2D(splatTexture, gl_TexCoord[0].st);
            gl_FragColor = color;
        }
Here we use the "x" component of glNormal to carry the kernel size (HSML) of each SPH particle.

Sending particles to the GPU

In PMViewer we use 3 floats to store each particle's position and 4 floats for the colors and the alpha channel, i.e. the RGBA color space is used.

The per-particle size HSML is passed to the GPU via GL_NORMAL_ARRAY. Currently we use only its "x" component; the other two components will be used in future releases to pass different parameters to the GPU.

First of all, we send the data to the GPU by binding it to Vertex Buffer Objects (VBOs):

bool PutOneArrayToGPU(unsigned int m_vbo, float *hArray, unsigned int num)
   	{
   	glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
   	glBufferData(GL_ARRAY_BUFFER, sizeof(float) * num, hArray, GL_STATIC_DRAW);
   	int size = 0;
   	glGetBufferParameteriv(GL_ARRAY_BUFFER, GL_BUFFER_SIZE, &size);
   	if ((unsigned)size != (sizeof(float) * num))
   		{
   		fprintf(stderr, "WARNING: Vertex Buffer Object allocation failed!\n");
   		fprintf(stderr, "Turning off the GPU accelerated rendering\n");
   		flag_GpuRender = false;
   		}
   	return flag_GpuRender;
  	}
glBindBuffer is available only if the GL version is 1.5 or greater.

For example, to bind the HSML array we do:

    glGenBuffers(1, &m_vboHSML);
    if (!PutOneArrayToGPU(m_vboHSML, hHSML, 3*m_numParticles))
        std::cerr << "hHSML error: " << 3*m_numParticles << std::endl;

The full rendering process is the following:

        void DrawPointsByGPU()
        {
        glEnableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, m_vboPos);
        glVertexPointer(3, GL_FLOAT, 0, 0);             // x, y, z

        glEnableClientState(GL_COLOR_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, m_vboColor);
        glColorPointer(4, GL_FLOAT, 0, 0);              // RGBA

        glEnableClientState(GL_NORMAL_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, m_vboHSML);
        glNormalPointer(GL_FLOAT, 3*sizeof(float), 0);  // HSML in the "x" component

        glDrawArrays(GL_POINTS, 0, m_numParticles);

        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glDisableClientState(GL_NORMAL_ARRAY);
        glDisableClientState(GL_COLOR_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
        }
The rendering methods are implemented in the class CRenderParticles, in the files RenderParticles.cpp and RenderParticles.h.

© 2009 A.Khalatyan,
Last modified: December 28 2009 14:24:20.