I am not sure whether this is really a new technique. I have not found anything like it in the literature so far, but I have not read everything ever written either, so I cannot be certain. So my question is: is this a new technique?
This technique works as follows:
- render the scene to a render texture that stores the distance to the camera at every pixel
- render the scene normally, without the volume
- render the volume, ignoring the z-buffer, by raytracing a sphere in the pixel shader and comparing the resulting thickness of the sphere with the depth in the depth texture (see the sketch below)
- this can be combined with lighting on the spheres and with texturing
Just to be sure it is clear: I am NOT doing a raymarch through a voxel grid.
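
To make the per-pixel maths a bit more concrete, here is a rough sketch in C++ (CPU code rather than actual shader code) of what the pixel shader computes for each pixel. The variable names and the exponential mapping from thickness to opacity are my own illustrative choices here; the shader in the demo may do this differently.

// Rough C++ sketch of the per-pixel computation (illustrative names only).
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Returns the opacity (0..1) of the volumetric sphere for one pixel.
// rayDir must be normalised; sceneDepth is the distance to the camera that
// was stored in the depth render texture for this pixel.
float volumeOpacity(const Vec3& camPos, const Vec3& rayDir,
                    const Vec3& sphereCentre, float sphereRadius,
                    float sceneDepth, float density)
{
    // Ray/sphere intersection: solve |camPos + t * rayDir - sphereCentre| = radius.
    Vec3 oc = sub(camPos, sphereCentre);
    float b = dot(oc, rayDir);
    float c = dot(oc, oc) - sphereRadius * sphereRadius;
    float discriminant = b * b - c;
    if (discriminant <= 0.0f)
        return 0.0f;                      // the ray misses the sphere entirely

    float root  = std::sqrt(discriminant);
    float tNear = -b - root;              // distance at which the ray enters the sphere
    float tFar  = -b + root;              // distance at which the ray leaves the sphere

    // Clamp the entry to the camera position (for when the camera is inside the
    // sphere) and the exit to the opaque scene behind the volume (the depth texture).
    tNear = std::max(tNear, 0.0f);
    tFar  = std::min(tFar, sceneDepth);
    float thickness = std::max(tFar - tNear, 0.0f);

    // Turn the traversed thickness into an opacity; an exponential fall-off is
    // one simple option, not necessarily the exact mapping used in the demo.
    return 1.0f - std::exp(-density * thickness);
}

The opacity produced this way is then used to blend the sphere's colour over the normally rendered scene, which is what makes geometry fade out gradually the deeper it sits inside the volume.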

These images show the result:
Large particle rendered with volume. Note how the red bars fade out as they enter the volume:

The same particle, rendered traditionally, i.e. without the volume technique. Note the sharp edges where the particle boxes intersect:

The raytraced volumetric sphere:

The same volumetric sphere, with lighting:

The one above, combined with a texture:

Moving the camera inside the volume also works correctly, so when the camera gets close to the red bars, they become more visible. As the technique is currently implemented, moving beyond the centre of the volumetric particle does not work correctly. A fix for that is planned, but not implemented yet.

A real-time adjustable demo (including source code) can be found here:
Volumetrics demo version 3 (ZIP, 4.0MB)
Oh, by the way, I am working on this for my thesis for the Computer Science Master "Game & Media Technology" at Utrecht University. The first half of my thesis was Interior Mapping, which also used raytracing in the pixel shader (the overall topic of my thesis).
I am looking forward to hearing any ideas for this that you folks might have! (And I am hoping it really is a new technique.)