A while back I published a tutorial describing a screen space technique for approximating motion blur in realtime. The effect was simplistic; it took into account the movement of a camera through the scene, but not the movement of individual objects in the scene. Here I'm going to describe a technique which addresses both types of motion. But let's begin with a brief recap:
A Brief Recap
Motion pictures are made up of a series of still images displayed in quick succession. Each image is captured by briefly opening a shutter to expose a piece of film/electronic sensor. If an object in the scene (or the camera itself) moves during this exposure, the result is blurred along the direction of motion, hence motion blur.

The previous tutorial dealt only with motion blur caused by camera movement, which is very simple and cheap to achieve, but ultimately less realistic than 'full' motion blur.
For full motion blur, the approach I'll describe here goes like this: render the velocity at every pixel to a velocity buffer, then use this to apply a per-pixel directional blur to the rendered scene as a post process. This isn't the only approach, but it's one of the simplest to implement and has been used effectively in a number of games.
Velocity Buffer
In order to calculate the velocity of a point moving through space we need at least two pieces of information:

- where is the point right now (a)?
- where was the point t seconds ago (b)?
Technically the velocity is (a - b) / t; however, for our purposes we don't need to use t, at least not when writing to the velocity buffer.
Since we'll be applying the blur as a post process in image space, we may as well calculate our velocities in image space. This means that our positions (a and b) should undergo the model-view-projection transformation, perspective divide and then a scale/bias. The result can be used to generate texture coordinates directly, as we'll see.
To actually generate the velocity buffer we render the geometry, transforming every vertex by both the current model-view-projection matrix as well as the previous model-view-projection matrix. In the vertex shader we do the following:
uniform mat4 uModelViewProjectionMat;
uniform mat4 uPrevModelViewProjectionMat;

smooth out vec4 vPosition;
smooth out vec4 vPrevPosition;

void main(void) {
   // transform by both the current and the previous frame's matrix
   vPosition     = uModelViewProjectionMat * gl_Vertex;
   vPrevPosition = uPrevModelViewProjectionMat * gl_Vertex;
   gl_Position = vPosition;
}
Then, in the fragment shader, we use the interpolated positions to compute the image space velocity:

smooth in vec4 vPosition;
smooth in vec4 vPrevPosition;

out vec2 oVelocity;

void main(void) {
   // perspective divide, then scale/bias into [0, 1]
   vec2 a = (vPosition.xy / vPosition.w) * 0.5 + 0.5;
   vec2 b = (vPrevPosition.xy / vPrevPosition.w) * 0.5 + 0.5;
   oVelocity = a - b;
}
For now, I'm assuming you've got a floating point texture handy to store the velocity result (e.g. GL_RG16F). I'll discuss velocity buffer formats and the associated precision implications later. So at this stage we have a per-pixel, image space velocity incorporating both camera and object motion.
Blur
Now we have a snapshot of the per-pixel motion in the scene, as well as the rendered image that we're going to blur. If you're rendering HDR, the blur should (ideally) be done prior to tone mapping. Here are the beginnings of the blur shader:

uniform sampler2D uTexInput;    // texture we're blurring
uniform sampler2D uTexVelocity; // velocity buffer

uniform float uVelocityScale;

out vec4 oResult;

void main(void) {
   vec2 texelSize = 1.0 / vec2(textureSize(uTexInput, 0));
   vec2 screenTexCoords = gl_FragCoord.xy * texelSize;

   vec2 velocity = texture(uTexVelocity, screenTexCoords).rg;
   velocity *= uVelocityScale;

   // blur code will go here...
}
We'll use texelSize again later on. What's uVelocityScale? It's used to address the following problem: if the framerate is very high, velocity will be very small as the amount of motion in between frames will be low. Correspondingly, if the framerate is very low, the motion between frames will be high and velocity will be much larger. This ties the blur size to the framerate, which is technically correct if you equate framerate with shutter speed, however it is undesirable for realtime rendering where the framerate can vary. To fix it we need to cancel out the framerate:

uVelocityScale = currentFps / targetFps;
The next step is to work out how many samples we're going to take for the blur. Rather than use a fixed number of samples, we can improve performance by adapting the number of samples according to the velocity:
float speed = length(velocity / texelSize);
int nSamples = clamp(int(speed), 1, MAX_SAMPLES);
By dividing velocity by texelSize we can get the speed in texels. This needs to be clamped: we want to take at least 1 sample, but no more than MAX_SAMPLES.

Now for the actual blur itself:
oResult = texture(uTexInput, screenTexCoords);
for (int i = 1; i < nSamples; ++i) {
   // spread the samples evenly over the velocity vector, centred on the pixel
   vec2 offset = velocity * (float(i) / float(nSamples - 1) - 0.5);
   oResult += texture(uTexInput, screenTexCoords + offset);
}
oResult /= float(nSamples);
That's it! This is about as basic as it gets for this type of post process motion blur. It works, but it's far from perfect.
Far From Perfect
I'm going to spend the remainder of the tutorial talking about some issues along with potential solutions, as well as some of the limitations of this class of techniques.

Silhouettes
The velocity map contains discontinuities which correspond with the silhouettes of the rendered geometry. These silhouettes transfer directly to the final result and are most noticeable when things are moving fast (i.e. when there's lots of blur).

One solution as outlined here is to do away with the velocity map and instead render all of the geometry a second time, stretching the geometry along the direction of motion in order to dilate each object's silhouette for rendering the blur.
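To make the stretching idea concrete, here's a minimal vertex shader sketch. This is not from the original tutorial: the attribute names aPosition/aNormal, the clip space comparison and the crude normal transform are my own assumptions, and a real implementation would treat the normal more carefully.

// Stretch geometry along the direction of motion: vertices facing the
// motion keep their current position, vertices facing away are snapped
// back to their previous position, dilating the silhouette.
uniform mat4 uModelViewProjectionMat;
uniform mat4 uPrevModelViewProjectionMat;

in vec4 aPosition; // hypothetical attribute names
in vec3 aNormal;

void main(void) {
   vec4 p     = uModelViewProjectionMat * aPosition;
   vec4 pPrev = uPrevModelViewProjectionMat * aPosition;

   // approximate motion direction in NDC
   vec3 motion = p.xyz / p.w - pPrev.xyz / pPrev.w;

   // crude normal transform; ignores the proper inverse-transpose
   vec3 n = normalize(mat3(uModelViewProjectionMat) * aNormal);

   gl_Position = (dot(motion, n) > 0.0) ? p : pPrev;
}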
Another approach is to perform dilation on the velocity buffer, either in a separate processing step or on the fly when performing the blur. This paper outlines such an approach.
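As a rough illustration of the dilation idea, here's a single-pass sketch that writes, at each pixel, the largest velocity found in a small neighbourhood. Note this is my own simplification: the paper's actual implementation uses a two-pass tile max/neighbour max at reduced resolution, which is far cheaper.

uniform sampler2D uTexVelocity;

out vec2 oVelocity;

void main(void) {
   vec2 texelSize = 1.0 / vec2(textureSize(uTexVelocity, 0));
   vec2 uv = gl_FragCoord.xy * texelSize;

   // pick the longest velocity in a 5x5 neighbourhood
   vec2 maxVelocity = vec2(0.0);
   float maxLength2 = 0.0;
   for (int x = -2; x <= 2; ++x) {
      for (int y = -2; y <= 2; ++y) {
         vec2 v = texture(uTexVelocity, uv + vec2(x, y) * texelSize).rg;
         float length2 = dot(v, v);
         if (length2 > maxLength2) {
            maxLength2 = length2;
            maxVelocity = v;
         }
      }
   }
   oVelocity = maxVelocity;
}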
Background Bleeding
Another problem occurs when a fast moving object is behind a slow moving or stationary object. Colour from the foreground object bleeds into the background.

A possible solution is to use the depth buffer, if available, to weight samples based on their relative depth. The weights need to be tweaked such that valid samples are not excluded.
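Here's one way such depth weighting might look, folded into the blur shader from earlier. This is a sketch under my own assumptions: uTexDepth and uDepthThreshold are not part of the original shader, and the linear falloff is just one plausible weighting function.

#define MAX_SAMPLES 32

uniform sampler2D uTexInput;    // texture we're blurring
uniform sampler2D uTexVelocity; // velocity buffer
uniform sampler2D uTexDepth;    // assumed: scene depth buffer
uniform float uVelocityScale;
uniform float uDepthThreshold;  // assumed: depth difference tolerance

out vec4 oResult;

void main(void) {
   vec2 texelSize = 1.0 / vec2(textureSize(uTexInput, 0));
   vec2 screenTexCoords = gl_FragCoord.xy * texelSize;

   vec2 velocity = texture(uTexVelocity, screenTexCoords).rg * uVelocityScale;
   float speed = length(velocity / texelSize);
   int nSamples = clamp(int(speed), 1, MAX_SAMPLES);

   float centerDepth = texture(uTexDepth, screenTexCoords).r;
   oResult = texture(uTexInput, screenTexCoords);
   float totalWeight = 1.0;
   for (int i = 1; i < nSamples; ++i) {
      vec2 offset = velocity * (float(i) / float(nSamples - 1) - 0.5);
      vec2 uv = screenTexCoords + offset;

      // Fade out samples that are significantly nearer than the centre
      // pixel, so foreground colour doesn't bleed onto the background.
      float sampleDepth = texture(uTexDepth, uv).r;
      float weight = clamp(1.0 - (centerDepth - sampleDepth) / uDepthThreshold, 0.0, 1.0);

      oResult += texture(uTexInput, uv) * weight;
      totalWeight += weight;
   }
   oResult /= totalWeight;
}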
Format & Precision
For the sake of simplicity I assumed a floating point texture for the velocity buffer, however the reality may be different, particularly for a deferred renderer where you might have to squeeze the velocity into as few as two bytes. Using an unsigned normalized texture format, writing to and reading from the velocity buffer requires a scale/bias:

// writing:
oVelocity = (a - b) * 0.5 + 0.5;

// reading:
vec2 velocity = texture(uTexVelocity, screenTexCoords).rg * 2.0 - 1.0;
With only a byte per component, small velocities (which are the common case) suffer a visible loss of precision. The solution to this is to use the pow() function to control how precision in the velocity buffer is distributed: we want to increase precision for small velocities at the cost of worse precision for high velocities. Writing/reading the velocity buffer now looks like this:
// writing:
oVelocity = (a - b) * 0.5 + 0.5;
oVelocity = pow(oVelocity, vec2(3.0));

// reading:
vec2 velocity = texture(uTexVelocity, screenTexCoords).rg;
velocity = pow(velocity, vec2(1.0 / 3.0));
velocity = velocity * 2.0 - 1.0;
Transparency
Transparency presents similar difficulties with this technique as with deferred rendering: since the velocity buffer only contains information for the nearest pixels, we can't correctly apply a post process blur when pixels at different depths all contribute to the result. In practice this results in 'background' pixels (whatever is visible through the transparent surface) being blurred (or not blurred) incorrectly.

The simplest solution to this is to prevent transparent objects from writing to the velocity buffer. Whether this improves the result depends largely on the number of transparent objects in the scene.
Another idea might be to use blending when writing to the velocity buffer for transparent objects, using the transparent material's opacity to control the contribution to the velocity buffer. Theoretically this could produce an acceptable compromise although in practice it may not be possible depending on how the velocity buffer is set up.
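A minimal fragment shader sketch of that idea, under my own assumptions: a velocity buffer with an alpha channel (e.g. GL_RGBA16F), conventional alpha blending enabled while rendering transparent objects, and a hypothetical uOpacity material parameter.

smooth in vec4 vPosition;
smooth in vec4 vPrevPosition;

uniform float uOpacity; // hypothetical material opacity

out vec4 oVelocity;

void main(void) {
   vec2 a = (vPosition.xy / vPosition.w) * 0.5 + 0.5;
   vec2 b = (vPrevPosition.xy / vPrevPosition.w) * 0.5 + 0.5;

   // opacity in alpha lets fixed function blending mix this velocity
   // with whatever is already in the buffer
   oVelocity = vec4(a - b, 0.0, uOpacity);
}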
A correct, but much more expensive approach would be to render and blur each transparent object separately and then recombine with the original image.
Conclusions
It's fairly cheap, it's very simple and it looks pretty good in a broad range of situations. Once you've successfully implemented this, however, I'd recommend stepping up to a more sophisticated approach as described here.

I've provided a demo implementation.
Great post John! I'm in the process of implementing per-object motion blur right now, and found this post as a resource.
Regarding the problem of silhouettes, I'm trying to imagine what it would be like to do a prepass of the velocity buffer by blurring it WITH the velocity buffer itself. Then using that resulting blurred velocity buffer to blur the image. I've only skimmed the paper describing the velocity buffer dilation technique, but I guess it's probably similar to that?
Hi Andrew. "Velocity Dilation Methods" on p2 of the McGuire paper notes some limitations of using straightforward convolutions on the velocity buffer, mainly that these tend to cause overblur in the result, and that any componentwise blurring of vectors will affect their direction.
The implementation in the McGuire paper finds a "dominant" local velocity and uses that in the blur reconstruction: I'd say that it's worth taking the time to understand and implement, as the results are much superior to other techniques I've seen!
Thank you so much for this cleanly written, concise article!
In regards to your demo, I'm curious what you are using to do font rendering and slider bars.
ReplyDeleteI wrote it myself - it's pretty raw, I just use it to tweak variables at runtime and render the profiling data. It uses OpenGL, renders to the backbuffer after the main render pass. If you take a look at the "framework" source folder in any of the demos you can see how it works.
Very good explanation!

But, about precision:
> // writing:
> oVelocity = (a - b) * 0.5 + 0.5;
> oVelocity = pow(oVelocity, vec2(3.0));
Imho, this code means that we have low precision on negative speed values, and high precision on positive values.
I've modified your code a little bit to get the correct result:
// writing:
oVelocity = pow(abs(a - b), vec2(1.0 / 3.0)) * sign(a - b) * 0.5 + 0.5;

// reading:
vec2 velocity = texture(uTexVelocity, screenTexCoords).rg * 2.0 - 1.0;
velocity = pow(abs(velocity), vec2(3.0)) * sign(velocity);
Epic tutorial!
But, for the love of God, to spare yourself 5 hours of debugging: make sure you set the alpha of oResult to 1 after the division (oResult /= float(nSamples)). Otherwise alpha blending makes some nasty errors!

Thank you for the great lesson!