Processing Images and Video for an Impressionist Effect

A paper by Peter Litwinowicz of Apple, Inc.

Presented by Sean Dunn on 3/10/1999
CS563




Introduction

This paper presents a new technique for the automatic processing of moving images to create a flowing, painted, impressionistic effect.

Past Techniques

Haeberli's Technique (static images only)
The user controls the orientation, size, and color of the strokes using a combination of interactive and non-interactive input.

Interactive Input
Setting cursor location
Pressure and velocity

Non-Interactive Input
Gradient of the original image
Secondary images

Simple Motion Improvements to Haeberli's Technique

- Keeping the same strokes from frame to frame, changing the color and direction of the strokes as necessary. This gives the animation an undesirable "shower-glass" effect.

- Placing the strokes randomly in each frame. The resulting effect is too jittery.

The work presented in this paper is based on modifying Haeberli's technique to produce temporally coherent animations.


Other Work In The Field

Hsu (1994) created a system for producing animations using “skeletal strokes”, a way of applying arbitrary images to brush strokes. The animation was keyframed by the user; no automatic processing was provided.

Meier (1996) created a system for transforming 3D geometry into animations with a painted look.
- 3D objects are animated, and “particles” on the surface of each object are tracked. The particles are projected to 2D and sorted by Z-depth. These projected particles provide the positions for 2D brush strokes, with the surface normals of the original object determining the orientation of each stroke.
- The user specified brush size and texture, and whether the brush size varied across an object.
- Although video input was not used for this technique, it showed that temporal coherence of brush strokes was both interesting and important.


The Painting Process

This presentation will now discuss:

- This Paper's Rendering Technique

- The Orientation Algorithm

- The technique used to move brush strokes from frame to frame to produce temporally coherent animations


Rendering Strokes

Stroke Generation

Brush strokes are defined by a center point (cx, cy), a length, a brush thickness (radius), and an orientation (theta).

Stroke positions are stored as floating-point numbers to allow subpixel positioning. For now, theta can be set to a constant 45 degrees. The color for each stroke is a bilinear interpolation of the pixels it covers. The order in which the strokes are drawn is randomized, which reduces the spatial coherence of the image and gives it a more hand-painted look.
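As an illustrative sketch (not code from the paper), the per-stroke data and the bilinear color sampling might look like this in Python, assuming a floating-point RGB numpy image; all names are hypothetical:

    # A minimal sketch of the per-stroke data described above.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Stroke:
        cx: float          # center x (subpixel)
        cy: float          # center y (subpixel)
        length: float      # maximum stroke length
        radius: float      # brush thickness
        theta: float       # orientation in degrees (constant 45 for now)
        color: np.ndarray  # (r, g, b) sampled from the source image

    def bilinear_sample(image, x, y):
        """Bilinearly interpolate an RGB image at a subpixel position."""
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1 = min(x0 + 1, image.shape[1] - 1)
        y1 = min(y0 + 1, image.shape[0] - 1)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
        bottom = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
        return (1 - fy) * top + fy * bottom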

Random Perturbations

Adding random variations to each stroke is important for creating a hand-crafted look. Length, radius, color (r, g, b), intensity, and theta are translated and scaled by random amounts, within ranges specified by the user. These amounts are stored in a per-stroke data structure. New random values are NOT generated each frame; reusing the stored values keeps the animation from looking too jittery.
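A minimal sketch of generating those per-stroke values once; the dictionary layout and range names are assumptions, and for brevity only the additive offsets are shown (the paper also applies random scales):

    import random

    def make_perturbation(ranges):
        """Draw the per-stroke random offsets once; the same values are
        reused every frame so the animation does not jitter."""
        return {
            "d_length":    random.uniform(-ranges["length"], ranges["length"]),
            "d_radius":    random.uniform(-ranges["radius"], ranges["radius"]),
            "d_rgb":       tuple(random.uniform(-ranges["rgb"], ranges["rgb"])
                                 for _ in range(3)),
            "d_intensity": random.uniform(-ranges["intensity"], ranges["intensity"]),
            "d_theta":     random.uniform(-ranges["theta"], ranges["theta"]),
        }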

Clipping and Rendering

To render a brush stroke, an antialiased line is drawn through the center point of the stroke in the proper orientation. To preserve the general detail of the original image, each stroke is "grown" out from its center until it reaches an edge in the image:

1. An intensity map (I = 0.30*r + 0.59*g + 0.11*b) is generated for the image.
2. The intensity map is Gaussian blurred.
3. The gradient of the blurred image is calculated and Sobel filtered.
4. Strokes are grown from their center points, and stopped if the maximum stroke length is reached or if the magnitude of the gradient at the current point (the Sobel value) decreases in the direction of the stroke.
5. An antialiased line segment is drawn between the two grown ends.
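A sketch of steps 1-4 under assumed names; in practice the blur and Sobel responses would be computed once per frame rather than per stroke:

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def edge_clipped_endpoints(intensity, cx, cy, theta_deg, max_half_length, sigma=2.0):
        # Steps 1-3: blurred intensity map and Sobel gradient magnitude.
        blurred = gaussian_filter(intensity, sigma)
        magnitude = np.hypot(sobel(blurred, axis=1), sobel(blurred, axis=0))

        dx, dy = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
        endpoints = []
        for sign in (+1, -1):                      # step 4: grow in both directions
            x, y = cx, cy
            last_mag = magnitude[int(cy), int(cx)]
            for _ in range(int(max_half_length)):
                nx, ny = x + sign * dx, y + sign * dy
                if not (0 <= nx < intensity.shape[1] and 0 <= ny < intensity.shape[0]):
                    break                          # ran off the image
                mag = magnitude[int(ny), int(nx)]
                if mag < last_mag:
                    break                          # Sobel value decreased: stop at the edge
                last_mag, x, y = mag, nx, ny
            endpoints.append((x, y))
        return endpoints  # step 5 draws the antialiased line between these two ends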


Orienting Brush Strokes

To approximate the metaphor of brush strokes running in the same direction within a region of constant color, brush strokes can be oriented normal to the gradient of the intensity image. Since the gradient gives the direction of most change, the normal to this direction is the direction of least change.

The problem is that gradient values near 0 should not be used, since in regions of nearly constant color the gradient direction is essentially noise. It would be better to discard these values and instead interpolate smoothly across such regions from the surrounding non-zero gradient values.

To do this, all gradient samples whose magnitude falls below a user-specified discard threshold are removed. They are then replaced with smooth interpolations of the surrounding samples. Because the remaining data is not uniformly spaced, linear interpolation is not suitable; instead, a thin-plate spline is used, since it interpolates scattered, non-uniform data smoothly.
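One way to sketch this scattered-data step, using SciPy's RBFInterpolator with a thin-plate-spline kernel as a stand-in for the paper's own solver (function names, the threshold, and the local-neighbor setting are illustrative assumptions):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def fill_weak_gradients(gx, gy, threshold):
        """Discard gradient samples below the user threshold and refill
        them with a thin-plate spline fit to the remaining samples."""
        magnitude = np.hypot(gx, gy)
        keep = magnitude >= threshold

        ys, xs = np.nonzero(keep)
        points = np.column_stack([xs, ys]).astype(float)
        values = np.column_stack([gx[keep], gy[keep]])

        # Thin-plate spline over the scattered, non-uniform samples.
        # neighbors=64 keeps this sketch tractable on full-size images.
        spline = RBFInterpolator(points, values,
                                 kernel="thin_plate_spline", neighbors=64)

        ys_fill, xs_fill = np.nonzero(~keep)
        fill_points = np.column_stack([xs_fill, ys_fill]).astype(float)
        filled = spline(fill_points)

        gx_out, gy_out = gx.copy(), gy.copy()
        gx_out[~keep], gy_out[~keep] = filled[:, 0], filled[:, 1]
        return gx_out, gy_out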

Finally, when a stroke center (cx, cy) is placed in the image, the gradient (Gx, Gy) at that position can be bilinearly interpolated. The direction angle can then be computed as:

    direction = atan2(Gy, Gx) + 90 + theta

where theta is the previously stored random angle perturbation for that stroke. Adding 90 degrees makes the direction normal to that of the gradient. This makes the strokes appear "glued" to the objects as they move, which is much closer to the way color flow would behave in an animated painting.
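Expressed as a small Python sketch (names assumed; the bilinear helper mirrors the sampling described earlier):

    import numpy as np

    def bilerp(field, x, y):
        """Bilinearly interpolate a scalar field at a subpixel position."""
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1 = min(x0 + 1, field.shape[1] - 1)
        y1 = min(y0 + 1, field.shape[0] - 1)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * field[y0, x0] + fx * field[y0, x1]
        bot = (1 - fx) * field[y1, x0] + fx * field[y1, x1]
        return (1 - fy) * top + fy * bot

    def stroke_direction(gx_field, gy_field, cx, cy, d_theta):
        """Angle normal to the interpolated gradient, plus the stroke's
        stored random perturbation d_theta (degrees)."""
        gx, gy = bilerp(gx_field, cx, cy), bilerp(gy_field, cx, cy)
        return np.degrees(np.arctan2(gy, gx)) + 90.0 + d_theta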


Temporal Coherence

Input to this process is a video clip with no a priori information about pixel motion in the scene. While the first frame can be rendered using the methods above, the brush strokes must be moved from one frame to the next by calculating the optical flow between successive images.

A full description of the optical flow algorithm is beyond the scope of this paper and presentation. It is based on cross-correlation between similar regions of the images, and it assumes that there is no occlusion and that lighting is constant. While these assumptions do not hold for most video, the artifacts they introduce still produce a pleasing effect.

The resulting vector field is used as a displacement field for the centers of the paint strokes. Since the whole image should remain covered in paint strokes, extra strokes must be added wherever the displacement leaves the existing strokes too sparse to cover the image.
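The paper's cross-correlation flow is not reproduced here; purely as a stand-in, the sketch below uses OpenCV's Farneback dense flow (an assumption, not the paper's algorithm) to displace stroke centers:

    import cv2
    import numpy as np

    def displace_strokes(prev_gray, next_gray, centers):
        """Move stroke centers by a dense optical flow field.
        Farneback flow is a stand-in for the paper's cross-correlation flow."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        moved = []
        for (cx, cy) in centers:
            iy = int(np.clip(round(cy), 0, flow.shape[0] - 1))
            ix = int(np.clip(round(cx), 0, flow.shape[1] - 1))
            dx, dy = flow[iy, ix]
            moved.append((cx + dx, cy + dy))
        return moved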

This is done by performing a Delaunay triangulation of the displaced stroke centers in the new image, which yields a set of triangles that covers and connects all of the stroke centers. Each triangle whose area exceeds a maximum is then subdivided into smaller triangles, and the new vertices become the center points of new paint strokes. By the same token, if two strokes bunch too closely together over time, the stroke that is deeper in the image is discarded. The new strokes are randomly inserted among the old strokes.
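A sketch of the triangulation step using scipy.spatial.Delaunay; where the paper subdivides oversized triangles into smaller triangles, this simplified sketch just adds one candidate center at the centroid of each oversized triangle (max_area and the names are assumptions):

    import numpy as np
    from scipy.spatial import Delaunay

    def new_stroke_centers(centers, max_area):
        """Triangulate the displaced stroke centers; for each triangle larger
        than max_area, add its centroid as a candidate new stroke center."""
        pts = np.asarray(centers, dtype=float)
        tri = Delaunay(pts)
        added = []
        for simplex in tri.simplices:
            a, b, c = pts[simplex]
            area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) -
                             (b[1] - a[1]) * (c[0] - a[0]))
            if area > max_area:
                added.append(tuple((a + b + c) / 3.0))
        return added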


Conclusions

A method for producing painting-like animations from video clips was presented.

- Brush strokes are clipped to edges in the original frame to maintain edge detail.
- A scattered data interpolation technique is used to interpolate the gradient field in areas where the magnitude is near zero.
- A brush stroke list is maintained and manipulated through the use of optical flow fields to enhance temporal coherence.

On a 180 MHz Macintosh 8500, this technique averaged 81 seconds per frame, processing an average of 120,000 paint strokes per frame.


References

[1] Litwinowicz, P., "Processing Images and Video for an Impressionist Effect," SIGGRAPH Proceedings 1997, pp. 151-158.
