This project builds on Residual (2022) by investigating how images processed with the Floyd-Steinberg dithering algorithm are perceived in temporal sequence, that is, as animation. The subject matter is, once again, clouds, but now they are viewed as they move across the sky.
A camera placed on my front porch (the same location from which the still photo that served as the source image in Residual was taken) was aimed at the sky overhead on a day when fairly heavy clouds were interspersed with glimpses of blue sky. A photo was taken every second over a period of about 45 minutes, producing a sequence of approximately 2700 images.
The image sequence was processed in two steps. First, each image was separated into cyan, magenta, and yellow channels, each channel was dithered, and the dithered channels were then stacked and blended back into a single 3-bit color image. This is the same process used for the still image in Residual. The key parameter of interest in this process was the resolution of the dithering, that is, how many pixels in the source image were mapped onto a single square in the output image. A coarser resolution, with larger squares in the output image, is achieved by dividing the source image into 8- by 8-pixel blocks for dithering, for example, rather than into finer 4- by 4-pixel blocks.
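For readers curious about the mechanics, a minimal sketch of this first step in Python (using NumPy and Pillow) might look like the following. The function names, the simple RGB-to-CMY inversion, and the block-averaging approach to coarse resolution are one plausible implementation of the steps described, not necessarily the exact code used.

```python
import numpy as np
from PIL import Image

def floyd_steinberg(channel):
    """Dither one float channel (values in 0..1) down to 1 bit."""
    img = channel.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # quantize to 0 or 1
            err = old - new
            img[y, x] = new
            # Diffuse the quantization error to unvisited neighbors
            # using the standard Floyd-Steinberg weights.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img

def dither_cmy(path, block=8):
    """Dither an image in CMY at a coarse resolution of
    `block` x `block` source pixels per output square."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    cmy = 1.0 - rgb                              # RGB -> CMY
    h, w = cmy.shape[:2]
    # Downsample: average each block x block region to one value per channel.
    small = cmy[: h - h % block, : w - w % block]
    small = small.reshape(h // block, block, w // block, block, 3).mean(axis=(1, 3))
    # Dither each channel independently to 1 bit, then stack them back,
    # yielding 3 bits (8 colors) per output square.
    dithered = np.stack([floyd_steinberg(small[..., c]) for c in range(3)], axis=-1)
    out = 1.0 - dithered                         # CMY -> RGB
    # Upscale so each dithered value becomes a visible block x block square.
    out = out.repeat(block, axis=0).repeat(block, axis=1)
    return Image.fromarray((out * 255).astype(np.uint8))
```

Applied frame by frame with, say, `dither_cmy(path, block=8)`, this reproduces the coarse, 3-bit look described above, with the `block` parameter controlling the dithering resolution.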
I was interested in how coarsely the image sequence could be dithered and still be legible, that is, recognizable as an animation of clouds moving across the sky. The goal was to rely on the perceptual system’s ability to organize a dynamic visual scene into discrete objects (in this case, discriminating the clouds as figure from the sky as background). The perceptual system can often achieve this goal more readily when viewing a dynamic rather than a static scene by exploiting motion cues. Many years ago, psychologists identified a set of “gestalt” principles of perceptual organization, which are assumptions made by the visual system to segregate objects in a scene from one another and from a common background. One principle, common fate, explicitly relies on motion cues: In a dynamic scene, elements that move together are assumed (and therefore perceived) to be parts of the same object, rather than parts of discrete objects that happen to be moving along the same path. With this in mind, my goal was to dither the images so coarsely that it would be difficult to perceive the clouds clearly in any single still image from the sequence, but relatively easy to perceive the clouds in the animated image sequence.
The second step was to assemble the sequence of dithered images into an animated video file. This can be done readily with the open-source Blender software. The only significant decision at this step is the selection of a frame rate. I wanted a frame rate slow enough to present each image long enough for the individual dithered squares to be seen clearly, but not so slow that the resulting animation was overly jerky.
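As an alternative sketch of this assembly step (in place of Blender's sequencer), a few lines of Python with imageio and its ffmpeg backend would do the same job. The directory name and output filename here are hypothetical, and the frame rate anticipates the 5 fps value settled on below.

```python
import glob
import imageio.v2 as imageio

frames = sorted(glob.glob("dithered/*.png"))   # hypothetical output directory
with imageio.get_writer("clouds.mp4", fps=5) as writer:
    for path in frames:
        writer.append_data(imageio.imread(path))
```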
After a lot of experimenting, I settled on a dithering resolution of 8- by 8-pixel blocks and a frame rate of 5 frames per second. In my viewing experience, at least, the individual squares in the dithered images are clearly visible when the video is viewed from close up, and the clouds emerge fairly clearly when it is viewed from farther away. Because the original sequence was shot at one image per second, playing it back at 5 fps shows the clouds moving at five times their original speed.