Frames: How We Create Video

When you dive into the video side of creative coding, you'll find that many principles from the early days of film still carry over to how we work digitally.

In early cinema, and still to this day in some analog camera workflows, film worked on the same idea we see in a flip book. If the footage is live action, we take a rapid-fire series of photographs on film stock, and then play them back at a speed that makes the motion look continuous.

In animation, this is done by drawing a character or object on one frame, tracing it onto the next piece of paper, and adjusting the drawing ever so slightly each time, similar to what happens in the live-action version above.

When we work with digital video, the same principles carry over. A video on your computer is mostly a long string of images put together, with a soundtrack running alongside them. We often don't see it that way, though: whether we're watching a video on YouTube or pulling a file off a camera, computers are great at keeping everything simple so that we just see a video.
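
You can make this idea concrete yourself. Here's a minimal Processing sketch (an illustrative example, not tied to any particular project) that builds a "video" the way a computer sees one: each call to draw() produces one frame, and saveFrame() writes it to disk as a numbered still that a video encoder could later stitch into a movie file.

```
// Video as a sequence of stills: every draw() call is one frame,
// and saveFrame() writes that frame out as a numbered image.

void setup() {
  size(640, 360);
}

void draw() {
  background(0);
  // Move a circle a little further each frame, flip-book style
  float x = (frameCount * 2) % width;
  ellipse(x, height / 2, 40, 40);

  // Writes frames/frame-0001.png, frame-0002.png, and so on
  saveFrame("frames/frame-####.png");

  // Stop after 300 frames (10 seconds at the default 60 fps: 5 seconds)
  if (frameCount >= 300) {
    noLoop();
  }
}
```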

In Pure Data, video is accomplished using the GEM library. In Max, it’s done using Jitter. And Processing does video right out of the box.
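
For instance, a few lines of Processing, using its official Video library, are enough to load and loop a movie. (The file name "clip.mov" here is a placeholder; drop your own file into the sketch's data folder.)

```
// Basic video playback with the Processing Video library.
import processing.video.*;

Movie movie;

void setup() {
  size(640, 360);
  movie = new Movie(this, "clip.mov");  // placeholder file name
  movie.loop();  // play the file on repeat
}

// Called whenever a new frame of the movie is available
void movieEvent(Movie m) {
  m.read();
}

void draw() {
  image(movie, 0, 0, width, height);
}
```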

The following Pure Data tutorial shows how to set up a film fairly easily, without having to specify frame rates or anything, and then how to manipulate which frame is being played back.
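
A Pure Data patch doesn't translate directly to text, but the same scrubbing idea can be sketched in Processing: pause the movie and jump its playhead to wherever the mouse points. This is an analogue of the patch, not a transcription of it, and "clip.mov" is again a placeholder file name.

```
// Scrubbing through a movie: mouseX picks the playback position.
import processing.video.*;

Movie movie;

void setup() {
  size(640, 360);
  movie = new Movie(this, "clip.mov");  // placeholder file name
  movie.play();
  movie.jump(0);
  movie.pause();  // we drive the playhead ourselves
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  if (mousePressed) {
    // Map mouseX (0..width) onto the movie's duration in seconds,
    // then nudge the playhead there (duration() is 0 until loaded)
    float t = map(mouseX, 0, width, 0, movie.duration());
    movie.play();
    movie.jump(t);
    movie.pause();
  }
  image(movie, 0, 0, width, height);
}
```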

Outside of these applications, in ‘regular’ video editing, frame rates become a big topic of conversation. Cameras can record video at various rates (24, 30, and 60 frames per second are all common), and when other departments add things to a film, such as sound, those people need to know the frame rate so that everything stays in sync.

In experimental media, frame rates are yours to play around with.
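
As a small illustration of that freedom, this Processing sketch lets the mouse set the frame rate on the fly, from a slide-show crawl at 1 fps up to a smooth 60 fps, so you can feel how the rate changes the character of the motion.

```
// Playing with the frame rate itself: mouseX sets how many
// times per second draw() runs.

void setup() {
  size(640, 360);
}

void draw() {
  float fps = map(mouseX, 0, width, 1, 60);
  frameRate(fps);

  background(0);
  float x = (frameCount * 4) % width;
  ellipse(x, height / 2, 40, 40);

  // Show the requested rate so you can feel the difference
  text(nf(fps, 0, 1) + " fps", 10, 20);
}
```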