Dispatch from DOCAM 2: tools and strategies

This second dispatch from DOCAM (Documentation and Conservation of the Media Arts Heritage, http://docam.ca) focuses on new tools and strategies for keeping new media works from premature aging and death. Here’s a thin sliver of presentations that stuck in my mind and relate to the theme of commissioning and collecting variable media.

1. Practical tools

Anne-Marie Zeppetelli (Musée d'art contemporain de Montréal) was among several presenters to show forms created for copyright, loans, and the like. Click on Tools in the menu at lower right:


When asked how museums can accomplish the necessary documentation and preservation given the lack of funding and staff time, Anne-Marie recommended exploiting the museum’s immediate needs to drive long-term fixes. You have leverage whenever a work is acquired, exhibited, or loaned. (Commissioners, take note!)

2. Experimental interfaces

DOCAM’s own Andrea Kuchembuck presented a spiffy Flash-based interface to documentation gathered for David Rokeby’s Machine for Taking Time:


Alain Depocas showed a scalable technological timeline that compared the lifespan of various electronic components:


Richard Gagnier and Alexandre Mingarelli showed a decision tree to help make conservation decisions (in French):


3. Virtual versions

Two projects plumbed the mutual reinforcement of presentation and preservation you get by virtualizing a project:

Vincenzo Lombardo of the University of Torino showed a remarkably complex 3D virtual model of the Electronic Poem, an ephemeral architectural project for the 1958 World's Fair that happens also to have been the subject of a book by Mark Tribe. This collaboration among Le Corbusier, Varèse, and the young Xenakis corralled vast video projections and hundreds of sound channels into a building-sized tent based on hyperbolic curves.

Vincenzo related two anecdotes showing that the effort that went into this simulation paid off. The first came from an elderly user who happened to be the last remaining eyewitness of the actual building, and who delightedly described the VR version as "very realistic." (The audience laughed at the smile on his face when he pulled off the headset.) The second came when the documentation team was stumped by a vintage photo that couldn't be matched to any perspective available in the virtual model, until they realized that the photo found in the archive had been printed in reverse, at which point it was an exact match.

Though most of the material he presented originated in archives, I asked Vincenzo about the potential to gather photos for such models from Flickr and other participatory media outlets, citing Blaise Agüera y Arcas's Photosynth demo:

Vincenzo responded that the Torino project drew on some images and works available online, and that his team saw crowdsourcing as a valuable strategy.

In a more scalable solution that should be considered for any commission of Internet art, SFMOMA's Jill Sterrett worked with local media expert Mark Hellar to safeguard Lynn Hershman's Agent Ruby, a chat interface built in Flash over a Java-based artificial intelligence. Mark migrated it from an aging GNU/Linux box in the basement to a virtualized server on the museum's up-to-date server hardware. (The Guggenheim did the same a few years ago for the Internet art works I commissioned in 2002.) Mark pointed out these advantages of virtualization:

– Enhanced security.

– Reduced hardware footprint.

– Works run independently of one another.

– Easy creation of clones and sandboxes.

– Easier hardware upgrades.

– The ability to take historical snapshots.

The ability to take historical snapshots is like time travel for variable media works. Agent Ruby has a learning algorithm based on Artificial Intelligence Markup Language, so her responses change over time. The references visitors bring to the chat have probably also changed over the years, as their questions shift from Bush to Obama and so forth. Mark and I spoke about the potential revelations that might come from a time-based linguistic analysis of the chat log.
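Such an analysis could be quite simple in principle. As a sketch, here is how one might chart keyword trends across years of chat transcripts. (Agent Ruby's actual log format is not public; the date-plus-message tuples and the sample keywords below are assumptions for illustration.)

```python
from collections import Counter
from datetime import datetime

# Hypothetical log entries: (ISO date, visitor message).
# The real Agent Ruby log format is unknown; this shape is assumed.
LOG = [
    ("2004-03-01", "What do you think of Bush?"),
    ("2005-07-12", "Is Bush doing a good job?"),
    ("2009-01-21", "Tell me about Obama"),
    ("2010-06-02", "Do you like Obama?"),
]

def keyword_counts_by_year(log, keywords):
    """Count occurrences of each keyword in messages, grouped by year."""
    trends = {}
    for date, message in log:
        year = datetime.strptime(date, "%Y-%m-%d").year
        counts = trends.setdefault(year, Counter())
        lowered = message.lower()
        for kw in keywords:
            if kw in lowered:
                counts[kw] += 1
    return trends

trends = keyword_counts_by_year(LOG, ["bush", "obama"])
for year in sorted(trends):
    print(year, dict(trends[year]))
```

Run against a real, decades-long log, a tally like this would make the cultural drift of visitors' questions visible at a glance.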

In a future dispatch I’ll describe some of the tools that Forging the Future unveiled at DOCAM, including the third-generation Variable Media Questionnaire and a way to network together many of the tools presented there called the Metaserver.
