Daniel Volmar
Daniel Volmar is a historian of science and technology based in Durham, North Carolina. He received his PhD from Harvard University with a dissertation on the computerization of military air-defense systems during the Cold War. He has also published on defense research-and-development policy, US nuclear command and control, and maintenance problems in software and aviation. His interest in video games stems from their relationship to real-time military control systems, and from playing them regularly.
Abstract
Unlike other video-game systems, the PC has no standard unit, and games can behave very differently depending on how an individual machine is built and configured. When a game fails, it is not always obvious who is responsible: the designer, the user, or the maker of any one of the system’s many components. Such failures are especially difficult to mediate in cases of poor performance, where the game does run, but in a manner that disrupts the player’s subjective sense of interactivity. PC users have learned to rely on informal benchmarks, which measure the frame rate, or graphical performance, of high-end games, in order to troubleshoot problems, understand system capabilities, and agitate for fixes from developers and manufacturers.
This habit emerged in the early-to-mid 1990s, a period of exceptional instability in the PC market. Waves of new multimedia products inspired games that made overambitious use of them. Players vented their frustration on developers like Origin Systems, whose titles were difficult to get running even on cutting-edge hardware. But the comparatively judicious games of id Software, which were also transparent about their performance, found use as benchmarks and diagnostics, establishing in themselves what it meant for a game to run well.