Downscaling Metamorphosis
Wombat (game critic)
Kim Inbai’s Metamorphosis is a three-dimensional work consisting of four main components. First, there is a stainless-steel base placed on the floor. It takes the form of a hollow cylinder with a closed bottom, providing support for a stainless-steel rod roughly two meters long that fits snugly inside. The rod is inserted through two structures resembling propellers, fixing each of them at a particular height. The higher propeller, located at around eye level for a standing viewer, has three blades, as does the second propeller beneath it. Beyond that, it is difficult to find any visual commonalities between the two. Whereas the upper propeller’s comparatively sleek, streamlined blades are curved in completely identical shapes, the lower propeller’s unevenly textured blades simply jut out in long straight lines. The directions of the two propellers’ blades are noticeably out of sync, a misalignment that, combined with the rod’s slight hover above its base, gives the sense of minute movement. It is reminiscent of Leonardo da Vinci’s helicopter,1) retaining some of its vigorous rotational movement after having just landed from its flight.2)
As this shows, Metamorphosis places no limits on observation. Yet the closer we observe it, the more we notice how certain smaller components that become visible on closer scrutiny seem, ironically, to slip away from the realm of the visible. The “polygon markings” are the prime example. When looking carefully at the blades (especially those on the top propeller), one sees the pattern of a grid structure called, in the artist’s words, “polygon markings.” As the word “polygon” suggests, the pattern derives from the 3D modeling work that served as a preliminary stage for the propellers’ production. Kim deliberately selected a low number of polygons to create a kind of (metaphorically) “low-resolution” model, which was then 3D-printed in polylactide (PLA). The resulting structure was coated with a mixture of liquid resin and glass fibers, the color of the resin seeping through the loosely clustered polygons and making the pattern visible. In this way, the polygon markings are an index with a very clear physical origin. At the same time, they are an interface. How are they an interface? To reach that discussion, we first need to talk about upscaling.
If you’ve seen the film Blade Runner, you’ll probably remember the scene where the character of Rick Deckard analyzes photographs in order to track down escaped replicants. To do this, he puts the pictures in a “state-of-the-art” reader and repeatedly “zooms in” until coming across some crucial image. What is actually happening here, however, is less a “zoom in” and more a form of real-time image upscaling (a technology actively used in gaming today).3) There are two bases for this, the first being that the source material is a printed photograph. When printing even the highest-resolution image file on a piece of paper the size of a person’s palm, the developed photograph’s information values are inevitably quite limited. If that printed image is then magnified several dozen times, what we see is not any sort of crucial evidence—it’s just random noise. Another basis is the fact that Deckard keeps ordering the reader to “enhance.” He is clearly spelling out that what he wants is not a magnification of the image, but an enhancement of its quality, in order to “bring back” information previously lost.
At its root, image upscaling amounts to embellishing information values. In more technical terms, upscaling is a process that involves artificially increasing the number of pixels to make up for shortfalls in a relatively low-resolution source—thus adjusting the image to suit a high-resolution screen. If we don’t feel especially bothered viewing a 1080p source video on a 4K TV, that’s probably because the smart TV’s upscaling algorithm is doing its job. Many more extreme cases can be found in the field of gaming, which represents the front lines for this sort of technology. Nvidia, a graphics card company that has recently become better known for its artificial intelligence (AI) hardware, has used artificial neural network-based learning to develop methods of “predicting” the positions of individual pixels in the next frame. Thanks to this, they have gone beyond upscaling to market technology that actually generates the frame itself.
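The crudest form of this process can be sketched in a few lines of code. The example below is my own illustration, not any particular TV’s or GPU’s algorithm: it performs nearest-neighbor upscaling, where each low-resolution pixel is simply repeated to fill the larger grid. Note that no information is “brought back”; existing values are only stretched over more pixels.

```python
# Minimal sketch of nearest-neighbor upscaling: every source pixel is
# repeated `factor` times horizontally and vertically. Real upscalers
# (bicubic filters, neural networks) interpolate far more cleverly,
# but none can recover information absent from the source.

def upscale_nearest(image, factor):
    """Upscale a 2D grid of pixel values by an integer factor."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in image
        for _ in range(factor)  # repeat each row `factor` times
    ]

# A 2x2 "image" stretched to 4x4: more pixels, no new information.
small = [[10, 20],
         [30, 40]]
big = upscale_nearest(small, 2)
# big == [[10, 10, 20, 20],
#         [10, 10, 20, 20],
#         [30, 30, 40, 40],
#         [30, 30, 40, 40]]
```

This is precisely why Deckard’s “enhance” is science fiction: magnification of this kind can only redistribute what is already there.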
The fascinating thing is a certain presumptuous attitude found throughout these sorts of explosive trends—what I will refer to as the “obsession with eliminating the marks of pixels.” These techniques, which involve inventing pixels that bear no direct relationship to the source (or images rendered only with the video card’s rasterization performance), have had the aim of not showing the sort of blurry images where pixels are visible. Ironically, in a screen made of countless pixels, the pixel itself is the one thing you must not show. With the observing eye filtering out the screen and arriving at the illusory image, it is only when we see the marks of pixels on our screen—in other words, when something isn’t working right—that we perceive the screen itself as an assemblage of pixels: “Essence, intoned Heidegger (channeling Aristotle), is revealed in accident.”4)
So perhaps we can say something similar about the polygon markings? In this case, however, there is something we must not overlook, namely the fact that the model created in polygons is a vector image. Unlike a raster image made up of pixels, a 3D model made up of polygons is not affected by resolution. No matter how many or how few polygons there are or how much the model is magnified on a screen, one will never encounter a “blurred image” such as the kind seen when pixels are exposed. So if such a thing as a “polygon marking” is possible, it would not be visible on a screen. Instead, there is a three-dimensional downscaling process produced by using a very small number of polygons together with the 3D printer—an imperfect representational device.5) Here, the specific grid pattern that appears on the blades is defined as the polygon marking. Just as the marks of pixels reveal the screen, the polygon markings reveal the artwork. In this case, the work stops being “something” with a particular shape capable of fully projecting some profound meaning. Like a screen that does not show an image “as-is,” the work metamorphoses in strange ways, emphasizing its concrete materiality.
The polygon markings are thus an anti-interface interface. Much like the evidence of pixels negating the screen’s content, the polygon markings drastically reduce the decibels of all the different discourses that might easily be accessed via the artwork. The private meanings and unconscious symbols are scarcely heard now, like murmurs in the distance. The polygon markings are reestablished as a particular point of contact for the work. Leaving behind the noise of that murmuring, Metamorphosis pauses briefly, as if it might take flight again at any moment—not as a reference or metaphor, but as an indexical essence bearing “marks”; as an entity showing that “contact” begins where “connection” ends; and as an interface that rejects smooth, frictionless operation and “touching.”
**
1) https://www.wikiart.org/en/leonardo-da-vinci/design-for-a-helicopter
2) For those who might solemnly maintain that Da Vinci’s helicopter sketches have never been tested in reality, I regret to inform you that an aerospace engineering team at the University of Maryland did so just last year. https://www.cnet.com/science/this-drone-flies-using-da-vincis-530-year-old-helicopter-design/
3) This state-of-the-art reader, equipped with upscaling functions, boasts the kind of astonishing performance someone living in 1982 (Blade Runner’s release year) might have expected to see in the year that the movie was set in (2019).
4) John Durham Peters, The Marvelous Clouds, trans. Lee Hee-eun (Seoul: Culturelook, 2018), 68.
5) A 3D printer employs real-world materials and mechanical processes to create three-dimensional objects. Therefore, “printing out” a vector image, which is a mathematical ideal based on coordinates, never results in an object that perfectly matches that ideal.
©2023 Inbai Kim