Simply put, algorithms are logico-mathematical automatons. They depend first and foremost on a function that remains unaltered (pre-set by whoever conceives the code, otherwise known as ‘the writer’). Apart from the function, an algorithm is endowed with a number of variables, selected from a pool of possibilities established in the writing phase, to which said function will be applied. These provide the input, i.e. that which is going to be processed. And of course, to each input its own output: an entity that is anticipated and yet never completely known.
What is interesting about algorithms and functions in general is the ambiguity that characterizes them: the fact that the outcome is at the same time known and unknown. It is known insofar as it is always already contained in the code: set by the code, limited by it, determined by it. However wild and unexpected the results may be, they would not be possible if the code hadn’t provided the conditions of possibility for them. But at the same time they cannot be known in advance. At the end of the day, an algorithm is created precisely in order to deal with outcomes that cannot be foreseen. If they could be fully known in advance, what sense would it make to come up with an algorithm in the first place?
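To make this ambiguity concrete, here is a toy algorithm in Python (my own illustration, not anyone’s production code). The function is fixed once and for all by its writer, every output is fully determined by it, and yet the outputs are practically impossible to guess without running the code:

```python
# A toy "algorithm": the Collatz rule. The code is fixed,
# the inputs vary, and every output is determined by the code,
# yet not obvious before the code is actually run.
def collatz_steps(n: int) -> int:
    """Count how many steps it takes n to reach 1 under the Collatz rule."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

for n in [6, 7, 27]:
    print(n, "->", collatz_steps(n))
```

The input 6 needs 8 steps; its neighbour 7 needs 16; and 27 needs no fewer than 111. All of this was always already contained in the code, yet none of it could be read off the code in advance.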
A simple illustration
I’m thinking of a funnel now. A funnel is ‘coded’ (one might find the word ill-used here, but let’s think about it with a modicum of lenience) to let a liquid pass through it on its way to another location. The location doesn’t matter yet. What matters is that the liquid poured through the funnel will have to reach this destination, this location. Now, the width of the funnel’s neck will determine the amount of liquid that can pass through. No matter how hard one strives, one will never be able to pour more than the ‘code’ of the funnel’s neck allows. One can pour less, of course, but that just proves the point: freedom, in a code, is limited to a lesser input. One can’t do more than the code allows.
|Source: Superb Wallpapers|
Of course, not all algorithms are this simple. In fact, most of them are not. One can imagine possible complications even in the case of the funnel model. Twist the path of the funnel’s neck and the liquid needs more time to pass through. I don’t know why anybody would do this, but it’s a possibility, a variation of the algorithm. Place a sieve somewhere along the way and the same liquid will reach its destination in an altered state (filtered, cleaned of impurities, thinned out, etc.). Vary the thickness of the liquid and you’ll vary the time of passage. And so on, and so forth.
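The funnel analogy can itself be ‘coded’, with the same modicum of lenience. In this sketch (names and numbers are entirely hypothetical), the neck capacity and the optional sieve are the code; the poured liquid is the input; and no input can ever exceed what the code allows:

```python
# A hedged sketch of the funnel-as-code analogy.
def funnel(pour_rate, neck_capacity, sieve=None):
    """Return the output flow: capped by the neck, optionally transformed by a sieve."""
    flow = min(pour_rate, neck_capacity)  # one can pour less, never more
    if sieve is not None:
        flow = sieve(flow)  # a variation of the algorithm: filter, thin out, etc.
    return flow

print(funnel(10, 3))                     # the code caps the output at 3
print(funnel(2, 3))                      # freedom is limited to a lesser input
print(funnel(10, 3, lambda f: f * 0.9))  # a sieve alters the liquid's state
```

Whatever one does with the inputs, the outcomes remain those the code made possible.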
Complications are endless. But it becomes clear from this example, I hope, that algorithmic realities are dependent on the code or function of the given situation. If such is the case, it would be easy to extrapolate the logic of algorithms to situations that are not mathematical in nature. Let’s say writing, since we’re always at it. The writing of a short story has its own algorithmic determinants. Length is one. Length provides the essential difference between a short story and, say, a novel, or a novella, or a trilogy; so that a short-story writer will never go beyond a certain word limit without the risk of moving into a different territory: the territory of lengthier genres. Content is another piece of code that matters in the case of a short story. Content differentiates a short story from a scientific treatise, from an almanac, from a shopping list. The crossing of species is possible, of course, but it does not prove the algorithm wrong; on the contrary, it confirms its strength. Then shape too matters: it separates a short story from a dramatic piece, from a film script, from a poem. We can go deeper and deeper, in search of other determinants, equally important: genre, audience, language, loyalty to a certain tradition, allegiance to a certain ideology.
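By way of illustration, the length determinant could be written down as code. The thresholds below follow one common publishing convention (the word-count categories used by genre-fiction awards); they are illustrative, not a law of literature:

```python
# Genre boundaries treated as algorithmic code: cross a threshold
# and the writing moves into a different territory.
def territory(word_count):
    if word_count < 7_500:
        return "short story"
    if word_count < 17_500:
        return "novelette"
    if word_count < 40_000:
        return "novella"
    return "novel"

print(territory(5_000))   # safely within short-story territory
print(territory(60_000))  # the transgression has degenerated into a novel
```

The exact numbers are negotiable; the existence of the boundary is not.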
Many other things aside, what is very important about algorithms is their departure from the subject. Although they are written and employed by humans, their outcomes can’t be changed by the user in ways that are not always already present in the code. The amount of liquid passing through a funnel (thickness, viscosity, velocity and other factors considered) will never be changed by human volition. The genre limitations that determine the shape and length of a short story cannot be transgressed by an individual writer without his/her transgression degenerating into something else, something un-short-story-like.
What on earth...
In digital environments, algorithms are far more complex than a funnel or a short story. And yet, they serve similar functions.
See Google Earth.
To get there, let’s bring Clement Valla into the matter. A collector and curator of digital artefacts, Valla gathers, among other things, Google Earth images. He sometimes calls these things ‘postcards.’ What he is interested in are images that appear to contradict our understanding of what planet Earth looks like, or should look like. His ‘postcards’ depict shrivelled, bent, twisted-and-turned features of Earth’s surface. Contemplating his collection is like contemplating a bunch of Dalí paintings, in which materiality is destabilized: clocks melt, figures take new shapes, constructions are deconstructed, structures are destructured. A glitch is immediately assumed as the likely cause of all these mutilations: a disorder of the code, an ailment of the scripted function. A human being could not have produced such fantastic distortions; so it all must be in the algorithm.
|Source: Jace D|
|Source: Clement Valla|
The assumption is right. But only partially. Yes, these distorted images are the result of the algorithms playing behind Google interfaces. But, as Valla puts it, they’re not mistakes. They are not the result of a sick algorithm. On the contrary.
“[T]hese images are not glitches. They are the absolute logical result of the system. They are an edge condition – an anomaly within the system, a nonstandard, an outlier, even, but not an error. These jarring moments expose how Google Earth works, focusing our attention on the software. They are seams which reveal a new model of seeing and of representing our world – as dynamic, ever-changing data from a myriad of different sources – endlessly combined, constantly updated, creating a seamless situation.”
As explained by Valla, what Google’s texture-mapping algorithm has managed to do is fundamentally alter our ways of seeing and interpreting surfaces through photographic representations. Snapshots, i.e. distinct images separated from other similar images by the very frames that enclose them, are no longer the working principle here. While the Google algorithm does make use of previously created snapshots, it assembles them in ways that obliterate the seams (i.e. the frames). As a consequence, what we see on a Google Earth map is a continuous, fluent representation of a space that is in itself continuous and fluent, only misconstrued by the snapshot model. In fact, Google’s algorithm fixes not only a technical problem familiar to cartography (the conception of a map that is continuous, seamless) but also a representational problem: the mental effort required to understand that behind discrete segments of time and space captured with a photographic camera lies a world that is essentially continuous.
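A minimal sketch of the seam-obliterating idea (emphatically not Google’s actual texture-mapping pipeline, which blends photographic and topographic data from many sources): where two snapshots overlap, their pixel values are cross-faded, so the frame between them disappears:

```python
# Blend two overlapping rows of pixel values into one seamless strip.
# In the overlap, weights fade linearly from the left image to the right.
def stitch(left, right, overlap):
    n = overlap
    blended = [
        left[len(left) - n + i] * (1 - (i + 1) / (n + 1))
        + right[i] * ((i + 1) / (n + 1))
        for i in range(n)
    ]
    return left[:-n] + blended + right[n:]

a = [10, 10, 10, 10]    # one snapshot (here, a row of brightness values)
b = [20, 20, 20, 20]    # its neighbour
print(stitch(a, b, 2))  # the overlap ramps smoothly from 10 toward 20
```

No frame survives: the output is one continuous strip, which is precisely what makes the representation feel seamless.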
Back to the old chestnut of representation
Now, as far as the apparent glitches are concerned (the ones that make the world look so different from ‘reality’), one must keep in mind that Google operates in a relatively new territory, where the digital archive reigns supreme.
“The images produced by Google Earth are quite unlike a photograph that bears an indexical relationship to a given space at a given time. Rather, they are hybrid images, a patchwork of two-dimensional photographic data and three-dimensional topographic data extracted from a slew of sources, data-mined, pre-processed, blended and merged in real-time. Google Earth is essentially a database disguised as a photographic representation.”
Note that this doesn’t make Google Earth algorithms more accurate representations of the world. On the contrary. As Valla points out, there’s no night in the world conceived by Google Earth. And that should suffice to make the point clear. What’s more, selectivity (which implies obliteration and exclusion) is very much at work in Google Earth, just as it is in any man-handled representational system. The algorithms choose their data according to the code that stands at their foundation. Of the numerous images uploaded to be processed through the code, only those which comply with the criteria specified in the algorithm’s script are selected. Just as a writer selects what he/she wants to write (and the success of their art depends precisely on this principle of selectivity), Google Earth too does away with what’s at odds with its algorithms. And just as in the case of the writer, here too the conclusion is disappointing; as disappointing as any conclusion drawn about any form of representation:
“In these anomalies we understand there are competing inputs, competing data sources and discrepancy in the data. The world is not so fluid after all.”
If this sounds familiar, it’s because we’ve always devised the wrong mechanisms for the interpretation of the world; wrong not as in mistaken, but wrong as in impotent. What digital algorithms of the Google Earth type reveal is a process that starts off with a human badge on it only to leave the human behind on the way towards the outcome. Since the algorithm does the work (even the anomalies collected by Valla are the product of a machine-run program), it looks as though we’ve won a battle in the war for objectivity. But that’s wrong to say. Wrong yet again. Algorithms, automated as they may be, are still the product of human minds. But what beautiful things they can create.
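The selectivity discussed above (algorithms admitting only the data that complies with their scripted criteria) can be sketched in a few lines. The criteria here, daylight and cloud cover, are hypothetical, chosen only to echo Valla’s observation that there is no night in Google Earth’s world:

```python
# Hypothetical selection criteria baked into the "code":
# only daytime images with little cloud cover survive.
candidates = [
    {"id": 1, "daytime": True,  "cloud_cover": 0.10},
    {"id": 2, "daytime": True,  "cloud_cover": 0.80},  # excluded: too cloudy
    {"id": 3, "daytime": False, "cloud_cover": 0.05},  # excluded: night doesn't exist here
]
selected = [c for c in candidates if c["daytime"] and c["cloud_cover"] < 0.3]
print([c["id"] for c in selected])  # only image 1 makes it into the representation
```

Whatever falls outside the criteria is simply never represented, which is the obliteration and exclusion at stake.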
|Source: Clement Valla|