Tuesday, 22 September 2015

How SEO transforms writing

Every time we write, we write to someone. This is true whether that someone is somebody else or just us in a later instantiation, when, with a different intention and a different demeanor, we return to the text to revise it, to read it again – like strangers. This presence of the other that reads has been made even more obvious in the digital age. Now there’s no more writing for oneself, if there ever was such a thing.

Source: The Platypus Directive
Let’s look at it this way. Even when the privacy settings on your social media platforms are set to ‘Private,’ we must not overlook the fact that ‘privacy’ is highly deceptive. We should have learned this lesson already. Remember ‘Like’ that doesn’t mean ‘enjoy’? ‘Friend’ that doesn’t mean ‘pal’? ‘Tweet’ that involves no bird? They’re all part of the patois of the day, and we kind of understand where everything really stands in the picture. We only pretend to use the word ‘friend’ in its original meaning. We play the game of arbitrariness rather well. We pay it back to the source.

Your text is read by an algorithm

But this is only semantics. What I mean to say about digital writing is that when you write on your blog, on Facebook, on Twitter, on reddit, on any other digital platform, you cannot save yourself from the gaze of the other. The other is there all the time. Considering what I said earlier, this shouldn’t come as a surprise. All’s old and boringly familiar. But what’s new is that the other doesn’t come about as an actual reader, a person you might be able to identify in a crowd. The primary reader of your text is an algorithm. It is the machine that does the crunching of numbers, the perusing of texts, and, of course, the ascertaining of meaning.
This awakens the contemporary writer to an interesting reality. Not only do they write to produce content, they also write to produce an audience. In other words, they become entrepreneurs who pitch their product to a market. But this pitching is made to please, first and foremost, the algorithms that run the show.

Source: Lisa Kurt
It is commonly said among SEO specialists that the value of your text doesn’t matter if you’re invisible. Indeed, in the logic of the digital universe, one makes sense only insofar as one is reachable. But reachability is established through algorithms running in the background. They decide what is and what isn’t interesting, what is and what isn’t professional. Google has gone so far as to regulate language. Poorly written texts, which, let’s face it, have been bothering us big time, are kept at bay by Google algorithms that comb through content in search of mistakes. Of course, this doesn’t mean you’re in a one-mistake-and-you’re-out situation. It takes a little more than a missed comma for Google to give you the boot.

Conform or remain invisible

But grammar isn’t everything in this game. When it comes to correctness, algorithms are far more sensitive than the occasional grumpy grammarian stomping their feet at the sight of a disagreement between a subject and its verb. Since they are logico-mathematical entities that function on the premise that the input is always valid (i.e. within predetermined parameters), it becomes understandable why an algorithm reacts bitterly when it encounters weird or unacceptable formulations.
At the end of the day, in order for mathematics to work we need to renounce the idea that the distinction between natural objects is relevant. 1+1=2. But what is the first 1 and what is the second 1? What do they designate in the real world? And what does the final result mean, if anything? 1+1 may very well be one apple and one orange, but who cares? We dismiss the very possibility that this distinction matters. On the other hand, and also because of how mathematics works, we can’t perform an operation such as ҉ + 1, simply because ҉ doesn’t belong in the class of calculable elements designated by mathematics. It is not a number. So unless we give it a numerical value, it cannot play this game.
By association, we can think of the ҉ in the above proposition as the equivalent of a sentence that doesn’t match the patterns written into the software. (I’m not going to go into what can be done to accommodate eccentricities. Rules that break rules exist everywhere, and so do algorithms that allow for abnormal propositions.)
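To make the analogy concrete, here is a minimal sketch (Python, and entirely mine, not anything a search engine actually runs) of how an algorithm ‘reacts bitterly’ to input that falls outside its predetermined parameters:
```python
# A toy illustration: the function accepts only members of the class of
# calculable elements. Anything else is, to the machine, the equivalent of ҉.

def add_one(x):
    """Add 1 to x, but only if x can play the game of numbers."""
    if not isinstance(x, (int, float)):
        raise TypeError(f"{x!r} is not a number; it cannot play this game")
    return x + 1

print(add_one(1))     # 2 -- valid input, predictable output
print(add_one("҉"))   # TypeError -- the symbol has no numerical value
```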
But the point? The point is this: algorithms (and I’m talking about the ones designed to control textual matters) shape the outlook of content. The writer who uses such algorithms will find, sooner or later, that they must conform to the algorithm if they want to cross the threshold drawn by these invisible robots between writing and display.
Because display is what matters. Not the display of letters on a screen, but the display of content made accessible to the other.

SEO and clairvoyance

Algorithms don’t just automate assent (by pointing out to forthcoming audiences the worthiness of a given text). They also anticipate the writer’s next move. Since conforming to the algorithm is the only way forward, the algorithm, through its prescriptive properties, makes the appearance of a text foreseeable. Once you get your head around SEO matters, you understand why a URL looks the way it looks, why some keywords appear insistently throughout the text, why titles have to be this long and this many, why some parts have to be highlighted, why links matter, and why there is a need for social media visibility.

Source: Tactix Marketing
Search engines search the internet for content. They do so by mining information present in the HTML markup. HTML, whose role is to order the chaos of the digital world, precedes content. It is there before the text. And this is another way of putting the question of precedence.
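For the sake of illustration, here is a rough sketch of the kind of mining I mean: a reader that never looks at the ‘text’ the way a human would, only at the HTML that precedes it. The page and the signals extracted from it are invented; real crawlers are vastly more elaborate.
```python
from html.parser import HTMLParser

class SEOSignalExtractor(HTMLParser):
    """Collect a few signals a crawler might care about: title, description, headings."""

    def __init__(self):
        super().__init__()
        self.title, self.description, self.headings = "", "", []
        self._capture, self._buffer = None, ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("title", "h1", "h2"):
            self._capture, self._buffer = tag, ""
        elif tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content", "")

    def handle_data(self, data):
        if self._capture:
            self._buffer += data

    def handle_endtag(self, tag):
        if tag == self._capture:
            if tag == "title":
                self.title = self._buffer.strip()
            else:
                self.headings.append(self._buffer.strip())
            self._capture = None

page = """<html><head><title>How SEO transforms writing</title>
<meta name="description" content="On algorithms as the first readers of our texts.">
</head><body><h1>Your text is read by an algorithm</h1><p>The actual prose...</p></body></html>"""

extractor = SEOSignalExtractor()
extractor.feed(page)
print(extractor.title, extractor.description, extractor.headings)
```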
Let’s be frank: SEO is all about pleasing the search engine, which establishes worthiness via authority. It’s precisely the notion of authority that’s the most intriguing, because consensus is a function of a statistical result.
Content optimized for the search engine is exactly what its name indicates: an effort to answer the pressure exerted by the search engine, i.e. by software designed to crunch the numbers no matter what. A groundbreaking piece of epistemology, the best novel of this generation, the most illuminating analysis, the best solution to a million problems amounts to very little if the search engine doesn’t perceive it as worth pitching. In other words, it will remain invisible.

The viral aspect of content

In order for all of the above to become detectable, visibility has to grow exponentially. And with this statement we slide into the territory of viral content. Spreading depends on factors external to the content itself, though writers can stimulate it by including elements likely to cause contagion. The first and foremost of these factors, maybe the only one that truly counts, is none other than our good old friend, the algorithm. Because it’s the algorithm that discovers the text in the first place. Growth of popularity depends on how other users share content. While there seems to be agency here (when I choose what to post online, I am communicating a personal decision), the expression of this agency is made through a piece of software.

Source: Forbes
It’s the Like button I’m talking about here. It makes apparent one fundamental thing about software: that when we use it we don’t bring our free will to light. On the contrary, we admit to our conformity to the algorithm. When we choose anything, we help the software bring its function to fruition. We are an element in the system, a cogwheel in the apparatus, an operative factor in the code.
Then there’s the even more mundane realization that only the already-popular becomes more popular. That’s because the algorithm takes shortcuts. Once a piece of content is deemed worthy of interest, a search engine will push that piece up in its ranking system. That’s why, when we search for a keyword, some results come first while others trail behind, in pages so distant they’re a guarantee of total failure.
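Here is one way to picture that shortcut, sketched under my own assumptions rather than any published ranking formula: textual relevance multiplied by the logarithm of accumulated popularity, so that the already-shared piece floats to the top whatever its merits.
```python
import math

def rank(results):
    # each result: (title, textual relevance 0..1, prior clicks/shares)
    return sorted(results, key=lambda r: r[1] * math.log1p(r[2]), reverse=True)

results = [
    ("Groundbreaking but unknown essay", 0.95, 3),
    ("Well-shared listicle",             0.60, 50_000),
    ("Decent, mildly shared post",       0.70, 800),
]

for title, *_ in rank(results):
    print(title)
# The listicle wins: 0.60 * log(50001) beats 0.95 * log(4) by a wide margin.
```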
With all this in view, it’s clear, I hope, that the writer who cares about the fate of their content will have to bend to the new rules. Make sure to repeat a keyword but not too many times. Make sure to leave snares for the search engine, to catch the spiders that crawl the web. Make sure to check your text for mistakes. Make sure to send reminders, to share, to encourage interaction with your content, to catch the eye of those who can boost your traffic. We all do that. We all do SEO, whether professionally or just out of instinct. Not because of a suddenly awakened entrepreneurial spirit in us, but because the algorithm demands it. It does.
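And because the keyword rule is the one we all obey most instinctively, here is a hedged little checker for it. The ‘comfortable’ range of 0.5–2.5% is a placeholder of mine, not a figure any search engine publishes.
```python
import re

def keyword_density(text, keyword):
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for w in words if w == keyword.lower())
    return hits / len(words) if words else 0.0

def check(text, keyword, low=0.005, high=0.025):
    d = keyword_density(text, keyword)
    if d < low:
        return f"'{keyword}' at {d:.1%}: too rare -- the spider may not notice it"
    if d > high:
        return f"'{keyword}' at {d:.1%}: too frequent -- it starts to look like stuffing"
    return f"'{keyword}' at {d:.1%}: within the (assumed) comfortable range"

print(check("algorithms read texts; algorithms rank texts; writers obey algorithms",
            "algorithms"))
```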
So write well!

Wednesday, 16 September 2015

Algorithms, traces, and solitary work

Digital algorithms and software raise fundamental questions about writing. And so they should, since most things don’t look the same when you turn to the logic of digits. For a start, the environment in which inscription takes place is no longer one in which the trace is immediately noticeable.


Source: The Renegade Writer
A text written using the keyboard of a computer appears to a viewer as an inscription already finished. That’s because the erasures that come with versions and drafts are no longer perceivable, the way they were in environments dominated by the work of pen and paper. Pencil corrections, and even those made by typewriters, stay on paper; they travel along with the text. Visually, they are inextricably part of it. Their presence is proof of the text’s evolution.

The archaeological gesture of tracing

This is not to say that digital writing dismisses the possibility of tracing. It’s only that traces are not immediately visible in digital composition. They don’t stay on the screen as such, not like the marginalia on a page pre-occupied by what is considered to be ‘the primary text.’ If they do stay somewhere, this somewhere is a place where the material traces, in order to be seen, must be dug out, unearthed in a gesture that is archaeological in nature.
Archaeology is about digging-in-order-to-find. It is about dealing with the underground and with the undergrowth. And as such, one would be tempted to say that even the tracing of analogue texts (marks on paper) is subjected to the processes of unearthing. Which is very true. But also incomplete. Because analogue writing shows the signs of a draft without requiring an effort of visualisation. This is why a digital text always appears as completed, even when it is work-in-progress. On the computer screen, all signs look definitive. They look as if they had no past and no future. To put it differently, an analogue text is diachronic (it flows, it progresses along a continuum that is permanently discernible), while a digital one is synchronic. Its stasis is caused by the absence of versions, insofar as versioning doesn’t take place on the screen. More precisely, the surface of writing is moved somewhere else. It is not the screen that plays the role of this surface but the electronic apparatus that registers the impressions of one’s fingers and of one’s intentions. And that apparatus remains, in most cases, unseen.

A form of writing that is always elsewhere

Metadata, which is precisely an assortment of traces left by a digital text, brings about the very possibility of this gaze that sees into a text’s past. But the tracing of digital signs requires a technological apparatus of its own. The reading of code is not the same thing as the reading of a short story or of a shopping list. Code exists beyond the surface. Code is brought to the screen only if the writer/reader is directly implicated in the writing/reading of a line of code. But otherwise writing and reading take place under the surface of composition. What I mean here is writing that is other than code-writing. The simple (in digital terms) composition of a short text on a computer screen requires the work of software, which comes prior to the compositional act. From the keyboard, which turns mechanical and electrical impulses into digital signals and then into letters, to the word processor that enables the transformation of keystrokes into images on a screen, the technical aspects of composition remain largely unnoticed and unacknowledged, but not unimportant because of that.

Source: Penn State
As with all technologies, the functioning of a writing apparatus becomes apparent when it ceases to work as programmed. The business-as-usual standard does not provide a model for the acknowledgment of technological processes. But what’s truly important is that business-as-usual presupposes a subject who thinks he/she is working alone.
A subject who works alone is a subject who doesn’t need the presence of external factors to tell them how the work needs to be carried out. This, though, can only happen when the technology on which the subject relies functions without interruptions, i.e. when the subject forgets that there’s technology around, believing they work alone, without actually doing so.

There’s an ideology behind something that works

Well-functioning technologies are, for this reason, of the ideological order. Only an ideology without hiccups can persuade a subject of its absence, so as to work efficiently beyond (or under) the surface, unseen, unnoticed, unacknowledged. It’s important for an ideology to remain invisible and thus to persuade by means of its apparent absence. The subject of ideology is a subject convinced that they are not ideology-driven; that they are free.
The same goes for technologies in general, and the digital ones in particular. In our case, it is crucial that code stay in a territory that’s largely unacknowledged, or where access is permitted only to specialists. (Code-writers are the technocrats of the digital age.)
It is interesting to note that, precisely because technology (as the Other) presents itself as non-present, the subject goes about doing business-as-usual as though they were working alone. They don’t share the tasks of writing with anybody else. They dwell, for this reason, in a symbolic time and space that are anachronistic when regarded from the perspective of, say, Foucault’s theory of the author as a function rather than a real person. Prior to Foucault, authors did not cross the threshold of individuality. They performed their tasks unhindered by any acknowledgments of the Other. Foucault brought external factors into the picture. He brought the Other to the centre of writing. After him, the apparatus can no longer be thought of as something to do things with. It is something that contains the very act of doing, and the doing subject at the same time. A writer writes within an apparatus of which he/she is a cogwheel of sorts. Not that writers are less special, but they are special in a different way: a way that acknowledges the multiplicity that characterizes their very work.
Anyway, the conclusion is that it’s kind of impossible now to think of a writer as someone who can work alone.

Function is found in dysfunction

But writing-as-if-technology-did-not-exist is an illusion. We all know how important apparatuses of writing are in the process of composition. Let’s think no further than the moments when we seek a power plug for our laptops, or the simple gesture of pressing the power button on the writing machine before anything else can happen. These simple gestures are often forgotten, and their role in the generation of text is ignored. That’s for two reasons.
1. As mentioned above, technology works best when it doesn’t seem to work. This apparent not-working obliterates technology, and thus propels it towards well-working.
2. We forget the simple gestures of digital writing because we are already accustomed to the logic of the other technology that predetermines writing: the technology of pen and paper.


Source: Nation States
The work done by means of pen and paper is only slightly different. It’s only different in that it employs analogue technology. But that means only one thing: that it is not technology-free. The fundamental similarity is that, like digital technologies, pen-and-paper involves techné, which is at the same time craft and trick. The trick of pen and paper is that they obliterate their dependence upon one another and, more importantly, the writing subject’s dependence on both of them at the same time. Once again, in order to gauge the depth of this illusion, all one needs to envisage is an interruption of business-as-usual. A pen that’s run out of ink or a pencil whose tip is broken is rendered inoperative exactly like a laptop whose battery has run flat. Dysfunction lays bare the ideological foundations of function. All it takes is for a piece of technology to cease working as expected in order for it to become fundamental. If it cannot facilitate, it impedes. And impediment is outside the scope of the good functioning of ideological reassurance. That is why a good algorithm is an algorithm that yields symmetrical results. Once this condition is fulfilled, the user is likely to give in to the argument of efficiency, and so the algorithm is likely to be left to work alone.

Monday, 7 September 2015

Databases and a “poetics of record retrieval”

Soft cinema is Lev Manovich’s baby. The concept is relatively simple. A mass of data is collected into a database: photography, video, written texts, a voice reader that transforms written text into audio, and other tiny digital artefacts. The data thus amassed is played out by means of a series of algorithms that make a number of selections and then narrativise the sequences (i.e. put them into a continuum). If there are more complex descriptions of this process, there must be specialists out there (Manovich himself is one) who can explain the above better than I can. Suffice it to say here that ‘soft’ in ‘soft cinema’ means ‘played by software.’
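A minimal sketch of that process, under my own assumptions about what such a database might look like (the clips, the fields, and the filter are all invented):
```python
database = [
    {"id": "clip01", "location": "Riga",   "motion": "static",  "duration": 8},
    {"id": "clip02", "location": "Tokyo",  "motion": "panning", "duration": 5},
    {"id": "clip03", "location": "Riga",   "motion": "panning", "duration": 12},
    {"id": "clip04", "location": "Berlin", "motion": "static",  "duration": 7},
]

def select(db, **criteria):
    """Return the records whose metadata matches every given criterion."""
    return [clip for clip in db
            if all(clip.get(key) == value for key, value in criteria.items())]

def narrativise(clips):
    """Put a selection into a continuum: here, simply a timed sequence."""
    t = 0
    for clip in clips:
        yield (t, clip["id"])
        t += clip["duration"]

print(list(narrativise(select(database, location="Riga"))))
# [(0, 'clip01'), (8, 'clip03')]
```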



So when we’re talking soft cinema we’re talking archives; and when we’re talking archives we’re talking a lot of data. In order for an archive to be functional it has to be massive. It has to be inaccessible except through enforced selection. In other words, an archive is justified by an incapacity, which is a negative attribute translated into loss of agency. It is only because of the vastness of an archive that we can speak of selectivity of the type proposed by soft cinema. And that’s the major point about the algorithmicity of this process, or of any other process that requires coded intervention, machine involvement.

Of other algorithms, yet again

As mentioned last week and pointed out a bit earlier, in order for an algorithm to be needed, an inability of the human subject must be made apparent. Mathematics as a whole was created when humans realized, relatively late in their history, that they needed more than ten fingers to calculate things in their immediate universe. This realisation of our embarrassing impotence created the need for reliable formulae: formulae that could yield the same results every time they were put to work.
The ‘+’ sign will always perform addition; we can bet our lives on it. But that happens not because the world is prone to such additions, but because an agreement was reached at some point by inventive human beings that parts of the world can be classed together so as to separate, say, cows from horses, stones from sticks, and so on. That this logic fails us can be proven by the classic anti-Boolean anecdote: if I count the cows in a field I can conclude that I have a group of, say, ten cows altogether. And that’s fine. That satisfies my need to know how many. But the result will not be able to tell me how many black cows there are in the group, how many of the total are healthy, how many are pregnant, indeed how many are male and how many female. Of course, all these classifications/clarifications are possible too. But in order to reach their own results they need to be calculated by means of different operators or by different criteria of selection. And even then, further complications are possible. Of all the black cows, how many are of this breed and how many of that? Of this breed, how many have been raised in the town of X and how many in the town of Y? And so on, and so forth. In other words, the simple addition of all visible animals in a field yields very little information that is truly useful to a curious, practically-minded individual such as the man or woman who roams the earth.
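Spelled out in code, the anecdote looks like this: one addition gives one number, and every further question needs its own criterion of selection. The herd, needless to say, is invented.
```python
herd = [
    {"colour": "black", "healthy": True,  "sex": "f", "pregnant": True},
    {"colour": "brown", "healthy": True,  "sex": "m", "pregnant": False},
    {"colour": "black", "healthy": False, "sex": "f", "pregnant": False},
]

print(len(herd))                                                        # how many cows? 3
print(sum(1 for cow in herd if cow["colour"] == "black"))               # how many black? 2
print(sum(1 for cow in herd if cow["healthy"]))                         # how many healthy? 2
print(sum(1 for cow in herd if cow["sex"] == "f" and cow["pregnant"]))  # pregnant females? 1
```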

The softness of soft cinema

Once again, as in the case of Google Earth, in order for the algorithm to work (even the simple addition function can provide a satisfactory result here) a certain type of selection needs to be made possible. It’s precisely here that Lev Manovich’s soft cinema becomes significant, and where it becomes, in fact, a variant of the Google Earth algorithms. And just as the addition operator was made necessary because it was impossible to tell how many cows there were in the field without adding them one by one, soft cinema is said to have been made necessary by the immense amount of data existent in the world.

Source: softcinema.net
A lot of Manovich’s material is collected from personal archives. But it wouldn’t be hard to see how the principle can be applied to the whole sea of ungraspable zettabytes of information produced and consumed by means of the internet.
So there’s something new-mediatic in the air, and surely enough, Lev Manovich has the right words to talk about it. He describes his project as an attempt at drawing a portrait of modern subjectivity, at a time when the work of things like Google appears to offer a pertinent model for the functioning of humans.
“If for Freud and Proust modern subjectivity was organized around a narrative – the search to identify that singular, private and unique childhood experience which has identified the identity of the adult – subjectivity in the information society may function more like a search engine. In Texas [one of the films released on the Soft Cinema DVD] this search engine endlessly mines through a database that contains personal experiences, brand images, and fragments of publicly shared knowledge.”
The algorithm that selects elements from the database and returns them as a special kind of visual output is a perfect illustration of Vilém Flusser’s technical image, which is no longer a representation of the world but a representation of a representational machine. In the case of ‘database art’ in general and soft cinema in particular, what is being represented is the database itself.
The database, a collection of pre-existent data, presents itself to the subject as something that should have been apparent all along: as an aesthetic experience. But what is important in this case, as opposed to just any kind of handling of archive files, is the presence of an automaton, of a digital algorithm. And as a result of this presence of an invisible operator, the films made by the software look very little like our traditional understanding of cinema. The narrative aspect, which is very much present in Lev Manovich’s films, is not determined by story segments but by segments of information selected according to their affiliation to a given filter. A traditional story is put together by linking episodes that contain in them potential for action. The assemblage of soft cinema, on the contrary, works in a way that resembles, according to Manovich, the assembly line in a factory.
“A factory produces identical objects that are coming from the assembly line at regular intervals. Similarly, a film projector spits out images, all the same size, all moving at the same speed. As a result, the flickering irregularity of the moving image toys of the nineteenth century is replaced by the standardization and uniformity typical of all industrial products.”
Or of all technical images, to return, yet again, to Vilém Flusser.

Factory vs software

What Manovich is trying to say is that his model is not so different from the way traditional cinema works. And yet, his films do look odd. And that’s because there’s a difference between the assembly line in a factory and the assemblage of a soft cinema product. That difference lies, once again, in selection. The assembly line does not select its material; it assembles what has already been separated, individualised, and decided upon. The software, on the other hand, does precisely the pre-production work. It does the selection by tapping into the pool of data existent in the databases the algorithms have access to. The algorithms select and put together information that is contained in the so-called metadata: the data about the data present in the archive.

Source: manovich.net
In one case, the algorithm is made to select images from places Lev Manovich has never visited as an adult. That’s a filter, right there. Many more such filters are at work in soft cinema, all based on what Manovich suggests are the whims (if one can use the word to describe an automaton) of one algorithm or another:
“The clips that the software selects to play one after another are always connected on some dimension – geographical location, type of movement in the shot, type of location, and so on – but the dimension can change randomly from sequence to sequence.”
Through this element of randomness, the soft-cinematic experience is expected to counteract the one acquired through watching traditional films. But what kind of randomness is this? Randomness sounds strange in a system that is controlled by a formula. The algorithm cannot simply work against itself (against its principles of selection and operation, against its filters). So a different filter will have to be implemented: the one that asks the machine to put aside everything that is not contained in the filters (also known as ‘the uncategorized’). And so, the remainders prove to be anything but non-entities. They exist in a category of their own: the un-filtered, the un-categorized. And it’s from that class that they can be selected; yet selected not at random but via pre-set operations.
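One way to picture this controlled randomness, again as a sketch of my own rather than Manovich’s actual code: the dimension of connection is drawn at random, but the selection along that dimension is entirely rule-bound.
```python
import random

clips = [
    {"id": "a", "location": "harbour", "motion": "static"},
    {"id": "b", "location": "harbour", "motion": "panning"},
    {"id": "c", "location": "street",  "motion": "panning"},
    {"id": "d", "location": "street",  "motion": "static"},
]

def next_clip(current, pool, dimensions=("location", "motion")):
    dimension = random.choice(dimensions)           # the only 'random' step
    candidates = [clip for clip in pool
                  if clip["id"] != current["id"]
                  and clip[dimension] == current[dimension]]
    return random.choice(candidates) if candidates else random.choice(pool)

sequence = [clips[0]]
for _ in range(4):
    sequence.append(next_clip(sequence[-1], clips))
print([clip["id"] for clip in sequence])   # e.g. ['a', 'b', 'c', 'd', 'c']
```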
That’s why a soft film looks like pre-production: because it is pre-production. It is the collection of data that precedes the montage. The montage that one witnesses in a soft cinema artefact is very crude; it does not reach as far as a final cinematic product. Its operations stop precisely at the level where the collected images are about to turn into a film.
What this is likely to point out is, again, the pre-eminence of the algorithm. The fact that the final product isn’t our traditional film but a series of images that change seemingly at random is proof that the algorithm can work alone; that it can surprise the subject; that it can provide a kind of experience where the machine does the work while the human being sits and watches.

Wednesday, 2 September 2015

Algorithms: A timid elucidation via Google Earth

Simply put, algorithms are logico-mathematical automatons. They depend first and foremost on a function that remains unaltered (pre-set by the one who conceives the code, otherwise known as ‘the writer’). Apart from the function, an algorithm is endowed with a number of variables, which are selected from a pool of possibilities established in the writing phase and to which said function will be applied. These will provide the input, i.e. that which is going to be processed. And of course, to each input its own output – an entity that is anticipated and yet never completely known.


What is interesting about algorithms and functions in general is the ambiguity that characterizes them: the fact that the outcome is at the same time known and unknown. It is known insofar as the outcome is always already contained in the code: it is set by the code, limited by it, determined by it. Be the results as wild and unexpected as they may, they would not be possible if the code hadn’t provided the conditions of possibility for them. But at the same time they cannot be known in advance. At the end of the day, an algorithm is created precisely in order to deal with outcomes that cannot be foreseen. If they could be fully known in advance, what sense would it make to come up with an algorithm in the first place?

A simple illustration

I’m thinking of a funnel now. A funnel is ‘coded’ (one might find the word ill-used here, but let’s think about it with a modicum of lenience) to let a liquid pass through it on its way to another location. The location doesn’t matter yet. What matters is that the liquid poured through the funnel will have to reach this destination, this location. Now, the width of the funnel’s neck will determine the amount of liquid that can pass through. No matter how hard one strives, one will never be able to pour more than the ‘code’ of the funnel’s neck allows. One can pour less, of course, but that just proves the point: freedom, in a code, is limited to a lesser input. One can’t do more than the code allows.

Source: Superb Wallpapers 
Of course, not all algorithms are this simple. In fact, most of them are not. One can imagine possible complications even in the case of the funnel model. Twist the path of the funnel’s neck and the liquid needs more time to pass through. I don’t know why anybody would do this, but it’s a possibility, a variation of the algorithm. Place a sieve somewhere along the way and the same liquid will reach its destination in an altered state (filtered, cleaned of impurities, thinned out, etc.). Vary the thickness of the liquid and you’ll have varied times. And so on, and so forth.
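For what it’s worth, the funnel can be ‘coded’ quite literally. The numbers and units below are arbitrary; the point is only that the outcome is bounded by the code.
```python
def pour(volume, neck_rate, viscosity=1.0, sieve=None):
    """Return (what arrives, how long it takes) for a given funnel 'code'."""
    if sieve is not None:
        volume = sieve(volume)               # e.g. hold back impurities
    effective_rate = neck_rate / viscosity   # thicker liquid, slower flow
    return volume, volume / effective_rate

print(pour(2.0, neck_rate=0.5))                           # (2.0, 4.0): the plain funnel
print(pour(2.0, neck_rate=0.5, viscosity=2.0))            # (2.0, 8.0): thicker liquid
print(pour(2.0, neck_rate=0.5, sieve=lambda v: v * 0.9))  # (1.8, 3.6): a sieve on the way
```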
Complications are endless. But it becomes clear from this example, I hope, that algorithmic realities are dependent on the code or function of the given situation. If such is the case, it would be easy to extrapolate the logic of algorithms to situations that are not mathematical in nature. Let’s say writing, since we’re always at it. The writing of a short story has its own algorithmic determinants. Length is one. Length provides the essential difference between a short story and, say, a novel, a novella, or a trilogy; so that a short-story writer will never go beyond a certain word limit without the risk of moving into a different territory: the territory of lengthier genres. Content is another piece of code that matters in the case of a short story. Content differentiates a short story from a scientific treatise, from an almanac, from a shopping list. The crossing of species is possible, of course, but it does not prove the algorithm wrong; on the contrary, it confirms its strength. Then shape too matters: it separates a short story from a dramatic piece, from a film script, from a poem. We can go deeper and deeper, in search of other determinants, equally important: genre, audience, language, loyalty to a certain tradition, allegiance to a certain ideology.
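Written as a crude validator, the same extrapolation might look like this. The word limit and the required ‘determinants’ are placeholders of mine, not rules drawn from any actual poetics:
```python
def is_short_story(text, max_words=7_500, required=("character", "event")):
    words = text.split()
    if len(words) > max_words:
        return False, "over the word limit: drifting towards novella territory"
    missing = [feature for feature in required if feature not in text.lower()]
    if missing:
        return False, f"missing determinants: {', '.join(missing)}"
    return True, "within the genre's code"

print(is_short_story("A character steps out one morning, and an event follows."))
# (True, "within the genre's code")
```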
Many other things aside, what is very important about an algorithm is its departure from the subject. Although algorithms are written and employed by humans, their outcomes can’t be changed by the user in ways that are not always already existent in the code. The amount of liquid passing through a funnel (thickness, viscosity, velocity, and other things considered) will never be changed by human volition. The genre limitations that determine the shape and length of a short story cannot be transgressed by an individual writer without their transgression degenerating into something else, something un-short-story-like.

What on earth...

In digital environments, algorithms are far more complex than a funnel or a short story. And yet, they serve similar functions.
See Google Earth.
To get there, let’s bring Clement Valla into the picture. A collector and curator of digital artefacts, Valla gathers, among other things, Google Earth images. He sometimes calls these things ‘postcards.’ What he is interested in are images that appear to contradict our understanding of what planet Earth looks like, or should look like. His ‘postcards’ depict shrivelled, bent, twisted-and-turned features of the Earth’s surface. Contemplating his collection is like contemplating a bunch of Dalí paintings, in which materiality is destabilized: clocks melt, figures take new shapes, constructions are deconstructed, structures are destructured. A glitch is immediately assumed as a likely cause for all these mutilations: a disorder of the code, an ailment of the scripted function. A human being could not have produced such fantastic distortions; so it must all be in the algorithm.

Source: Jace D
Source: Clement Valla
The assumption is right. But only partially. Yes, these distorted images are the result of the algorithms playing behind Google interfaces. But, as Valla puts it, they’re not mistakes. They are not the result of a sick algorithm. On the contrary.
“[T]hese images are not glitches. They are the absolute logical result of the system. They are an edge condition – an anomaly within the system, a nonstandard, an outlier, even, but not an error. These jarring moments expose how Google Earth works, focusing our attention on the software. They are seams which reveal a new model of seeing and of representing our world – as dynamic, ever-changing data from a myriad of different sources – endlessly combined, constantly updated, creating a seamless situation.”
As explained by Valla, what Google’s texture-mapping algorithm has managed to do is fundamentally alter our ways of seeing and interpreting surfaces through photographic representations. Snapshots, i.e. distinct images separated from other similar images by means of the very frame that encloses them, are no longer the working principle here. While the Google algorithm does make use of previously created snapshots, it assembles them in ways that obliterate the seams (i.e. the frames). As a consequence, what we see on a Google Earth map is a continuous, fluent representation of a space that is in itself continuous and fluent, only misconstrued by the snapshot model. In fact, Google’s algorithm fixes not only a technical problem familiar to cartography (the conception of a map that is continuous, seamless) but also a representational problem: the mental effort required to understand that behind discrete segments of time and space taken with a photographic camera exists a world that is essentially continuous.
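To visualise the seam-obliterating move in the crudest possible way, here is a toy blend of two overlapping ‘snapshots,’ feathered across the overlap so that no hard frame line survives. This illustrates the principle only; it is nothing like Google’s actual system.
```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two equal-height image strips that share `overlap` columns."""
    alpha = np.linspace(1.0, 0.0, overlap)        # weight given to the left strip
    blended = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

left  = np.full((4, 6), 100.0)   # one snapshot: uniform brightness 100
right = np.full((4, 6), 200.0)   # the adjacent snapshot: brightness 200
mosaic = feather_blend(left, right, overlap=3)
print(mosaic[0])   # brightness ramps smoothly from 100 to 200 -- no visible seam
```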

Back to the old chestnut of representation

Now, as far as the apparent glitches are concerned, the ones that make the world look so different from ‘reality,’ one must keep in mind that Google operates in a relatively new territory, where the digital archive reigns supreme.
“The images produced by Google Earth are quite unlike a photograph that bears an indexical relationship to a given space at a given time. Rather, they are hybrid images, a patchwork of two-dimensional photographic data and three-dimensional topographic data extracted from a slew of sources, data-mined, pre-processed, blended and merged in real-time. Google Earth is essentially a database disguised as a photographic representation.”
Note that this doesn’t make Google Earth algorithms more accurate representations of the world. On the contrary. As Valla points out, there’s no night in the world conceived by Google Earth. And that should suffice to make the point clear. What’s more, selectivity (which implies obliteration and exclusion) is very much at work in Google Earth, just as it is in any human-handled representational system. The algorithms choose their data according to the code that stands at their foundation. Of the numerous images uploaded to be processed through the code, only those that comply with the criteria specified in the algorithm’s script are selected. Just as a writer selects what he/she wants to write (and the success of their art depends precisely on this principle of selectivity), Google Earth too does away with what’s at odds with its algorithms. And just as in the case of the writer, here too the conclusion is disappointing; as disappointing as any conclusion drawn about any form of representation:
“In these anomalies we understand there are competing inputs, competing data sources and discrepancy in the data. The world is not so fluid after all.”
If this sounds familiar it’s because we’ve always devised the wrong mechanisms for the interpretation of the world; wrong not as in mistaken, but wrong as in impotent. What digital algorithms of the Google Earth type reveal is a process that starts off with a human badge on it only to lose advantage on the way towards the outcome. Since the algorithm does the work (even the anomalies collected by Valla are the product of a machine-run program), it looks as though we’ve won a battle in the war of objectivity. But that’s wrong to say. Wrong yet again. Algorithms, automated as they may be, are still the product of human minds. But what beautiful things they can create.

Source: Clement Valla