Soft cinema is Lev Manovich’s baby. The concept is relatively simple. A mass of data is collected into a database: photography, video, written texts, audio (including voice readings of the written texts), and other small digital artefacts. The data thus amassed is played out by a series of algorithms that make a number of selections and then narrativise the resulting sequences (i.e. put them into a continuum). There are, no doubt, more rigorous descriptions of this process, and specialists (Manovich himself among them) who can give them. Suffice it to say here that ‘soft’ in ‘soft cinema’ means ‘played by software.’
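The pipeline just described — a media database, an algorithmic selection, a sequencing step — can be sketched in a few lines. This is a minimal illustration with invented data structures, not the actual Soft Cinema software:

```python
# A toy "database" of media artefacts, each described by metadata.
database = [
    {"id": 1, "type": "video", "location": "Berlin"},
    {"id": 2, "type": "photo", "location": "Moscow"},
    {"id": 3, "type": "text",  "location": "Berlin"},
    {"id": 4, "type": "video", "location": "Austin"},
]

def select(db, **criteria):
    """Pick every artefact whose metadata matches all the given criteria."""
    return [item for item in db
            if all(item.get(k) == v for k, v in criteria.items())]

def narrativise(items):
    """Put the selection into a continuum: here, a fixed playback order."""
    return sorted(items, key=lambda item: item["id"])

sequence = narrativise(select(database, location="Berlin"))
print([item["id"] for item in sequence])  # → [1, 3]
```

The point of the sketch is the division of labour: the database merely holds data; selection and sequencing are entirely the algorithm’s work.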
So when
we’re talking soft cinema we’re talking archives; and when we’re talking
archives we’re talking a lot of data. For an archive to be functional
it has to be massive; it has to be accessible only through enforced
selection. In other words, an archive is justified by an incapacity, a
negative attribute translated into loss of agency. It is only because of the
vastness of an archive that we can speak of selectivity of the type proposed by
soft cinema. And that’s the major point about the algorithmicity of this
process, or of any other process that requires coded intervention, machine
involvement.
Of other algorithms, yet again
As mentioned
last week and pointed out a bit earlier, in
order for an algorithm to be needed, an inability of the human subject must be made
apparent. Mathematics as a whole was created when humans realised, relatively late
in their history, that they needed more than ten fingers to calculate things from
their immediate universe. This realisation of the embarrassing impotence of our
being created the need for reliable formulae: formulae that could yield the same
results every time they were put to work.
The ‘+’ sign
will always perform addition; we can bet our lives on it. But that happens not
because the world is prone to such additions, but because an agreement was
reached at some point by inventive human beings that parts of the world can be
classed together so as to separate, say, cows from horses, stones from sticks,
and so on. That this logic fails us can be proven by the classic anti-Boolean anecdote: if I count the cows in a
field I can conclude that I have a group of say ten cows altogether. And that’s
fine. That satisfies my need to know how many. But the result will not be able
to tell me how many black cows there are in the group or how many of the total
are healthy, how many are pregnant, indeed how many are male and how many
female. Of course, all these classifications/clarifications are possible too.
But in order to reach their own results they need to be calculated by means of
different operators or by different criteria of selection. And even then,
further complications are possible. Of all the black cows, how many are of this
breed and how many of that? Of this breed, how many have been raised in the
town of X and how many in the town of Y? And so on, and so forth. In other
words, the simple addition of all visible animals on a field yields very little
information that is truly useful to a curious, practically-minded individual such
as the man/woman who roams the earth.
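The point about operators and criteria can be made concrete: the same herd, counted under different predicates, yields different answers, and each new question requires its own criterion of selection. The herd data here is invented for illustration:

```python
# An invented herd: each cow described by a few attributes.
herd = [
    {"colour": "black", "healthy": True,  "sex": "f"},
    {"colour": "brown", "healthy": True,  "sex": "m"},
    {"colour": "black", "healthy": False, "sex": "f"},
]

# Plain addition: one total, no further information.
total = len(herd)

# Each further question needs its own criterion of selection.
black   = sum(1 for cow in herd if cow["colour"] == "black")
healthy = sum(1 for cow in herd if cow["healthy"])
females = sum(1 for cow in herd if cow["sex"] == "f")

print(total, black, healthy, females)  # → 3 2 2 2
```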
The softness of soft cinema
Once again,
as in the case of Google Earth, in order for
the algorithm to work (even the simple addition function can provide a
satisfactory result here) a certain type of selection needs to be made
possible. It’s precisely here that Lev Manovich’s soft cinema becomes
significant, and where it becomes, in fact, a variant of the Google Earth
algorithms. And just as in the case of the addition operator that was made
necessary because it was impossible to tell how many cows there were in the
field without adding them one by one, soft cinema is said to have been made
necessary by the immense amount of data existent in the world.
A lot of
Manovich’s material is collected from personal archives. But it wouldn’t be
hard to see how the principle could be applied to the whole ungraspable sea of zettabytes of information produced and consumed
via the internet.
[Image source: softcinema.net]
So there’s
something new-mediatic in the air, and surely enough, Lev Manovich has the
right words to talk about it. He describes his project as an attempt at drawing
a portrait of modern subjectivity, at a time when the work of things like
Google appears to offer a pertinent model for the functioning of humans.
“If for Freud and Proust modern subjectivity was organized around a narrative – the search to identify that singular, private and unique childhood experience which has identified the identity of the adult – subjectivity in the information society may function more like a search engine. In Texas [one of the films released on the Soft Cinema DVD] this search engine endlessly mines through a database that contains personal experiences, brand images, and fragments of publicly shared knowledge.”
The
algorithm that selects elements from the database and returns them as a special
kind of visual output is a perfect illustration of Vilém Flusser’s technical image, which is no longer a
representation of the world but a representation of a representational machine.
In the case of ‘database art’ in general and soft cinema in particular, what is
being represented is the database itself.
The
database, a collection of pre-existent data, presents itself to the subject as something
that should have been apparent all along: as an aesthetic experience. But
what is important in this case, as opposed to just any kind of handling of
archive files, is the presence of an automaton, of a digital algorithm. And as
a result of this presence of an invisible operator, the films made by the
software look very little like our traditional understanding of cinema. The
narrative aspect, which is very much present in Lev Manovich’s films, is not
determined by story segments but by segments of information selected according
to their affiliation to a given filter. A traditional story is put together by
linking episodes that contain potential for action. The assemblage of
soft cinema, on the contrary, works in a way that resembles, according to
Manovich, the assembly line in a factory.
“A factory produces identical objects that are coming from the assembly line at regular intervals. Similarly, a film projector spits out images, all the same size, all moving at the same speed. As a result, the flickering irregularity of the moving image toys of the nineteenth century is replaced by the standardization and uniformity typical of all industrial products.”
Or of all
technical images, to return, yet again, to Vilém Flusser.
Factory vs software
What
Manovich is trying to say is that his model is not so different from the way
traditional cinema works. And yet, his films do look odd. And that’s because
there’s a difference between the assembly line in a factory and the assemblage
of a soft cinema product. That difference lies, once again, in selection. The
assembly line does not select its material; it assembles what has already been
separated, individualised, and decided upon. The software, on the other hand,
does precisely the pre-production work. It does the selection by tapping into
the pool of data existent in the databases the algorithms have access to. The
algorithms select and put together information that is contained in the
so-called metadata: the data about the data present in the archive.
In one case,
the algorithm is made to select images from places Lev Manovich has never
visited as an adult. That’s a filter, right there. Many more such filters are
at work in soft cinema, all based on what Manovich suggests are the whims (if
one can use the word to describe an automaton) of one algorithm or another:
[Image source: manovich.net]
“The clips that the software selects to play one after another are always connected on some dimension – geographical location, type of movement in the shot, type of location, and so on – but the dimension can change randomly from sequence to sequence.”
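Manovich’s rule — clips always connected on some dimension, but a dimension that changes randomly from sequence to sequence — can be sketched as follows. The clip metadata and the function name are invented for illustration:

```python
import random

# Invented clip metadata: each clip tagged along several dimensions.
clips = [
    {"id": 1, "location": "street", "movement": "pan"},
    {"id": 2, "location": "street", "movement": "static"},
    {"id": 3, "location": "beach",  "movement": "pan"},
]

def next_sequence(clips, seed=None):
    """Pick a dimension at random, then keep only the clips that share
    an anchor clip's value on that dimension."""
    rng = random.Random(seed)
    dimension = rng.choice(["location", "movement"])
    anchor = rng.choice(clips)
    connected = [c for c in clips if c[dimension] == anchor[dimension]]
    return dimension, connected
```

Each call produces a sequence that is internally coherent on one dimension, while successive calls may switch the dimension — which is precisely the controlled randomness the quotation describes.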
Through this
element of randomness, the soft-cinematic experience is expected to counteract
the one acquired through watching traditional films. But what kind of randomness
is this? Randomness sounds strange in a system that is controlled by a formula.
The algorithm cannot simply work against itself (against its principles of
selection and operation, against its filters). So a different filter has to
be implemented: one that asks the machine to set aside everything that
is not captured by the existing filters (also known as ‘the uncategorized’). And so,
the remainders prove to be anything but non-entities. They exist in a category
of their own: the un-filtered, the un-categorized. And it’s from that class
that they can be selected; yet selected not at random but via pre-set
operations.
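The ‘un-categorized’ remainder is itself computable: it is the complement of everything the existing filters match, and it can then be selected from by the same pre-set operations. The items and filters below are invented for illustration:

```python
# Invented archive items and the system's existing filters.
items = [
    {"id": 1, "tag": "travel"},
    {"id": 2, "tag": "family"},
    {"id": 3, "tag": None},
    {"id": 4, "tag": "work"},
]

filters = [
    lambda item: item["tag"] == "travel",
    lambda item: item["tag"] == "family",
]

# The remainder: everything no filter matches — a category of its own.
uncategorized = [item for item in items
                 if not any(f(item) for f in filters)]

print([item["id"] for item in uncategorized])  # → [3, 4]
```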
That’s why a
soft film looks like pre-production: because it is pre-production. It is the collection of data that precedes the
montage. The montage that one witnesses in a soft cinema artefact is very
crude; it does not reach as far as a final cinematic product. Its operations
stop precisely at the level where the collected images are about to turn into a
film.
What this points to is, again, the pre-eminence of the algorithm. The fact that
the final product isn’t our traditional film but a series of images that change
seemingly at random is proof that the algorithm can work alone; that it can
surprise the subject; that it can provide a kind of experience where the
machine does the work while the human being sits and watches.