A CONVERSATION

WITH CUAUHTÉMOC MEDINA


Cuauhtémoc Medina is a curator and art historian. He holds a PhD in History and Art Theory from the University of Essex and the Universidad Nacional Autónoma de México (UNAM). He has been a researcher at UNAM’s Instituto de Investigaciones Estéticas, the first Associate Curator of Latin American Art for the Tate Modern Collections, and Chief Curator of Manifesta 9 and the 12th Shanghai Biennale. Since 2013 he has been Chief Curator of the Museo Universitario Arte Contemporáneo (MUAC). He has been awarded the Walter Hopps Award for Curatorial Achievement by the Menil Foundation, and his books include Abuso Mutuo. Ensayos e Intervenciones sobre arte postmexicano (1992-2013) and Una ciudad ideal: la Olinka del Dr. Atl.


Every investment is also an ideology. In A Forest, Max de Esteban confronts us with the worldview of the speculators who invest in the development of Artificial Intelligence technologies, one of the most dynamic productive forces and the one with the greatest implications for the future. The rawness of the dystopian imaginary that surfaces in this true fiction announces a future in which it will not only be impossible to attribute a monopoly on individuality to humans, but which also logically implies the abolition of the very idea of democracy through the predictive powers of algorithms.

A Power Emancipated from the Human

Perhaps the most frequently evaded promise of recent art is that it would offer us an alternative form of knowledge. The series Infrastructures, which Max de Esteban has been producing since 2016, is the exception. These long-term investigations constitute an odd excursion into the discourses, illusions and apparatuses of the new powers that have emerged from the union of economics and technique under late capitalism.

The substance of this series is field research into the undoubtedly narcissistic conceptual production utilized by elites in their attempts to direct the movement of capital and to create new technologies of power. De Esteban has constructed this ethnology of the global elites through a variety of rhetorical and aesthetic packagings, as a negation of the routines of conventional documentary. The combination of the unbelievable nature of their discourses and data and the emotional intensity of the images is difficult to categorize. Perhaps we could turn to Gustave Courbet’s concept of a “real allegory” to emphasize the way in which De Esteban understands that the accumulation of facts offers us a figuration that no imaginary or symbolic language is currently capable of giving us.

A Forest (2019), in this sense, is not only an immersion into the rhetoric of the investors who have backed the Artificial Intelligence revolution: it is a treatise on the metapolitical condition of technologies that have constituted a new class of opacity. This video is an early accounting of how capitalism and technology have engendered a Leviathan, no longer constituted by homunculi, but by a thought freed from the pretension of still being human.

Cuauhtémoc Medina


The Ego Is the Algorithm We Enter Into the Machine Like DNA

A Conversation Between Max de Esteban and Cuauhtémoc Medina

Cuauhtémoc Medina (CM): I would like to start with something rather basic and say that this video provokes absolute terror in me. What I perceive is the idea of an eminently gothic piece, like some kind of posthuman The Shining, in which the solitudes you reveal suggest a persecution. Did this staging of your investigation into artificial intelligence’s infrastructure demand the theme of the forest as a labyrinth and as a place of wilderness?

Max de Esteban (MdE): What you’ve just said is quite true. This video has two parts: one that’s more formal and then another with what we could say is more discursive content. The formal part is based around the idea, very popular these days among certain thinkers such as Franco “Bifo” Berardi, that we’re in a stage of double abstraction, the abstraction of the abstraction. My starting point was that the video should show the situation we’re heading towards, a present-future instant. And note that, in the video, independently of the sensations I’ll discuss later, the first thing you ask yourself is if these images are real or computer-generated, which would be relatively easy to do. The treatment of the image has characteristics, vibrations and glitches that would make a careful observer doubt if the images are real or not. The same thing happens with the voice, which is frequently interrupted by crackles and out-of-place metallic noises, becoming fragmented. Again, we now have artificial intelligence platforms that can take a text and define the profile of the speaker, as if they understood it, and then read it aloud with a human intonation and emotion. I’ll let the spectator answer that question.

In a way, artificial intelligence magnifies the tension between what is real and what isn’t, and very soon it’s going to become practically impossible to distinguish between which objects have an indexical nature (in reference to a classical photography concept derived from Peirce’s semiotics) and which objects are recreations.

The forest, in fact, comes from a very simple idea. One of the most important investors in artificial intelligence, who gave a lot of his time to my investigation, has his headquarters in Palo Alto. He surrounded the building with a forest of gigantic sequoias that weren’t there before; he had them transplanted. To me, this was a good allegory for creation, for how these people are inventing reality. His will (and his money) create an artificial reality by transporting sequoias that measure some twenty meters, and not just one but, I don’t know the precise number, maybe around six hundred, to simulate a natural forest.

The forest also has a double nature. On the one hand, it evokes the structure of the labyrinth: we find ourselves before a technology that is taking us somewhere unknown, that raises many questions. The video, which floats through the forest like a drone, doesn’t follow a path; we don’t quite know where it’s taking us. The recording was also made in winter, when the tree branches appear as rhizomatic neural structures, resembling those marvelous early drawings of neurons by Ramón y Cajal. This all seemed very evocative. The winter also allowed us to show a deep, dense fog, giving it that tone of mystery. As with all important infrastructure, the general public knows very little about artificial intelligence and even less about the implications of what we agree to when we accept a site’s terms and conditions. When Google or Facebook ask you to accept them, nobody really stops to read them because they’re simply unintelligible. A thick fog prevents us from understanding; we’re prevented from transparently understanding what we’re accepting. That is the formal part of the video, and the reason why there’s a gothic feeling in which, I agree, it seems that Frankenstein is waiting just around the corner. And many Frankensteins are waiting just around the corner of artificial intelligence; that’s the tragedy.

There’s also the matter of content. This scripted monologue is the result of many hours of speaking with investors in artificial intelligence whom I’ve known. What’s interesting is identifying how they think: What are the social values they’re promoting and which are the ones they’re trying to do away with? What are the political implications of what they’re doing? Unlike Wall Street traders, who are widely criticized, technology investors are much more sophisticated, but also much more dangerous. Wall Street traders are relatively easy to understand: They want to earn as much money as possible without going to jail. Investors in technology, on the other hand, want to change us and change the world. They constitute an army of people with unimaginable resources who are also activists, because they have a philosophy, a way of seeing the world that they eloquently advocate and that is supported by the credibility of major researchers, scientists and universities. The video constantly mentions professors from Stanford, Northwestern… They have the data, the knowledge; that is, they act from a scientific position that legitimates their creations. This is also very gothic: It is science that creates Frankenstein.

CM: There’s a moment that I would say is the fulcrum not just of the argument in terms of content, but also in theatrical terms, and it’s almost narcissistic. It’s when the narrator congratulates himself on having found the phrase that summarizes everything: “We are driving the process: Venture Capital does. Changing the world by creating the future.” We have to look at this in relation to the long history of philosophical dialogue as the most traditional and long-lasting device for exposing dangerous ideas. I don’t remember any exemplary monologues right now, but I was thinking about the way in which you present it: By suppressing the protagonist it becomes tremendously bleak. One has the feeling that there must be some necessary yet impossible reply, that the absence of dialogue is constitutive of how the monologue is situated. How did you arrive at this form?

MdE: What you’re saying is very perceptive. For my previous video, also scripted, I interviewed four Wall Street traders, and I participated a lot: to be more precise, I spoke little but interrupted a lot, contradicted them, added a voice of skepticism to what they were saying and then they reacted. I came across this kind of interaction in all my conversations with financial traders and it was totally distinct from my conversations with investors in artificial intelligence. They had a discourse that did not allow for interruption; it was a discourse that was in love with itself. You used the word “narcissistic,” which is appropriate. These investors speak from a position of authority in which, if you don’t agree with their irrefutable theses, you’re either unintelligent or misinformed. A little in the arrogant style of Zuckerberg’s testimony to the U.S. Senate, which shocked the world. These are people who live in circles in which everyone speaks the same language and they feel that the world is going too fast to waste time on giving explanations. I remember a question I asked in one of these conversations. I said, “Hey, if you’re so rich” (this was at the end of a conversation) “why don’t you support the arts?” This man lived in San Jose, California, and was worth five billion dollars. “Why don’t you give a little money to art institutions, put together an interesting collection?” His response was, “Art is too slow, too inconsequential.” His vision was that he can’t waste time on bullshit; he’s here to change the world. This absolute disdain for everything they don’t identify with is what made me conclude that it had to be a monologue, that it couldn’t be a dialogue.

CM: So he becomes the voice of the system behind the system. A very successful form, in terms of rhetoric, takes up the first half of the monologue, supplying you with evidence to suppress the temptation to resist. Things as major as the eradication of stockbrokers and bankers because algorithms have taken their place, or the facial recognition device used in Shanghai that can predict potential criminals through characterology, gestural behavior, etc. These are the fantasies that Philip K. Dick offered us in Minority Report: the time is not far off when it will be enough to put us in front of a camera for it to tell us whether we’re going to end up in jail or not. All this prepares us for the absolutely bestial metapolitical moment in the second half of the video. It isn’t presented as a conspiracy, but as the announcement of the inevitable, total collapse of the democratic system.

MdE: For this video, I interviewed a total of ten investors in artificial intelligence operating at different levels, all of them important. They range from the guy who currently manages the largest investment fund in the world to people very active in niche applications of artificial intelligence. My conclusion is that imagining a conspiracy is a mistake. I don’t want to fall into that classical, attractive trap in which there’s a new technological bourgeoisie, centralized and powerful, that has an occult character and a plan for subjugation, because that’s not the case. Foucault, in his lectures at the Collège de France in the 1970s, definitively debunked this theory with regard to the classical bourgeoisie. What is present is the power that materializes through groups of “specialists”: financiers, executives, jurists, bureaucrats… And each group shares three fundamental characteristics: a common language, a way of speaking; a very specific, erudite knowledge; and, lastly, an ideology. In some cases, the ideology of investors in artificial intelligence borders on delirium, while in others it’s a little more “normal”, but there’s a generalized consensus that humanity and the world we know today will be radically transformed through this technology, and that we can’t imagine how far this transformation will go. And, of course, that they will be the agents of this change.

Within this collective, there’s a fierce internal competition, and what’s interesting is precisely how this technological infrastructure that keeps them connected optimizes group results and, in some ways, obstructs access for those of us who do not belong. This is why it’s so necessary for us to open up a window on what’s going on within these groups. There’s no master plan, there’s no conspiracy to conquer the world, but there is the conviction that they lead social, cultural and political change through an unstoppable technology, alien to the need for consensus and dialogue with the rest of society.

CM: This is a portrait of an animistic ruling class, in the sense that they are not positioning themselves as classic sovereigns, but as those who accompany or are the agents of a process. Not as a deus ex machina, but as the servants of processes that transcend them. On this point, the anonymity of the nucleus of power becomes quite clear, because it’s accompanied by the absolute opacity of Deep Learning and the implications of this obscurity, as well as its impenetrability and the impossibility not only of representing it, but also of monitoring and verifying it. At one point, the monologue describes it as a black box because it’s not possible to verify how it reaches its conclusions. A type of agent has been generated whose lack of responsibility is intrinsic.

MdE: To me, this is the fundamental point of the project and it’s connected to the photo series of artificial intelligence selfies. We asked an operating platform to make images of itself and the outcome was surprising, beautiful and terrible.

A Chief Technology Officer (CTO) I worked with, who was particularly cultured, told me that a Deep Learning platform is like a newborn child: the child already has an ego, but the ego is nothing, the ego is the algorithm that we enter into the machine like DNA. Through external inputs, this ego starts constructing an individuality, which is why no two artificial intelligence platforms are identical. The algorithm feeds on different experiences, images, texts at an extremely rapid rate and, after a time, you have a machine that’s very specialized, that does one thing very well, but is very clumsy at everything else. In effect, this CTO had come up with his own theory that was halfway between Locke and Sartre: the idea that the ego doesn’t exist, that there’s nothing there when you go looking for it, that it’s empty, that it’s only a platform upon which individuality is constructed through experience. And the question is: What would happen if this machine ends up becoming a discursive, conscious being? We know that it is very difficult to define consciousness outside of face-to-face interlocution, outside of Levinas’s famous relationship with the Other. When the Other suddenly interpellates you, a whole series of moral categories comes into play, categories to which you are not subject before non-human animals or machines, and it may happen (remember Levinas’s famous argument that ontology follows ethics) that we find ourselves faced, for the first time, with the possibility of a being that we have created but that is “ununpluggable.” Intelligent non-human life would finally have come to Earth, not landing in a UFO, but in the form of a machine of our own creation.
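To make the CTO’s metaphor concrete for readers unfamiliar with the mechanics, a minimal sketch follows. It is purely illustrative and not part of the interview or of the project: the data are hypothetical and the model is a toy one-layer classifier, not any platform mentioned here. Two runs share the same algorithm and the same initial weights (the “DNA”), yet become different, specialized “individuals” once fed different streams of experience.

import numpy as np

# A toy stand-in for the "newborn ego": a one-layer logistic classifier.
# The algorithm and the initial weights are identical across runs;
# individuality emerges only from the experiences the model is fed.
def train_tiny_net(data, labels, seed=0, epochs=500, lr=0.1):
    rng = np.random.default_rng(seed)                # same seed -> same starting "ego"
    w = rng.normal(size=data.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(data @ w + b)))    # sigmoid prediction
        grad = p - labels                            # cross-entropy gradient
        w -= lr * data.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Two hypothetical streams of "experience" with different regularities.
rng = np.random.default_rng(42)
experience_a = rng.normal(size=(200, 3))
experience_b = rng.normal(size=(200, 3)) + 1.5
labels_a = (experience_a[:, 0] > 0).astype(float)
labels_b = (experience_b[:, 1] > 1.5).astype(float)

# Identical algorithm, identical initial weights, different inputs...
w_a, _ = train_tiny_net(experience_a, labels_a, seed=0)
w_b, _ = train_tiny_net(experience_b, labels_b, seed=0)

# ...and two specialized, non-identical "individuals" at the end.
print("platform A weights:", np.round(w_a, 2))
print("platform B weights:", np.round(w_b, 2))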

CM: This is the dilemma of individuation that Kubrick and Arthur C. Clarke illustrated in 2001: A Space Odyssey. The computer identical to HAL back on Earth can no longer verify it, because HAL is now an individual, and its opacity implies that face which is only a red light. It seems like a lesson from Levinas, in that it implies the possibility of violence. I find it stimulating that you chose Goya’s engraving “I Saw It,” from The Disasters of War, as a leitmotif of your infrastructures. You use this title for a text on the principles of your practice: an engraving in which there’s a line of people who watch, a type of witnessing that is not a witnessing.

MdE: It’s very important to me. It springs from a decalogue I wrote when I began the general project on the infrastructures of capitalism. My decalogue doesn’t aim to give a lecture on what art should be, it’s just a personal form of clarification so I don’t get lost, which is why I called it “The Very Simple Principles of an Art Practice.”

Goya’s “I Saw It” has a thousand possible readings. To be provocative, my opinion is that it is an earlier, brilliant version of Klee’s mediocre Angelus Novus drawing, which Benjamin converted into a symbol of progress. In Goya, progress is the French Army invading Spain, bringing with it the Napoleonic Code, the first academies, science, the ideas of the Enlightenment… and, nevertheless, desolation as well.

If you substitute the figure of the angel with that of the common people, as in Goya, they are the “angels of history” and Benjamin’s thought becomes much more potent. Thus paraphrased, it would read: “The engraving shows the people fleeing from something that paralyzes them. Their eyes are wide open and their mouths stand open. The Angel of History must look just so. Their faces are turned towards the past. Where we see the appearance of a chain of events, they see one single catastrophe.” Much better, isn’t it? For me, this engraving is the best expression of what the contemporary instant means: a frozen moment in which the future has not yet arrived, but is already here.

In artistic investigations, it is very common to come across artists who don’t know what they’re talking about. Or rather, who don’t understand the fundamental techniques of the issue or don’t feel that it’s properly theirs. Many others approach the subject with extraordinary honesty. I want to give one example of someone who I believe follows the dictum of “I Saw It”: Marcelo Expósito, who is about to have an exhibition at the MUAC. Expósito’s work is tough, but it’s by someone who was there, who has lived what he’s explaining, and you can tell, which is why there’s so much interest in his work.

In general, and especially when we’re speaking of technical or scientific issues, the distance between the artist and the motif is so great that it becomes, in the best of cases, a formal piece, almost journalistic, and in the worst, a banality, a façade. My objective, so I don’t get lost, is to only speak of things that I know, that I have experienced directly and that I feel comfortable discussing with specialists. This is why “I Saw It” is essential to my work.

There’s another fundamental point in my decalogue that has become almost banal with repetition, and it’s that art is a form of knowledge. It sounds nice and it has become a slogan that everyone repeats, but the reality is that it’s a very complex idea, because it forces you to think about what you understand by ‘knowledge.’ I don’t want to digress, but I would like to explain two ideas on knowledge that have assisted me in my artistic practice. Towards the end of his life, Ortega wrote an unpublished text on Leibniz that opened with a quite portentous phrase, especially for artists: “Knowledge, whether formal or informal, is always based on the contemplation of something according to a principle.” So the text begins. Isn’t it formidable? Note that it says that contemplation is one of the two necessary preconditions for knowledge, and that the other is a principle. Something to contemplate. It’s beautiful!

There’s another text, by Deleuze on Nietzsche, that also gives me a lot to think about. It says, more or less: “Both science and philosophy (as well as art) are symptomatological and semiological systems.” And what’s most interesting for artists is what follows: “A phenomenon or an object is a sign, a symptom that finds its meaning in a present force, and after a time changes its meaning in accordance with the force that appropriates it.” Think about this in the context of my infrastructures: they are signs and symptoms, but what matters even more are the forces that appropriate them, the genealogy of those forces.

These ideas of others are instruments for thinking about how art can be a relevant form of knowledge in the twenty-first century. And it’s not obvious, really. I’m sure that there are a thousand other definitions, but I look for those that suit me best; when I think about infrastructures in my project, I dedicate time to contemplation and study through interpretive hypotheses. And I look for signs, symptoms and forces struggling to appropriate those objects that are infrastructures.

CM: One element of your decalogue[1], something that seems so banal that one doesn’t think about it, is your argument that art is a collective effort. You put forward a list of collaborations, which includes W.J.T. Mitchell, but you don’t clarify your way of collaborating. Could you discuss this, given that you pose the condition of art as the production of knowledge?

MdE: Yes, yes, I see where you’re going with this, because until recently I think it wouldn’t have been a concern. It’s true that now there’s a movement (Ruangrupa at Documenta fifteen is an excellent example) in which collective creation is pure and unyielding. I’m not there, of course not. And I don’t aim to be. I must recognize that there are things in collective art that are frankly interesting, but they’re really only a handful. In general, I’m not saying that they’re bad, but it’s hard for me to relate to them and to find the interest in them. Perhaps I’ll change my mind with the next Documenta.

It’s also true that art today, and above all the most successful art, which isn’t necessarily bad, continues to need brands. I don’t have any interest in the artist-brand, a concept I consider to be obsolete and useless. One way of boycotting it is with the heterogeneity of my artistic production. I don’t have a sign or a style that identifies me: “Look, that’s a Max de Esteban.” It’s very difficult, you have to know my work very well to know that a piece is mine.

On the other hand, I’m supported in my work by many people and it seems right to acknowledge the team that has worked on a project. I think about science a lot. What’s going on in science? It’s increasingly difficult to give a Nobel Prize and so they end up giving it to two or three people, and soon they’ll give it to twenty! Last year, they gave the Nobel Prize in Chemistry to two extraordinary people, the two women who developed CRISPR-Cas9: a gene modification system that will revolutionize the world and will become an infrastructure that I’ll have to sink my teeth into someday, but I don’t know how yet. Well, when you read the book by Jennifer Doudna, one of the two laureates, you realize that, aside from being a great scientist, she’s essentially a project manager: she does very little science-science. What she does is manage an enormous budget and hundreds of scientists on three continents; she conducts the orchestra. What I’m trying to say is that knowledge is so complex nowadays that the romantic idea of the scientist doing calculations in their living room or the artist making wonderful little objects in a loft has become a little pathetic, even if it keeps selling well in the art world.

I’m somewhere in the middle. I don’t want to give up the final responsibility for the project; it’s on me if it doesn’t come out right or doesn’t interest anyone. But there are a lot of people who have assisted me, and if everything goes well, they deserve to be acknowledged if they so desire. I don’t want to pretend that this is a collective that engages in joint invention just because that’s the fashion, because that’s not true either. I dedicate one hundred percent of my time to this and the people who assist me only part of theirs, albeit significant parts, indispensable for these projects.

CM: Allow me to conclude with a question that, I admit, is excessive, but it’s a question that arose after watching your video for the fifth time. Where do you think your video leaves our residual resistance? Despite what’s argued in the text by Mitchell at the end (the obligation to create technology and accompany it politically), I feel a little crushed when I rewatch your video.

MdE: I want to contradict you, because I’m an optimist and I’m going to tell you why. I know the managers of capitalist infrastructure well: financial traders, investors in cutting-edge technology, specialists in international taxation… and one of the things you realize is that these systems are deeply fragile. A determined, multinational political will would be enough to substantially modify their functioning. Their omnipotence is a myth created by the infrastructures themselves.

For example, I recently finished a project on the infrastructure of international taxation. It’s a very important infrastructure because it determines the levels of inequality that are socially acceptable under a capitalist regime. But the President of the United States, Joe Biden, announced that it was all over, and now, for the first time in history, we’re talking about a minimum global tax rate and about international agreements to avoid the most shameful abuses. I’m afraid that Biden’s plan is minor, something that, in truth, will affect few companies and will not collect much money, but the simple fact of establishing something that was unthinkable just yesterday shows that political will still has the capacity to win out over economic will.

We can’t forget that Bernie Sanders had a chance, and a serious one, in a country as conservative as the United States, and if he wasn’t more successful it was because he was old and ugly, and because some of his ideas didn’t completely come together. I do believe that if the left reconfigures itself, there’s a significant opportunity to change things.

I’m going to tell you a personal story. When I went to Stanford, I came from a background in leftist economics, which is what was taught in Spanish universities at the time. And reading those old Soviet Marxist texts, you didn’t need to be very smart to realize that they were going nowhere. In the eighties, when I got to Stanford, I started reading Hayek, Harberger, Friedman… and even though they evidently didn’t convince me, you could see the intellectual power, the bravery, the attraction of those texts. Between the macroeconomics that had a layover in Moscow before reaching Spain and those guys who spoke to you about liberty, taxation and transformational creativity, it simply was like day and night! And on top of all that, they were good economists. At the time, I thought: “These guys are going to rule for the next twenty years.”

Nowadays, there’s an extraordinary new leftist economic literature. There’s a new generation of economists that, to me, represent an intellectual revolution; what happened with Biden and global taxation wouldn’t have been possible without the Pikettys, the Keltons, the Zucmans, the Saezes, the Pettis, the Milanovics… This truly powerful intellectualism will rule for the next twenty years if the rest of us do our part, if we don’t get tangled up in complicated philosophies that make us boycott the ideal of a reasonable world… It’s enough to be reasonable, because what we have today isn’t reasonable, it’s an insult to our intelligence.

[1] See Max de Esteban, “The Very Simple Principles of an Art Practice”, https://maxdeesteban.com/principles/