Joy Garnett: When we first met years ago, I visited your studio in Brooklyn. I noticed source materials that appeared to be unrelated, such as Old Masters paintings, architectural structures, and what we all used to call "new media." Were you already trying to make connections in your work between these far-flung elements? Why do it through painting?
Tiffany Calvert: Well, I used paint in my formative years, but I wasn’t much of a painter. In the nineties, installation art was queen. I thought that no one made abstract paintings anymore, so I made installations and sometimes paintings, and subscribed to the idea that concept should lead and medium was merely a tool for ideas. It wasn’t until grad school at Rutgers and training with Thomas Nozkowski that I came to understand paint itself as an idea-driver.
I’ve always pursued digital media alongside painting, but it didn’t enter my painting for a long time. When Photoshop first became available to a wide audience, we students wanted to learn it. Most of our teachers didn’t know Photoshop yet, so we huddled around the computer clusters in the basement of the library teaching each other. It’s how I learned to code in HTML. The internet was brand new then, and art-like websites such as JenniCam were beginning to emerge. I remember feeling this would be the true beginning of virtual art, that a bunch of avant-garde conceptual websites were going to proliferate. But they mostly didn’t and still haven’t, which I don’t get. It seems like so much unexplored potential.
My one constant commitment has been to painting. And when I first committed to it, I wasn’t sure how it would engage digital media. Smashing far-flung things together is an impulse of mine. As soon as I learn a new technology, I feel the urge to go the other way and get my hands into an old one. For instance, in 2001 I made my first painting with glitching (digital-looking image interruptions), but I didn’t revisit this until 2018, the same year I learned how to do buon fresco (painting into wet plaster, what’s called true fresco). You have to apprentice with an experienced expert; it’s too difficult to learn any other way.
JG: Rather than talk about the so-called death of painting, let’s talk instead about your capacity (through painting) to connect art history (and therefore, history) to the current culture and our daily lives.
TC: By integrating digital media, I’m speaking directly to painting’s relevance. But so many unplanned alignments and intersections in this current body of work are still unfolding. For instance, there are surprising overlaps between Old Master Dutch flower paintings and viruses. During the period of these paintings, tulip growers were obsessed with a particular tulip called the Semper Augustus, whose characteristic scarlet striping on white was the result of infection from tulip breaking virus (TBV). The rarity of this infected bulb and growers’ inability to control its reproduction became the reason for its value. Contemporary tulip growers still have to protect their bulbs from TBV blight, and they do so by using artificial intelligence!
I play off this idea in my paintings; tulip growers use AI to avoid mutations, and conversely, I use AI to create mutations.
JG: Why is it important for your paintings to incorporate digital mediums and to reside at what you’ve called “the intersection of painting and digital media, integrating painterly mark with digital reproduction”? What exactly does AI technology allow you to tap into that might otherwise be impossible to get at?
TC: It’s exactly because we still argue about whether painting is dead that the question of the digital versus painterly gesture is relevant. At this point, painterly idiom has adopted the language of technical image-making. Painters such as Laura Owens and Trudy Benson have incorporated drop shadows, Photoshop grids, and MacPaint marks into their painterly repertoire. But also, I think computer desktop space has fundamentally changed our vision and the way we see space. Space is different now. In his classic book of essays, Other Criteria, the art critic Leo Steinberg noted that the orientation of the picture plane radically changed in the 1950s, shifting from upright (easel painting) to “flatbed,” like a desk or studio floor, as in the works of Rauschenberg. But now that we hold our phones and iPads vertically, we’re back in vertical space again. This time it’s a layered-panes space. Why doesn’t more painting address this fundamental shift? I don’t know. We are all affected by this change, but few painters make work that puts that idea up front.
I’m still not sure how to get digital media and painting to integrate, but I find the challenge enticing. And frustrating. I have several experiments in the studio right now, like painting on a moving-image screen, and new pieces on the horizon. They don’t work yet, and I’ll have to see where they go.
JG: How did you start working with AI, and why choose that particular technology? There was a point fairly early on when your tight, architecture-inflected works fell away and you became a painter of lush, brushy renderings of seventeenth-century flower still lifes, painterly remixes that veered towards abstraction. AI appeared to have nothing to do with your painting at that stage. Am I wrong?
TC: Yes! No! Everything! After graduate school, I gave myself permission to be painterly. I think it just took me that long to gain the confidence. But I was always thinking about and exploring digital media on the side. Then in 2015, I started painting on top of reproductions. I wanted to engage Old Masters paintings very directly. I wanted to call attention to the fact that I see their elaborate detail as abstraction. In some ways, all 2D representation is abstraction, but I see highly detailed still life paintings as being especially abstract. I wanted to intervene and disrupt them, to convincingly blend representation and abstraction. I know those two goals seem at odds with one another.
In 2018, I started datamoshing the images before I painted on them. Datamoshing means manipulating an image’s digital information at the level of its underlying data. It results in characteristic striping or banding in brilliant colors (glitches). I did this as a way of creating a middle ground between the two spaces—between the reproduction and the paint—something that would float in between.
Datamoshing is already kind of passé, though. It’s almost digitally nostalgic. So I resolved to explore AI. The software I started using, just as the pandemic lockdown was happening in 2020, was still in beta testing. Its very newness gave me an accessible way to play with AI.
I collected over one thousand images of still life paintings, and these became the dataset I feed to AI software, which tries to learn their characteristics and then make a forgery. The AI does this using a machine-learning framework called StyleGAN, a type of generative adversarial network (GAN). The GAN was invented in 2014 by Ian Goodfellow, and Nvidia refined it into StyleGAN for high-resolution images in 2018. The problem is, my dataset is relatively small (it should be in the tens of thousands or millions to get a really good image), so the mutant forms you’re seeing in my work are the result of it not having enough information. For instance, the AI erroneously assimilates a spherical flower with a cut-open lemon. I then print these images out large-scale on canvas and paint onto them.
JG: If the technology became obsolete, what would you be left with and what would you do?
TC: I’d be left with painting.
JG: Where do you plan to go from here? Tell us how the lockdown may have hampered or propelled you into your next projects.
TC: The pandemic has been hard. I’m lucky to be a professor, and I can teach remotely or hybrid, but I had a 5-year-old (now 6 ½) and my husband is immunocompromised. It feels like the rest of the world has chosen to move on, and we can’t. Still, somehow I made a lot of work in the past two years. I started applying AI to the images right at the beginning of COVID, and it was successful right away, so I ran with it.
My plans from here: I can improve my AI model. The harder I work at gathering a large dataset and refining the training, the closer the output image will be to a recognizable still life painting. Conversely, the AI model I use will continue to be improved by its developers. It will be interesting to eventually get an image that’s almost believable, but still somehow wrong—the closer the simulation, the more unsettlingly wrong it looks, which is what’s called the uncanny valley. Also, I’m going to dive back into painting directly on moving-image screens. I tried it a few years ago and couldn’t resolve it, but studio practice is always like that. I envision an installation with paintings, video, projections. I’m not totally sure; I just keep following the next thing.
Tiffany Calvert’s paintings incorporate diverse technologies, including fresco, 3D modeling, and data manipulation. John Yau, in his Hyperallergic profile, compares their “improvisational riffs and fractured views” to de Kooning. Calvert’s work has been exhibited at the Lawrimore Project (Seattle, WA), E.TAY Gallery (NY), the Speed Museum (Louisville, KY), the Susquehanna Art Museum (PA), and Cadogan Contemporary (London, UK), among others. She teaches at the Hite Art Institute, University of Louisville. She is a member of the Tiger Strikes Asteroid curatorial collective.
Joy Garnett is an artist and writer from New York. She lives in Los Angeles where she’s writing a family memoir of Egypt. She is the art editor of Evergreen.