Ellen Ramsey
Contemporary Tapestry

An Experiment with AI

My latest tapestry was designed using generative AI. I have mixed feelings about it. You probably will too.

Ellen Ramsey, “DALL-E 07.51.35,” 2024, mixed fibers, 32” x 32”

I didn’t plan to design a tapestry using AI. Most artists feel that any use of AI image generators is unethical or exploitative, because these large models sample from existing online content with no regard for copyright. Still, I was curious. Could it be a useful tool, and if so, how?

I have been using digital tools to design my tapestries since 2014, primarily by working in Photoshop. In my current body of work, I collage together compositions using stock circuit board illustrations as source material. (I want to make clear that I purchase a license to download and use these illustrations before working with them.) I cut, paste, rearrange, and invert various circuit “passages” into new compositions, then manipulate the results using Photoshop’s built-in filter algorithms. I like to iterate these algorithmic manipulations until unexpected effects begin to happen. Color data in the CMYK channels frequently separate out, creating “algorithmic glitches” in the image that can, in turn, be copied, pasted, and rearranged as visual elements in their own right. Sometimes I run my digital designs through glitch generator software to achieve even more unexpected effects. Through this process, I transform the stock images into something very unlike the source material.
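For the technically curious, here is a rough sketch (in Python, using the Pillow imaging library) of the kind of channel-shifting “glitch” I chase in Photoshop. It is only an approximation of those built-in filters, and the file names are placeholders rather than my actual working files:

```python
# A rough approximation of a channel-separation "glitch":
# split an image into its CMYK channels, nudge two of them out of
# register, and recombine. File names are placeholders.
from PIL import Image, ImageChops

img = Image.open("circuit_collage.png").convert("CMYK")
c, m, y, k = img.split()

# Shift the magenta and yellow channels so the color data "separates out"
m = ImageChops.offset(m, 12, 0)   # magenta: 12 px to the right
y = ImageChops.offset(y, 0, -8)   # yellow: 8 px up

glitched = Image.merge("CMYK", (c, m, y, k)).convert("RGB")
glitched.save("circuit_collage_glitched.png")
```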

Ellen Ramsey, “Portal to the Metaverse,” 2022, mixed fibers, 68” x 77”

This detail of Portal to the Metaverse shows a huge algorithmic glitch that I wove as a feature of the design.

This is how my tapestry Portal to the Metaverse was designed. The intention behind Portal was to merge the visual language of circuit board assemblies with the design language of traditional carpets. (Read my post No Longer Lost in the Metaverse for more background on the design of this tapestry.) I spent many hours over several days working out the digital manipulations that became the cartoon for Portal.

Ellen Ramsey, digital cartoon for “Portal to the Metaverse”

In contrast, the cartoon for my AI tapestry came together in a matter of minutes. I opened an account with OpenAI and began to play with the image generator DALL-E 2. I attempted to craft a prompt based on the same concept as Portal to the Metaverse: a circuit board arranged in reference to a traditional textile form. My first attempts at a workable prompt were failures. The prompt “a circuit board designed to look like a traditional woven carpet” generated images that were neither circuit board-like nor textile-like. They were just weird. I quickly learned that I needed to be more specific.

Eventually I homed in on the prompt “colorful circuit board lines and shapes on a green background arranged as a rectangular woven carpet design with a central medallion and a border.” Below are four images created from this prompt. You can see that the model was grasping my intention.
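(An aside for the technically inclined: I worked entirely in OpenAI’s web interface, but the same prompt could also be sent to DALL-E programmatically. The sketch below uses OpenAI’s Python client and is an illustration of that idea, not my actual process.)

```python
# Hypothetical sketch: requesting four DALL-E 2 images for the same prompt
# via OpenAI's Python client. Requires an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-2",
    prompt=(
        "colorful circuit board lines and shapes on a green background "
        "arranged as a rectangular woven carpet design with a central "
        "medallion and a border"
    ),
    n=4,                # four candidates, as in the web interface
    size="1024x1024",
)

for image in response.data:
    print(image.url)   # links to the generated images
```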

The first three are very generic, but they at least reflect the prompt. Digital image DALL-E 2023-06-02 07.51.35, however, stopped me in my tracks. It was exactly what I was looking for: a true marriage of textile language with the digital, something I had achieved in Portal using Photoshop. It conveys the idea of some kind of motherboard, and yet it is not literal. It is totally symmetrical, like a carpet, and yet totally random at the same time. The colors are varied, repeated and distributed logically, and true to digital subject matter. The pops of pink are weirdly arbitrary and made me swoon. And there are even black boxes built into the design, a symbolic motif I have used in my past work.* I wanted to weave it exactly as DALL-E generated it, with only a few simplifications to aid in weavability.

I don’t like to weave images that are not my own, but weaving designs created by others is foundational to the tapestry tradition as practiced in commercial workshops throughout the world to this day. In this case, instead of collaborating with another artist, I am collaborating with an algorithm. I was eager to weave a tapestry on the topic of AI, and here was my opportunity. I decided to do it.

Digital image DALL-E 2023-06-02 07.51.35

I anticipate that this experiment will be controversial. I’m expecting some blowback. When I tell people this work was designed by AI, the presumption is that I MUST be violating somebody else’s copyright by reproducing it. In a recent webinar for the Surface Design Association called Fiber Artists vs. Artificial Intelligence, Ursula Wolz, PhD, explained the workings of these predictive models and made it clear that “AI can’t innovate, it can only regurgitate.”

My main concern with digital image DALL-E 2023-06-02 07.51.35 was that it appeared too resolved as a design to be a coincidence. I worried that the model might be plagiarizing someone’s intellectual property and/or appropriating cultural references. I went looking to see if I could figure out where this image, or parts of it, came from.

I conducted image searches in Google Lens using both the digital image and a photo of the finished tapestry. I was not able to find a single source that explains my AI-generated image, but I think I have a pretty good idea where its elements are coming from: (1) gaming maps, (2) engineering diagrams, and (3) textile motifs.

Gaming Environments

A Google Lens search on the digital image generated by DALL-E 2 brought up only digital image matches, no physical artworks. Most of the matches were gaming environments created by amateurs in Minecraft or by VFX professionals using 3D modeling software. These images were primarily found in gaming forums on Reddit like “battlemaps” and “dndmaps.” They all resemble circuitry to a large degree. Many a “prison” or “arena” could pass for a microprocessor. One of these game maps stood out to me as sharing a lot of compositional similarities with my AI design. It is called Tomb of the Last Necron King, made for the game Dungeons and Dragons.

Digital image, Tomb of the Last Necron King

Engineering Diagrams

No actual images of circuit boards or illustrations of circuitry come up as visual matches in Google Lens. (Perhaps this content is somehow tagged and protected from AI scraping, or removed from training data as proprietary?) However, several diagrams from academic papers on circuit design do appear in the image search. One engineering diagram in particular resembles my design in key ways. The image, below, comes from a product review for a circuit design software program. I see similar colors and visual elements, especially the “Greek key” type organization of the circuit lines. I am assuming the green/pink binary is used to make the on/off switching of the circuits clearly visible in this program. Could something like this be the source of the pink in DALL-E 2023-06-02 07.51.35?

Example from a circuit design software product review

Textile Patterns

This is where it gets interesting. Searching on the digital image DALL-E 2023-06-02 07.51.35 brought up only similar digital assets, with no matches to non-digital art or handcrafted work. However, searching on the image of the completed tapestry in Google Lens brought up only handcrafted artworks, most of which were textiles, and no digital assets.

From the textile point of view, Google Lens found similarities between the DALL-E tapestry and many textile traditions: patchwork quilts, crochet squares, Navajo rugs, Peruvian textiles, and Central American molas. Clearly the piece speaks the language of textile, just not any one specific dialect of that language.

The artwork image matches that appeared ran the gamut from work by Friedensreich Hundertwasser to “tribal” upholstery fabric for sale at Walmart. The matches were all over the place, and none of them looked much like my tapestry. I did learn about several artists working with similar themes and aesthetics, though. I learned about Eric Celarier, who makes patchwork quilts out of actual circuit boards, and abstract painters like Hildi Malcolm and Anil Revri, whose work uses repetitive patterning and shares an interest in technology as a subject. However, the work of several famous artists I know of who are working directly with circuit board imagery (Elias Sime, Robin Kang, and Analia Saban, to name a few) is nowhere to be found.

Google Lens did match my DALL-E tapestry to images of my own work over and over again, pulled from various websites where my work can be found. At first I felt surprised and validated by the search results, but then I thought about it more deeply and felt less impressed. Google easily identifies the exact source of DALL-E 07.51.35 as my website, so it is no surprise that it would weight similar images from the same source more highly than other images on the web. Nothing to see here.

Restricting the image search to just the central motif brought up an entirely different set of similar images, all textiles. The matches touched upon just about every textile tradition that uses, or ever has used, chevron or diamond patterning. A few were notable visual synonyms. The African fabric illustration, the Moroccan rug motif, and the Sardinian rug pictured below are pretty close matches for the central motif. I see clear similarities between elements in the AI-generated design and these three textile sources.

Conclusion

If my AI-generated tapestry design is plagiarizing something, it is not easily identifiable using Google Lens. I would like to think that DALL-E is “stealing like an artist,” Austin Kleon style, but that would be anthropomorphizing the model. DALL-E is not capable of making aesthetic choices. (This is clear from the many BAD aesthetic choices the model made in other images generated from the same prompt.) My research suggests that the model is borrowing from a huge variety of online sources to create the image. How these elements came together so artfully arranged, I can’t explain. Could digital image DALL-E 2023-06-02 07.51.35 be a happy accident? I would love to discuss this with someone knowledgeable about AI (although I expect they would ruin the magic of it all for me).

Here’s where the mixed feelings enter stage right. On the “pro” side, I think DALL-E 07.51.35 is a really cool tapestry! I want my work to reflect upon the dominant technological issues that we are experiencing. AI is THE issue right now, so I’m super happy that I have managed to introduce this concept into my body of work in some fashion. The process of making this work has manifested, for me, the uncomfortable realities of the AI age. Everyone is worried about being replaced, or duped, or harmed in some way by this technology, myself included. Therefore, I am not sorry that I wove this piece. Could it be the first-ever tapestry designed by AI?

On the “con” side, I am wrestling with the implications of using AI in this way. This tapestry does feel somehow tainted by the fact that an algorithm designed 100% of it. So, am I in this work? Certainly, yes, my hand and material knowledge are there in the interpretation and execution of the idea. Certainly, yes, my artistic intention was built into the prompt. Is the simple act of humanizing AI through translation and craftsmanship enough to make this tapestry into a legitimate and unique work of art? I hope so, because the deed is done.

Will I continue to experiment/collaborate with AI? The jury is out. I’m very curious how another image generator, like Midjourney, or the latest iteration of DALL-E (version 3), will interpret the same prompt. I definitely want to check that out, just for fun. But will I weave any more tapestries based on AI-generated designs? I doubt I can get past my ambivalence.


*From Wikipedia: In science, computing, and engineering, a black box is a system which can be viewed in terms of its inputs and outputs (or transfer characteristics), without any knowledge of its internal workings. Its implementation is “opaque” (black). 
