Public Talk
Perturbed Hypotheticals: Exploring Design Language for AI Art Ethics
Jane Adams, MFA
12:00-1:00pm, April 10th, 2025
CC Midcentury Room
Join artist and PhD researcher Jane Adams for a thoughtful and inspiring talk on how creativity and technology come together through AI, data, and art. In order to talk about both hypothetical and very real scenarios involving multiple stakeholders in creative economies (especially in this era of AI), Jane has created a design language that coalesces around forms of mark-making to graphically represent actors, media, and text attributes such as provenance, transformation, credit, and compensation. This talk explores the elements of this design language and then discusses multiple scenarios where the language can help visualize the complicated networks of creative economies.

Jane Adams
PhD Researcher at Northeastern University
Full Video of the Talk
Exploring the Edges of AI Art Ethics
Podcast Transcript
Stephen McCauley 00:00
Hi, I’m Stephen McCauley. I’m a co-director at WPI’s Global Lab. I’m here today with Jane Adams. Jane is an artist who works in both physical and digital mediums. She’s a really interesting thinker, and we have her here today to help us think about AI and art ethics. Jane is currently a doctoral student at Northeastern University’s Visualization Lab. She has also been at the University of Vermont’s Computational Story Lab, which is in their Complex Systems Center. Jane also works with industry partners, and she builds machine learning tools that help biological scientists. So she brings a lot of different perspectives into this conversation, and we’re looking forward to learning about AI art and ethics. Welcome, Jane.
Jane Adams 00:44
Hey, nice to be here.
Stephen McCauley 00:45
So can you tell us a little bit? I know I just gave a little bit of a bio, but can you help us kind of understand your journey into digital arts especially?
Jane Adams 00:53
Yeah, absolutely. My undergrad background was in graphic design and digital media, and when I graduated, I realized that most of the jobs in graphic design were advertising, and it just pulled all of the joy out of creativity for me. And I ended up going back to school and doing an MFA in Emergent Media. And so that was kind of a two-part program at Champlain College, where on one hand, we were trying out new technologies: virtual reality, physical computing, like Arduinos and making weird little art robots and things. But then on the other side, we were looking at sort of the social aspects of emergent technology. You know, how do humans respond to new technology? How does the introduction of new technology change the way that we use old technology, or the way we think about old technology? So yeah, that was a really eye-opening experience for me, because it gave me some exposure to new tech, but it also made me think very deeply about how that new tech impacts our lives.
Stephen McCauley 01:55
Yeah, right. So your background has kind of situated you perfectly to help us kind of think about this new moment with the emergence of generative AI and sort of, at least a widespread distribution of it. So maybe just starting with big picture like, can you help us understand, like, what’s the story we’re telling ourselves about AI as you see it, and are we getting that right, or what are we missing in that big story?
Jane Adams 02:18
Yeah, I mean, I think it’s a very polarized story right now, both AI as a conversation in general, and then specifically when we think about AI and creativity. With AI in general, there’s kind of this spectrum where a lot of people are at the extrema. So on one end, thinking that artificial general intelligence is going to kill us all, right, we’re at the brink of apocalypse. And then on the other end of the spectrum, there’s sort of the techno-utopian view that AI will, you know, save the world, and we’ll never have to work again, and all of the world’s wicked problems will go away. And there’s parallels there with the AI art conversation, because you have people on one end who are saying, you know, AI art is ruining the livelihoods of artists and completely destroying everything that we know about the creative economy and how it functions. And then on the other end of the spectrum, you have people who are saying that AI art is the future, that it will allow artists to be more creative and enable them to overcome the drudgery of, you know, the laborious parts of creating art. And I think that on both of those spectra, right, the AI conversation in general and the AI art conversation, there’s gray area in the middle. There’s some complexity, there’s some discussion that should be had. And so I really see both of them as kind of a bimodal distribution right now. And I’d love to get some more conversation in the middle, talking about how it’s a little bit of one and a little bit of the other.
Stephen McCauley 03:55
Yeah, it seems like that’s where all the work is going to have to be. As the digital artist and musician Holly Herndon describes it, the sexy middle ground is where we have to work, right? Yeah. So can you help us, maybe, with some analogies of how we might think about AI art ethics, just as we enter into this conversation?
Jane Adams 04:15
Yeah, definitely. So one of my biggest inspirations in talking about this, actually, is a researcher at Adobe named Aaron Hertzmann. So Aaron has created this catalog of AI art analogies, and so he’s sort of been observing this conversation that folks have been having about, you know, are there analogies that are useful for thinking about AI art? And he identified a couple, right? So one that’s very common is the idea of theft. And thinking about AI art as theft is useful in that we can think about the damage that using other people’s art without their permission could cause in terms of financial harm. But it’s also an analogy that falls flat when we think about the fact that when someone breaks into your house and steals your television, you no longer have a television, whereas when somebody scrapes art off of the internet, that art still exists in the original place. Another of the analogies that he talks about is photography, right? And the idea that when photography came on the scene, a lot of artists and a lot of art appreciators were thinking that, you know, we can’t possibly consider photography to be a form of art because it’s just pressing a button. And that’s a critique that we hear a lot of times also in this inference step, where folks are generating an image using DALL-E or something like that. Some of the other analogies that he looks at are humans learning to paint, right? So this is an argument we hear often, especially from the sort of pro-AI-art side, which is that machines are just looking at other art and then creating something from their, you know, mental representation of that art in the same way that humans do. But then, of course, there are drawbacks with that analogy as well, right? Because humans and machines aren’t the same, and a person can’t possibly, for example, scrape a website at the same rate that AI can.
And so something like an AI scraping bot accidentally creating a denial-of-service attack because of the rate of scraping is not something you’re ever going to get from a human who’s perusing a social media website to look at art. So those are just a couple examples of the analogies that he uses. Some others are conceptual art, right? So, you know, when we think about all of the modern art that has come just in the last century, there is this perspective that anything can be art. Remix and collage is another one. But, yeah, so I think his AI art analogies are really good, because they kind of capture how it’s difficult, right? They’re imperfect analogies, and we can use an amalgamation of them to talk about the different facets.
Stephen McCauley 07:11
Yeah, yeah, thanks. That’s helpful. So as we then start to parse apart what the real ethical issues at play are when we’re trying to understand AI art ethics: what are some of the real conundrums or sticky points that we have to consider, that might actually have divided opinions even within the artist community, for example? What are the real sticky questions around AI and art ethics?
Jane Adams 07:36
Yeah, so to go back to the idea of, you know, the analogy of a child learning to paint, and humans experiencing art. So that’s one perspective that some people have, which is that these algorithms are training on images that are distributed across the internet, right? They’re not behind a paywall. Oftentimes, some of them are, but that kind of take is to say that, well, the models are just sort of looking at all of these images and then creating something as a result of it, and so it’s not really something that we can or should protect in terms of artist labor. And then you have some folks who say, well, it could be okay, but we should think about compensation and also consent, right? So not only should we pay artists for the work that we’re using, because it is going into these models, but also you can’t just pay someone and say, oh, sorry, I scraped all of your work and trained a model on it. It shouldn’t be retroactive; it should be with knowing consent ahead of time. And then you have people who, no matter what, even if you have consent, even if you have compensation, are still always going to be opposed to AI art, because we still haven’t really crossed that bridge of figuring out how to make these algorithms efficient. So, you know, the carbon impacts of training a large image model or a large language model are not negligible. And so, yeah, there’s a wide, wide range of perspectives on AI art right now, right?
Stephen McCauley 09:19
Yeah, and I’m glad you brought up the environmental impacts, because that is an ethical consideration, you know, that goes beyond art even. But thinking of the artists, I mean, I guess we could think of artists as just creators, or people who create to make something artistic, but a lot of artists are working in the realm of producing something, either to make a living or for some sort of commercial or economic ends. Can you talk about the question of labor and how the ethics of AI art raise questions around that?
Jane Adams 09:53
Yeah, I’m really glad that you brought that up, because I think that, honestly, a lot of the questions, or a lot of the arguments that we’ve been having, and a lot of the negative reaction towards AI art, is not actually towards the technology, right? We’re just seeing that there are inequities playing out with this technology that have kind of already been in existence in our creative economy and in our society. So in a way, it would be useful for us to separate the technology from the way that humans are using that technology. You know, I really appreciated that Cory Doctorow had written about the idea that copyright won’t save artists from the economic impacts of AI, because at the end of the day, if a person uses AI to generate art, say, for an album cover, but they’re an indie musician who can’t afford to pay somebody to design that album cover, like, you know, is that lost income, or is that just creative output that wouldn’t have existed otherwise? Yeah, so there are some interesting things to explore there. I also really appreciated that Silvia Federici, in 1975, had this great work called Wages Against Housework, where she talks about how women should be compensated for the labor that they do in the home. And she says that just because we’re being compensated for that labor doesn’t mean that we then will always do it, but that actually, by demanding compensation, we also gain the ability to withhold our labor by forgoing that compensation. And that’s something that I’ve heard brought up in the context of AI art specifically, or in the context of art in general, because it would allow people to say that, you know, just because I’m being compensated for my art doesn’t mean that I always have to be producing it, and that it also allows people to not make art in exchange for not being compensated.
So, yeah, I think that all of those are kind of interesting ways to think about the labor question, and also to maybe decouple the technology from the societal problem, right?
Stephen McCauley 12:28
And I know that you’ve worked in sort of both commercial and non-commercial art and creativity. Maybe there’s a point to bring in your art a little bit. Can you tell us about some of the projects you’ve done, in both physical forms and digital art?
Jane Adams 12:44
Yeah. So it was funny, because I started making AI art in 2020, and we all know that 2020 was the year of the global pandemic, and it’s not a coincidence that that’s when I started delving into this medium. Because prior to that, I was making really large sculptural works. I actually had an exhibit right when the pandemic hit that was an aquaponic diorama. So it was a giant fish tank on the bottom of a dollhouse, and the dollhouse was fully plumbed and electrified, had living plants inside of it, and it was a fully self-sustaining ecosystem. So you can imagine it was quite a sight to behold, and it was also, like, heavy as hell, and it was large, and the only reason that I was able to make it was because there was a makerspace, and I was able to have room to spread out. And then when the pandemic happened, the gallery shut down, the makerspace closed, and all of a sudden, a lot of the tools and affordances that I had for making large-scale art went away. And that was when I was sort of grasping in the dark for, like, what is a medium that I can use right now? I’ve always been interested in using technology in my art, and how can I learn something new while I’m experimenting with a medium? And so I started training my own StyleGAN models. So StyleGAN was kind of pre-text-to-image. A lot of the things that we think about now with AI art are in the text-to-image explosion that happened in, like, 2021 and 2022. So that’s Midjourney, that’s DALL-E, that’s anything where you put in a visual or textual prompt and you get an image back. StyleGAN was something where I was transfer learning from a large generative adversarial network using thousands of images that I had collected by hand. So during the pandemic, I was, like, listening to podcasts for hours on end.
I was, you know, at one point, I scraped a bunch of potted plants from Amazon’s web store in order to just get this huge collection of potted plants. And then I was actually digitally repotting the plants. So I would cut the plant out of the pot, and then I created this pot-plant adjacency matrix, and then I was repotting all of the plants into different pots, in order to artificially inflate the size of my training data. Because transfer learning is something where you need to have thousands and thousands of images to really get, you know, what you want out of the model. So yeah, and it’s been very funny, actually, because, you know, that was what AI art was to me. And then when we got text-to-image just exploding in the sort of collective imagination, and people creating all kinds of things, wonderful and horrible, it was very interesting, because it almost was kind of a juncture in identity for me, where I actually didn’t love the text-to-image creation as much as I loved the incredibly difficult labor of collecting all of these images for training data and thinking about ways to artificially inflate the size of a training data set. So, yeah, I’ve played around a little bit with text-to-image, but it’s not something that I use regularly in my artwork, just because it is so easy.
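[Editor's note: the "digital repotting" Jane describes is a combinatorial data-augmentation trick. Cutting N plants out of their pots and compositing each one into each of M pot backgrounds inflates N + M source images into N × M training samples. A minimal sketch of the counting idea, with illustrative names and file-naming scheme (not from her actual pipeline, where each pair would be alpha-composited into a real image):]

```python
from itertools import product

def augment_pairs(plant_cutouts, pot_backgrounds):
    """Cross every plant cutout with every pot background.

    N plants x M pots -> N * M composite samples, which is how a
    small hand-collected set can be inflated toward the thousands
    of images that transfer learning typically needs. Here we just
    enumerate the pairings as hypothetical output filenames.
    """
    return [f"{plant}_in_{pot}.png"
            for plant, pot in product(plant_cutouts, pot_backgrounds)]

samples = augment_pairs(["monstera", "fern", "cactus"],
                        ["terracotta", "ceramic"])
print(len(samples))   # 3 plants x 2 pots = 6 samples
print(samples[0])     # monstera_in_terracotta.png
```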
Stephen McCauley 16:10
Interesting. Yeah, I can actually see the connection between your physical sculpture work and the AI art, just in a sense of how you were bringing together like disparate forms that aren’t usually together, and mashing them up and making something new. So I can see how that fed into the digital creativity.
Jane Adams 16:27
Yeah, I actually, I made a sculpture. So, you know, everything came back, right? And for better or for worse, everything came back. And I started doing some sculptural work again, but with the context of having trained these models. And I had this one model that I had trained on aerial photographs and aerial imagery, so it was just this StyleGAN model that could generate landscapes, and you can create these interpolations that are basically like a walk through the latent space of a model. And so I created this animation that looked almost like rising sea levels, and I took all of the frames of that video, and I printed them out onto translucent paper and stacked them all together. So then I had this giant translucent cube of, like, time, essentially, through this latent space, and I illuminated it, and so I had this big glowing cube. But then, you know, we were all having this discussion, right, about, well, how do you give credit to the artists that were used in training data when it’s such an incredible volume? And I actually had all of the data in a JSON file, because I’d taken all these images from royalty-free stock photo websites; Pexels and Unsplash were the two that I had used. So I had, I think it was, like, 17,000 credits of the photographer’s name and the name of the work, and I printed them out at, like, size-four font onto receipt paper, and it was 120 feet long, and I just taped the whole thing underneath the sculpture. So it was just this enormous pile of receipt paper underneath the sculpture that had all of the credits. And so on one hand, it was my way of including those credits. But on the other hand, it was a way of kind of pointing out the ridiculousness of the activity, right? Like, in a lot of ways, you know, I was just transfer learning. So that was a very, very small portion of the images that actually went into the training of the model in the first place.
So yeah, that was my one sculptural response.
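[Editor's note: the "walk through the latent space" Jane describes is typically just interpolation between latent vectors: each intermediate vector is fed to the generator to render one frame of the animation. A hedged sketch with plain-Python stand-ins for the vectors (512 is the usual StyleGAN z dimension; the generator network itself is omitted):]

```python
import random

def lerp(z0, z1, t):
    """Linear interpolation between two latent vectors at fraction t."""
    return [(1 - t) * a + t * b for a, b in zip(z0, z1)]

def latent_walk(z0, z1, n_frames):
    """Frames of a straight-line walk through latent space.

    In a real pipeline, each intermediate vector would be passed to
    the trained generator to render one landscape image; stacking
    the rendered frames gives the animation.
    """
    return [lerp(z0, z1, i / (n_frames - 1)) for i in range(n_frames)]

random.seed(0)
z_start = [random.gauss(0, 1) for _ in range(512)]
z_end = [random.gauss(0, 1) for _ in range(512)]
frames = latent_walk(z_start, z_end, 30)
print(len(frames))             # 30 frames
print(frames[0] == z_start)    # walk begins exactly at z_start
print(frames[-1] == z_end)     # and ends exactly at z_end
```

In practice, spherical interpolation ("slerp") is often preferred over linear interpolation for Gaussian latents, since it stays closer to the typical norm of the prior; the linear version above is the simplest illustration.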
Stephen McCauley 18:33
Right? It’s perfect, and it does show the kind of ridiculousness of trying to document all the sources. But then again, it raises the question of, well, should we just throw up our hands and not try to attribute, because there’s so much there? So it raises a lot of questions. Maybe we can think about, you know, how do AI art ethics relate to the ethics of non-AI art, such as other analog forms of mashups like collage or remixed art?
Jane Adams 19:00
Yeah, that’s a great question. And that actually is one of Aaron Hertzmann’s analogies, too, the idea of collage. I think it’s interesting to think about, because these models aren’t actually taking pieces of the works themselves. They’re taking, like, a mathematical representation of the impression that that work is creating during the training process. And so it’s sort of a fuzzy analogy. But one thing that I do think about often in terms of art movements is glitch art. So glitch art in, like, the 80s and 90s was really characterized, almost in this post-postmodernism way, by this self-referentialness. You know, we think about Marshall McLuhan and “the medium is the message.” Glitch art was very much characterized by people loving the fact that it was clearly made by a computer, and the marks that were left by the computing process. And it’s the same reason, right, that some people say they love listening to records: the pops and crackles. And so Hertzmann has this term that he calls visual indeterminacy to talk about the sort of weird artifacts that come out of AI art. And that’s something that I really loved when I was training my own StyleGAN models, was that it was kind of imperfect. If you asked it to generate landscapes, there would be these floating islands. And, of course, we know about the six-fingered hands and things like that, which have been characteristic of the caricature of AI art.
But I think that there’s something interesting there, right, as a parallel to other mediums that have come before: that the marks of the medium are valuable to people in an artistic sense. And in a way, the artistic goals and the corporate goals are completely misaligned there, because the corporate goal, right, is to generate images that are indistinguishable from non-AI art, but a lot of artists appreciate the sort of weirdness and, you know, indeterminacy of the images.
Stephen McCauley 21:21
Yeah, it sounds like there’s almost some space to play there, artistically, with the kind of nostalgia of analog forms of art. And, you know, so there’s infinite places to play now with digital arts, really. So the distinction between computer-generated art and human-generated art: is that a valid and useful distinction at this point?
Jane Adams 21:45
I think that it’s a little bit more complicated than that, you know. I think that it’s really useful to think about a spectrum from human-generated to computer-generated art. And I actually don’t think that AI art is at the far end of that computer-generated spectrum. I think that it’s sort of one step in towards the human side, because of the use of training data, which contains a lot of human creativity and human, you know, ephemera. The very, very end of that spectrum actually would be generative art that’s not AI. So think about the Mandelbrot set. Think about Aristid Lindenmayer’s L-systems, which are a computational way of generating trees. There’s been a whole movement of generative art in the Processing community. So Processing is an IDE that originally used Java; I think now there’s also a Python version, where people have, you know, used simple mathematical equations to create really complex and beautiful generated art. So, yeah, I would say that there’s generated non-AI art, then there’s AI art, and then, kind of in the middle of that spectrum of human- and computer-generated, we could think about things like inpainting. So a lot of artists, even ones that would self-describe as being anti-AI, have used things like inpainting in Photoshop, or, you know, directly on their cameras. So there are lots of places where AI kind of, like, seeps in at the corners. And those are the places where we almost could think about AI as, like, a colleague or a collaborator. And then we kind of get more into the human side of things. And even on the extreme human and analog side of things, there are still some generative approaches that artists have taken. You could think of, like, Sol LeWitt, whose work is just up the road at MASS MoCA, that’s entirely analog, and yet it is based on a system of really detailed instructions on how to create that work, which is, you know, very similar to a computer program.
So, yeah, I think that there’s, like, a lot of variation there. And it’s really funny because my partner is working on synthetic image detection, and so we’re always, like, standing around the kitchen, drinking coffee, having these like debates about, like, well, what is synthetic, right? If you generate a completely AI generated image, but then you take a photograph of it, if you’re building an AI image, you know, detector, do you want it to flag that as synthetic or not? Because technically, it’s a photograph, you know, does it matter how far away from the wall you stand for it to not be? Yeah, so, so it’s very interesting, very philosophical.
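[Editor's note: the L-systems mentioned above are string-rewriting grammars: start from an axiom and repeatedly replace every symbol in parallel according to production rules, then interpret the final string as turtle-graphics drawing commands. A minimal sketch; the branching rule below is a textbook example, not any specific artwork's grammar:]

```python
def l_system(axiom, rules, iterations):
    """Expand an axiom by applying every production rule in
    parallel at each step; symbols without a rule are copied
    through unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A classic bracketed tree grammar: F = draw forward,
# + / - = turn left / right, [ ] = push / pop turtle state.
rules = {"F": "F[+F]F[-F]F"}

print(l_system("F", rules, 1))        # F[+F]F[-F]F
print(len(l_system("F", rules, 2)))   # 5 F's expand to 11 chars each: 5*11 + 6 = 61
```

Strings grow exponentially with iteration count, which is how a few characters of grammar encode an intricately branching tree once a turtle renderer (e.g. in Processing or Python's `turtle` module) draws the result.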
Stephen McCauley 24:31
Yeah, it is. As you were talking about that, I was imagining multi-agent AI systems that are talking to each other and telling each other to make art and to build on that art, and I had this dystopian vision of maybe the way that a general AI takes over the world is just inundating us with bad art. Maybe that’s really the way this goes, or maybe it creates really amazing art that blows our minds. So yeah, so thanks for this. This is really helpful. And I was wondering, one more kind of question in the weeds about the ethics of AI art: is your sense that our sort of popular understandings of what is ethical in AI art match our current legal frameworks for parsing all of this?
Jane Adams 25:18
Oh, gosh, we are so woefully unprepared, you know. And I think it’s important to also note that our legal framework doesn’t quite protect artists in the way that we would like it to anyway, independent of the AI art question. And this is something that I’ve been thinking a lot about recently, because I’m working on this workshop, right, where we’re encoding relations between stakeholders in the creative industry with one another, and, you know, the edges between them. There are so many qualitative characteristics about a singular entity in a network that really impact how we think about them, right? Power is one, and also financial stability, being well known, right? All of those things kind of enable people or companies to leverage our legal system in very different ways. So, you know, the Disney corporations of the world are able to send a team of lawyers to enforce their copyright claims in ways that can be silencing or damaging to artists. And conversely, a lot of artists feel that they have no recourse in the legal system, simply because they can’t afford to have recourse. Beyond that, I think that we’ll end up having a lot more conversations about what constitutes fair use. Outside of AI art, there have been really interesting conversations and headway made about using likenesses. So I know that the state of Illinois has been kind of at the forefront of that, in protecting the idea of a person’s likeness. And this is related to some of the existing legal framework that we have around fair use that talks about the economic impacts of use, right?
If you’re using something for an educational purpose, you’re not taking away from the proceeds of a company, per se. And in the same way with using someone’s likeness, you know, we need to be able to explain in court exactly how the use of a person’s likeness could undermine their brand or their identity, right? You think about the AI-generated image of Taylor Swift endorsing Donald Trump’s campaign, and how that kind of forced her to actually come out and declare, you know, her allegiance to Kamala Harris and voting. Yeah, so, you know. And another thing about that is the idea of libel and slander: that if you are going to charge somebody with libel, you have to prove that any reasonable person would believe that story, right? And so if you share some rumor that somebody is a Martian, it’s not something that will materially affect them, but if you share some rumor that a person has committed a crime, that’s something that can drastically affect their employment and things like that. So yeah, I think that there are some legal analogies and frameworks that can be useful. But I also think that, going back to what we just talked about with labor and power, there’s a lot of room that we need to cover in terms of who is allowed to enforce the law, who is allowed to, you know, get compensation.
Stephen McCauley 28:54
Right? And how do AI tools play into that whole set of dynamics, too? So just like we asked, could the internet or social media be more democratizing? We can ask that about AI too. Can it help the little guy to fight against these kinds of dynamics, or not?
Jane Adams 29:13
Yeah, there was a funny story in the news recently, actually, of someone who used an AI-generated legal counsel to represent themselves. And, you know, of course, the judge immediately caught it and was like, what are we doing here? Get out of my courtroom. But, you know, I felt a little bit for this person, because really, they were self-representing, and they felt like they needed the authority of a person who was able to speak without stumbling over their words about very complex legal doctrine in order to argue for justice, right? So, yeah, we’ll be seeing a lot of AI, both, you know, in front of the bench and possibly behind it, whether we like it or not, right?
Stephen McCauley 29:58
Well as an artist. You’ve also, I know, been working with visual design language for helping think through AI art ethics. Can you tell us a little bit about that visual design language and why that’s important for helping you think through these questions?
Jane Adams 30:13
Yeah, I wanted to create something that was participatory, because I realized that this is a very fraught conversation. It has a lot of stakeholders. And one of the best ways that we can really work through these ethical questions is through, you know, representing them, thinking about hypotheticals, perturbing those hypotheticals. And so I wanted to create sort of a medium through which we could do that in a replicable way, in a workshop that could be run anywhere in the world. So I wanted to use very low-cost materials. So I’ve just got a bunch of colored round stickers, and some colored washi tape that connects those stickers to each other. And from that we can create these networks that are representative of scenarios in the creative economy, and we can think about, well, what if this edge was a different color, or what if these entities were connected in this way? And so that was important for me, because I wanted to be able to give people a way to talk about these things in a way that’s easily documented, that allows us to take a step back, right, and look at a lot of these scenarios in aggregate, and think about, you know, what relations do we see coming up a lot that people are discussing? What makes things okay or not okay for the majority of people? So, yeah, it was very inspired by design thinking and the prototypical mindset and things like that. But yeah, just a low-cost way of getting people to talk about these sticky situations.
Stephen McCauley 31:53
Yeah, that’s really helpful. And since we are here at an educational institution at WPI, I wonder, just as our kind of last conversation point, if you could help us think about this transformative technology and its implications for what we do here in higher education. How fundamentally do we need to rethink the way we’re educating young people, especially at a STEM university, where there are a lot of creators and designers and makers? You know, from your perspective as an AI artist and data artist and also a machine learning scientist, what are the core learning objectives that we should be focusing on at this moment and going forward, recognizing that this landscape is quite different now?
Jane Adams 32:41
I mean, I think: always, like, search for the gray area. That’s kind of been my approach to any education, right? We don’t really gain anything from thinking about things in terms of black and white. There are always, like, augmentations to situations, or new information, right? This is the whole scientific method, really: when we gain new information, we adjust our perspective. And so that would be kind of my advice to students: look for the gray area. I also think that AI has major impacts on just the day-to-day of, you know, pedagogy, the idea that we’re kind of at the end of busy work. If you assign busy work to a student, you can be almost guaranteed that they’re going to use generative tools to get through it a lot faster, which maybe is a good thing, right? I’ve heard of professors who have assigned their students to generate an essay with ChatGPT and then to edit that essay and then explain: what are the differences from what was generated to what they turned in? Why did they make those changes? Why was it an effective way of making their argument? What were the shortcomings, either factual shortcomings or messaging shortcomings, of those tools? And also, you know, this isn’t unique to education, but there is an interesting conversation to be had about the difference between whether using AI tools comes from the bottom up or the top down. So we see this in creative industries. We see this now in software engineering. Also, I know that the CEO of Shopify, I believe, recently said, like, if you want to hire new people, you have to justify that head count by proving that that job can’t be done by AI. And so people have been very, very reactionary to AI being sort of, you know, quote, forced down their throats by company CEOs who are saying, like, you have to use this in order to increase your productivity.
Because, you know, creatives and engineers alike have pushed back and said, well, you know, this is how I think through ideas. This is, you know, where I get joy from my labor. Or that, you know, I’ve gone from creating things myself to being an editor and having to spend a tremendous amount of time correcting the problems that are in this generated media, whether it be text or image. Whereas, when you have companies or classrooms where the use of AI has been entirely banned, you have students who are, like, smuggling it in, using it even though they’re not supposed to, and I know that in companies that can be a real danger, right, for data leakage reasons. So it’s a very interesting dynamic that you can have that dichotomy, even within an individual or within a company: that when the directive to use AI comes from on high, it can be really stifling and have this huge negative reaction, and yet there is this, like, impulse by people to use AI to sort of automate away the busy work and focus more on the creative aspects.
Stephen McCauley 35:54
Awesome, Jane, that’s really helpful. Well, thank you so much. This has been really enjoyable and really helpful for us. So thank you. Yeah.
Jane Adams 36:00
Thank you so much.
Interactive Workshop
The Artists’ Industrial Revolution: An Interactive Workshop on AI Ethics
4:00-6:00pm, April 10th, 2025
WPI Innovation Studio, 2nd Floor
Global Lab
Jane Adams is an artist working with emergent forms of mixed media, including data and generated content. She is a PhD candidate in Computer Science at Northeastern University in Boston, Massachusetts. She engineers machine-learning-driven research software for biological scientists.
Workshop Gallery







