Ep 307 - Rights for Machines? AI's Ethical Future with James Boyle
Can you imagine granting personhood to AI entities? Well, some of us couldn’t imagine granting personhood to corporations. And yet... look how that panned out.
In this episode, Steve talks with Duke law professor James Boyle about his new book, The Line: AI and the Future of Personhood.
James explains the development of his interest in the topic; it began with the idea of empathy.
(Then) moved to the idea of AI as the analogy to corporate personhood. And then the final thing – and maybe the most interesting one to me – is how encounters with AI would change our conceptions of ourselves. Human beings have always tried to set ourselves as different from non-human animals, different from the natural universe.
Sentences no longer imply sentience. And since language is one of the reasons we set human beings up as bigger and better and superior to other things, what will that do to our sense of ourselves? And what would happen if, instead of being a chatbot, it was actually an entity that more plausibly possessed consciousness?
Steve and James discuss the ways in which science fiction affects our thinking about these things, using Blade Runner and Star Trek to look at the ethical dilemmas we might face. As AI becomes increasingly integrated into our society, we will need to consider the full range of implications.
James Boyle is the William Neil Reynolds Professor of Law at Duke Law School, founder of the Center for the Study of the Public Domain, and former chair of Creative Commons. He is the author of The Public Domain: Enclosing the Commons of the Mind, and Shamans, Software, and Spleens. He is co-author of two comic books, and the winner of the Electronic Frontier Foundation's Pioneer Award for his work on digital civil liberties.
Transcript
All right, this is Steve with Macro N Cheese. Today's guest is
Speaker:James Boyle. James Boyle is the William Neil Reynolds Professor
Speaker:of Law at Duke Law School, founder of the Center for the Study
Speaker:of the Public Domain, and former chair of Creative Commons.
Speaker:He is the author of The Public Domain and Shamans, Software, and Spleens,
Speaker:the co-author of two comic books, and the winner of the Electronic
Speaker:Frontier Foundation's Pioneer Award for his work on digital civil
Speaker:liberties. I'm really excited about this, quite frankly, because
Speaker:we've been talking about AI [artificial intelligence] a lot lately.
Speaker:We've taken it from different angles, but today is kind of a snapshot
Speaker:of personhood and the future of AI and are we going to be willing
Speaker:to grant personhood and grant rights to artificial intelligence?
Speaker:This is a subject that fascinates me. I am not the expert,
Speaker:but I do believe we have one of them on the show here. James,
Speaker:welcome so much to Macro N Cheese.
Speaker:Thanks so much, Steve.
Speaker:Tell us about your new book. Give us kind of the introduction,
Speaker:if you will, give us the CliffsNotes version.
Speaker:Well, the book is called The Line: AI and the Future of Personhood.
Speaker:And it's a book about how encounters with increasingly plausibly
Speaker:intelligent artificial entities will change our perceptions
Speaker:of personhood, will change our perceptions of those entities, and
Speaker:will change our perceptions of ourselves. The book started with
Speaker:me, or rather the project started many years ago, back in 2010,
Speaker:with me writing a speculative article about what would happen if
Speaker:we had artificial intelligences that were plausibly
Speaker:claiming to have consciousness of a kind that would require us morally
Speaker:and perhaps even legally to give them some kind of personhood.
Speaker:And the next stage was thinking about the possibility that
Speaker:personhood would come to AI not because we had some moral kinship
Speaker:with them or because we saw that under the silicon carapace
Speaker:there was a moral sense that we had to recognize as being fully
Speaker:our equal, but rather that we would give personhood to AI in the
Speaker:same way that we do to corporations for reasons of convenience,
Speaker:for administrative expediency. And then the more difficult question
Speaker:is, once you have corporate personhood, which, after
Speaker:all, just means corporations can make deals, and that covers corporate
Speaker:entities including unions, for that matter: what about constitutional
Speaker:protections, civil liberties protections, which in the US have
Speaker:been extended to corporations, even though they don't plausibly
Speaker:seem to be part of "we the people" for whom the Constitution
Speaker:was created. So it started with the idea of empathy, moved to the
Speaker:idea of AI as the analogy to corporate personhood. And then the
Speaker:final thing, and maybe the most interesting one to me, is how
Speaker:encounters with AI would change our conceptions of ourselves.
Speaker:Human beings have always tried to set ourselves as different from
Speaker:non-human animals, different from the natural universe. That's the line
Speaker:that the title refers to. What will it do to us when we look at
Speaker:a chatbot or have a conversation with it and realize
Speaker:this is an entity which has no sense of understanding, that isn't
Speaker:in any way meaningfully comprehending the material, but yet
Speaker:it is producing this fluent and plausible language? Sentences
Speaker:no longer imply sentience. And since language is one of the reasons
Speaker:we set human beings up as bigger and better and superior to
Speaker:other things, what will that do to our sense of ourselves? And
Speaker:what would happen if instead of being a chatbot, it was actually
Speaker:an entity that more plausibly possessed consciousness? So that's
Speaker:what the book's about.
Speaker:Fascinating. You know, I remember, and it's funny because
Speaker:the first page of your book speaks of the guy from Google, Blake
Speaker:Lemoine, who kind of thought that his AI was sentient. And I believe
Speaker:he was fired immediately for saying this in public.
Speaker:Yes, indeed.
Speaker:Let's talk about that for a minute. Paint this picture out for
Speaker:the people that maybe haven't heard of this, and tell us what
Speaker:we can learn from it.
Speaker:Well, Mr. Lemoine is remarkable. As I say in the book,
Speaker:he wrote a letter to the Washington Post in 2022 saying that
Speaker:the computer system he worked on, he thought it was sentient. And
Speaker:you know, the Washington Post is used to getting letters from lots
Speaker:of crackpots and people with tinfoil hats, and people who think
Speaker:there are politicians running sex rings in the basements
Speaker:of pizzerias that have no basements. And so, you know, he could
Speaker:have been written off as one of those, but he wasn't this simple
Speaker:delusional person. This was a Google engineer who was actually
Speaker:trained to have "conversations," in quotes, with Google's chatbot LaMDA
Speaker:(Language Model for Dialogue Applications) at the time to see
Speaker:if it could be gamed to produce harmful or offensive speech.
Speaker:And instead he started having these conversations with it that
Speaker:made him think, wrongly, as I say in the book, but made him think
Speaker:that this was actually something conscious. He would read
Speaker:it koans and then ask it to comment. And its comments seemed
Speaker:wistful and bittersweet, as if it was searching for enlightenment
Speaker:too. And so the thing that fascinated me, because I've been
Speaker:thinking about this issue for many years, way before that: Mr.
Speaker:Lemoine was wrong, but he's only the first in what will be
Speaker:a long series of people whose encounters with increasingly intelligent
Speaker:or increasingly convincing AIs will make them think, is this
Speaker:a person? Am I acting rightly towards it? And so while he may have
Speaker:erred, and he clearly did, chatbots are not conscious, I think that
Speaker:he's the harbinger of encounters that will quite profoundly change our
Speaker:conception of ourselves as well as of the entities we're dealing with.
Speaker:As I think about this, you know, I watched a show called The
Speaker:Expanse, and I've watched a lot of these kinds of futuristic space
Speaker:worlds, if you will, where they kind of, I guess, try to envision
Speaker:what it might look like in a future maybe far away, maybe not
Speaker:so far away. But one of the ones that jumps out is one that you've
Speaker:used in the past, and that is Blade Runner, the sci-fi movie.
Speaker:That was really a phenomenal movie. I loved it big time. How does
Speaker:this relate to that? How can we roll into Blade Runner?
Speaker:So Blade Runner, and the book it's based on, Do Androids Dream of
Speaker:Electric Sheep? by Philip K. Dick, are two of the most remarkable artistic
Speaker:musings on the idea of empathy. I'm sure your listeners,
Speaker:at least some of them, will have seen the original Blade Runner.
Speaker:You may remember the Voight-Kampff test, which is this test to
Speaker:figure out whether or not the person being interviewed is really
Speaker:a human being or whether they're a replicant, these artificial
Speaker:beings, basically androids, biologically created
Speaker:superhuman beings. And the test involves giving
Speaker:the suspect a series of hypotheticals involving non-human animals. Tortoises,
Speaker:butterflies, someone who gives the interviewee a calf skin wallet.
Speaker:What would you do in each of these cases? And the responses are,
Speaker:well, I'd send them to the police, or I'd call the psychiatrist,
Speaker:or I'd, you know, have my kid looked at to see if he was somehow
Speaker:deluded because he has insufficient empathy for these
Speaker:non-human animals. And the irony which Dick sets up and which Blade
Speaker:Runner continues is of course that we're testing these non-human
Speaker:entities, the replicants, by asking them how much they empathize
Speaker:with non-human animals and then deciding that if they don't
Speaker:empathize as much as we humans think they should, then we need to
Speaker:have no empathy for them and can, in fact, kill them. And so this
Speaker:test, which is supposedly about empathy, is actually about
Speaker:our ability radically to constrict our moral circle, kick
Speaker:people outside of it and say, you're weird, you're different, you're
Speaker:other. And so we need feel no sympathy for you. And so the thing
Speaker:is, yeah, it's a test of empathy, but whose empathy? It seems
Speaker:like it's a test of our own empathy as human beings, and we're
Speaker:failing. At least that's the message I take from the movie. So
Speaker:what fascinated me in the book was how easy it was in the movie
Speaker:to trigger different images that we have. Priming. You know,
Speaker:this is the sense of psychological priming, where an image primes
Speaker:you to have one reaction. There'll be a moment where the replicant
Speaker:seems to be a beautiful woman. It's like, oh, my God, did I just,
Speaker:you know, voice a crush on a sex doll? Moments when it appears
Speaker:to be a frightened child, an animal sniffing at its lover. You
Speaker:know, like two animals reunited, a killing machine, a beautiful
Speaker:ballerina. And the images flash by, you know, in, like, just
Speaker:for a second. And immediately you can feel yourself having the
Speaker:reaction that that particular priming, the ballet dancer, the beautiful
Speaker:woman, the killer robot, produces. And you can feel your sort
Speaker:of moral assessment of the situation completely change depending
Speaker:on what image has been put in front of you. And I say that it's
Speaker:a kind of moral stroboscope. You know, it's designed to induce
Speaker:a kind of moral seizure in us to make us think, wait, wow, are
Speaker:my moral intuitions so malleable, so easily manipulated?
Speaker:And, you know, how do I actually come to sort of pull back
Speaker:from this situation and figure out what the right response is? And
Speaker:to be honest, I think that fiction has been one of our most
Speaker:productive ways of exploring that. And science fiction, obviously,
Speaker:in particular.
Speaker:You know, I have children, small children. And there's a new
Speaker:movie that's just come out called The Wild Robot. I don't know
Speaker:if you've had a chance to see this yet,
Speaker:I have not.
Speaker:but it's fantastic. So this robot is sent to Earth to gather
Speaker:data for some alien robot race, I guess, outside of our normal
Speaker:ecosystem. And this robot ends up developing empathy, develops empathy
Speaker:for all the wild animals. And the power brokers, if you will, on
Speaker:the spaceship want that robot back so that they can download all
Speaker:of its information, reprogram it for its next assignment. Well,
Speaker:this robot says, no, I want to live. I like my life here. I like
Speaker:these animals. I don't want them to die. I don't want bad things
Speaker:to happen. And the mothership comes back and puts some tractor
Speaker:beam on this robot. It's a cartoon, it's an animated show. But
Speaker:it's really deep thinking, you know, there's a lot going on here.
Speaker:It's very, very sad in so many cases because you watch various things
Speaker:dying, which is not really the norm for kids' shows, you know what
Speaker:I mean? They're showing you the full kind of life cycle. And
Speaker:as the machine gets sent back up, somehow or another they didn't
Speaker:wipe the memory banks completely. There was just enough
Speaker:in there that it remembered everything that it had learned while
Speaker:on Earth. Somehow or another it reconstitutes its awakening. And
Speaker:some of the birds and other animals came up to the spaceship
Speaker:to free this robot and the robot falls out and they win, of
Speaker:course, and the robot becomes part and parcel with them and so
Speaker:forth. But it was very, very much a child version of teaching
Speaker:empathy for non-human beings, if you will, in terms of AI, in terms
Speaker:of robots, in terms of kind of what you're talking about within
Speaker:Blade Runner, for kids. I don't know if it's a good analogy,
Speaker:but it sure felt like one. I just saw it the other night and I
Speaker:was like, wow, this is going to play well into this convo.
Speaker:It sounds like exactly the themes from my book. I'll have to
Speaker:check it out. Thank you.
Speaker:My wife looked at me and she goes, oh my God, this is so depressing.
Speaker:And my kid was like crying because it was sad parts, but he
Speaker:loved it and he was glued to it the whole time. I think it does
Speaker:say something, you know, I remember watching Blade Runner and
Speaker:feeling that real kind of, I don't want her to die. You know,
Speaker:I don't. What do you mean? Right? And I did feel empathy. I
Speaker:did feel that kind of, I don't know how to define it, but I don't
Speaker:want it to die. You know what I mean? And I don't know what that
Speaker:says about me one way or the other, but, you know, it definitely
Speaker:resonated with me. How do you think that plays out? Today we talked
Speaker:a little offline, and obviously we have Citizens United, which has
Speaker:provided personhood to corporate interests, to corporations.
Speaker:What about providing personhood to robotics, to AI-based, you know,
Speaker:I hate to say sentience, but for lack of a better term, sentience?
Speaker:I mean, what about today? What should people be looking at that
Speaker:can help them start engaging in some of this futuristic cosplay,
Speaker:if you will? How do you think you could help people tie together
Speaker:their experience today in preparing for some of these kinds
Speaker:of thought exercises? Because this is a real existential type of thought
Speaker:exercise. I don't know that you go into this, but I can tell
Speaker:you I come from a certain era where people were not afraid to dabble
Speaker:in mycelium and the fungi of the earth and had, you know, their
Speaker:own kind of existential liberation, if you will. And seeing
Speaker:that kind of alternative universe, I imagine there's a lot
Speaker:of cross-pollination in terms of leaving one's current reality
Speaker:and considering a future sense of what life might be like in that
Speaker:space.
Speaker:One way that's interesting for me to think about it is this:
Speaker:I think people automatically, when they confront something new,
Speaker:they say, what does this mean for my tribe, for my group, for my
Speaker:political viewpoint, for the positions, for my ideology, for the
Speaker:positions I've taken on the world? And so if you take someone
Speaker:who, you know, thinks of themselves as progressive, I'll use
Speaker:that vague term. On the one hand, you could think that that person
Speaker:would be leading the charge for it. It is an if, as I point
Speaker:out in the book, but it's a possibility and one that seems more
Speaker:likely if we get increasingly capable AI, not in the generative
Speaker:AI chatbot mode, but actually AI, which is closer and closer to
Speaker:the kinds of aspects of human thought that we think make us special.
Speaker:You would think that the progressive would be there leading
Speaker:the charge because this is the next stop in the civil rights movement.
Speaker:You know, these are my silicon brothers and sisters, right? You
Speaker:know, we are the group that has fought for previous expansions of rights. I mean,
Speaker:we've been very good at denying personhood to members of
Speaker:our own species. And the expansion of moral recognition is
Speaker:something that we, most people at least, see as an unqualified positive.
Speaker:Isn't this just the next stop on that railroad? And I think that,
Speaker:depending on how the issue appears to us, that might well be
Speaker:the way that it plays out, that people would be presented with
Speaker:a story where, let's say they are interacting with an ever more
Speaker:capable, AI-enabled robot that is, let's say, looking
Speaker:after their grandmother as their comfort unit, which I think
Speaker:is a very likely future, regardless of whether or not they're
Speaker:conscious. And then they start having conversations with it. They
Speaker:start thinking, whoa, am I acting rightly towards this being?
Speaker:You know, can I treat this as just the robotic help? Doesn't it
Speaker:seem to have some deeper moral sense? Or doesn't that pull on me
Speaker:to maybe recognize it, to be more egalitarian towards it? So that's
Speaker:one way I think things could play out. But then suppose, instead of
Speaker:doing that, I started by mentioning Citizens United, as we
Speaker:did, and talked about the history of corporate personhood.
Speaker:And I'll say often in sort of progressive, leftist, radical circles,
Speaker:people are kind of loosey-goosey in the way that they talk
Speaker:about corporate personhood. I actually think that it's probably
Speaker:a very good idea for us to enable particular entities which
Speaker:take particular risks at particular moments, because not all
Speaker:plans that we have will pay off, to come together in a corporate
Speaker:form, whether it's a union or a corporation, and try and achieve
Speaker:a particular goal. And the fact that we allow them to sue and
Speaker:be sued seems like actually a pretty efficient way of doing something.
Speaker:And something that enables some benign innovation, risk-taking,
Speaker:that kind of stuff. The next question, of course is, and what
Speaker:about political rights? And that's the next stage. And what happened
Speaker:in the US was that you had the 13th, 14th and 15th amendments passed
Speaker:after the Civil War, some of the amendments I'm personally most
Speaker:fond of in the Constitution, which offered equal protection
Speaker:to formerly enslaved African Americans. And what we saw instead
Speaker:was that the immediate beneficiaries of those equal protection
Speaker:guarantees, as I lay out in the corporate personhood chapter,
Speaker:were not formerly enslaved black Africans. They were corporations.
Speaker:Black people brought very few suits under the equal protection
Speaker:clause and they lost most of them. The majority were brought by
Speaker:corporate entities. So if I tell you that story, you're going,
Speaker:oh my God, they're doing it again. This is another Trojan horse
Speaker:snuck inside the walls of the legal system to give another immortal
Speaker:superhuman entity with no morals an ability to claim the legal
Speaker:protections that were hard fought by individuals for individuals.
Speaker:And that's absolutely anathema to me. And so if you think of yourself
Speaker:as having those two people inside you, the person who feels
Speaker:moral empathy, the person who is aware that in the past we collectively
Speaker:as a species have done terrible things where we foreclosed
Speaker:our empathy to groups and said, you don't matter, that that's
Speaker:among the most evil periods in our history, in our collective history,
Speaker:you could be going, oh, wow, absolutely, rights for robots. And
Speaker:if you started on the other track that I described there, the
Speaker:one that follows what happened with corporate personhood, you might
Speaker:see this as an enormous cynical campaign that was just here
Speaker:to screw us one more time with another super legal entity. What
Speaker:I'm trying to do is to get there before this fight begins. And
Speaker:everyone's got their toes dug into the sand going, this is my
Speaker:line, damn it. This is what my tribe believes, you know? I'm not
Speaker:going to even think about this seriously, because if I do, then
Speaker:it challenges my beliefs on all kinds of other things, from,
Speaker:you know, fetal personhood to corporate personhood to animal rights,
Speaker:what have you. And look, guys, we're not there yet. You know, these
Speaker:aren't, in fact, conscious entities yet. Maybe now's the time
Speaker:to have the conversation about this kind of stuff so that we could
Speaker:think about, for example, if we are going to have corporate forms
Speaker:specifically designed for these entities, what should they
Speaker:look like? If we're going to have a test that actually says, hey,
Speaker:you know what, you graduated, we actually have to admit that you've
Speaker:got enough human-like qualities that morally we have to
Speaker:treat you, if not as a human, then as a person. Well, what would
Speaker:those tests be? And so I want to sort of preempt that, get ahead
Speaker:of the doomers who are saying they'll kill us and the optimists
Speaker:who think they'll bring us into utopia and say, let's have a
Speaker:serious moral discussion about what this might do to us as a species.
Speaker:And so that's what the book's trying to do. Whether or not it does
Speaker:it, that's up to your listeners to decide.
Speaker:Yeah. So this brings something very important to mind, okay? We
Speaker:are in a very, very odd time in US History right now. Some of
Speaker:the debates that are taking up the airwaves sometimes feel a little
Speaker:crazy, like, why are we even talking about this? But one
Speaker:of the things that jumps out is the fear of immigrants. Okay.
Speaker:Othering people.
Speaker:Yes.
Speaker:And calling them illegals. And anytime I hear someone say illegals,
Speaker:no matter how colloquial it is, it makes me lose my mind. I can't
Speaker:stand it. No one's illegal. Everybody is a human being. But then
Speaker:you start thinking about it. It's like, well, what are some of the
Speaker:conversations as to why people are afraid of immigrants? And you've
Speaker:got the cartoon flavor that is put on the television during political
Speaker:ads where everybody that's an immigrant is going to murder your
Speaker:wife when you're not looking and they're going to rape your daughters
Speaker:and, you know, all this horrible, let's just be honest, fascist
Speaker:scapegoating, right? But then you flash forward to the other factor
Speaker:there, and it's like, there's a cultural norm. The more people
Speaker:you allow into the country, or any country for that matter, that
Speaker:are different, that have a different quality of cultural perspective,
Speaker:the more the existing culture is challenged: challenged for hegemony,
Speaker:challenged for dominance, challenged over, you know, what do
Speaker:we value, and so forth. And rightly or wrongly, that is a real
Speaker:debate that is happening. Now, I happen to come down on that much
Speaker:differently: more open borders, let's use our nation's currency-issuing
Speaker:power to make everyone whole. There's no need to pit each other
Speaker:against each other. But if you flash forward, you think about the
Speaker:Supreme Court: even when Donald Trump appointed justices, the rulings,
Speaker:the laws of this land, even out of power, the long arm of his
Speaker:appointments and so forth really had an impact. And we just saw Roe
Speaker:overturned. Well, just as easily, and this is in your other
Speaker:wheelhouse, being focused on law, Biden could have reformed the
Speaker:court, he could have stacked the court. And I think about robots and
Speaker:AI, you know, robots don't just happen autonomously. Robots are
Speaker:created. And as I think about these different entities, I mean,
Speaker:maybe someday they find a way to self-create, I don't know, but maybe
Speaker:they do now and I just don't know it, through some microbiology or
Speaker:microtechnical stuff. But ultimately they have to be created. So
Speaker:what's to stop a faction from creating a million robots with sentient
Speaker:abilities and empathy and so forth that have some form of central
Speaker:programming? Should we give them voting rights, should we give them
Speaker:personhood rights, et cetera? What would prevent them from being
Speaker:stacked like the SCOTUS? And again, this is all pie in the sky,
Speaker:because right now we're just talking theoretical. But from those
Speaker:conversations you're advocating for, I think that there's something
Speaker:to be said. So how would you affect the balance of that? And you
Speaker:know, just off the top of your head, obviously this is not something
Speaker:that I've given a lot of thought. It just came to me right now. But
Speaker:your thoughts?
Speaker:Well, I think that one of the issues you raise here is that we
Speaker:would have a fundamentally different status vis-à-vis robots,
Speaker:autonomous AI, what have you, than we have in other moral debates.
Speaker:We didn't create the non-human animals. You know, we weren't the
Speaker:ones who made chimpanzees as they are, or the cetaceans and the
Speaker:great apes in general, the whales and so forth. Those non-human
Speaker:animals that have really quite advanced mental capacities. But in
Speaker:this case, we will be, we are designing artificially created entities.
Speaker:And that brings up a lot of issues. What should be, what are
Speaker:going to be, the limits on that? There's the "let's make sure
Speaker:they don't destroy us" concern, for one thing. In AI circles, this is the
Speaker:idea of alignment: that we can ensure that the interests of the
Speaker:entity that we create align with our own, and that it is, in
Speaker:that sense obedient to us. But of course, there are other issues
Speaker:raised here. Say there is a line that we come to decide on. It's
Speaker:like, okay, this is the line. If you're above this line, then we
Speaker:really have to give you at least a limited form of personhood.
Speaker:We're perfectly capable of designing entities that fall just
Speaker:short of that line or that go over it. So what are the ethics there?
Speaker:Is it unethical for us to say, okay, well, we don't want sentient
Speaker:robots because we don't want to face the moral duties that might
Speaker:put on us? If you think of the Declaration of Independence, it says,
Speaker:all men are endowed by their Creator with certain unalienable
Speaker:rights. They say unalienable, not inalienable. And of course, in
Speaker:this case, we will be their creator. And so does that mean we
Speaker:say, well, that's great because, you know, we created you,
Speaker:so we don't have to give you any rights? Will we ever be able to recognize
Speaker:real moral autonomy in something that we're conscious we
Speaker:made a deliberate decision to create? So I think that those issues
Speaker:definitely are worth thinking about.
Speaker:You are listening to Macro N Cheese, a podcast by Real Progressives.
Speaker:We are a 501c3 nonprofit organization. All donations are tax
Speaker:deductible. Please consider becoming a monthly donor on Patreon,
Speaker:Substack or our website, realprogressives.org. Now back to
Speaker:the podcast.
Speaker:I think that our experiences right now show both that we can be
Speaker:terrified by others, as you described in the discussion of
Speaker:immigrants, and that we can also feel great empathy towards them. And I
Speaker:think that right now our attitude towards AI is a little of
Speaker:both. What's going to happen in the future is fundamentally uncertain
Speaker:because we have no idea how the technology is going to develop.
Speaker:For example, a lot of it is going to develop in places outside
Speaker:of the United States. Obviously it already is. And the
Speaker:ideas of the people who are creating artificial entities in another
Speaker:country may be entirely different from ours, and their goals
Speaker:and morals too. Even if we could agree, they might not agree with
Speaker:us. So I do think that there are definitely issues there. And I
Speaker:think they're ones that are quite profound. For myself, just
Speaker:thinking about all of these things, for example, thinking about
Speaker:the interests of non-human animals, doing the research for this
Speaker:book actually changed my ideas about some of it. I came to believe
Speaker:that we actually do need to give a greater status to the great
Speaker:apes and possibly the cetaceans. Not full personhood, not,
Speaker:you know, that the chimpanzee can walk into the voting booth and
Speaker:vote tomorrow, but that we need to treat them as something more
Speaker:than merely animals or pets or objects. And that actually that moral
Speaker:case has been very convincingly made. So for me, thinking
Speaker:about this very, what seems to a lot of people, sci-fi and unrealistic
Speaker:issue, the issue of how we will come to deal with AI, as well
Speaker:as how we should come to deal with AI, actually made me reassess
Speaker:some of my moral commitments elsewhere. And I found it kind of
Speaker:useful because it gave me a vantage point. And so I guess I'm
Speaker:slightly idealistic in that regard. I do love the tradition of
Speaker:the humanities that asks us to look back at history, to look at
Speaker:literature, to look at speculative fiction, to look at moral
Speaker:philosophy, and then to say, okay, once I take all this on board,
Speaker:are all my views the same? Because if they are, then I'm really
Speaker:not thinking. I'm just processing everything into whatever
Speaker:coherent ideology I had before these things came on board. For me,
Speaker:this book was part of that process of exploration, that
Speaker:process of self-criticism and that process of, I would say, anxiety
Speaker:about where we might be going.
Speaker:You know, it speaks to another thing. And I want to kind of touch
Speaker:on Star Trek, because I love Star Trek. I've watched it forever
Speaker:and seeing some of the humanoid androids that are part of
Speaker:this from Data on through. I mean, there's always the dilemma.
Speaker:And you can see real genuine love from the people towards them.
Speaker:They treat them as equals to some degree, right? There is a sense
Speaker:of equality with Data, for example. But you brought up chimpanzees
Speaker:and, you know, animals of today, right? And I think of things
Speaker:like killer whales, orcas, and they're brilliant animals, they're
Speaker:extremely smart, and yet their livelihood is based on hunting down
Speaker:and killing other animals to eat, to survive. And I mean, we as
Speaker:humans, we hunt, we do all sorts of stuff like that. But then,
Speaker:on the other hand, if we shoot another human being, we call that
Speaker:murder. Whales eat each other sometimes. I mean, they attack different
Speaker:forms. Like, you know, a sperm whale might get attacked by an orca
Speaker:or, you know, a shark. So obviously we want to protect, I mean,
Speaker:desperately want to protect our ecosystem from a survival standpoint.
Speaker:How do you look at that in terms of like organic entities today?
Speaker:Non-human, organic, biological entities that are created by the procreation
Speaker:of their own species. How would you view that? And then I want
Speaker:to pivot to Data.
Speaker:Okay, excellent. So, as you know, human beings throughout history
Speaker:have attempted to justify their special moral position that
Speaker:they have or think they have above non-human animals and above
Speaker:mere things. And sometimes it's done through some holy text.
Speaker:Pick your favorite holy text. That we've been given the earth in
Speaker:dominion. And sometimes it's done because we are supposed to have
Speaker:capacities that they lack. So we reroot it in language. Aristotle
Speaker:says language allows us to reason about what's expedient. I'm
Speaker:hungry. There's some grapes up there. There's a log that I can roll
Speaker:over and stand on. I can reach the grapes. But also to reason about
Speaker:morality. Wait, those are Steve's grapes. And he's hungry too.
Speaker:And if I take them, he's going to stay hungry. And maybe that's
Speaker:wrong and I shouldn't do it. And so Aristotle goes, that's what
Speaker:puts us above other non-human animals. And I have to say we've
Speaker:obviously made some improvements to the philosophy in
Speaker:the 2300 years since he wrote that, but not that many. The basic
Speaker:idea is still there. And I do think that one reason to believe
Speaker:that there is a difference between us and non-human animals
Speaker:is that I don't think there are any lions having debates about
Speaker:the ethics of eating meat around the Thanksgiving table. This Thanksgiving,
Speaker:there'll be lots of vegans and vegetarians sitting down with their
Speaker:families having to explain yet again that they don't think that
Speaker:this is right. That's just not a conversation that one can imagine
Speaker:in the world of non-human animals. So I think we do have some
Speaker:reason for thinking that we do have capacities that perhaps mark
Speaker:us out as using moral reasoning in a way that is really
Speaker:relatively unique on this planet. The difficulty is, of course,
Speaker:that that's why we say if you kill another human being, it's murder.
Speaker:But if the lion eats an antelope, it's like, that's just
Speaker:David Attenborough telling you a story, right? And sort of like
Speaker:we draw that line. And I think it's a perfectly good one. But I
Speaker:think that it also shows us that if those same reasoning capacities,
Speaker:in particular the moral one, start being evidenced by artificially
Speaker:created entities, we're really going to be in a cleft stick. Because
Speaker:then the very thing that we said entitled us to dominion over
Speaker:all of them, the non-human animals, is suddenly something that
Speaker:we share with or potentially share with some other entity. So
Speaker:absolutely, I think those issues are real. But you said you
Speaker:wanted to turn to Data from Star Trek.
Speaker:Yes. One of the things that I found fascinating, and there are several
Speaker:episodes throughout that kind of touch on this. But what happens
Speaker:when a human being's right to self-defense is passed on to an android,
Speaker:a non-biologically created entity, in this space? I mean, there
Speaker:were certain fail-safes built in, certain, you know, protocols
Speaker:that were built in that were hardwired to prevent xyz. But what
Speaker:happens to, you know, anyone's right to self-defense? I think we
Speaker:all at some level recognize that it is a human right. We say
Speaker:human right, right? A human right to self-defense. If you're
Speaker:an oppressed community being dominated by a colonizer and you
Speaker:decide that your freedom is worth more than their right to colonize
Speaker:you, you might find the roots of war. You know, people fighting
Speaker:back. I mean, I imagine slaves, and I never would condemn
Speaker:one of them for going in and strangling their master, so to speak,
Speaker:because no man should be a master of another in that way. And
Speaker:that's a judgment, I believe that's fundamental. And yet at the
Speaker:same time, though, to your point earlier, where it's like, well,
Speaker:hey, we don't believe in slavery, but yet here we have these
Speaker:autonomous entities that we're treating as slaves. What happens
Speaker:to their right to self-defense? I mean, is that pushing
Speaker:the boundaries too much? I mean, what are we talking about?
Speaker:Do they have a right to exist?
Speaker:Right. You bring us back. You mentioned Star Trek, but it also
Speaker:brings us back to Blade Runner. As you may remember, in the
Speaker:penultimate scene, Roy goes back to the Tyrell Corporation
Speaker:to meet Mr. Tyrell himself, the person who created the replicants.
Speaker:Roy says it's not easy to meet one's maker. And it's, of course, a
Speaker:play on words. Are you talking about meeting one's maker,
Speaker:like meeting God, dying? Or meeting one's maker, meeting
Speaker:the person who actually created you? And then Roy passionately
Speaker:kisses and then kills Tyrell in an unforgettable, and really kind
Speaker:of searing scene. And I think that the film really makes you think
Speaker:about whether or not Roy was wrong to do that, or at least maybe
Speaker:both parties were wrong in what they did. I'm kind of still
Speaker:upset about the way that Roy seems to have treated J.F. Sebastian,
Speaker:who was really nice and just made toys. And I don't think J.F.
Speaker:Sebastian survived that. So I was like, I'm kind of down on Roy
Speaker:for that reason. But it does raise the problem. I mean,
Speaker:one of the things about human beings is we make moral decisions
Speaker:poorly at the best of times, but very poorly when we have been
Speaker:told we must be afraid. And if someone can tell you you need to
Speaker:be afraid, they're coming for you, they're going to kill you, then
Speaker:our moral reasoning all but seizes up. I mean, it's interesting.
Speaker:The person who invented the word robot, which comes to us
Speaker:from the Czech, from roboti, was Karel Čapek, in his play R.U.R.:
Speaker:Rossum's Universal Robots. It was in the 1920s. And he immediately imagines,
Speaker:as he coins the word that would become the English word robot,
Speaker:the movement for robot rights. He imagines people
Speaker:both passionately protesting against robot rights and the robots
Speaker:attempting to take over and to kill those who oppose them. So we
Speaker:literally have been thinking about the issue of robot rights and
Speaker:robot self-defense as long as we have been thinking about robots.
Speaker:You can't really have one without the other. And I just think
Speaker:the fact that the very word was born in a play that contemplated
Speaker:the possibility of a robot uprising is just quite wonderful.
Speaker:The Movie what? Space Odyssey 2001.
:A Space Odyssey.
:Yes.
:HAL. The computer.
:Yeah, yes, you've got HAL. And then take it a step further. I mean,
:HAL's made some decisions. I'm going to live, you're going to die
:kind of thing. And we've got Alien. I remember the movie Prometheus.
:It was really, really good. Michael Fassbender was in it. He
:was also in Alien: Covenant, where he was an android, and
:they sent him to do things that didn't require an oxygen-breathing
:individual. He could go where they could not. But
:he had been twisted as well. He was there basically as both, you
:know, a bit of a villain and also a bit of a hero at times. There's
:so many people poking or taking a bite off of this apple through
:these kinds of fictional stories. But they're really good
:at creating kind of the conversation, if you're looking to
:have the conversation. They create that kind of analytical framework
:to give you pause to say, Hmm...
:I think that's exactly right. And one of the things I address in
:the book is that some of my colleagues said, you know, look,
:you're supposed to be a serious academic. Why are you writing
:about science fiction? Why didn't you just do the moral philosophy?
:Why didn't you just come up with the legal test? And so that
:issue I think is a fundamental one.
:Back to Mr. Data and Spock, right? So Spock being an alien and,
:you know, a non-human, you know, humanoid. I mean, I don't even know
:how you describe Spock, right? Vulcan, obviously. And then Data,
:who is definitely an AI, you know, sentient being. I mean, at
:some level Data goes way out of his way to say he doesn't have
:feelings or oh, I'm not programmed for that. But I remember
:all the times of him being on the holodeck, you know, I remember
:all the different times of him crying and feeling things and trying
:to figure out what is this thing I'm feeling and so forth. Help
:me understand the relationship there. Because these two, both were
:deeply integrated and highly vital to the success of not only
:the different starship commanders in, you know, Star Trek
:and all the other spin-offs, but these two are key figures that
:should cause all of us to ask some pretty important questions.
:And I think that falls right into the work you're doing.
:Yeah, I mean, Star Trek was great at, and I think was very honest
:about, taking the best of science fiction treatments and basically
:transferring it onto the Enterprise. So there are themes that
:many other sci-fi writers explored. For Star Trek, it was all grist
:to the writer's mill, and I think that was acknowledged and everybody
:thought that that was a fair deal. And they are there, probing
:two different versions of the line. One is that the thing that gives
:us rights, that makes us special, is the fact that we belong
:to the human species, that we have human DNA. And Mr. Spock is
:in fact half Vulcan, half human. And so some of his DNA isn't
:human. And yet obviously if you said, well, therefore, you know,
:we could kill him for food or send him to the salt mines to labor
:for us for free, then most people would find that a morally
:repulsive conclusion. So what the writers are doing there is they're
:changing one of the variables, the DNA, and making you see that
:it's not just DNA that actually animates your moral thinking.
:And as for Data, it's like, well, he's not even a biologically
:living being, he's actually robotic or silicon-based. And so
:there it's showing, again, that now it's not the DNA issue. Now it's
:like, this isn't even something that is like you in being
:actually a biologically based human being. And again, we wouldn't
:say, oh well, that means we're freed of all moral obligations. Moral
:philosophers, particularly ones who've been thinking about the non-human
:animals, have argued that to say that humans get rights because we're
:human is speciesist. That's as bad as being a racist or a sexist,
:saying that I deserve special rights because I'm white or I'm a guy
:or whatever other spurious tribal affiliation I wanted to base my
:self-conception in. And they say saying humans should have rights
:just because they're human is just as bad. And instead they argue no,
:we need to look at capacities. It's not the fact that you and I are
:having the conversation now and we share a biological kinship that
:gives us moral status. It's the fact that we have a series of
:abilities, from empathy to morality, to language, to intuition, to
:humor, to the ability to form community and even make jokes. And
:those things are the things that entitle us to moral status. Maybe
:because we make free moral choices ourselves, and we are willing to
:be moral subjects, moral patients as well as moral actors, we should
:just turn around and recognize, give moral recognition to, any other
:entity, regardless of its form, that has those same capacities. And
:there I kind of, two cheers for that point of view, right? As to the
:point that if something else turned up that was radically different
:from us but had all the capacities we think are morally salient,
:sure, I agree we would be bound to recognize it, regardless of
:whether the person shared our DNA, regardless of whether the entity
:was in fact biologically based at all. So cheer, cheer. The downside,
:the thing that I don't like, is that it attempts to abstract away
:from our common humanity. And I think that the fight during the 20th
:century for an idea of universal human rights based merely on our
:common humanity, not our race, not our religion, not our sex, not
:our gender, just the fact that we're human: that was a great moral
:leap forward. That's one of the great achievements of the human race,
:in my view. And the thing that makes me unwilling to say, oh, it's
:unimportant whether you're a human, that's a morally irrelevant fact,
:it's as bad as being a racist, is that if we root moral recognition
:solely in mental capacities, what does that say about the person in
:a coma? What does that say about the anencephalic child? They don't
:have any reasoning abilities, at least right now. Does that mean we
:have no moral obligations to them? I would frankly think that claim
:was literally inhuman, and a betrayal of this idea of human rights
:for all humans, regardless of race, sex, yada, yada, but also
:regardless of intellectual capacities. Because the movement for
:universal human rights was partly the fight against eugenics, and in
:particular, Nazi eugenics. And so if we are so influenced by what
:happened in the animal rights debate that we say, oh, it's just
:primitive and irrational to think that there's anything special about
:being human and to give rights to you just because you're a human, I
:think I want to get off the bus there. I want to say, no, I think we
:should give moral recognition to every member of our species,
:regardless of their mental capacities. But I also think that we have
:a moral duty to take seriously moral claims coming from entities who
:might in the future be very unlike us and who we would have to
:confront and go, wow, you know, you don't look like me at all. You
:don't sound like me at all. But do I nevertheless have a moral duty
:towards you? And I think that is a conversation that is not just a
:good thing to have. It's going to happen. And so I guess I wrote this
:book to get the conversation started a little early.
:One of the things that brought us together was the fact that, you
:know, we've had Cory Doctorow on here before, and there's
:a great synergy between both of your worldviews and some of the
:writings you've done. And, God, have you ever seen anybody more
:prolific than Cory in terms of writing? Cory just produces.
:I mean, I have my suspicions about him being an android myself.
:He is, he's one of the most wonderful people I've ever met. And
:if your listeners haven't gone out and bought his books, they should
:go out immediately and do so, both his fiction and his nonfiction.
:He's also just a really frustratingly nice person, too. You
:know, you want someone that prolific to maybe have some other
:personality flaws, but he's just a really good guy. So, yeah,
:Cory. I've definitely wondered about Cory. I mean, possible android,
:no doubt, but I think, you know, Cory quite rightly would be
:someone who, looking at this current political environment, would
:go, what I see is a lot of AI hype. I see people, particularly
:the people heading companies developing AI, making claims that
:are absolutely implausible. I see them doing it largely as an attack
:on workers because they want to replace those workers with inferior
:substitutes, whether it's the scriptwriters or whether it's the
:radiologists, and that you should see this as a fundamental
:existential struggle in which these new technologies are being
:harnessed and being deployed in ways that will be profoundly bad
:for working people. And that's what we ought to be focusing on.
:And I think Cory's absolutely right. I think that that is an incredibly
:deep concern. I do agree with him about the hype. So for him, I
:think this book is kind of like, well, Jamie, why are you doing
:all this philosophizing about the idea that these entities might
:one day deserve rights? Shouldn't you be focusing on, you
:know, the far more real struggles that actual human beings
:are having right now? And my answer to that is I don't find that
:I need to pick one or the other. And in fact, I personally,
:I don't know about you, I find that the more one gets morally engaged
:or engaged in serious moral reflection, it's not like you have
:limited bandwidth. So you're like, okay, wow, now I'm sympathizing
:with non-human animals. Sorry, Granny, there's no more disk space
:for you. I'm going to have to stop thinking about you. I don't
:find that that's the way my brain works. So I think all the concerns
:that Cory raises are actually very real ones. I just think that
:they're not the only concerns. And I think that we very much should
:worry and agitate to make sure that whatever form this technology
:takes, it's one that isn't driven solely by the desire to disempower
:working people. And there's lots of precedent for that. I was
:talking to someone recently who's doing a study of the
:history of capitalism. And one of the things that he was
:talking about was the Arkwright water frame, which was a
:way of spinning thread that was being developed at the same time
:as the spinning jenny. And the thing is that the spinning jennies
:and the other technologies which were superior were still hefty
:enough so that they really needed a lot of upper body strength
:and thus tended to need a male workforce at the time. But the Arkwright
:water frame could be worked by women and even children. It produced
:crappier thread, a worse thread count; you wouldn't see it on Wirecutter
:as your recommendation for sheets. But it had a great advantage in
:that this isn't a group of people who are unionized. This is
:not a group of people who are organized. And so the thinking was, if we can
:manage to use the technology to push production into a less politically
:organized group, then hell yeah, let's do that. And so I think
:that that's the kind of concern that Cory has. To be honest,
:it's one that I share. That's just not what this book is about. This
:book is about the moral issues. And I personally don't think
:that one has to choose between thinking about those two things.
:And I guess I've been around long enough that I've seen confident
:claims that this is a distraction from the real fight
:turn out not to age very well. I remember during the Campaign for
:Nuclear Disarmament, when I marched in lots of rallies in Britain
:campaigning for nuclear disarmament, people would be talking
:about climate change, which back then we called global warming,
:and would be shouted down. It's like, that's ludicrous. Why
:are you worrying about the weather? You know, we could all be
:dying in a nuclear war. Or when people started talking about
:the rights of non-human animals. My God, why are you talking
:about dogs and cats when there are people, etc., etc. You know the
:argument, right? Which is a familiar one: you shouldn't be worried
:about this thing, because there are more important things to be worried
:about. And I just personally have seen people be wrong often enough
:that I now have a little bit less confidence in my ability
:to pick winners, in terms of what's going to look, in the future,
:like it was a really good allocation of my moral energies.
:Let me just tack this on. During this past election, I mean,
:for the last year, our organization fought to bring about
:awareness of the slaughter of the Palestinian people. Watching
:estimates of 40,000 children killed in Palestine and Gaza and,
:you know, bringing that up. And I'm not going to mention the
:name because I'd probably get in trouble at Thanksgiving. But individuals
:that are Democrats would say, why are you worried about that? Our family's
:here in this country, what are you worried about that for? I need
:to know that I'm not going to lose more rights for my daughter.
:So I don't really care. And hearing that was almost like nails
:down a chalkboard to me. That line of reasoning did not work for
:me at all. And I'm hearing you as you're going through this. And
:I'm saying to myself, you know what? There is room to walk and chew
:gum. There is room to do that. So I really value that. Listen, folks,
:the book we're talking about is The Line: Artificial Intelligence
:and the Future of Personhood with my guest, James Boyle. James,
:I'd like to thank you first of all for joining me today. I know
:we're running up against time, but I'm going to give you one last
:offer to make your final point. Is there anything that we
:didn't cover today that you would like our listeners to take
:out of this, aside from buying your wonderful book?
:Well, funnily enough, I'm going to argue against interest and
:tell your listeners, who may have much better things to do with
:their money, that if they don't want to buy the book and they're
:willing to read it electronically, they can: I wanted this book
:to be under a Creative Commons license, to be open access, so
:that anyone in the world would be able to download it and read it
:for free. Because I think the moral warrant for access to knowledge
:is not a wallet, it's a pulse. And that's something that's guided
:me all through my time as a scholar. It may seem sort
:of an abstract or pompous way of putting it, but I believe it very
:deeply. And so basically, everything I write, everything I
:create, whether it's books like this or comic books about the
:history of musical borrowing, they're all under Creative Commons
:licenses and people can download them for free. So if you
:can buy it, please do so. MIT Press was kind enough to let me use a Creative
:Commons license, but if you don't have the dough, then just download
:it and the book's on me. I hope you enjoy it.
:Fantastic. Thank you so much, James. And it was nice to get to
:know you prior to the interview here a little bit, and
:I hope we can have you back on. There's so much more I know that
:you bring to the table that I think our listeners would really
:enjoy.
:Well, thank you, Steve. I very much enjoyed chatting to you, and
:I hope you have a wonderful day.
:You as well. All right, folks, my name is Steve Grumbine with my
:guest, James Boyle. Macro N Cheese is a podcast that's a part
:of the Real Progressives nonprofit organization. We are a
:501c3. We survive on your donations. Not the friend next to
:you, not the other guy, your donations. So please don't get hit
:with bystander syndrome. We need your help. And as we're closing
:out the year of 2024, just know that all your donations are
:indeed tax deductible. Also remember, Tuesday nights we have
:Macro N Chill, where we do a small video presentation of the actual
:podcast each week, Tuesday nights, 8:00pm Eastern Standard Time.
:And you can come meet with us, build community, talk about the subject
:matter, raise your concerns, talk about what you feel is important,
:and we look forward to having you join us for that. And again,
:on behalf of my guest, James Boyle, and Macro N Cheese, my name's
:Steve Grumbine, and we are out of here.