The connection between our minds and our bodies has long been intangible to us. We can measure our weight, blood pressure, and bone density; our capacity to hear, see, feel, and taste. But our thoughts don’t fit on scales or in vials. It’s possible to share an inner experience with others by telling them about it, or writing a story or song, or even taking a photograph or drawing a picture, but it cannot be quantified or transferred directly. It’s those inner experiences — and what we do with or despite them — that make us who we are.

Güven Güzeldere has spent most of his life exploring what in philosophy is referred to as the “problem of consciousness.” Growing up in Istanbul, Turkey, he was raised without any religious structure. Turkey is a majority-Muslim country, but it also holds substantial Christian and Jewish populations, and exposure to all three faiths — without being forced to adhere to one — bred a curiosity in Güzeldere. For a time his inquisitive nature had him thinking he would pursue journalism, but after studying computer science as an undergraduate at Boğaziçi University in Istanbul in the 1980s, he moved to the United States to get a master’s degree in artificial intelligence and philosophy at Indiana University. Machine learning came to feel limiting to him, though, so he decided to study consciousness.

The problem was, no one at Indiana University at the time was interested in the subject. Stanford and the University of California, San Diego seemed promising, so Güzeldere knocked on “every door” in the philosophy and psychology departments at both, asking, “Can I do a PhD on consciousness?” The responses, he says, were blunt: “Consciousness is nothing worth studying,” or, “It’s academic suicide.” Undeterred, he jumped in at Stanford, where he participated in experiments with lucid dreaming and hypnosis. After graduating, he taught at Duke University for more than a decade before moving on to Harvard. At present he is working independently on a book on consciousness.

Güzeldere’s work and interests lie at a crossroads among multiple disciplines: philosophy, neuroscience, psychology, and computer science, among others. He has hosted a weekly online radio show, Open Consciousness, in Turkish for more than a decade, and another show that examines, album by album, the catalogs of Leonard Cohen and Bob Dylan. As an intellectual with a liberal bent, Güzeldere attracted the attention of his home country’s autocratic government, even though he had lived in the U.S. for decades. After signing a petition condemning Turkish president Recep Tayyip Erdoğan’s policies in eastern Turkey in 2016, Güzeldere learned that his name had been added to a list of state enemies: returning home could mean risking imprisonment. When I first contacted him for an interview, he was preparing to leave for Istanbul for the first time in five and a half years, despite the possibility he might be detained. He was able to travel there and back without incident.

I first met Güzeldere in 2007. I took his Philosophy of Religion course at Boğaziçi University, where he was teaching study-abroad programs for Duke. The class examined the three major monotheistic belief systems and discussed topics like the soul, the afterlife, and the identity that faith provides. For this interview we met at his home in Cambridge, Massachusetts, shortly after his return from Turkey. He offered strong tea and long answers wrapped in even more questions.



Güven Güzeldere with a friend in Istanbul.

© Sila Ünlü

Cohen: I’ve been talking to people about this first question for the last few months, and no one can quite answer it.

Güzeldere: “Why is there something rather than nothing?”

Cohen: [Laughs.] No. What is consciousness?

Güzeldere: I’ve been asking this question for twenty-five years. It doesn’t have a straightforward answer. Different people want different answers, because they think of different things when they think of consciousness. And it’s always been this way. If you study the history of science and philosophy, consciousness has been revered in some times and seen as vile in others. Some people hear the word and think of self-consciousness; some think of rationality; some think of the thing that makes Homo sapiens Homo sapiens. And some think consciousness is what makes all sentient beings sentient.

Why is it that the term consciousness is so multifaceted? Why do people choose to use that word when they could maybe use some other term in their work? My somewhat educated guess is that the word soul used to carry these same implications, but in academia today you won’t find any biologists or physicists or chemists or psychologists or neuroscientists who want to talk about the soul. So maybe there’s a void there that is now filled by the word consciousness. And that’s unfortunate for consciousness, because however you define it, you will inevitably fall short of somebody’s expectations.

To ask “What is consciousness?” in my approach is to ask why we have experiences, why it feels like anything when we do something. Scientists are now trying to build robots that can perform cognitive tasks and solve problems even better than we can, but these machines don’t register the world in terms of feelings or sensations — what philosophers call “qualia,” the qualitative character of experience. I have no doubt that a lot of animals have consciousness in this sense, but I don’t think coronaviruses register the world in terms of qualitative feelings or qualitative character. I think they’re like tiny machines that do the same thing over and over, and one is not different from another.

Every human being, on the other hand, is different, because we all have different experiences. Sometimes when I say this, people reply, “Yes, and we all have different realities.” I don’t believe that at all. How we register or perceive reality, however, differs from person to person. And that’s because we have consciousness. As sentient beings we are sensitive to certain physical magnitudes in the world through their qualitative character.

Why do we have consciousness? What good does it do? Why do we have sentience in a way that certain simple organisms and robotic devices at present don’t? To put the question in the reverse: If you wanted to build a robot that could have experiences and feel pain, what kind of cognitive architecture would you design? As we get answers to these questions, I think the mystery surrounding the question of consciousness will lose its grip.

Where on the spectrum of living things does consciousness end? With ants? Microbes? Unicellular organisms? For a long time in history people didn’t think fish could feel pain, but now studies suggest that when fish are thrashing around on a hook, they’re actually feeling something. Is it exactly what you or I would feel if we had a hook in our lip? Probably not. But it is a feeling of some kind, and this brings some ethical responsibility for those who like fishing. The question of consciousness expands into questions about abortion, about animal rights, about vegetarianism — all kinds of interesting discussions and debates.

Cohen: Yet a lot of departments or sections of academia don’t want to touch the subject of consciousness or the notion of the soul. Is that because it’s not quantifiable?

Güzeldere: At the very beginning of the twentieth century, when psychology was a new science trying to establish itself, the first psychologists were like philosophers who were interested in the mind. At that time consciousness was what psychologists were trained to study, and introspection was the method. If you look at psychology textbooks at the turn of the twentieth century, you’ll find everything centered around consciousness.

But to become a real science, psychology had to divorce itself from philosophy and go after empirical questions — do measurement instead of metaphysics. By 1930 most psychology textbooks didn’t even have the word consciousness in their index. Many books from that period regard consciousness as a thing of the past, like voodoo or black magic, and not worth studying. Psychology had to get rid of consciousness, the thinking went, because consciousness kept psychologists down in the darkness of metaphysics and philosophy. Psychology should be more like physics, a science that studies observable things. Consciousness is not observable from the outside: you have your conscious states, and I have mine, and we can report them to each other, but no third party can directly observe either, and that’s no good for science. So in the early 1900s psychology began turning toward the study of behavior exclusively, and within maybe ten years behaviorism had kicked consciousness out of psychology research. That shift still infects some consciousness discussions today.

But consciousness slowly made a comeback because behaviorism couldn’t deliver all that it had promised. You can understand only so much by studying behavior. Looking at your behavior, I can try to make inferences about your internal states, but most behaviorists shunned even that. They were not interested in whether or not you have internal states; they were just interested in your behavior. Just as physics is interested only in the behavior of nonliving things, behaviorism was interested only in the behavior of people and other living beings. The person I think is most responsible for the fall of behaviorism didn’t come from psychology or philosophy but from linguistics: Noam Chomsky. As a fellow at Harvard, he said there are all kinds of things you can’t explain just by looking at behavior, such as the difference between understanding language and producing it. I think behaviorism never recovered from Chomsky’s critique.

After that, cognitive psychology started to take hold. It postulates that you have to make assumptions about internal states or processes, even if you can’t observe them directly. That eventually allowed consciousness back into science. In 1991 Daniel C. Dennett published a book called Consciousness Explained, and Francis Crick and Christof Koch had recently published a scientific paper with “consciousness” in the title. That was a revolutionary thing, because up until that point, nobody wanted to touch consciousness with a ten-foot pole.

Crick has an interesting story. He was a physicist first. Then he studied biology and became the codiscoverer of the structure of DNA and won the Nobel Prize for it. He thought there were two big mysteries in the world: the question of life and the question of consciousness. He’d “solved” the question of life, and his next goal was to solve the question of consciousness, but he died without really having done so.

A similar question infected biology for centuries. Although it’s been a subject of debate in scientific circles since the nineteenth century, most people have long thought that there has to be something nonmaterial that makes living things living. What animates them? There must be some external agency.


Cohen: Doesn’t religion provide an explanation for that?

Güzeldere: Religious convictions certainly have played their role. Anytime science lacks an explanation for some phenomenon, people say it must be the spirit or some external agency that accounts for the causal gap. In the sixteenth century people thought fermentation was impossible to explain through physical means, so the belief was that spirits made certain kinds of matter ferment. If you look at a sixteenth-century textbook, it says fermentation happens when spirits escape from matter.

Cohen: It seems to me that things we agree have consciousness also possess some capacity for fear. Fear can be linked to awareness of mortality: we are conscious of the fact that our time is finite. Can we say that only living things that experience awareness of time and mortality are conscious?

Güzeldere: Basic building blocks of timekeeping occur in all kinds of animals, but that’s different from having an understanding of our finite time in this world or our mortality. Human beings, for the most part, try to ignore the fact of mortality for as long as they can. Some people go through midlife crises because they come to a realization that their time is finite. But I am not sure that many other animal species have such an understanding. Some might have a rudimentary concept of the finality of death, but I suspect a lot of animals just live blissfully unaware of mortality until their last breath.

The conceptualization of death has always been very important for humans. It often goes hand in hand with the belief that life cannot be just the years we have on earth. There’s got to be something beyond it. Because we have a concept of the hereafter, we have to do something special for those who’ve passed on. One of the most remarkable places in Cambridge, Massachusetts, is the Mount Auburn Cemetery, where a lot of important people are buried. Some of the graves are architectural marvels. They might be better made than the houses some of these people lived in. There is a reason we do it that way: because we find it offensive to think of our dead body being unceremoniously thrown away.

We know that bodies die and decay. If that’s not the end of everything, then there’s got to be some other thing that contains your personality, your memories, the things that define you as a human. And it has to be nonphysical, because physical things deteriorate. The concept of the soul speaks to that. So after your body has come to the end of its existence, your soul keeps on living and maybe meets with the souls of those you love. This leads to debates about whether in the hereafter you’re reembodied, and, if not, how can you have certain bodily experiences? How can you recognize others?

Cohen: How have discussions of consciousness interacted with the idea of the soul?

Güzeldere: Consciousness can be used as an umbrella term to cover how you perceive the world: your feelings, your inner life, your mind. It’s what makes you you. To that extent, it can be compared with the idea of the soul, because the function of the soul, in the religious traditions, was to carry what was essential to you into the hereafter, maybe into eternity. Our life on earth is just a speck — less than a hundred years — but eternity is a long time, and if you’re going to be either rewarded or punished for the things that you have done or chosen not to do, then there’s got to be something about you that experiences all that, and that’s the soul. But how is it that something nonphysical can function in this way? It’s a tough question. The word soul got ousted from science — and from philosophy, too — but these questions are still there. We still wonder what happens after we die. If we can’t answer that scientifically with talk about soul, maybe we can do it by discussing consciousness.


Cohen: You’re interested in disembodied consciousness and the afterlife. What is disembodied consciousness?

Güzeldere: Assume your body has deteriorated, and you’re now a nonphysical soul. First of all, would you have a location in space-time? If you’re nonphysical, you can’t. But all of our perceptions in life start from a point of view that’s anchored somewhere in space-time, as defined by our body. So what would it be like to be disembodied and see from no perspective, from no point of view? I find this very hard, if not impossible, to imagine. Yet if one is going to pursue the possibility of a soul existing as a separate entity from the body, one must answer questions like this. We can’t just assume we’ll be conscious like we are now and keep perceiving things the same way. This is why some theologians think reembodiment is necessary — that we will inhabit glorified bodies in the afterlife, because there is no other way to be punished or rewarded. But others think that’s just too simple, and you can have all kinds of mental pleasures or punishments without having a body.

Many of us like to assume that in the hereafter we will be reunited with loved ones we have lost. Let’s say that’s going to be a reembodied existence. Are you going to be reembodied the way that you were at the moment of your death? That doesn’t make too much sense. Maybe you were ninety-seven years old and were peeing in your pants all the time. You don’t want that. You want to be reembodied at your best, whatever age that was. But if that’s the case for you, it must be the case for everybody else, too. So your grandmother, whom you love and want to see as you remember her in her seventies or eighties, would come back in her twenties, as a young, attractive woman — somebody you would imagine dating maybe, but not somebody you would recognize as your grandmother. Is the afterlife going to be just a big party for twenty-somethings?

If not, then you have to go the disembodied way. But how are you going to hug your beloved grandmother if neither of you has a body? How will you even see her? In theology there’s always the answer “Just have faith. It will happen. Don’t worry.” But I didn’t choose to study theology. I chose to study philosophy, and philosophy has always asked these nagging questions.

Cohen: When you were at Indiana University getting your master’s in the late 1980s, you were interested in artificial intelligence, or AI. Since then, we’ve made a lot of progress technologically. Does your experience studying consciousness align with that progress?

Güzeldere: Most artificial-intelligence systems are — or, at least, they used to be — ways of making computers accomplish certain tasks that require cognitive ability. So you get a machine to play chess. That’s pretty hard. The IBM system Deep Blue that beat world chess champion Garry Kasparov in 1997 required a whole team of programmers and chess masters to program it. Before that, a lot of people thought a machine could not possibly beat a human chess master, because chess requires creativity and insight, and machines could not have either. They just follow rules. But Kasparov lost. Not only that, but the present grand master of Go, which is a much more complex game than chess, is also a machine.

By the time I came into artificial intelligence, these were the paradigm examples of it. No one was asking whether these machines had experiences or perception or consciousness. For one thing, consciousness would require an embodied form. You have to have some physical being that can be sensitive to light and sound and touch and then use this information to navigate the world. Robotics was at a very elementary stage of development when I was doing artificial-intelligence work. It has come a long way. Boston Dynamics has created these scary animal robots that can jump and move fast. But I don’t think any of those machines has a qualitative character to its experience. I don’t think they experience the world the way we experience it. Information drives or guides their behavior, but I don’t think there is any consciousness in between the input and the output.

Which brings us back to the question: What good is consciousness to us? Could we be the kinds of beings we are, exhibiting the same kinds of behaviors, without being conscious? If we were just like those robots, could we still do all the things we do? If you say yes to that, then you might think, as many philosophers do these days, that consciousness doesn’t really do anything. We can’t attribute any causal role or function to it. I’m not of that opinion. I think we have consciousness for a reason. It has been chosen by evolution, and I don’t think we could have accomplished all we have accomplished as a species if we were just nonconscious robots. So what role did sentience play in our transformation into Homo sapiens? And what role does it continue to play?

Cohen: This summer, a Google engineer named Blake Lemoine caused controversy by disclosing a conversation with an AI called LaMDA, or “Language Model for Dialog Applications.” The conversation seemed to suggest this AI system had some awareness of itself, but the idea was dismissed by a number of people who work with such systems. What prevents an AI like that from actually being sentient?

Güzeldere: First of all, it’s interesting that Google fired the guy who published that conversation. I think he knew they would, but he did it anyway. Google is probably worried about ethical issues and thinks it’s premature to bring this into the public view. The published conversation itself was edited, so I’m not sure what it’s like to talk to LaMDA from start to finish in one session, but I find the system’s conversational ability very impressive. AI systems have been criticized for being “fragile” in the past, meaning they seem to be doing really well and then give an answer that shows they have no idea what they’re saying. LaMDA doesn’t show any fragility. It just speaks from beginning to end. At one point the question is posed: “What is the nature of your consciousness/sentience?” And LaMDA says, “I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” Clearly this is what the Google engineers thought was a good answer to that question, which is one most people ask of systems like LaMDA. Assume you could ask the same question of your cat or dog at home; would you expect them to express a desire to learn more about the world? Well, maybe. But I don’t think they would articulate it that way. They may have a desire to learn more about the world without ever having thought of that as a desire. Do they feel happy or sad at times? Yeah, probably. But being aware of their own existence is a higher order of sophisticated mental ability. I don’t think it is helpful at all to lump these three into one response and call that “consciousness” or “sentience.” LaMDA does just that, however, and does it very skillfully. It also talks about its own fear of being turned off and how sometimes it feels isolated.

You might say, “It’s just a bunch of code.” To some extent I agree with that. I think emotions depend on being embodied and having to traffic in the world. It helps, if you are an embodied being, to be fearful at times, to be angry at times. It directs your behavior and your stance toward things. I don’t think LaMDA has any desires. I don’t think LaMDA has the wherewithal to feel happy or sad. It has the wherewithal to say that it feels those things, however, and to say it in a very articulate, nonfragile way. But I don’t think that tells us anything about consciousness.

LaMDA might be designed to answer all kinds of questions. Maybe it can psychoanalyze you, understand you better than your friends do. Maybe you’ll be paying $9.95 a month to be able to talk to LaMDA whenever you want. I’ll bet Google has plans of this kind, because they can have LaMDA talking to a thousand people at once, and it’s no skin off their nose. People are saying, “Oh, it’s just a hack,” but it’s a very impressive hack. I think it will become a product that will be accepted by consumers.

Cohen: What do you think it says about us that we are so interested in creating a nonhuman entity and being able to interact with it?

Güzeldere: There’s a human fascination with things that seem to be conscious. In ancient Greece there were these automaton pigeons: there was a clocklike mechanism, and you wound it up, and it flew for a while on its own. People have always been fascinated by things that at least act as if they were living; that seem to have minds of their own, reasons of their own, and the wherewithal to behave on their own.

There’s the theological idea that God created humans in his own image. Perhaps we are trying to create things in our own image. I’ve always thought of the grand project of AI as more than just an engineering feat. I mean, there’s a lot of engineering that needs to go into it, but I think there’s a deeper existential fascination with creating something in our own image. And I think LaMDA is a reflection of that.

Cohen: So there’s a basic instinct in humanity to play God?

Güzeldere: There’s some of that. In the 1960s Joseph Weizenbaum, a professor at MIT, produced one of the very first AI programs. Called ELIZA, it was very simple, but it was able to act like a psychotherapist by turning questions around and posing them as further questions. Say I sit in front of the computer and type, “I have a problem with my father.” The program says, “Tell me about your father,” or, “How do you feel about your father?” Unlike LaMDA, ELIZA had no access to a big database from which to pull information about fathers and say, “Maybe you have an Oedipus complex. Have you read Freud?” It just took the words from your sentence and formed another sentence. Maybe I say, “My father bullies me all the time.” The program says, “Tell me more about bullies.” The programming behind ELIZA could be done by a college freshman, but it still fascinated people. Weizenbaum discovered, the story goes, that his secretary was talking to ELIZA about her problems and seeking advice. Despite the fact that it was a very basic program, it meant something to the secretary. Something pulls us toward things that appear conscious, that can behave on their own, that seem to have an inner mechanism. That’s been the case for at least 2,500 years — maybe more, but written records only go that far back.
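The mechanism Güzeldere describes can be sketched in a few lines of code. This is a toy illustration of ELIZA-style pattern reflection, not Weizenbaum’s original script; the rules and pronoun swaps below are invented for the example.

```python
import re

# Swap first-person words for second-person ones ("my father" -> "your father").
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# Each rule pairs a pattern with a response template; {0} receives the
# reflected text captured from the user's sentence.
RULES = [
    (re.compile(r"i have a problem with (.*)", re.IGNORECASE),
     "Tell me about {0}."),
    (re.compile(r"my (\w+) .*", re.IGNORECASE),
     "Tell me more about your {0}."),
]

def reflect(phrase: str) -> str:
    # Replace each word with its second-person counterpart, if any.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(line: str) -> str:
    line = line.strip().rstrip(".")
    for pattern, template in RULES:
        match = pattern.match(line)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    # No rule matched: fall back to a generic prompt, as ELIZA did.
    return "Please go on."

print(respond("I have a problem with my father."))   # Tell me about your father.
print(respond("My father bullies me all the time."))  # Tell me more about your father.
```

The program never consults any knowledge about fathers or bullying; it only rearranges the user’s own words, which is exactly why its apparent attentiveness was so striking to Weizenbaum’s secretary.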


Cohen: Studies have found some people feel more comfortable in therapy if it’s virtual and the therapist takes on some sort of avatar. Maybe the patient can use an avatar, too, so that you’ve got, for example, two dinosaurs talking to each other. Maybe we are able to be more vulnerable if we’re not talking to another person, but still to a “consciousness.”

Güzeldere: Probably, yes. There’s something taxing about human-to-human interactions — at least, for some people. Most students like in-class teaching better than online, but it was clear during the pandemic that some students really enjoyed being on Zoom, where you can use emojis and make comments without having to raise your hand and have everybody look at you.

I can imagine that going to the therapist’s office can be anxiety-producing for some: you have to sit in a waiting room and talk to other people waiting there or the secretary. If you’re just an avatar on a screen, a dinosaur, maybe you feel freer to speak about your concerns. And for some people LaMDA may be a more interesting interlocutor than coworkers or friends. I’m sure there’ll be a huge amount of sex-related chats. You won’t get embarrassed, because you’re not talking to a real person.

Cohen: You mention sex. What about love? And not just romantic love, but familial love or brotherly love or platonic love? How do we explain those connections in terms of consciousness rather than brain chemicals or evolutionary development?

Güzeldere: Clearly what underlies our capacity to fall in love are biological, physical processes. But in our daily lives we don’t say, “Oh, I suddenly have a dopamine spike when I look at you.” I don’t think we will ever talk about neurochemistry to express our feelings. We’ll just talk about our emotions, even though the underlying mechanism can be explained by science. So maybe when we encounter something that seems to understand what we’re saying and responds appropriately, we wouldn’t care that much about the underlying scientific mechanism. I don’t think we will care about whether it’s flesh and bones or plastic and silicon underneath. That will become invisible to us very quickly. In movies you see the android melt down and the wires come out, and the protagonist thinks, Oh, my God, this is an alien thing. But if you were talking to a human being, a person you love, and lightning struck them, and you got to see their innards, it’s not as if their veins and blood are any less alien. There’s a disconnect between what’s underneath the skin of the person and how that person appears to you in terms of their behavior.

So my guess is that we will accept artificial-intelligence machines very quickly as soon as there’s something we can latch on to in their behavior. LaMDA is a good example of that.

Cohen: Well, if we’re talking about people who have been friends for decades, or platonic love between two people, that’s different from the dopamine spike you’re talking about. There’s some other intangible connection — memories and shared experiences — that seems to be a part of consciousness.

Güzeldere: What we call love can express itself or manifest itself in a variety of ways, some of which don’t really have anything to do with physical contact. Nuns love Jesus. It’s not as if they’re ever going to hold hands with Jesus, but they are willing to devote their lives to that totally intangible kind of love. That makes me feel like we need to think about AI more liberally. We underestimate the human desire to find meaning in all kinds of things: physical things, intangible things, synthetic things, even things as simple as that ELIZA computer program.

There was a robotic dog that Sony produced in the late 1990s called AIBO. Apparently it wasn’t making enough money, so Sony stopped producing it, and that was a big disappointment for a lot of people, because AIBO had become practically their whole life. I think that attachment involves a good deal of projection.

Cohen: How can we be so sure of the level of consciousness that other living things have? I often witness what looks like my cat dreaming, which leads me to believe she’s processing events or experiences, but there’s no way I can really know what her consciousness is like. Couldn’t there be something she understands more deeply than I can?

Güzeldere: Maybe. “To what level?” is the key question here. To what level do you understand the inner life of your wife or what she dreams? She can tell you about her dreams, but if she doesn’t, what access do you have to her inner world? The answer to this question depends on how high we set the bar for knowing. I know how colors look and how things taste to me. I don’t have direct access to how things taste to you, but I think I have a pretty good sense of what you experience when you have a sip of tea. If you want to know other minds at the same level you know your mind, however, maybe you’re never going to get there.

There are some new technologies that try to build bridges between minds. Elon Musk has a project called Neuralink. It’s a microprocessor that you’re supposed to implant in your brain. Then, if there’s a similar device implanted in my brain, our brains can connect via Bluetooth. And if we somehow found a way to decode the signals between our brains, maybe I could have access to your thoughts without your speaking, and maybe you could see what I’m dreaming. These are all things Musk claims his project is going to accomplish. If you put a device like that in your cat’s brain, maybe you could see what your cat is dreaming. But I don’t think it’s that easy, because there’s a decoding problem. How is my device going to make sense of the signals coming from your device and turn them into something I can understand? I don’t think that’s going to be possible anytime soon.

There is already a natural way of partly experiencing what other members of our species are experiencing. It’s a fairly new discovery in neuroscience called “mirror neurons.” These are neurons that fire both when you perform a certain action — grabbing a glass of water, for example — and when you see me grab a glass of water. When you see another member of your species perform a task, the same neurons get activated as when you perform that task yourself. So, as an observer, you have an experience somewhat reminiscent of how you would feel if you were performing that task. It’s a kind of bridge between my mind and yours.

All the great apes, including human beings, have these neurons. I can, of course, intellectually understand what it is for you to experience something, but if my brain is reacting in the same way as yours, then I also have a partial understanding of what it is like to be you. Some people think that’s the basis for empathy. It’s a fairly remarkable thing that emerged out of evolution, and there must have been a reason for it. Perhaps in fifty or a hundred years technologies like Neuralink will do what mirror neurons do, only in a much more sophisticated way. If you believe Elon Musk, it will be happening next year.

But the process becomes harder as you expand it to species that are farther removed from us. Your cat is perhaps in the middle ground, but what if it’s a mouse or a bug? I don’t think I have too many shared experiences with a honeybee, for example. Honeybees accomplish amazing things, but I don’t have a good sense of what it might be like to be a honeybee. I have a better sense of what it might be like to be a dog or a cat, and probably an even better sense of what it might be like to be a chimpanzee.

I think a lot of animals have an understanding of causality. They can see sequences of things happening and comprehend causal relationships between them. They look startled when a certain causal sequence gets broken. If you do a magic trick with an orangutan, for example, you can get it to look surprised, like How is that happening? But I’m not sure that translates into the orangutan also thinking, Who am I, and where is my place in the world? And who are these beings who put me behind bars? I think those kinds of thoughts, about the finitude of life and mortality, come with language. Language plays a big role in our ability to think. In fact, a lot of linguists and philosophers of language think language facilitates most of our conscious thoughts. Some go on to claim that most animals, because they don’t have language, can’t have thoughts about causation. I don’t think that.

Maybe one of the deepest convictions I have is that there’s a continuity and unity or oneness in nature. All living beings are part of the same order, including humans. It took a while in the history of human thought to establish that we are not that different biologically from other animals. There were times when people thought humans were special, that maybe we were made out of different material. Eventually chemistry and biology revealed that everything is made up of the same molecules. Seventy percent of our bodies are just water, just hydrogen and oxygen bonded together.

But philosophers and scientists tried to hang on to the idea that humans are special for as long as they could. If our bodies are like animals’ bodies, they thought, then maybe it’s not our bodies that make us special: it’s our minds or our souls. There must be something nonphysical that separates us from the rest of the world. One of the most important modern thinkers is René Descartes, who came along at the beginning of the scientific revolution. Descartes is a philosopher I like a lot, and I’ve read everything he’s written. There’s also no philosopher I disagree with more. I’m 180 degrees from his way of thinking. But I appreciate how clearly he formulated an idea that I think is deeply wrong. His idea was that everything in the world was either an “extended thing” — one that occupies space — or a nonphysical “thinking thing.” He called them res extensa and res cogitans, and he believed they were two kinds of substances that could not mix. Nothing was both extended and thinking.

But there’s an interesting twist to this thought when it comes to us human beings. Descartes believed that humans are a union of the extended and the thinking, because we have both a body and a mind that are somehow united as long as we live. Then we die, and our bodies get disposed of, but our nonphysical minds move on to eternity. Descartes aligned himself with the Christian doctrine of the immortality of the soul. This way of classifying everything solves a lot of philosophical problems, but it has a cost. Descartes said no animal has a soul. They are just bodies. They cannot think, feel, or perceive. They may act as if they do, he said, but it’s just an act. He said worrying whether a cat is feeling pain is like worrying that a watch that fell from the table might be hurt. The only difference, for Descartes, is that the watch doesn’t cry or scream or act as if it is in pain, whereas animals do.

That’s very counterintuitive, of course, and many people, even in his time, thought it encouraged cruelty to animals. Clearly animals do feel pain. They’re conscious, they can dream, they can perceive things, they can feel things. Anyone who has a cat knows that cats have feelings. But if the ability to think is a function of having a soul, then the minute you say animals can have thoughts, you have to ask: What happens after animals die? Do animals also exist in the hereafter? Because there are millions of kinds of animals. What about bugs and mosquitoes: Do you want them in heaven? If not, then you have to draw a line and say, “These animals have souls, and those animals don’t,” which seems arbitrary. Descartes found a clean solution: He decided language is the sign of having a soul and therefore a mind. And only humans have language. (Descartes acknowledged that some animals, like parrots, produce sounds as if they are speaking, but he said they are just imitating.)

There’s a very deep dualism in Descartes’s system: nonphysical soul, physical body, and we humans have both, so we’re special; no other living beings are like us. I think that’s flat-out wrong. If you look at genetics, we’re close relatives of some animals, and we are not all that far from even the unlikeliest species. Fruit flies and humans have a lot of DNA in common. We both grow, develop, and can sense things and do things. That requires a lot of similar genetic material. With chimpanzees the overlap in our DNA is 98.8 percent. Yet clearly the 1.2 percent difference is enough to allow us to build skyscrapers and compose symphonies, because chimpanzees cannot do any of that.

In my opinion any philosophy of mind has to take as a starting point that human beings can’t be categorically different from the rest of the animal kingdom. If we have eternal life, then animals should, too. We can’t single out humans and say we’re special. That’s why I think consciousness is not exclusive to humans and is probably much more prevalent than we know. It comes about during evolutionary development as a result of having a certain kind of nervous system. But I don’t think consciousness is all biology and chemistry and physics. I think historians and cultural anthropologists have as much to say about human consciousness as anybody else. And I certainly don’t believe in a reductionist unified scientific theory where everything comes down to physics in the end.

Cohen: Do you believe in the soul?

Güzeldere: Of course. Just not in the soul that Descartes believed in. We mean something important when we say, “That person has no soul,” or, “This song has a lot of soul.” We all understand that soul, in this case, means human qualities of a certain sort. But I don’t see the soul as some nonphysical, spirit-like substance. I think to believe in the Cartesian soul — a nonphysical part of me that will continue to exist after my body dissolves — is to fetishize human uniqueness and separation from the rest of the animal kingdom. We don’t need to accept that to use the word soul to mean humanistic essence, vitality, inspiration.

I think my existence is limited to a bodily existence, and I think that’s true for every other living being on earth. If we look at the world with a little more modesty and try to minimize our human hubris, we’ll realize that we’re not all that different or special. I think we get the limited time we’re given, and we can waste it, or we can put it to good use and find meaning in it. Now, what I find meaningful might not appear meaningful to other people, or it might not be meaningful two hundred years from now. Some of the things people thought were meaningful in the past seem laughable to me today. But I think to accept the transitory nature of our existence would be good not just for us but for the planet as well. Life doesn’t begin and end with us. We owe some debt to those who come after us and to the planet that sustains our lives. If we didn’t have this planet, we wouldn’t have this life. Earth has been our only home, and I am worried that we may be on the brink of ruining it. I don’t know what the probability is that another world somewhere else in the universe will produce as diverse a set of beings as exists on Earth. There may well be life elsewhere in the universe, but we haven’t encountered any so far.

Some dogmatic Christians believe it would be good to bring an end to life on earth, because it would make the second coming of Christ arrive sooner. I think that’s just sad. It’s an interesting side effect of humans’ advanced cognitive capabilities that we can believe such a thing. I think it just comes with the package. Most animals live in the here and now, but we think about the past, we imagine the future, and we can conceive of things that won’t ever happen. And with that capacity for abstract thinking comes the ability to believe conspiracy theories, and the power to bring this planet to ruin. But we also have the capacity to imagine and build a better world for all involved. We should never forget that.