I hear it all the time: “AI is going to take our jobs!” “AI is going to make schoolwork obsolete!”
AI is not taking anything from us that we aren’t already trying desperately to avoid and give away.
In fact, AI is literally nothing without us. It can’t build itself, train itself, or use itself. Not yet, anyway.
But this is a really good time to get back in touch with what we humans do uniquely well. We use our imagination to describe and communicate about abstract concepts. We make decisions and form relationships using both reason and emotion. Sometimes we even CARE.
Unless we don’t. If we’re not going to do the uniquely human thing and reflect on our own lives with feeling, or empathize with others, and we’re only going to compare our intelligence with artificial intelligence on the basis of computational speed and efficiency, then we no longer belong at the top of the intelligence food chain.
The question we should be asking is whether our own behavior is either (a) uniquely human and therefore impossible for a computer program to accurately imitate, or (b) a job that can be done by a machine. For example:
The other day I called Southern California Edison (SCE) because their equipment caused power surges that fried my appliances. I needed to make sure the “neutral issue” (read: poltergeist) was fixed before I shelled out for replacements. This was a bit more involved than the typical “I’m calling to report an outage” call that representatives are trained to handle. I talked to two different representatives. They were very different from each other.
Maria was an older woman who was kind and empathetic. She was clearly listening. She asked questions, and occasionally she repeated something back to make sure her notes were accurate. I felt understood. Even though the situation was stressful, she made the call pleasant, and according to my iPhone the whole thing took two minutes and nineteen seconds.
That was my second call. The first was very different. Gerardo read me a script about standing 100 feet away from a problem I was not having. Then he refused to give me information I didn’t need. (You read that right.) Finally he became frustrated when I asked to talk with a supervisor. He tried to lecture me about patience — when I impatiently interrupted him he hung up on me.
AI cannot replace Maria. AI should immediately replace Gerardo.
In a case of life imitating art, some people have started using ChatGPT to reply to text messages. If that’s you, and that is really the extent of the care you bring to your relationships, your girlfriend is better off texting with a machine than talking to a wall.
Speaking of ChatGPT in school (did you click on the art link above?), it’s great for mindless busy work, which I think is part of what makes school so miserable for so many in the first place. I avoided textbooks like the plague when I taught high school English. ChatGPT would have been worthless to the students in my classes, because we communicated about things we really wanted to learn, create, and imagine.
Even a standardized curriculum can be human and original. Instead of copying an old worksheet with multiple-choice or fill-in-the-blank questions about grammar or literature, which (duh) invites students to take shortcuts, I asked questions that I had never heard before: What kind of asshole uses multiple semicolons in the same sentence? Based on the text, do you think the author is more likely to get up early and do yoga, or reach for an open bottle with a half-inch of flat beer in the bottom from the night before? Yoga in a class or at home? What brand of beer? Good luck finding those answers on SparkNotes.
I also changed my questions all the time because I changed my mind. Last year’s notes wouldn’t help anyone. They wouldn’t have needed that kind of help anyway because the goal wasn’t a test score. The whole point was to ensure that we all understood what we were talking about. Mistakes and spontaneous conversations were part of the process. When we weren’t talking in real time I asked students to write on their blogs and websites. If you’ve ever read two authentic paragraphs from the same person, you know whether the third paragraph is real or written by a bot. (Bonus: AI will never know the artful mentor’s joy of watching the nonverbal contortions of an unprepared learner whose face is trying to find information that isn’t there.)
Good News: It’s Our Responsibility
So where does this leave us? Why are so many people so anxious about AI?
The fault is not in emerging software, but in our habituated rituals. I propose a simple rule: If it’s heartfelt, it can’t be replaced by AI; if it’s canned and replicable, it can be replaced by AI. If I call you with a question, or if you disclose a tender feeling in a text, or if we read something together, our expressed responses ought to come from an authentic place within ourselves. The moment we reach for a script or a shortcut we abandon our humanity and we deserve to be replaced by something cheaper, faster, and more reliable.
There was a time when we were vulnerable with each other. When our mistakes signified our fallibility and our desire to be better for each other. When we laughed, and felt seen, and fell in love, if even for a moment, with someone who shared reality with us, however absurd.
Learning requires memory. In every field of inquiry, we must commit to memory those basic terms and concepts upon which we can build more advanced understanding and skill.
History seems like a pretty obvious place to start, because it gives us memories we can benchmark against each other. I remember exactly where I was and what I was doing on the morning of September 11, 2001, because I recognize the magnitude of what happened that day and how it changed the world.
Near-term history is comforting because it’s objectively verifiable. To me, getting fact-checked is a form of bonding. If I’m looking at a childhood photograph, and I say, “Oh, I remember that Thanksgiving,” my mom will call out my mistake by pointing to the facts: “I didn’t make that sweater for Uncle Ronnie until his birthday, which is in December, so that must’ve been New Year’s Eve. See the streamers in the background?” Denying the evidence would be stupidly stubborn and create unnecessary tension.
In today’s society, people have a problem that AI does not. Many of us tell stories that are simply not supported by the data or the facts in the world. (Example: “I don’t need to wear a mask or get a vaccine because God/my rights/hoax/conspiracy/etc.”)
(TRIGGER WARNING: If you are sensitive to heights, gravity, or suicidal ideation, please skip the next paragraph.)
Not everything is controversial. You may not like the idea of gravity, but it doesn’t care what you think and it’s definitely not subject to your opinion or your interpretation. Are you really willing to give your life to disprove gravity? You have exactly one shot at the title. Find a tall building and become a legend. Right. I didn’t think so. But this is the kind of thinking people with no skin in the fight do every day. It is not OK for anyone to imagine magical insight that explains what we can see in ways that only they can see. Fracturing our shared sense of reality is causing irreparable harm.
What happens when we disagree about the unequivocally illegal, unwarranted vandalism and physical attacks at the nation’s Capitol? We all watched the events unfold on TV just a couple of years ago. In just 24 months, the number of Americans who approve of the events of January 6, 2021, more than doubled. That statistic makes me sick with fear, especially when combined with the fact that only 13% of American eighth graders are proficient in history.
Bottom Line: If we’re not willing to be good humans — that is, if we’re not willing to care for ourselves and each other by remembering and understanding the past, empathizing through the present, and sharing visions of the future — then we have no business competing with the machines that are better at doing all the other stuff.
You can’t be better than AI at being AI, but you can be a better person.