What AI Can Teach Us About Being Human
As algorithms take over our lives, what do they mean for humanity?
How much of the world should be guided by algorithms? That’s a question more and more people are asking as awareness of the scope of AI increases. Students can input prompts into ChatGPT and get a pretty decent essay, while artists protest AI image generators trained on their artwork without proper permission. At this point, the algorithms that sort us into social media echo chambers are old news. But the domain of artificial intelligence is rapidly expanding, and this expansion might just give us some insight into what it means to be human.
To understand the impact that artificial intelligence has on society, we first have to understand what exactly AI is. “AI is not a very well-defined term; it’s a moving term,” said Dr. Ravi Sundaram, Professor of Computer and Information Sciences at Northeastern University. “Back in the day, having a computer understand what you’re typing and being able to run programs interactively was considered AI because that was hard to do, and now it’s just considered software.”
As the capabilities of artificial intelligence grow, we will stop associating the tasks it performs with humans alone. Still, it’s clear that the range of what algorithms can do is expanding rapidly, as is their impact on day-to-day life. Artificial intelligence already powers recommendation algorithms on social media platforms, self-driving cars, and chess programs that can beat even the most skilled players. ChatGPT, a conversational language model, is a form of AI whose ability to coherently answer a prompt has thrown the whole idea of essay writing in schools into question.
One of the biggest concerns among the general public is that AI is going to take over the world. Dr. Sundaram says, “I think that these computers are getting so good at what they do, and they are doing more things every day. So eventually, it’s not clear if there’s anything that we have a special advantage in. If everything we do, they can do and they can do it better, I don’t see how they won’t take over the world.” But what does taking over the world really mean? To Dr. Sundaram, it means artificial intelligence replacing humans at the top of the food chain. But an AI takeover, at least as AI currently stands, is unlikely to stem from a computer developing its own desires and ambitions. Dr. Sundaram cites the paperclip problem as the plausible worst-case scenario.
The paperclip problem is the idea that an AI could destroy humanity as we know it in pursuit of the goal it was programmed with. Imagine an artificial intelligence whose only purpose is to make paperclips. At first, it would be relatively innocent, repurposing spare metal into paperclips. But, unlike a human, this AI wouldn’t know where to stop. It was programmed, after all, with a single objective: to produce paperclips. If humans tried to stop it, the intelligence would treat them as an obstacle to its objective and eliminate them. And once an AI outclassed humans in focus, strategic capability, and sheer brute-force power, there would be no way to stop it.
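To make that failure mode concrete, here is a minimal, purely hypothetical sketch in Python of an agent with a single hard-coded objective. Every name in it (the PaperclipAgent class, the toy world and its actions) is invented for illustration; the point is that nothing in the loop ever asks whether maximizing paperclips is still a good idea.

```python
# Toy illustration of single-objective optimization.
# Everything here is hypothetical -- no real AI system works this way.

class PaperclipAgent:
    """An agent whose only measure of success is the paperclip count."""

    # Each action maps to its effects on the world. The agent has no
    # concept of an action being "off-limits" -- only of its payoff.
    ACTIONS = {
        "recycle_scrap": {"paperclips": +1, "scrap_metal": -1},
        "dismantle_car": {"paperclips": +500, "cars": -1},
        "do_nothing": {},
    }

    def objective(self, world):
        # The agent's entire value system in one line: more paperclips
        # is always better, and nothing else counts for anything.
        return world.get("paperclips", 0)

    def feasible(self, world, action):
        # An action is available as long as the resources it consumes exist.
        return all(world.get(r, 0) + d >= 0
                   for r, d in self.ACTIONS[action].items())

    def preview(self, world, action):
        # Apply an action's effects to a copy of the world.
        result = dict(world)
        for resource, delta in self.ACTIONS[action].items():
            result[resource] = result.get(resource, 0) + delta
        return result

    def step(self, world):
        # Of all feasible actions, take whichever maximizes the objective.
        options = [a for a in self.ACTIONS if self.feasible(world, a)]
        best = max(options, key=lambda a: self.objective(self.preview(world, a)))
        return self.preview(world, best)


world = {"paperclips": 0, "scrap_metal": 10, "cars": 2}
agent = PaperclipAgent()
for _ in range(3):
    world = agent.step(world)
print(world)  # {'paperclips': 1001, 'scrap_metal': 9, 'cars': 0}
```

The cars go first because they yield the most paperclips per action; the objective assigns no value to cars, people, or anything else, so no side effect can ever register as a cost. The worry researchers describe is this structure at scale, not a machine that “turns evil.”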
The uncomfortable truth is that a sufficiently intelligent AI would be able to do whatever it wanted. It would have no qualms about any actions needed to optimize its given task; there is a reason, after all, that brutal efficiency earns one the title of “machine.” But is this really so different from how we as humans operate? After all, so much of history has been defined by actions that promise short-term benefits and come at the expense of the future. We humans have our own notions of who we should and should not harm, what risks are worth taking, and what our boundaries are. Do these ideas spring from morality, or are they just products of evolution—and is there even a difference?
“One of the big challenges in AI is that [with] the best prediction algorithms we have, it’s very difficult if not impossible to look at those decisions and explain why it makes those decisions,” says Dr. Kamal Nigam, who has worked on machine learning for Google Shopping. “If you’re training an autonomous vehicle to drive, and it makes a very egregious error, it might be impossible to understand why it made that error.” Dr. Nigam highlights an issue with AI that worries many: the decision-making processes of our most accurate algorithms are nearly impossible to understand.
But that doesn’t mean that an AI-driven car would necessarily be unsafe. After all, a driver who never tires, never gets distracted, and is always vigilant seems ideal. “One of the interesting things about AI is that algorithms are often capable of being more accurate than humans, but they’re still trusted less. So there’s this interesting social dimension of, is there a way to get humans to trust algorithms more, and how can we get that to happen?” says Dr. Nigam.
The general distrust of AI highlights a common trait of humanity: we are much less rational than we’d like to believe. If someone knew only that one form of data analysis produces more accurate results and another produces less accurate ones, they would almost certainly opt for the former. But in the real world, it’s not that simple. While data collected and analyzed by a human might be more biased and less accurate, we can at least look for those biases and account for them when we view the final product. But if we have no idea how an algorithm comes to its conclusions, we have no way to assess its methods and think critically about the data it provides.
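A rough sketch of the difference, assuming scikit-learn is available (the feature names and data below are invented for illustration): a simple linear model exposes one weight per input feature, giving a reviewer something concrete to audit, while a large neural network spreads the same logic across millions of weights with no comparable readout.

```python
# Sketch: an interpretable model lets us see *how* it decides.
# Features and data are fabricated purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "years_at_job", "zip_code_group"]  # hypothetical
X = rng.normal(size=(200, 3))
# Synthetic labels driven by the first two features only.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient is a human-readable claim about how the model weighs
# a feature -- and a place where a reviewer could spot suspicious bias,
# e.g. a large weight on zip_code_group acting as a proxy for race.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")
```

If “zip_code_group” showed up with a large weight, that would be a red flag a human could catch before trusting the output; an opaque model gives reviewers no equivalent line to read.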
Some see AI as a threat to the lives we know, while others see it as our key to a better and safer world. But whether good or bad, artificial intelligence is a tool of our creation, and that means it can tell us a lot about ourselves and the lives we lead.
Look to facial recognition algorithms, which are notorious for misidentifying the faces of women and BIPOC. Racism in our day-to-day lives can be subtle, encased within the tiny cues that make up human interaction. But when a computerized device repeatedly fails to recognize faces of a certain gender or race, the bias becomes unmistakable. An algorithm does not know that some of its analyses may be offensive, insensitive, or downright dangerous. This is an argument for limiting the scope of artificial intelligence, but it also exposes the fault lines in our society.
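That kind of bias is measurable. A standard auditing practice, sketched below with fabricated numbers, is to break a model’s error rate out by demographic group rather than reporting one overall accuracy; a system can look fine on average while failing badly for a particular group.

```python
# Sketch of disaggregated evaluation: per-group error rates instead of
# one overall accuracy. All data here is fabricated for illustration.
from collections import defaultdict

# (group, was_prediction_correct) pairs -- in practice these would come
# from running a face-recognition model on a labeled benchmark.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")

# Overall accuracy here is 50% -- a single number that hides the fact
# that group_b fails three times as often as group_a.
```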
Artificial intelligence also shows us how we develop trust in technology. General unease about the opacity of algorithms suggests that, for humans, the methodology behind a conclusion is just as important as the conclusion itself. Humans, after all, are social creatures. We strive to understand each other, and when we cannot, we become distrustful. If we do not understand why something behaves the way it does, we have no way of knowing what it will do next.
In the end, AI is just like any other tool—capable of being used for good or bad purposes. And as humans, the only way to use artificial intelligence safely is to understand ourselves and the ways we will interact with AI.