
What Christians Get Wrong About AI – Religion & Liberty Online

We have a saying in my field: Don’t take business advice from people who think the business shouldn’t exist. Obvious? Tautological, even? Don’t be so sure. As someone in the world of corporate engagement, talking to the world’s largest companies on behalf of investors, you wouldn’t believe how often I find myself confronting corporate policies that were made to placate activists who really don’t want the company to exist.

Take for example “net zero”—those activist demands that commit the company to bringing carbon emissions to zero by a certain timeline. How realistic is it for an energy company to commit itself to zero carbon emissions when its core business is … energy? There’s a reason countless companies we have engaged with have explicitly stepped away from net zero targets—it’s called physics.

Another example, supremely relevant to the current day, is that of activists demanding that defense contractors engage in politicized divestment practices. Often submitted by left-of-center religious groups or activist shareholders, these proposals call for companies like Intel and Lockheed Martin to cut ties with countries like Israel, often parroting talking points from anti-Israel groups like the Boycott, Divestment, and Sanctions (BDS) movement. It is, in fact, unreasonable for companies to end corporate partnerships that have delivered tremendous value to America for the better part of half a century. And there's a reason companies have rejected such calls to divest from longtime business partners over political rhetoric—it's called math.

It's fair to say these activist groups pounding on the boardroom door have focused less on physics and math and more on ideology, and in many cases on a belief that the company's core business is somehow immoral. Think back to the fossil fuels example: If activist groups consistently make demands aimed at decreasing the company's share of the energy industry, at what point do they stop seeking the good of the company and start rooting against it?

The past two years of directing part of the fastest-growing corporate engagement effort in America have taught me the critical importance of not taking advice from people who think you're fundamentally wrong. And yet, as a Gen Z Christian looking at aspects of our modern faith culture, I worry that we're not applying this truth to a crucial area of discourse: the intersection of faith and technology.

Last year I read a comment about AI from World editor Daniel Devine, a reporter I very much respect, that seemed to give voice to a dominant perspective on this issue. “When it comes to technology, I’m admittedly a late adopter,” Devine writes. “To this day I’ve never asked a single question of ChatGPT.… It’s not that I dislike tech. But the pessimistic side of me reasons thusly: If I’ve gotten along fine without the latest gadgetry until now, why bother?”

In fairness, that’s not an unusual (or morally lacking) posture to take regarding the growth of artificial intelligence. I live in western Pennsylvania and know quite a few people who feel this way and have experienced no particularly negative effects as a consequence. Yet that cannot, and should not, be the only perspective on the subject. (I’m sure the detractors would agree.) After all, who would actually benefit from such a narrow take?

I’ve read many other articles and heard many lectures purporting to offer a Christian approach to AI that essentially boil down to some combination of the following: (1) an assertion about how little the author has used AI, (2) a section explaining how it’s not human and never will be, (3) a lamentation over jobs lost from AI, (4) several stories about how people have killed themselves as a result of AI chatbots, and (5) a persistently vague call for Christians to be wise. Now I’m all for Christians being wise, rejecting transhumanism, and not killing themselves as a result of chatbots. But I always finish reading these articles feeling as if I’m no more informed than I was when I started reading them—because oftentimes the author wasn’t either.

It’s almost as if AI is a topic where explaining how you don’t use it or know very much about it is considered a demonstration of virtue, as opposed to a demonstration of lack of expertise. Even while learning the journalism ropes in college, I was taught that the goal of a good story or op-ed is for the reader to understand more about a given topic by the end. When I read many examples of Christian commentary on AI, I don’t find myself learning more about AI’s capacities, or even anecdotal evidence of its capacity to create worthwhile things. I just find myself being warned that this could kill me, that it will probably take my job, and that the proper heart posture is a persistent sense of despair.

So what gives? As someone working at an incredibly AI-forward firm, I see the benefits of AI every day—benefits that allow us to compete with glacially moving larger firms that still work to foist ESG and DEI into the corporate space. Furthermore, I often talk with founders and innovators using AI to combat the scourge of human trafficking around the globe. Does that mean AI is all good? Obviously not, but that doesn’t seem to be the dominant understanding in Christian culture today, which appears to be that it’s all bad, inherently exploitative, or downright demonic. 

Which takes me back to that point of expertise. When learning to use any technology—which is to say, a tool, neither all good nor all bad—where should one look for insight on the tradeoffs? The answer: the people who use it and know what the real pros and cons are. As someone who uses AI frequently, I can safely say it's no more human than the iPhone is. But that doesn't mean you don't use the iPhone for email—it just means you don't look to your iPhone to be your friend. Perhaps that's not easy for you. But it can be made simpler if you embrace a basic rule of thumb: Reject both hype-man-ism and doomerism.

Listening only to people who think AI is going to add unprecedented amounts to our GDP, merge us with the stars, and help us transcend our mortal bodies will shape your conception of it in profoundly unrealistic ways. But reading only those who think AI’s primary impact area is making 15-year-olds kill themselves over virtual girlfriends will also skew that conception. I’ve talked to countless fellow young Christians on the subject of AI, gotten the entire spectrum of these opinions, and learned a great deal from doing so. As I wrote at World magazine last year, “Between the polar opposites of chatbot-inspired murder and transhumanist gambits at immortality is a simple question: Can Christians use tools of artificial intelligence in a way that both serves humanity and furthers the rule of the kingdom of Christ? The answer is a fairly obvious yes.”

Above all, I want my generation to be wise when it comes to technology and its evolution in the coming century, and not just refuse to engage with tech and culture, toss around stereotypes and straw men, and pretend that all this counts as wisdom.

This all comes back to where I started: Don’t take business advice from people who think the business shouldn’t exist. And when it comes to the responsible use of tools, take advice from the people who use them, not the people who think not using tools is an inherent indicator of virtue. They’re different presuppositions, and choosing the former is how you rebuild a society that knows the nature of a tool by its telos. And that includes a tool like AI. So let’s take our analysis from people who are adept at using the tool, not those who aren’t because they’re already convinced they’re against it—whatever it is …

