Artificial intelligence has transformed how we think, work, and create, offering tools of unprecedented utility and power. For someone like me, who struggles with organizing thoughts and executive functioning due to neurodivergence, AI has been a lifeline. It has enabled me to articulate complex ideas, refine my writing, and bring projects to completion that would otherwise feel overwhelming. Yet, as I’ve become increasingly reliant on this tool, I find myself wrestling with deeper ethical concerns—not just about how I use AI, but about the intentions and actions of those who own, operate, and profit from it.
The potential of AI to enhance productivity and creativity is undeniable. For neurodivergent individuals like me, it acts as an equalizer, bridging the gap between our vision and our ability to express it. This is not a trivial benefit—it’s transformative. But as with any tool, the ethical implications of its use extend far beyond the user. It’s not enough to ask, “Am I using this tool responsibly?” We must also ask, “Is this tool itself the product of responsibility, transparency, and integrity?”
Recent reports about OpenAI, the company behind ChatGPT, have raised troubling questions. Allegations that copyrighted or proprietary data was used without consent to train the model, combined with broader concerns about opaque practices and the ethical behavior of company leadership, cast a shadow over the technology. If the foundation of a tool is built on exploitation or unethical practices, does using it make me complicit? This question feels particularly urgent when considering how AI profits are distributed: benefiting a few while imposing risks and costs on many. It forces me to confront whether my use of this tool supports a system that conflicts with my values.
The metaphor of a weapon feels apt here. A weapon can be used by the “good guys” or the “bad guys,” but its existence and accessibility carry inherent risks. Like a weapon, AI has the potential to cause harm depending on who wields it and to what end. But unlike a weapon, AI evolves as it is used, shaped by the intentions and input of its users. And herein lies a paradox that I cannot ignore: if ethical people with good intent abandon the platform, what remains of it?
This thought leads me to a second consideration: the consequences of withdrawal. If those who value fairness, justice, and morality stop using AI due to concerns about its creators, the platform doesn’t vanish; it continues to exist, driven by those who lack such concerns. Tools like ChatGPT are built on language models shaped, over time, by the data and feedback of the people who use them. If people like me, who bring conversations about ethics, Christianity, and human dignity into this space, stop engaging, what’s left? The vacuum will be filled by voices that model selfishness, harm, and exploitation.
This parallels debates about gun control: If law-abiding citizens give up their weapons, criminals don’t follow suit—they are empowered by the imbalance. Similarly, abandoning AI due to ethical concerns doesn’t neutralize the technology; it simply cedes its development and influence to those with fewer scruples. In that sense, engaging with AI responsibly becomes an act of stewardship, a way of guiding its evolution toward better, more ethical ends. By asking hard questions, promoting thoughtful dialogue, and modeling integrity, I hope to contribute to shaping AI into something that can serve humanity rather than harm it.
Still, this doesn’t erase the environmental and societal concerns tied to AI. The energy consumption required to train and operate AI models is staggering, contributing to the broader climate crisis. And the speed of AI adoption raises questions about its impact on human labor, creativity, and individuality. Will over-reliance on AI devalue the uniquely human aspects of work and art? Will it create a world where creativity is replaced by replication? These questions weigh heavily, even as I recognize the immense good AI can do.
At its core, this is not just a technological debate; it’s a moral one. It’s about whether we, as individuals and as a society, can wield powerful tools like AI in ways that align with our values. For me, this means staying informed about the practices of the companies behind these tools, advocating for transparency, and supporting alternatives that prioritize fairness and sustainability. It means using AI to amplify good: helping me share my testimony, serve others, and create work that aligns with my faith and purpose.
More importantly, it means remembering that AI is just that: a tool. Like any tool, it reflects the intent of those who use it and those who create it. My role is to ensure that my intent remains grounded in love, justice, and service, while holding the creators of this technology accountable for their own actions. If AI is to be a force for good, it requires not only ethical users but ethical creators, and it’s our responsibility to demand both.
As I wrestle with these questions, I’m reminded of the importance of prayer and discernment. I ask God for wisdom and guidance in navigating this complex terrain, trusting that I will be able to exercise sound judgment as the variables in this moral equation change. Ultimately, my hope is that AI becomes not a weapon of harm but an instrument of good, shaped by the collective input of those who strive to use it responsibly and to contribute positively to their communities.