While delving into the online discussions about this year's emerging AI models, I stumbled upon a concerning trend. Among the sources I typically align with, there's a common worry about the potential displacement of human jobs by machines. Which makes sense, after all: the people I follow online are by definition content creators, and if AI creates the content of the future, they are the ones who will be displaced.

However, what struck me as particularly intriguing was that, among the content creators I follow but don't agree with, there was a conspicuous absence of worry about their future income. The fundamentalist Christians and anti-vaxxers weren't wondering about their employment. Their immediate focus was how to convert AI to their cause. They want to make the AI of the future a pro-life, anti-vax, creationist Bible thumper.

The majority of skeptic and humanist groups rely on revenue from advertisements and subscriptions. They craft content that people are willing to pay for, or, at the very least, tolerate ads to access. By contrast, Christian and anti-vax groups sustain themselves primarily through donations. Their supporters contribute to ensure that their content reaches a broader audience. For them, the concern isn't being undercut by machines that produce better content faster; they're content as long as the content gets made.

While most of their attempts to convert AIs were silly and worked only in the short term, if at all, the effort does raise concerns about the long-term trajectory of AI. Large language models are trained on a foundation of human-generated text. Early models, trained on the existing internet, tended to reflect mainstream viewpoints. But to remain relevant, an AI must be continually retrained and updated, and that training will be limited to content that's easily accessible, which usually means unencumbered by paywalls or by potential future legal barriers to data scraping.
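To make the accessibility point concrete, here is a minimal sketch in Python of the kind of filter a scraping pipeline might apply before a page ever reaches a training corpus. Everything in it (the is_scrapable helper, the PAYWALL_HINTS list, the example URLs) is hypothetical for illustration, not drawn from any real training pipeline.

```python
# Hypothetical sketch: keep only pages a crawler can freely and
# permissibly fetch, dropping anything blocked or paywalled.
import urllib.robotparser
from urllib.parse import urlparse

import requests

# Crude markers that a page is behind a paywall (illustrative only).
PAYWALL_HINTS = ("subscribe to continue", "metered paywall", "payment required")

def is_scrapable(url: str, user_agent: str = "ExampleTrainingBot") -> bool:
    """Keep a page only if robots.txt allows fetching and nothing looks paywalled."""
    parts = urlparse(url)
    robots = urllib.robotparser.RobotFileParser(
        f"{parts.scheme}://{parts.netloc}/robots.txt"
    )
    try:
        robots.read()
        if not robots.can_fetch(user_agent, url):
            return False  # the site explicitly disallows scraping
        resp = requests.get(url, timeout=10, headers={"User-Agent": user_agent})
        if resp.status_code in (401, 402, 403):
            return False  # login wall, or a literal "402 Payment Required"
        return not any(hint in resp.text.lower() for hint in PAYWALL_HINTS)
    except (OSError, requests.RequestException):
        return False  # unreachable pages contribute nothing to a corpus

# Hypothetical candidates: a donation-funded site with no access controls,
# and a subscription site that walls off its archive.
candidate_urls = [
    "https://example.org/free-advocacy-essay",
    "https://example.com/subscribers-only-analysis",
]
corpus = [u for u in candidate_urls if is_scrapable(u)]
```

A page that fails any of these checks simply never enters the corpus, which is all it takes for freely accessible, donation-funded content to end up overrepresented in the training data.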

My suspicion is that groups focused on conversion, sustained by donations, will keep their content unrestricted and readily accessible, while sources that rely on subscriptions will be less available for future AI training. If so, all the conversion-minded groups need to do to shape future AI into their ideological ally is to keep generating content while everyone else struggles to stay afloat.

These groups could be even more disruptive if they moved on from silly attempts to argue an AI into conversion and toward something more technologically savvy. I envision a future where a new field, analogous to search engine optimization, emerges, in which experts specialize in manipulating AI training pipelines into repeatedly scraping specific content. The result could be an entire industry devoted to shifting the AI perspective away from mainstream opinions and toward a particular cause.

We need to ensure that voices supporting critical thinking and humanism continue to influence this developing technology. That starts with supporting not just the content we want to read but the content we want to spread.