The illusion of neutrality
ChatGPT feels like a tool. Clean, responsive, almost benevolent. It drafts emails, writes code, and explains quantum physics with the patience of a saint. But behind the interface lies a company whose infrastructure, supply chain, and strategic partnerships are deeply entangled with systems of control and surveillance. OpenAI’s reliance on Microsoft’s Azure cloud platform—hosted across data centers in regions with documented human rights concerns—means every prompt processed by ChatGPT indirectly supports a network of digital authoritarianism. The convenience of a chatbot is not free. It is subsidized by geopolitical compromises.
Microsoft’s cloud expansion includes partnerships with state-linked entities in countries where digital repression is institutionalized. These collaborations enable mass data collection, facial recognition deployment, and internet filtering at scale. When you pay $20 a month for ChatGPT Plus, you’re not just funding model training—you’re reinforcing a tech ecosystem that prioritizes access and influence over ethical boundaries. The subscription model normalizes complicity. It turns users into passive financiers of a system that operates beyond democratic oversight.
Training data, training power
Large language models like GPT-4 are not built in a vacuum. They are trained on vast corpora of text scraped from the open web—much of it without explicit consent. This data includes content from marginalized communities, independent journalists, and activists whose words are now embedded in a commercial product that profits from their labor. Worse, the very act of training these models reinforces existing power structures. The datasets reflect dominant languages, cultural norms, and institutional biases, systematically erasing or distorting alternative worldviews.
OpenAI has not disclosed the full composition of its training data, nor has it established meaningful mechanisms for redress or compensation. The company claims alignment with human values, yet its opacity contradicts the principles of accountability it purports to uphold. When a model generates responses that reinforce stereotypes or suppress dissent, it does so not by accident, but because the training environment was shaped by unequal access, surveillance capitalism, and corporate gatekeeping. Every interaction with ChatGPT perpetuates this cycle.
Infrastructure as ideology
The physical backbone of AI is not neutral. Data centers require rare earth minerals, energy-intensive cooling systems, and logistical networks that span the globe. The extraction of these resources often occurs in regions with weak environmental regulations and exploitative labor practices. OpenAI’s partnership with Microsoft ties its operations to a supply chain implicated in ecological degradation and worker abuse. The carbon footprint of training a single large model has been estimated to rival the lifetime emissions of several cars.
Moreover, the concentration of AI development in a handful of private companies creates a dangerous asymmetry of power. These firms control not only the technology but also the narrative around its use. They define what is “safe,” “ethical,” or “aligned”—terms increasingly deployed to justify expansion while deflecting regulation. The push for AI integration into public services, education, and law enforcement is driven less by public demand than by corporate strategy. ChatGPT is not just a product; it is a Trojan horse for a new form of digital governance, one where oversight is replaced by terms of service.
The myth of harmless innovation
Many users rationalize their use of ChatGPT by framing it as a personal productivity tool—harmless, even beneficial. But technology is never neutral. Every tool embeds values, priorities, and power relations. By choosing to subscribe, users signal acceptance of a model where innovation is decoupled from responsibility. The normalization of AI assistants in daily life desensitizes the public to the broader implications of automation, surveillance, and centralized control.
Alternative models exist. Open-weight releases such as Meta’s Llama and Mistral’s models demonstrate that capable language models can be developed with greater transparency and community input. These efforts prioritize auditability, decentralization, and user sovereignty. Yet they remain underfunded and marginalized, overshadowed by the marketing muscle and cloud infrastructure of Big Tech. The dominance of ChatGPT is not inevitable—it is the result of strategic investment, regulatory capture, and public acquiescence.
Quitting ChatGPT is not a performative gesture. It is a refusal to subsidize a system that trades convenience for autonomy. It is a demand for technology that serves people, not power. The future of AI should not be shaped in closed boardrooms or cloud servers tied to authoritarian regimes. It must be built in the open, with accountability, and with the consent of those it affects. Your subscription is not just a payment—it is a vote. And right now, it’s voting for a future we cannot afford.