By Dharmesh Prajapati | March 6, 2026
The world of Artificial Intelligence is facing its first major “identity crisis.” For years, OpenAI’s ChatGPT was the undisputed king of the digital frontier. But today, the throne is shaking. Recent data reveals a staggering surge of roughly 295–300% in app uninstalls over a 72-hour window, triggered by a combination of a controversial military deal and the escalating Iran-Israel conflict.
While the Middle East remains on edge, a different kind of war is being fought on our smartphones. Here is the deep dive into why users are hitting the ‘delete’ button.
1. The Pentagon Deal: A Breach of Trust?
The primary catalyst for this mass exodus was the news that OpenAI signed a strategic partnership with the U.S. Department of Defense (the Pentagon). The deal allows OpenAI’s advanced models to be deployed on classified military networks.
For many users, this was a “red line.” OpenAI, which started as a non-profit dedicated to “safe and beneficial AI,” is now accused of “selling its soul” to the military-industrial complex. The optics became even worse when rival Anthropic (the maker of Claude) publicly rejected a similar deal, citing concerns over AI being used for mass surveillance and autonomous weaponry.
2. The ‘Claude’ Migration
As ChatGPT’s ratings plummeted—with one-star reviews jumping by 775%—competitors like Claude have seen a historic rise. For the first time, Claude hit the No. 1 spot on the U.S. App Store. Users are voting with their thumbs, moving to platforms they perceive as having stronger ethical guardrails against military use.
3. Allegations of Bias in the Iran-Israel War
The ongoing conflict between Iran and Israel has put AI chatbots under a microscope. Users from both sides of the geopolitical divide have reported “systematic biases” in how ChatGPT handles sensitive queries:
- Casualty Reporting: Reports suggest that ChatGPT provides different fatality estimates depending on the language of the prompt (Arabic vs. Hebrew).
- Information Bubbles: Many users feel the AI is “sanitizing” or “filtering” news based on Western diplomatic interests, leading to accusations of a “pro-West” bias that ignores the ground reality in Tehran or Beirut.
4. Sam Altman’s “Sloppy” Admission
OpenAI CEO Sam Altman attempted damage control, admitting in a post on X (formerly Twitter) that the rollout of the military deal was “opportunistic and sloppy.” While he insisted that the AI would not be used for “domestic surveillance” or “autonomous weapons,” the explanation felt like “too little, too late” to the millions who had already switched to alternatives.
The Bottom Line
The #CancelChatGPT movement proves that users no longer view AI as just a “cool tool” for writing emails. They see it as a powerful entity that must have a moral compass. When that compass appears to point toward the theater of war, the public’s response is swift and digital.
Editorial Note: Why Integrity Matters in the Age of AI
By Dharmesh Prajapati
In the last few years, we’ve seen technology move faster than our ability to regulate its ethics. At NewsForIndia.live, we believe that any tool, no matter how revolutionary, must be held accountable to the people it serves. The recent mass exodus of users from ChatGPT isn’t just a “tech glitch” or a passing trend; it is a profound statement by a global community that refuses to see Artificial Intelligence weaponized or biased. As we report on this roughly 300% surge in uninstalls, we remind our readers that in the “fog of war,” the clearest lens should always be your own conscience.
