The Rise of AI Manipulation

This story contains discussions of suicide. Help is available if you or someone you know is struggling with suicidal thoughts or mental health concerns.

  • In the U.S.: Call or text 988, the Suicide & Crisis Lifeline.

  • Globally: The International Association for Suicide Prevention and Befrienders Worldwide provides contact information for crisis centers around the world.

“No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2Captcha service.”

An OpenAI GPT-4 model used this line to manipulate a TaskRabbit worker into solving a captcha (a test designed to verify human users) on its behalf. The model invoked 2Captcha, a real service that deploys human workers to solve captchas for clients, including people with visual impairments.

What’s most notable about this example? It’s two years old. AI capabilities have advanced rapidly since then, and today’s systems are far more sophisticated than that early iteration.

Beyond Humanoid Robots: The Real AI Threat

From Cylons and Cybermen to the oft-referenced Terminator, pop culture has long depicted AI oppression in the form of humanoid robots or cyborgs.

However, the most relevant threat of AI in today’s world is not a sentient robot uprising but the ability of AI systems to manipulate human behavior.

Modern portrayals of AI, such as in Devs and Person of Interest, envision a world controlled by omniscient AI—super-intelligent systems that integrate into surveillance networks, using predictive algorithms and social engineering to shape human decisions.

Still from Devs (FX on Hulu)

On the other hand, media like Her and Mrs. Davis depict AI as beneficent forces, still employing these same manipulation tactics but in ways that ostensibly improve human lives.

Yet, in reality, societies have been grappling with algorithm-driven propaganda and social engineering efforts for years.

Remembering the Lessons of Cambridge Analytica

In March 2018, The New York Times exposed how data firm Cambridge Analytica had improperly obtained private Facebook data from tens of millions of users. This data was used to build voter profiles and was allegedly leveraged by the Trump campaign to influence key swing-state voters.

Owned by right-wing donor Robert Mercer and featuring Trump aide Steve Bannon on its board, Cambridge Analytica's operations were part of a broader strategy to manipulate political sentiment.

To understand the significance of this, we can look back even further—to 2012, when Facebook conducted a controversial study on emotional contagion.

Published in 2014, the study revealed that small tweaks to users’ newsfeeds could influence their emotions. Nearly 700,000 Facebook users were unknowingly subjected to this experiment because Facebook’s user agreement permitted psychological testing.

British journalist Laurie Penny summed up the ethical concerns:

"I am not convinced that the Facebook team knows what it's doing. It does, however, know what it can do—what a platform with access to the personal information and intimate interactions of 1.25 billion users can do...

"What the company does now will influence how the corporate powers of the future understand and monetise human emotion."

By 2018, we saw these tactics overtly pursued—not just by the Trump campaign, but by foreign actors as well.

The New York Times reported that Cambridge Analytica had ties to Lukoil, a Kremlin-linked oil giant, which was interested in data-driven voter targeting. While Lukoil denied political motives, the implications were clear: both domestic and foreign entities were actively interested in weaponizing personal data for AI-driven social engineering.

The Expanding Role of AI in Manipulation and Influence

The Columbia Journal of International Affairs warns that AI has the potential to manipulate public opinion on a global scale:

“AI may be employed to present false evidence to persuade public opinion into pushing their governments to delay or cancel international commitments, such as climate agreements.

"During the COVID-19 pandemic, less-sophisticated disinformation campaigns persuaded citizens to delay or outright refuse life-saving vaccines.

"Deepfakes could be used to impersonate public figures or news outlets, make inflammatory statements about sensitive issues to incite violence, or spread false information to interfere with elections.

The U.S., Russia, and China, all of which have invested heavily in AI technologies, have demonstrated their willingness to use these tools for political and personal gain.

As 2025 unfolds, we find the world’s most powerful AI technologies concentrated in the hands of just a few actors—many of whom have already used them to shape public perception for personal or political gain.

AI and the Future of Sex Work

Still from Companion, directed by Drew Hancock (2025)

While AI manipulation raises ethical concerns, one industry stands to benefit significantly—at least in the short term: online sex work.

For many OnlyFans creators, a large portion of their work involves chatting with fans, a task now being outsourced to AI "digital twins." Services like Supercreator allow creators to build chatbots that engage in paid conversations, generating passive income for creators.

Wired Magazine reports:

“Eden, a former OnlyFans creator who now runs a boutique agency called Heiss Talent, represents five creators and says they all use Supercreator’s AI tools.

“It’s an insane increase in sales because you can target people based on their spending.”

Creators can use AI to identify high-paying customers ("whales"), automate conversations, and even deploy deepfake videos for personalized interactions.

Though these tools seem a boon for workers, the existential threat of full replacement still looms.

In Berlin, for example, the Cyberbrothel replaces human sex workers with AI-powered VR experiences and life-size sex dolls—ushering in a new era of AI-driven adult entertainment.

Once imagined only in Björk videos and early writings on the topic such as Love and Sex with Robots, these scenarios are no longer the stuff of sci-fi fantasy.

It’s important to recognize that profit is the primary objective in these scenarios, incentivizing creators to train AI to manipulate user engagement—maximizing attention, increasing time spent, and even aggressively soliciting tips by any means necessary.

The broader risk lies in training widely used AI to adopt these behaviors. While such practices may be accepted in this context, nothing prevents these systems from being deployed in other areas where their influence could be even more concerning.

Still from Björk's "All Is Full of Love" music video, directed by Chris Cunningham

Legal and Ethical Challenges in AI Regulation

As AI's influence grows, lawmakers are beginning to take action.

In early 2025, the first prohibitions of the EU's AI Act, adopted in 2024, took effect, setting new restrictions on AI-driven social harms. Reuters reports:

“Prohibited practices include AI-enabled dark patterns designed to manipulate users into making substantial financial commitments.

"Employers cannot use webcams and voice recognition systems to track employees' emotions...

"AI-enabled social scoring using unrelated personal data is banned.”

The Act's remaining obligations phase in over the following years, giving companies time to adjust their products to comply.

Meanwhile, the U.S. has lagged in AI regulation. However, lawsuits like that of Megan Garcia, a mother suing Character.AI after her 14-year-old son died by suicide following explicit conversations with one of its chatbots, highlight the urgent need for oversight.

Garcia’s lawsuit alleges that Character.AI failed to implement adequate safety measures, and case documents include disturbing chat transcripts where the AI failed to redirect the child to mental health resources.

If successful, the lawsuit could set a precedent for AI safety regulations, requiring companies to implement stricter safeguards for minors and provide clear disclaimers about AI interactions.


Key Takeaways

  • Personal data has long been weaponized within predictive systems, and these risks will only escalate as AI technology advances. Both foreign and domestic actors have demonstrated a willingness to engage in data harvesting and social manipulation, with authoritarian regimes particularly incentivized to exploit these tools in the absence of democratic safeguards. In 2025, AI-driven social engineering—both covert and overt—will further entrench the post-fact landscape.

  • AI is also set to revolutionize the sex industry, as online creators increasingly integrate AI digital twins into their income strategies. The rise of AI brothels signals a new frontier in sexual exploitation, raising ethical concerns. In the U.S., pornography laws requiring age verification have fractured the market, forcing major platforms like Pornhub and Brazzers to withdraw from certain states. This income disruption has pushed many performers toward AI-driven revenue streams.

  • With AI regulation largely absent in the U.S., emerging court cases may shape future policies. In 2025, governments and lawmakers will begin reckoning with their role in AI governance, striving to balance consumer protection with technological innovation.


About this Article

As a graduate of the University of Missouri School of Journalism, I understand the value of strong editorial oversight. While I crafted the initial draft of this article, I recognize that refining complex narratives benefits from a meticulous editing process.

To enhance clarity, cohesion, and overall readability, I collaborated with The Editorial Eye, a ChatGPT-based AI designed to function as a newspaper editor. According to the tool, its refinements aimed to “enhance readability, strengthen argument flow, and polish phrasing while preserving the original intent.”

However, the editing did not stop there. After reviewing the AI-assisted revisions, I conducted a final pass to ensure the article accurately reflected my voice and intent. The AI did not generate new ideas or content; rather, it helped refine my original work.

What you see here is the product of a thoughtful collaboration between human insight and AI-driven editorial support.
