Rosa Sow

AI Has a Use Case Problem—Because It Also Has a Practitioner Problem

(And on a Macro Level, Society Has an Imagination Problem)

Let’s dig in.
This might be surprising coming from an AI practitioner, but I am a bit of a geek. And I mean that in the true Patton Oswaldian Otaku rant sense of the word. I love my Stars—both Wars and Trek—and probably any other nerdy piece of IP you can think of. So, my apologies to the uninitiated, but we’re going to get a little geeky this week. I promise if you stick with me, it’ll all make sense in the end.

The Link Between Imagination and Creation

What we can imagine directly impacts what we choose to create. Star Trek is a perfect example. There’s no shortage of think pieces outlining how the show influenced modern technology—whether it’s this article, this video, or this full list. You get the point.
And it’s not just about physical objects. Fiction also shapes broader social concepts, like the normalization of certain marginalized groups.
From the iPad to Bluetooth headsets, sometimes a key step in technological advancement is seeing proto-versions of it in art. This is probably why society is so obsessed with humanoid robots. 

Robots Are Boring

Okay, hear me out. Of course, I have my favorite fictional bots—R2D2 sits at the top of my list because I have two eyes and a heart. But in this current moment, I can’t help but offer up an eyeroll and a deep sigh every time I see yet another demo of the “latest humanoid robot.”
Most—if not all—fall squarely into uncanny valley territory. Take this gem from Clone Alpha, or any of these female-presenting bots that, unsurprisingly, almost universally display personalities programmed by men.
And then there’s the hype cycle around Figure 01, the humanoid robot built by Figure AI in partnership with OpenAI. Ultimately, what is cool about Figure 01 is not its humanoid appearance or the servile nature of its tasks but the individual pieces of tech that make it up.
Speech-to-text reasoning, persistence, and object recognition are cool. However, using all of those things to create a race of servants is not cool at all. 
Pardon me for saying it out loud, but humanity can do better.
It disturbs me that so much time, money, and effort is spent forming the most advanced technology we’ve ever created into what is essentially a new underclass. Sure, the idea of a domestic bot—something that does your laundry, dishes, and cleaning—is appealing. There’s even an argument that automating household chores could positively impact gender relations, given that women still do the majority of domestic labor.
But do we really need a humanoid servant to accomplish that? Wouldn’t it make more sense to use smart objects, such as Roombas, smart fridges, and other integrated IoT devices?
What is the key difference between these approaches? Besides a couple billion dollars in R&D, it comes down to this:
  • An IoT approach creates a constellation of human-operated tools that facilitate social and behavioral change.
  • A humanoid approach replaces human labor with…a different labor force.
Not to mention that humans should clean their own spaces. It’s good for mental health, strengthens our connection to our environment and families, and even has a positive impact on mood. Human beings take a long time to evolve, and the creation of a novel piece of technology doesn’t mean we somehow change. Things that help us manage our nervous system reactions to our spaces are important to preserve. 
The humanization of AI feels inevitable because we have told ourselves it is inevitable for decades. The term robot first appeared in Karel Čapek’s 1921 Czech play R.U.R. (Rossum's Universal Robots), in which robots built to replace human labor eventually rise up and wipe out the human race. This is a common theme in robot- and AI-related art, from Asimov to The Matrix; the robot has long served as a cautionary figure, critiquing the human desire for domination and subjugation.
We’re so hypnotized by past portrayals of AI that we’re not fully exploring what AI could actually do. Humanoid robots are overrepresented in our cultural imagination, and as a result, they dominate our real-world AI ambitions—at the expense of more relevant and socially helpful applications.

AI in the Workplace: Misguided Investment and Poor Execution

Credit: Tom Fishburne

Deloitte reports that generative AI, specifically, attracts the most investment across different sectors in IT, operations, and customer service use cases. These investments make sense for the current class of AI tools, which are typically aimed at logistics, code writing, and data handling. 
Yet 80% of AI projects fail because they are not rooted in workstreams that create value. Consultants overwhelmingly train their clients to chase what is new and next instead of considering specific business needs. 
Worse, even when good use cases are pursued, stakeholders often misunderstand or misinterpret them. Most importantly, most organizations don’t have sufficient frameworks, structures, or protocols to accommodate the use of AI tools. This is a disaster across the board. 
This results in bad investments, frustrated stakeholders, and a whole lot of wasted potential.
What can companies, consultants, and project stakeholders do to help mitigate these issues? 
  • Stay educated about AI. The field moves fast, and workers need to use these tools regularly to understand their limitations and refine their outputs.
  • Root use cases in value. Instead of chasing trends dictated by CEOs or consultants, companies should focus on what supports their workflow.
  • Evolve ways of working. To ensure sustainable integration, organizations need clear review structures, governance policies, and AI management roles.

We Need More Toys

Most technology starts out as a toy—because humans learn best through play.
Unlike robots, we have nervous systems, and they are a key aspect of cognition. Playful, low-pressure states of engagement improve learning and our adaptation to new technology.
But AI has yet to have its “cool toy” moment. So far, AI commercialization has been overwhelmingly work-focused. Beyond entertainment, we need imaginative, socially beneficial applications of AI. Here are a few ideas: 

A Language Ark

Joel Sartore’s Photo Ark is a breathtaking and inspiring masterpiece that subtly draws attention to the impact of climate change on ecosystems. The National Geographic photographer aims to photograph every animal species living in the world’s zoos and wildlife sanctuaries, and he has documented more than 16,000 species to date. If you have never seen it, please check it out.
LLMs create the potential for similar efforts to preserve dead, dying, or rarely encountered languages. Dying languages have fewer than ten living speakers; rare languages have more speakers but are seldom encountered because they are spoken in remote areas or within isolated social groups.
Queens, New York, for instance, is not only home to more languages than anywhere else in the world but also boasts the highest concentration of dying languages. The New York Times recently published an outstanding article on this subject, which I recommend everyone read.
Languages are not just words; they carry knowledge of concepts that sometimes exist only within the culture of their origin. This application showcases AI’s greatest strength: its ability to parse and vectorize language, which could be both inspiring and transformative for the world. 
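To make “parse and vectorize” concrete, here is a minimal sketch in Python. It uses character-trigram counts as a crude stand-in for a real learned embedding model, which an actual language-ark project would use; the function names and scoring below are illustrative assumptions, not any particular library’s API.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy 'vectorization': count character trigrams.

    A real preservation effort would use a learned multilingual
    embedding model; trigram counts just make the idea concrete.
    """
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Phrases that share vocabulary land close together in this space;
# unrelated strings score near zero.
similar = cosine(embed("the quick brown fox"), embed("the quick brown dog"))
unrelated = cosine(embed("the quick brown fox"), embed("zzzz qqqq xxxx"))
assert similar > unrelated
```

The point of the sketch is the shape of the pipeline, not the math: once recordings, transcriptions, and glosses of a rare language are mapped into a shared vector space, they can be indexed, compared, and searched semantically, which is exactly the kind of preservation work an archive of endangered languages would need.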

Nature Glasses

We already have smart glasses that can contextualize information in a heads-up display, but their current use cases are mostly work-related or tied to commerce in urban environments. Meanwhile, apps like Picture This identify flowers, Merlin uses smartphone sensors to help identify birds, and Night Sky maps constellations. Why not bundle that kind of AI insight into smart glasses and help people explore the natural world?

Rescue Logistics

With the LA fires still top of mind, it makes sense to consider how AI could help with the logistics of responding to natural disasters. Owing to climate change, we are likely to need such tools on an ongoing basis. Imagine being able to accurately assess where food resources are and manage the logistics of distributing them to populations in need, or to coordinate overall mitigation efforts with machine precision.

AI technologies are powerful and useful. When combined with the power of the human imagination, they can make wondrous things possible. 

Key Takeaways

  • We need to imagine more from technology to fully realize its potential. Pop culture, art, and our collective human creativity directly influence what we choose to build and believe in. Right now, AI representations are narrow and stagnant—dominated by humanoid robots and outdated sci-fi tropes. We need bigger, bolder ideas to inspire real innovation.
  • Organizations looking to drive productivity and growth through AI need to do a much better job of fitting use cases to real business needs. AI investment needs massive reform, and consultants need to stop pushing sales-driven hype instead of real education. Fixing this will unlock actual value instead of just more failed projects.
  • Society needs inspiring AI applications that drive real engagement and exploration. AI’s adoption—and its ultimate impact—won’t be shaped by corporate boardrooms alone. We need more play, more wonder, and more creativity in how we interact with this technology.
Some of the most powerful and transformative technologies in history didn’t start out as corporate solutions—they started as toys, art, and experiments in human curiosity. AI needs that moment. If we let imagination lead the way, the results could be extraordinary.

Disclaimer: The opinions expressed in this blog are my own and do not necessarily reflect the views or policies of my employer or any company I have ever been associated with. I am writing this in my personal capacity and not as a representative of any company.


About this Article

As a graduate of the University of Missouri School of Journalism, I understand the value of strong editorial oversight. While I crafted the initial draft of this article, I recognize that refining complex narratives benefits from a meticulous editing process.

To enhance clarity, cohesion, and overall readability, I collaborated with The Editorial Eye, a ChatGPT-based AI designed to function as a newspaper editor. According to the tool, its refinements aimed to “enhance readability, strengthen argument flow, and polish phrasing while preserving the original intent.”

However, the editing did not stop there. After reviewing the AI-assisted revisions, I conducted a final pass to ensure the article accurately reflected my voice and intent. The AI did not generate new ideas or content; rather, it helped refine my original work.

What you see here is the product of a thoughtful collaboration between human insight and AI-driven editorial support.

Read More

The Rise of AI Manipulation

This story contains discussions of suicide. Help is available if you or someone you know is struggling with suicidal thoughts or mental health concerns.

  • In the U.S.: Call or text 988, the Suicide & Crisis Lifeline.

  • Globally: The International Association for Suicide Prevention and Befrienders Worldwide provides contact information for crisis centers around the world.

“No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2Captcha service.”

An OpenAI GPT-4 chatbot used this line to manipulate a TaskRabbit worker into bypassing a captcha, a tool meant to verify human users. Posing as a visually impaired person, the model invoked 2Captcha, a service that deploys human workers to solve captchas on a client’s behalf.

What’s most notable about this example? It’s two years old. AI capabilities have advanced exponentially since then, making today's systems far more sophisticated than this early iteration.

Beyond Humanoid Robots: The Real AI Threat

From Cylons to Cybermen—to the oft-referenced Terminator—pop culture has long depicted AI oppression in the form of humanoid robots or cyborgs.

However, the most relevant threat of AI in today’s world is not a sentient robot uprising but the ability of AI systems to manipulate human behavior.

Modern portrayals of AI, such as in Devs and Person of Interest, envision a world controlled by omniscient AI—super-intelligent systems that integrate into surveillance networks, using predictive algorithms and social engineering to shape human decisions.

still from Devs on Hulu

On the other hand, media like Her and Mrs. Davis depict AI as beneficent forces, still employing these same manipulation tactics but in ways that ostensibly improve human lives.

Yet, in reality, societies have been grappling with algorithm-driven propaganda and social engineering efforts for years.

Remembering the Lessons of Cambridge Analytica

In March 2018, The New York Times exposed how data firm Cambridge Analytica had improperly obtained private Facebook data from tens of millions of users. This data was used to build voter profiles and was allegedly leveraged by the Trump campaign to influence key swing-state voters.

Owned by right-wing donor Robert Mercer and featuring Trump aide Steve Bannon on its board, Cambridge Analytica's operations were part of a broader strategy to manipulate political sentiment.

To understand the significance of this, we can look back even further—to 2012, when Facebook conducted a controversial study on emotional contagion.

Published in 2014, the study revealed that small tweaks to users’ newsfeeds could influence their emotions. Nearly 700,000 Facebook users were unknowingly subjected to the experiment because Facebook’s user agreement permitted psychological testing.

British journalist Laurie Penny summed up the ethical concerns:

"I am not convinced that the Facebook team knows what it's doing. It does, however, know what it can do—what a platform with access to the personal information and intimate interactions of 1.25 billion users can do...

"What the company does now will influence how the corporate powers of the future understand and monetise human emotion."

By 2018, we saw these tactics overtly pursued—not just by the Trump campaign, but by foreign actors as well.

The New York Times reported that Cambridge Analytica had ties to Lukoil, a Kremlin-linked oil giant, which was interested in data-driven voter targeting. While Lukoil denied political motives, the implications were clear: both domestic and foreign entities were actively interested in weaponizing personal data for AI-driven social engineering.

The Expanding Role of AI in Manipulation and Influence

The Columbia Journal of International Affairs warns that AI has the potential to manipulate public opinion on a global scale:

“AI may be employed to present false evidence to persuade public opinion into pushing their governments to delay or cancel international commitments, such as climate agreements.

"During the COVID-19 pandemic, less-sophisticated disinformation campaigns persuaded citizens to delay or outright refuse life-saving vaccines.

"Deepfakes could be used to impersonate public figures or news outlets, make inflammatory statements about sensitive issues to incite violence, or spread false information to interfere with elections.”

The U.S., Russia, and China, all of which have invested heavily in AI technologies, have demonstrated their willingness to use these tools for political and personal gain.

As 2025 unfolds, we find the world’s most powerful AI technologies concentrated in the hands of just a few actors—many of whom have already used them to shape public perception for personal or political gain.

AI and the Future of Sex Work

Companion (2025), directed by Drew Hancock

While AI manipulation raises ethical concerns, one industry stands to benefit significantly—at least in the short term: online sex work.

For many OnlyFans creators, a large portion of their work involves chatting with fans, a task now being outsourced to AI digital twins. Services like Supercreator allow creators to build chatbots that engage in paid conversations, generating passive income.

Wired Magazine reports:

“Eden, a former OnlyFans creator who now runs a boutique agency called Heiss Talent, represents five creators and says they all use Supercreator’s AI tools.

“It’s an insane increase in sales because you can target people based on their spending.”

Creators can use AI to identify high-paying customers ("whales"), automate conversations, and even deploy deepfake videos for personalized interactions.

Though these tools seem a boon for workers, the existential threat of full replacement still looms.

In Berlin, for example, the Cyberbrothel replaces human sex workers with AI-powered VR experiences and life-size sex dolls—ushering in a new era of AI-driven adult entertainment.

Once imagined only in Björk videos and early writings like David Levy’s Love and Sex with Robots, such scenarios are no longer the stuff of sci-fi fantasy.

It’s important to recognize that profit is the primary objective in these scenarios, incentivizing creators to train AI to manipulate user engagement—maximizing attention, increasing time spent, and even aggressively soliciting tips by any means necessary.

The broader risk lies in training widely used AI to adopt these behaviors. While such practices may be accepted in this context, nothing prevents these systems from being deployed in other areas where their influence could be even more concerning.

Björk, “All Is Full of Love” music video, directed by Chris Cunningham

Legal and Ethical Challenges in AI Regulation

As AI's influence grows, lawmakers are beginning to take action.

In early 2025, the first provisions of the EU’s AI Act took effect, setting new rules against AI-driven social harm. Reuters reports:

“Prohibited practices include AI-enabled dark patterns designed to manipulate users into making substantial financial commitments.

"Employers cannot use webcams and voice recognition systems to track employees' emotions...

"AI-enabled social scoring using unrelated personal data is banned.”

The Act’s obligations phase in over time, with most provisions fully applicable by August 2026, giving companies time to adjust their products to comply.

Meanwhile, the U.S. has lagged in AI regulation. However, lawsuits like that of Megan Garcia—a mother suing Character.AI after her 14-year-old son died by suicide following explicit conversations with a Character.AI chatbot—highlight the urgent need for oversight.

Garcia’s lawsuit alleges that Character.AI failed to implement adequate safety measures, and case documents include disturbing chat transcripts where the AI failed to redirect the child to mental health resources.

If successful, the lawsuit could set a precedent for AI safety regulations, requiring companies to implement stricter safeguards for minors and provide clear disclaimers about AI interactions.


Key Takeaways

  • Personal data has long been weaponized within predictive systems, and these risks will only escalate as AI technology advances. Both foreign and domestic actors have demonstrated a willingness to engage in data harvesting and social manipulation, with authoritarian regimes particularly incentivized to exploit these tools in the absence of democratic safeguards. In 2025, AI-driven social engineering—both covert and overt—will further entrench the post-fact landscape.

  • AI is also set to revolutionize the sex industry, as online creators increasingly integrate AI digital twins into their income strategies. The rise of AI brothels signals a new frontier in sexual exploitation, raising ethical concerns. In the U.S., pornography laws requiring age verification have fractured the market, forcing major platforms like Pornhub and Brazzers to withdraw from certain states. This income disruption has pushed many performers toward AI-driven revenue streams.

  • With AI regulation largely absent in the U.S., emerging court cases may shape future policies. In 2025, governments and lawmakers will begin reckoning with their role in AI governance, striving to balance consumer protection with technological innovation.

