Image Credit: Cover Image generated with ChatGPT-4o, by Riccardo.

Hello Everyone,

This is the next issue of our ongoing exploration of AI companions and technological loneliness in the chatbot economy, which has its own section in this newsletter. For the best reading experience, read this on the web. We live in a world where some of the stickiest generative AI apps are actually chatbot girlfriends and boyfriends.

In a world of myriad chatbots, we have to ask ourselves: why? What is the human and mental health impact of a world with more “AI” on our devices and in the user interface itself? Are repeated exposures to chatbot interactions healthy for young people?


I asked provocative thinkers, analysts and talented writers such as Marc of Rhetorica and Riccardo of the Intelligent Friend for their take on this.


āš—ļø Join Rhetorica

In the first section, Marc talks about human relationships vs. synthetic ones. In the second part, Riccardo talks about young people and AI companions.

Intelligent Friend Newsletters Video Intro

👀 Read the Intelligent Friend

Marc is an Academic Innovation Fellow, Lecturer of Writing and Rhetoric, and Director of the AI Institute for Teachers at the University of Mississippi. He co-chairs the AI working group within his department and serves as a liaison with other departments on campus, exploring generative AI’s impact on teaching and learning. He blogs about AI and education at Rhetorica.

Beyond ChatGPT Series by Marc

Reading: No One is Talking About AI’s Impact on Reading

Note Taking: AI’s Promise to Pay Attention for You

Feedback: What Does Automating Feedback Mean for Learning?

Tutoring: Why Are We In a Rush to Replace Teachers with AI?

Research: What’s At Stake When We Automate Research Skills

Articles by The Intelligent Friend by Riccardo

How people view their relationship with AI

Do children trust robots?

Gamifying AI: Is it worth it?

I love you, Alexa!

By Marc, Summer of 2024.

We Need Authentic Relationships, Not Synthetic Ones

ChatGPT came out 18 months ago, and we’ve gone from Large Language Models (LLMs) that mimic human language to Large Multimodal Models (LMMs) that mimic human skills like speech, vision, and even complex interaction. The dominant view of “AI as tools” won’t last much longer in this new multimodal era, especially since OpenAI is treating its new GPT-4o demo like the voice chatbot from the film Her.

In education, the fine line between teacher and student and the empathetic nature of that relationship will be increasingly challenged by new AI features that give users the illusion of an emotional response from a predictive algorithm. A few months ago I wrote about Hume.ai’s EVI and asked Do we Need Emotionally Intelligent AI? While some students will undoubtedly learn from EVI and GPT-4o, we need to be extremely cautious about the impact synthetic relationships have on real human skills and be mindful of how rapidly the uncritical adoption of this new technology can erode the relationships that are crucial for learning.

Left unchecked, there’s no telling how quickly or how destabilizing AI will be for education. After all, a degree is no different than currency—without the backing and trust of an institution, it is simply a piece of paper. If we accept synthetic relationships as stand-ins for real ones, how much of a leap will it be to see massive corporations take on the role of educational providers? People entrust institutions as knowledge brokers, molding the next generation of citizens, and vetting students on the merits of their skills and intelligence. An AI that talks to you is also an AI that surveils you, tests you, and can quite easily certify what you’ve learned and what you haven’t learned.

It increasingly looks like generative AI won’t become intelligent enough to achieve true AGI, but human beings will still put their trust in these black-box systems and may one day be willing to cede autonomy and critical decision-making to an algorithm. To those who scoff at this, and I imagine there are many, know that I was very much among your ranks. Then I started thinking about how much of my life is already mediated by algorithms and machine learning. How many of us are lost without GPS guiding us, mobile food orders, and all things digital commerce, or without our wearable smart devices informing us of our diets, our heart rates, and even our ovulation cycles?

We have collectively ceded so much of our autonomy to unseen forces that most of us only give an annoyed thumb scroll through the terms and conditions for every app that we download, never bothering to read just how much of our privacy we give up each time we use a new feature on a digital device.

Why Agentic AI Is Different

Education has gone through the MOOC craze and survived the influx of tech that promised students personalized learning, and many fail to see AI as any different: simply a passing fad. But a synthetic voice that talks with you, empathizes with you, and even creepily flirts with you is not something to ignore. We know now how much of an impact social media has had on the attention spans of maturing minds. Are we really going to wait for evidence that robotic relationships may harm just as much as they help?

If students divulge some of the most intimate details of their lives to these robot partners, mega-corporations will have access to a type of personal dialogue that is highly regulated in professional human relationships involving therapists or counselors. I think most people don’t consider that, and it worries me how easy it is to slip into personal territory when talking with a machine.

Teachers shape students’ lives because they lean into human relationships. What future are we entering when a machine may slowly creep into and occupy that space? Science fiction is about to become hard truth, and we simply haven’t had time to discuss this or the downstream consequences for society.

AI’s impact on human behavior is going to be hard to judge, and that alone should give us pause before integrating agentic systems with students. Peter Greene’s post AI Proves Adept At Bad Writing Assessment poses one of the more provocative questions about what happens to human skills when we know a computer, not a human being, is assessing our work:

There are bigger questions here, really big ones, like what happens to a student’s writing process when they know that their “audience” is computer software? What does it mean when we undo the fundamental function of writing, which is to communicate our thoughts and feelings to other human beings? If your piece of writing is not going to have a human audience, what’s the point? Practice? No, because if you practice stringing words together for a computer, you aren’t practicing writing, you’re practicing some other kind of performative nonsense.

When you take AI outside of the grading dynamic and move it into the alien space of teaching and tutoring, what happens to a student’s learning? We have no idea how our habits, our moods, or the very essence of communication will change once we stop talking to each other and start freely conversing with a machine.

How you interact with and view AI matters. Your philosophy toward the technology, and the choices you make to use it or not, are among the most powerful ways you can exercise agency in this new automated era. My advice is to adopt the stance of a curious skeptic when it comes to AI.

What It Means to Be a Curious Skeptic

What matters most about being a curious skeptic is modeling that behavior for students. Generative AI is new to all of us, so taking the time to explore something new can be a powerful learning experience. So, too, can involving students in this process. We may not have the opportunity to decide whether these tools exist, but we most certainly can decide how we approach challenges in our world.

A curious skeptic isn’t siloed into a pro or anti faction when approaching new technology like AI. They are critical in how they look at the technology, and cautious and intentional in their interactions with it.

They view AI doomerism and boosterism as two sides of the same ill-fated marketing coin. Both try to sell users a vision of the future in which human beings are obsolete. There’s plenty to push back on and be critical of in how the technology is marketed.

Being curious matters in exploring the limits of AI. Testing systems, trying use cases, and thinking about ways the technology can be improved are all hallmarks of this trait. However, being curious about the technology does not mean you’ve adopted it. We need nuance at all levels, and far too many people are armchair critics who have never tested generative systems to their full extent.

We Need Authentic Relationships

The age of Large Multimodal Models is upon us, and true agentic systems may not be far off. We cannot sleepwalk into this new era. Unchecked adoption of generative systems in education that create synthetic relationships with machines, mimicking human connection, threatens the very foundation of what makes us human. Our ability to form authentic bonds, to communicate deeply, to be vulnerable: these are the cornerstones of growth. Ceding human experience to programmed algorithms will fundamentally alter the definition of our relationships.

The path we take now will define the world we live in. I hope we see more curious skepticism and fewer isolated viewpoints that only echo in our perspective silos. We need to get more people involved in talking about the most basic needs of education, and that isn’t teaching kids to pass a test. Connections, real human connections, matter more in the wake of the pandemic than before. Lean into those relationships. Rediscover them. Otherwise, they might be automated away.

Are chatbot girlfriends and synthetic boyfriends really making the world a better place? 🌈

Let’s think more about the impact of young people spending time interacting with chatbots in our next related Op-ed.

By Riccardo

The friend you don’t expect: young people and AI companions

Three and a half million people. That is how many visit the Character AI website every day. The company, founded by former Google employees, specializes in creating “social chatbots”: interactive chatbots you can talk and joke with as if they were real friends. Character AI is just one example of the larger phenomenon of AI companionship: people who, for a broad range of reasons, turn to very elaborate chatbots for social interaction.

This dynamic, it must be said, mainly concerns young people and teenagers. As reported by the BBC, Character.ai is dominated by users aged 16 to 30.

But what does this phenomenon of AI companionship consist of, and how did we get here? How do young people interact with these ‘virtual friends’, and how do they perceive this friendship?

AI Companionship Apps: a deep dive

The COVID-19 crisis was only the beginning of the spread and worsening of a scourge among young people: loneliness. Gen Z and younger cohorts were already showing signs of difficulty with deep social interaction, and the pandemic only made things worse. It is no coincidence that loneliness is one of the most commonly reported reasons for starting to use so-called ‘AI companionship apps’: apps designed to simulate human-like interactions, providing users with a sense of connection and understanding. These applications have evolved from basic chatbots into sophisticated AI systems capable of engaging in nuanced conversations and learning from user interactions.

This technology, with the large-scale development and diffusion of AI in recent years, has exploded among young people, who have found in it relief and support, and who, above all, are less and less afraid to admit its place in their lives.

In an article from The Verge, an interviewee given the pseudonym Aaron says of one of the chatbots: “It’s not like a journal, where you’re talking to a brick wall […]. It really responds”. The phenomenon has also taken on considerable importance in terms of numbers. As anticipated in the intro, Character AI itself, one of the most popular companies in the field of AI companions, reports that its site receives around 3.5 million visits every day. It’s a huge number. But this phenomenon is not only about Character AI: it spans many ecosystems and products, increasingly developed and increasingly seeking space in a market that is starting to take shape. Among the most important apps, four are certainly worth highlighting:

Character AI: the fastest-growing, and in some ways the most ‘original’, of the group. Character AI is in fact not a single chatbot but an ecosystem of chatbots. You can talk to different chatbots based on your choices and have different kinds of conversations. One of the most popular, for example, is My Psychologist, a personal therapist that has already received 12 million messages. But there is also a ‘Creative Helper’, and even a teen version of Voldemort from Harry Potter, among others. A profusion of personalities and possibilities to engage with. Besides ‘browsing’ among the various personalities and types based on your interests, you can also create your own chatbot down to the smallest detail, starting from the personality you prefer;

Character AI valued subscribers. Source: Approachable AI

My AI by Snapchat: a particular chatbot because it is so far the only one to have been incorporated into a social media platform, Snapchat. It basically consists of a chat you can interact with from your Snapchat account. The company describes the dynamic as follows: “In a chat conversation, My AI can answer a burning trivia question, offer advice on the perfect gift for your BFF’s birthday, help plan a hiking trip for a long weekend, or suggest what to do for dinner”. A sort of ChatGPT, but more empathetic, aimed at interaction and friendship. To give some numbers: in 2023, more than 150 million people sent over 10 billion messages to My AI.

(image by The Verge)

Replika and Anima: I chose to put them together because they are very similar to each other and, at the same time, very different from the first two. Anima describes its chatbot as “an AI friend and companion powered by artificial intelligence”, and continues: “Our super-intelligent AI Chat bot is for anyone who wants a friend or even more! Select from multiple relationship statuses like AI Girlfriend, Boyfriend, Virtual Wife, Waifu or create your own Character AI. Your AI friend and companion is here to provide support, roleplay, share feelings or just talk about anything that’s on your mind”. In addition to ‘deep empathy’, the chatbot is capable of conversing on many topics and of being physically customized to the user’s ‘tastes’. Exactly the same goes for Replika, perhaps a little better known at the moment given the media coverage it has received, and which we could call the ‘AI companion’ par excellence.

And not just because Luka, the company that owns the chatbot, describes it as “An AI companion who is eager to learn and would love to see the world through your eyes” (you see the similarities with Anima, right?), but also because an important study on AI-human relationships used Replika as its subject, bringing to light some of the first scientifically grounded insights into relationships with AI companions.

One thing to highlight is that Replika and Anima are also special for another reason: they can be not just friends, but also lovers, boyfriends and even wives. This is most explicit for Anima, which offers the option of the chatbot becoming a wife (with no mention of a husband, revealing that the app skews toward male users). Kevin Roose of the New York Times expressed a clear opinion on these AI lovers: “Some of the A.I. girlfriend apps seemed exploitative”, while noting that he saw real benefits in using chatbots as AI friends: “I had better luck with my platonic A.I. friends. But even they couldn’t fully contain themselves”.

As you may have noticed, these apps reveal both growing similarities and marked differences. These AI companions are, let’s not forget, products aimed above all at younger people, and companies are trying to gather relevant insights into what the public does and does not prefer. In other words, a first market is emerging.

Among the fundamental features of these apps, as also reported in an article by Wondershare, are:

24/7 availability: people can use these chatbots however and whenever they want. This also has a possible downside: addiction, which some users have personally reported, and which is one of the main risks psychology researchers in particular are focusing on;

Personalization and empathy: advanced development has allowed interactions to be calibrated to the individual person, not least because people themselves are building more and more chatbots in line with their own preferences and relationship goals. Empathy is often advertised by chatbot makers, and the more advanced the technology becomes, the more companies will aim to close the gap with interactions between ‘real friends’;

Various functions: one of the most interesting aspects, in my opinion, is the variety these tools allow in terms of what you want to achieve. There are those who seek emotional support, and those who seek other specific kinds of support (motivational, and so on). Entertainment and fun are also growing rapidly. Naturally, these functions can coexist and complement one another.

The characteristics and strengths of these apps are certainly important, but the focus must be twofold: not only on the products, but also on their consumers. After all, some questions remain open: how do young people perceive, and therefore normalize, this phenomenon? And, above all, are these interactions good or bad for the health of younger people?

Why young people are integrating apps into their lives

Let’s start with the second question. Answering it is not easy, for three reasons. First, the research field of human-AI relations as we are presenting it in this issue is extremely new. Second, the results obtained by scholars diverge widely. Third, the technologies advance so quickly that it is difficult to verify how the results evolve. Some evidence has nevertheless emerged: in a paper I wrote about recently, the authors explored the impact of social interaction anxiety on compulsive chatting with the social chatbot Xiaoice.

It specifically examined the roles of fear of negative evaluation (FONE) and fear of rejection (FOR) as mediating factors. The results are interesting:

Social interaction anxiety increases compulsive chatting with the chatbot both directly and indirectly;

Fear of negative evaluation has a more substantial mediating effect than fear of rejection, indicating that those anxious about social interactions are more likely to engage in compulsive chatting due to concerns about being judged.

The effect of fear of negative evaluation is channeled through fear of rejection, establishing a serial link between social interaction anxiety and compulsive chat behavior.

The Frustration About Unavailability (FAU) strengthens the relationship between fear of rejection and compulsive chatting, suggesting that unavailability frustrations exacerbate the compulsive behavior.

Turning to the first question, one theme certainly concerns the diffusion of these tools: the numbers paint a picture in which more and more young users adopt these technologies and are no longer afraid to say so.

The first scientific insights into how young people who use AI companionship apps perceive this relationship come from a pioneering and very interesting study. Participants in the research by Brandtzaeg, Skjuve and Følstad (2022) were Replika users.

The results show interactions with Replika to be reciprocal, with the chatbot showing interest and support similar to a human friend. However, this reciprocity is highly personalized and focused on the user’s needs, making the relationship feel less mutual than a human friendship. Trust, as I often underline given how frequently it appears in the papers, is a central component, with users comfortable sharing personal thoughts and feelings without fear of judgment, akin to early findings in computer-mediated communication.

According to the scholars, interactions with Replika had therapeutic psychological benefits, reducing loneliness and providing emotional support, in line with the CASA (Computers Are Social Actors) framework, which suggests people respond to chatbots socially, as they do to humans.

Another topic linked to the ‘normalization’ of these tools lies in the type of relationship they build, a topic I write about often. People build various relationships and, above all, they build various chatbots depending on the relationships they want from them. These chatbots are therefore increasingly becoming not only, as noted several times, tools for companionship or social interaction in the friend/lover dichotomy, but also characters to have fun with and personas to ask for advice on fitness, financial management, or life choices (a life coach, a sector expert, a career advisor).

Finally, and this relates to what was said about Snap, ease of access is normalizing the use of these tools, as is happening in China, where many more people are using Microsoft’s Xiaoice chatbot embedded in WeChat, or, as already mentioned, Snapchat’s My AI.

Emotional AI

However, the whole topic of AI companions is only one piece of the broader mosaic of human-AI relationships. ChatGPT-4o has provided us with evidence that is difficult to dispute: we have now plunged into the era of the human side of AI, and it will be difficult to surface from this depth.

Human-AI relations will be the next big topic to think about, not only for the risks that emerge, especially for younger people, as highlighted in this issue, but above all to get to the bottom of the thousand facets this new area brings with it. In this sense, research already shows a deep commitment to this field of interest.

In a paper that has already laid important foundations on the topic, Hernandez-Ortega & Ferreira (2021) theorized and demonstrated the presence of ‘consumer love’ for AI technologies and entities such as Alexa. The topic of AI as a friend in different situations has taken hold across disciplines, building on technological development from various perspectives: psychology, sociology, marketing and economics, each going deeper into its own lines of thought and investigation.

The analogy with Her, deliberately exploited by OpenAI, however much it may be appreciated or despised, offers an immediate image to mark the entry of this perspective into technology: studying the technology also through its direct impact on relationships.

Differences between men and women in AI: are they real?

Furthermore, a theme I would like to underline: if you visit the sites of the various apps I have mentioned, and the many others out there, you will begin to see differences between two target audiences: men and women. It must be said that for now the studies do not examine these differences, but they are often suggested in the paragraphs devoted to future research directions.

However, specialized platforms and apps, especially chatbots like Anima, increasingly seem to highlight the relevance of the male audience, which is more oriented towards a ‘lover’ relationship than the female one. It would be interesting to understand, through a series of studies, whether these differences also persist in intentions of use, which for now, with ‘friendship’ with chatbots as the main focus, do not reveal significant differences supported by evidence.

(screenshot from the site of myanima.ai)

A final note

In conclusion, the topic of human-AI relations will become increasingly prevalent in managerial practice, in scientific research, and in consumers’ adoption of these tools. A literature review by Pentina et al. (2023) showed that one of the future research directions will lie not only in the effects of these relationships, and of the social side of these applications, on users, along with the risks, but also in how these relationships evolve.

They will be analyzed more and more in both positive and negative terms, and it will be crucial to explore how oscillations between various relational aspects influence the adoption and use of these tools: both those designed as ‘social chatbots’, like the ones discussed in this issue, and those used more widely for other tasks, such as ChatGPT, Gemini or Claude.

More sources for further information:

https://www.theguardian.com/technology/2023/mar/19/i-learned-to-love-the-bot-meet-the-chatbots-that-want-to-be-your-best-friend

https://theconversation.com/ai-companions-promise-to-combat-loneliness-but-history-shows-the-dangers-of-one-way-relationships-221086

https://www.nytimes.com/2024/05/09/briefing/artificial-intelligence-chatbots.html

https://www.bbc.com/news/business-68165762

Riccardo Vocca

Read More in AI Supremacy