MIT Technology Review
I’ve been feeling heartbroken lately. A very close friend recently cut off contact with me. I don’t quite understand why, and my attempts to fix the situation backfired. Situations like this are painful and confusing. So it’s no wonder people are increasingly turning to AI chatbots to help solve them. And there’s good news: AI might actually be able to help.

Researchers at Google DeepMind recently trained a system of large language models to help people reach agreement on complex but important social or political issues. The AI model was trained to identify and present areas where people’s ideas overlap. With the help of this AI mediator, small groups of study participants became less divided in their positions on various issues. You can read more from Rhiannon Williams here.

One of the best uses for AI chatbots is for brainstorming. I’ve had success in the past using them to write more assertive or persuasive emails for awkward situations like service complaints or bill negotiations. This latest research suggests that it might help us see things from other people’s perspectives, too. So why not use AI to fix things with my friend?

I described the conflict as I see it to ChatGPT and asked for advice on what I should do. The response was very validating, as the AI chatbot supported the way I had approached the problem. The advice it gave was in line with what I was thinking of doing anyway. I found it helpful to chat with the bot and get more ideas on how to deal with my specific situation. But in the end, I was left unsatisfied, because the advice was still pretty generic and vague (“Set your boundaries calmly” and “Communicate your feelings”) and didn’t really offer the kind of insight that a therapist could.

And there’s another problem: every argument has two sides. I started a new chat and described the problem as I think my friend sees it. The chatbot supported and validated my friend’s decisions, as it did for me. On the one hand, this exercise helped me see things from her perspective. After all, I was trying to empathize with the other person, not just win an argument. But on the other hand, I can totally see a situation where relying too much on the advice of a chatbot telling us what we want to hear could cause us to double down, preventing us from seeing things from the other person’s perspective.

This served as a good reminder: an AI chatbot is not a therapist or a friend. Although it can parrot the vast swaths of internet text it’s been trained on, it doesn’t understand what it’s like to feel sadness, confusion, or joy. That’s why I’d tread carefully when using AI chatbots for things that really matter to you, and not take what they say at face value.

An AI chatbot can never replace a real conversation, where both parties are willing to really listen and take the other’s point of view into account. So I decided to drop the AI-assisted therapy talk and reached out to my friend one more time. Wish me luck!


Deeper learning

OpenAI says ChatGPT treats us all the same (most of the time)

Does ChatGPT treat you the same whether you’re Laurie, Luke or Lashonda? Almost, but not quite. OpenAI analyzed millions of conversations with its chatbot and found that ChatGPT will produce a harmful gender or racial stereotype based on a user’s name in about one in 1,000 responses on average, and up to one in 100 responses in the worst case.

Why this matters: Bias in AI is a huge problem. Ethicists have long studied the impact of bias when companies use AI models to analyze resumes or loan applications, for example. But the rise of chatbots, which allow individuals to interact directly with models, brings a new twist to the problem. Read more from Will Douglas Heaven.

Bits and Bytes

Introduction to AI: A Beginner’s Guide to Artificial Intelligence by MIT Technology Review
There’s an overwhelming amount of AI news out there, and it’s a lot to keep up with. Would you like someone to take a step back and explain some of the basics? Look no further. Introduction to AI is MIT Technology Review’s first newsletter that doubles as a mini-course. You’ll receive one email a week for six weeks, and each edition will walk you through a different topic in AI. Register here.

The race to find new materials with AI needs more data. Meta offers massive amounts for free.
Meta is releasing a massive set of data and models, called Open Materials 2024, that could help scientists use AI to discover new materials much faster. OMat24 addresses one of the biggest bottlenecks in the discovery process: lack of data. (MIT Technology Review)

Cracks are beginning to appear in Microsoft’s “bromance” with OpenAI
As part of OpenAI’s transition from a research lab to a for-profit company, it sought to renegotiate its agreement with Microsoft to secure more computing power and funding. Meanwhile, Microsoft began investing in other AI projects, such as DeepMind co-founder Mustafa Suleyman’s Inflection AI, to reduce reliance on OpenAI — much to the chagrin of Sam Altman.
(The New York Times)

Millions of people use abusive AI “nudify” bots on Telegram
The messaging app is a hotbed for popular AI bots that “remove clothes” from people’s photos to create nonconsensual deepfakes. (Wired)