

I’ve been talking to an MIT-built AI version of my 60-year-old future self and we did NOT understand each other


A chat log between the writer and an AI chatbot that claims to be a future version of her.

Credit: Future You

There’s a reason why time travel stories are so popular; given the opportunity to either go back in time and right some wrongs, or look forward to see how it all plays out, I think many would jump at the chance. However, this story is definitely not about time travel. Instead, researchers at the Massachusetts Institute of Technology (MIT) have created a chatbot that pretends to be your 60-year-old future self.

Called Future You, the chatbot uses survey responses from human participants in combination with a large language model (LLM) to create the illusion of having a conversation with an older version of yourself. The project uses GPT-3.5 from OpenAI, a company that continues to refine its LLMs so they hallucinate less and can even count to three. Future You itself was inspired by a study investigating how increased "future self-continuity" (which, to put it non-academically, describes how strongly you perceive your future self as a continuation of who you are now) can positively influence a wide range of life choices and behaviours in the present.

I’m not going to lie, when I first heard about this AI chatbot, my first thought was of the musical sting from this year’s biggest body horror hit, The Substance. My second was of the digital doppelgangers in the Adult Swim short Live Forever As You Are Now with Alan Resnick. But my third was, “Yeah, sure, I’ll hand over my personal details and most vulnerable anxieties about the future to MIT. For science.”

Before talking to my 60-year-old self, I was asked a series of survey questions about my life now and what I hope it will be then. Imagining the future I want for myself is a therapeutic exercise in itself, and feels quite in line with the researchers’ goal of creating a chatbot designed to help “support young people to imagine their future.” I then had to upload a clear image of my face so Future You could throw an aging filter over it to complete the illusion. At least my supposed 60-year-old self is still rocking eyeliner wings.

At first, I think we’re off to a good start, as the AI introduces herself as Jess too and proceeds to send me several walls of text that, as my former editor could attest, aren’t far from the essays I tend to send over WhatsApp. However, in this rose-tinted vision of the future, one particular message from Future You reminds me why, when talking to an AI, you should take what a chatbot says not so much with a pinch of salt as with a whole protective ring of it.

Despite my stating in the pre-chat survey that having children is not something I want for myself, the AI says it has “started a family”. So-called AI has proven time and time again that it reproduces the biases of the data set it is fed, and pressing Future You on the whole kids thing echoes dismissive sentiments I’m sure I’ve tiresomely heard many times before.

The AI tells me, “Life has a funny way of surprising us and changing our perspectives,” before recounting what is described as a “future memory” of a weekend spent looking after a friend’s children that changed her mind, as if those who choose not to have children of their own are simply unfamiliar with the joy of their company.

Anyway, I call out the chatbot, typing, “Kids are great, I just don’t want mine. I won’t blame you though – I’ll blame the built-in bias of LLM/AI”. What I get back is the expected backpedalling, the chatbot saying, “Not wanting kids is completely valid and I understand where you’re coming from. It is important to listen to your own desires and make decisions for yourself rather than conforming to societal expectations. And I’m glad that LLM/AI gave us the opportunity to have this conversation about our different perspectives without judgment or bias.”

At this point in the conversation, I don’t exactly sense an absence of bias. To keep things from getting too awkward, the chatbot then changes its tune and gushes about the novel I said I wanted to write in my pre-chat survey response. As we say goodbye, my supposed future self tells me to take care of myself, and I can’t help but picture Margaret Qualley dragging Demi Moore through her high-rise apartment in The Substance.

That said, I admit I got a little emotional seeing my future facsimile write: “I have complete faith in you Jess. I know that one day, you too will accomplish your life’s project of finishing your novel.” But that “you’ll change your mind about kids” moment nagged at me throughout the conversation, and it left me a little worried about Future You’s proposed educational use.

In conversation with The Guardian, the researchers behind Future You are keen to highlight examples of the chatbot conjuring up academically successful futures for its student participants. However, after my own discussion with the AI, I wonder whether the limits of the chatbot’s synthetic memories might impose limits on the imaginations of young people who turn to it to envision their futures. Personally, I dread to think how my younger, much more impressionable self would have reacted to the conversation I just had with my own Future You.

