2 years before ChatGPT, a kids' cartoon warned us about the environmental impacts of AI

Kids should know what AI can and can't do, and what it really costs. Doc McStuffins is on the case.


My 4-year-old watches so much Doc McStuffins that the show has basically become white noise in my household. It's the only thing she'll watch, so when it's on in the background, I barely notice — outside of the absurdly catchy songs living rent-free in my head 24/7. But the other day, she was watching one particular episode when I half tuned in just to see what the plot was.

If you don't have kids in this age bracket: Doc McStuffins is a show about a 10-year-old girl who helps fix up broken toys. It's a really cute show with sweet messages about acceptance, accessibility, imagination, caring, and more. But the episode in question seemed to have a lot more going on plot-wise than usual, so I sat down and watched a little more. Pretty soon I was hooked on a fascinating story about the climate dangers of artificial intelligence and automation. I couldn't believe it!

'The Great McStuffins Meltdown' explained

Season 5, Episode 13. In the previous season, Doc McStuffins stopped running her toy-doctoring practice out of her childhood home and began working at McStuffins Toy Hospital. In this episode, the hospital has received a major upgrade with lots of fancy new equipment.

The new machines do a lot of the work that Doc and her friends used to do around the hospital. There's a machine that plays with and encourages toy pets, a Cuddle Bot that cuddles sick toys, and even a Check-Up 3000 that gives routine medical care so the Doc herself can do other things. Doc and her friends are a little bored, and the patients aren't so sure about these new machines, but mostly, things are going pretty great. The hospital is able to help more toys, faster this way.

But oh no! Doc gets a distress call from her friends at the Toyarctic, a fictional frozen land where toys live. Chunks of ice have been breaking off their glaciers. The Toyarctic is melting!

Doc and her friends quickly figure out that the Toyarctic has gotten too warm, which is causing the ice to melt. And the culprit is McStuffins Hospital. With all the new automated machines running, the hospital is using too much power and overheating the power grid, which is causing the Toyarctic's climate to warm at a dangerous rate.

I mean... whoa! Doc McStuffins definitely did not have to go this hard, but I respect it.

What fascinated me most was that this episode was released in 2020 — a full two years before ChatGPT became publicly available and the AI craze kicked into hyperdrive.


AI and climate change are both inevitable parts of our children's lives. It's crucial that they learn about them both from a young age.

AI is moving fast and changing every day. It's also publicly available to people of all ages, and many of us don't really understand how it works. That's a dangerous combination. Teachers and college professors everywhere are bemoaning that more and more kids are using AI to write their papers and do their homework without ever learning the material.

And, of course, the even bigger elephant in the room is climate change, which will play a major role in our children's lives as they grow into adults. Parents are desperate for some way to help their kids understand how big of a deal it is. A report from This Is Planet Ed states that "Nearly 70% of parents and caregivers surveyed in 2022 believed children’s media should include age-appropriate information about climate, and 74% agreed that children’s media should include climate solutions," but that less than 5% of the most popular children's shows and family films have any content or themes related to climate change.

(I'd be curious how much of the heavy lifting the GOAT Captain Planet is still doing!)


What's not being talked about enough — unless you're a McStuffins-head like my family is — is the relationship between AI and climate change.

In short: It's not good! AI seems like a quick and fun thing we can access on our phones and computers, but the massive data centers that perform the calculations behind this 'intelligence' consume staggering amounts of power and water while generating heat and harmful emissions. Promises of more energy-efficient AI models, like DeepSeek, are murky at best.

Scientific American even writes that the environmental impact of AI goes far beyond its emissions and energy usage. What is it being used for? In many cases, to make things faster and bigger, including industries that can harm the Earth like logging, drilling, and fast fashion.

I was so impressed that a show popular with children as young as 2 could tackle such an urgent and important topic.

Watching it together opened doors for us to begin age-appropriate conversations with both of our kids about AI, climate change, and how the two are related. Conversations that, I'm sure, we'll be continuing to have and build on for years to come.

To be fair, Artificial Intelligence can do some good things. You see this play out on the show. Initially, it does help the hospital treat more toys! And in the real world, for all the negative environmental effects, there are people out there trying to use AI to monitor emissions and create more energy-efficient practices that might ultimately help the planet.

In the end, Doc McStuffins and her friends decide to shut down the fancy automated machines at the hospital. Not only are they hurting the toys that live in the Toyarctic, they just aren't as good as the real thing. They don't always know the right questions to ask, they don't make the patients feel safe or cared for, and of course, their machine-cuddles don't come with any real warmth or love.

If nothing else, I hope that's the message that sticks with my kids long after they've outgrown this show.


Top iPad app takes a stand for human creativity, flat-out refusing to offer generative AI tools

The CEO and co-founder of Procreate made a blunt, powerful statement in a viral video.

The use of generative AI tools in art software is up for debate.

Whether we like it or not, artificial intelligence (AI) has arrived in our lives. Once only the subject of sci-fi films and tech geeks' imaginations, various iterations of AI technology are now in use across nearly every industry.

Depending on your beliefs about and understanding of AI, that's either a good or a bad thing. At this point, most people seem to recognize and acknowledge that there are some profoundly helpful uses for AI, while also feeling trepidation about the reliability of popular large language models such as ChatGPT, Gemini, Perplexity and other AI tools many of us have begun using regularly.

One realm that has seen significant backlash against AI is art. It's one thing for a machine to do complex equations or write code or analyze medical images or defuse a bomb. It's another to replace human creativity with AI, which is why Procreate co-founder and CEO James Cuda is saying "no" to incorporating AI tools into the company's art software.


Procreate is a popular iPad app with the slogan "Art is for Everyone," which allows users to sketch, paint, illustrate and animate. In a video shared on X, Cuda was blunt. "I really f__king hate generative AI," he said in a post captioned, "We're never going there. Creativity is made, not generated."

"I don't like what's happening in the industry, and I don't like what it's doing to artists," he said. "We're not going to be introducing any generative AI into our products. Our products are always designed and developed with the idea that a human will be creating something."

Watch:

"We believe we're on the right path supporting human creativity," he concluded. Cuda's announcement comes as Procreate's biggest competitor, Adobe, continues to build generative AI tools into its products.

A statement on the Procreate website explains further:

"Generative AI is ripping the humanity out of things. Built on a foundation of theft, the technology is steering us toward a barren future. We think machine learning is a compelling technology with a lot of merit, but the path generative AI is on is wrong for us.

We're here for the humans. We're not chasing a technology that is a moral threat to our greatest jewel: human creativity. In this technological rush, this might make us an exception or seem at risk of being left behind. But we see this road less travelled as the more exciting and fruitful one for our community."

Generative AI has been labeled as theft due to the AI models using real art from real artists to generate images. Many artists celebrated Cuda's announcement, praising Procreate for supporting and empowering artists. Others said the company was being overly sentimental and out of touch with the times.

It's important to note that Cuda specifically refers to "generative AI" which does not mean all AI. Artificial intelligence isn't just one thing—there are various AI models, some of which are used for predictions and analysis and others that are used to "create." It's the generative AI used to create that has artists, musicians, writers and other creative professionals up in arms.

The question of what "counts" as art has been debated for centuries, but we've always agreed that art comes from humans. Some see art as the creative expression of the human spirit, which makes machine-created art feel soulless. Easier and more efficient, perhaps, but lacking the intangible, inspiring, intriguing quality of individual human creativity.

As Cuda said, "We don't exactly know where this story ends or where it's going to go." Perhaps resisting generative AI is a losing battle and humans are doomed to be replaced by machines. Maybe AI-generated art will simply make 100% human-created art more valuable and in-demand. Maybe there's another possibility no one has even conceived of yet. However things turn out, it's the real choices real humans make that will determine what direction we will go.


College students use AI to decode ancient scroll burned in Mount Vesuvius

“Some of these texts could completely rewrite the history of key periods of the ancient world."

When Mount Vesuvius erupted in 79 C.E., it buried entire cities in volcanic materials. While Pompeii is the most famous site affected by the natural disaster, the nearby town of Herculaneum was also laid to waste—including over 800 precious scrolls found inside the library of a Herculaneum villa, which were carbonized by the heat, making them impossible to open without destroying their contents.

Which brings us to the Vesuvius Challenge, started by computer scientist Brent Seales and entrepreneurs Nat Friedman and Daniel Gross in March 2023. The contest would award $1 million in prizes to whoever could use machine learning to successfully read from the scrolls without damaging them.

On February 5, the prize-winning team was announced.


The team consisted of three savvy college students—Youssef Nader in Germany, Luke Farritor in the US, and Julian Schilliger in Switzerland—working with each other from across the globe.

Each student had a prior individual accomplishment in the challenge before teaming up. Farritor was the first to decipher a word from the scroll (ΠΟΡΦΥΡΑϹ, or “porphyras,” which means “purple” in ancient Greek), after which Nader was able to read multiple columns from the scroll, and Schilliger created 3D map renderings of the papyrus.

Nader, Farritor and Schilliger eventually combined their talents to train machine-learning algorithms to decipher more than 2,000 characters. Contest organizers had estimated less than a 30% chance that anyone would recover even fewer characters than that.

So, what exactly did the scrolls say? Turns out, the ancient cultures were just as curious about what makes us truly happy in life as we are today.


The translated text, thought to be written by the Epicurean philosopher Philodemus, appears to be a philosophical discussion of pleasure and how it’s affected by things like music and food. It quite possibly “throws shade” at stoicism by calling it “an incomplete philosophy because it has ‘nothing to say about pleasure.’”

“We can’t escape the feeling that the first text we’ve uncovered is a 2,000-year-old blog post about how to enjoy life,” the Vesuvius Challenge website writes.

The first Vesuvius Challenge resulted in 5% of one scroll being read. For 2024, the goalpost has been moved to being able to read 90% of all four scrolls currently scanned, and to lay the foundation to read all 800 scrolls, and possibly other texts found at the Herculaneum library.

“Some of these texts could completely rewrite the history of key periods of the ancient world,” Robert Fowler, a classicist and the chair of the Herculaneum Society, told Bloomberg. “This is the society from which the modern Western world is descended.”

Using artificial intelligence to shape the future has been a prime topic of conversation as of late, but this story is a great example of how AI can give us rare glimpses into the past as well. It's pretty incredible to think about how many ancient mysteries could be solved as technology continues to advance in the years to come.

But no matter how much knowledge we gain, it feels safe to say that pleasure might always be an enigma.

"The Star-Spangled Banner" has never been interpreted quite like this before.

As people worry about whether artificial intelligence (AI) will replace people's jobs, it appears at least one job is safe—the person who puts the closed captioning text on the jumbotron at sports events.

A video shared on X (formerly Twitter) shows what happened at a Portland Trail Blazers basketball game when some kind of automated closed captioning tool misheard the lyrics of "The Star-Spangled Banner." You know, our country's national anthem that pretty much every American knows by heart? And the captions it came up with were hilariously entertaining.

A guy named Brian (@brianonhere) shared the video with the text, "bro im crying lmao. of all the songs to use AI captions on." As the jumbotron captions came on the screen while the national anthem was being sung, this is what people in the crowd saw:


The captions rendered the song's earlier words ("…broad stripes and bright stars, through the perilous fight") as, "STARS. PASS THROUGH THE PAYROLL. BUS FIRE."

Then it continued, changing "O'er the ramparts we watched…" to: "OR THE RIGHT. HART TWEET WOW! TOUCHED WERE SO GALLANTLY STREAMING ME. IT'S RIGHT. THE BOMBS. FIRST EVENING. GAVE PROOF. THROUGH THE NIGHT. RIGHT THAT OUR FLAG WAS STILL THERE."

You might think it was getting better, but oh no, we're not done yet. Literally.

"OH SAY. AIN'T DONE. GUYS HAD STARTED. SUSPECT ANGLE. MADHU." (That's not even a word!) "LAY-UP AND, UH, THE FRIEND."

Unfortunately, we don't know how the caption interpreted the final line, "and the home of the brave," but we probably don't want to know.

Watch:

People on the r/ripcity subreddit for fans of the Portland Trail Blazers shared their experience witnessing the closed caption fail:

"Captions were great tonight."

"I never laughed so hard during the national anthem. That sh-t was bonkers."

"That had to be on purpose, right? The entire section I was in was busting up reading them. Either it was on purpose by some funny intern or we have nothing to worry about with A.I. taking over any jobs at the Rose Garden."

Seriously, it's not likely the machines are going to take over any time soon if they can't even get the national anthem lyrics right. They do provide some fabulous entertainment in the meantime, though.

Thankfully, for the deaf people who rely on closed captioning to know what's going on, the song is well known enough to recognize that the words on the screen were a total tech fail. Bring back the human typing in the words, folks! Some things machines just aren't meant to do—at least not yet.