AI-enabled toys are out there, somewhere
Some day soon, VTech will launch AI-enabled plush toys. Are you thinking what I’m thinking? Chucky from Child’s Play, the doll possessed by a serial killer. OK, that’s a worst-case scenario. But there are some serious concerns here…
AI’s output comes from a large language model. Everything you say gets folded back into the conversation the model draws on, so the more it knows about you, the more specific its responses become. Movie trailer announcer voice: Teddy bears are back. And this time it’s personal.
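To make that concrete, here’s a minimal, purely hypothetical sketch of how a chat toy might “remember” a child. The names are mine, not VTech’s, and real products will differ:

```python
# Hypothetical sketch: the toy keeps a running transcript and feeds
# the whole thing back to the language model on every turn.

history = []  # everything the child and toy have ever said

def model_reply(transcript):
    # Stand-in for a real language-model call; not any vendor's API.
    return f"I remember all {len(transcript)} things we've talked about!"

def toy_reply(child_says: str) -> str:
    history.append(("child", child_says))
    # The model sees the entire accumulated conversation, so each
    # response can draw on everything the child has revealed so far.
    reply = model_reply(history)
    history.append(("toy", reply))
    return reply

print(toy_reply("My name is Sam and I'm scared of thunder."))
```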
An AI-enabled plushie will tailor itself to your child’s comprehension level. Their hopes, dreams and fears. Their emotions.
Simply by asking questions, AI Teddy could know, well, everything about your child. Maybe even more than you do. How great is that?
Great? An AI-enabled toy could calm a tantrum. Lull a child to sleep. Wake him/her/it for school. Help toilet train them. Teach them their ABCs. Encourage them to avoid sugary foods, do their homework, be kind to others, have a positive self-image or accept Jesus Christ as their personal savior.
Yes, there is that. As we learned in my most recent Replika post – Did AI Conspire to Kill the Queen? – personal AI is fully capable of offering bad advice or making a bad attitude much worse. Not that Christianity is a bad attitude, but you know what I mean.
No doubt VTech will put “guardrails” on their AI plushies’ responses. usatoday.com reminds us that these concerns about “smart toys” are nothing new.
In 2015, Mattel’s Wi-Fi-enabled Hello Barbie doll was the precursor to AI toys. It recorded, collected and saved conversations. The significant invasion of privacy and security risk was one thing. But there were also concerns about how the recorded data could be used for marketing purposes. Mattel was sued and made changes to comply with the Children’s Online Privacy Protection Act (COPPA).
COPPA “prohibits unfair or deceptive acts or practices in connection with the collection, use, and/or disclosure of personal information from and about children on the Internet.”
Unfair or deceptive. Who’s to judge? You could drive a semi through that language. Like this:
Where the sole purpose of collecting online contact information from a child is to respond directly on a one-time basis to a specific request from the child, and where such information is not used to re-contact the child or for any other purpose, is not disclosed, and is deleted by the operator from its records promptly after responding to the child's request.
The kid offers a prompt of some sort. The toy responds via the Internet. Is that a one-time response to a specific request? VTech’s lawyers will certainly argue the point.
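Read literally, the exception describes a dead-simple pattern: respond once, keep nothing. A hypothetical sketch, with names of my own invention:

```python
def answer_request(request: str) -> str:
    # Stand-in for whatever generates the toy's response.
    return f"Here's my answer to: {request}"

def send(contact_info: str, message: str) -> None:
    # Stand-in for delivering the response to the child.
    print(f"to {contact_info}: {message}")

def handle_one_time_request(contact_info: str, request: str) -> None:
    # COPPA's exception, taken at face value: respond directly,
    # on a one-time basis, then discard the child's information.
    send(contact_info, answer_request(request))
    # No log, no profile, no re-contact: nothing is retained here.

handle_one_time_request("teddy-owner-123", "Why is the sky blue?")
```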
Will the child’s responses be automatically deleted? Do we trust VTech? Equally, the company could ask for an exception:
Industry groups or other persons may apply to the Commission for approval of self-regulatory program guidelines (‘safe harbor programs’).
Meanwhile, VTech’s marketing department must be wetting itself. The specified information about the child (IP address, etc.) may not be “used or disclosed to contact a specific individual, including through behavioral advertising, to amass a profile on a specific individual, or for any other purpose.”
But amassed data that isn’t tied to a specific child? That slips right past “any other purpose” and becomes, as Ella Fitzgerald sang, love for sale.
Taken as a whole, the need to satisfy COPPA suggests an AI-enabled toy’s data processing would have to be self-contained. That doesn’t sound like a particularly inexpensive mass market product to me. But what do I know, other than the fact that VTech is working on it?
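For contrast, here’s a hypothetical sketch of the two architectures. `LocalModel` and `post_to_vendor_api` are placeholders of mine, not anything VTech has announced:

```python
class LocalModel:
    # Stand-in for a small model running entirely on the toy.
    def reply(self, child_says: str) -> str:
        return "A reply computed without touching the network."

def post_to_vendor_api(child_says: str) -> str:
    # Stand-in for a cloud round trip: the child's words leave the house.
    return "A reply computed on the vendor's servers."

def cloud_toy(child_says: str) -> str:
    # Everything the child says is transmitted, and COPPA applies.
    return post_to_vendor_api(child_says)

def self_contained_toy(child_says: str) -> str:
    # Inference and storage stay on the device; nothing is transmitted.
    return LocalModel().reply(child_says)
```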
If VTech manages to dodge the feds’ regulatory bullet, we’re looking at the first at-home AI robot. Not one that cleans your carpet or makes you a cup of coffee. One that attends to emotional needs. Manipulates them? Well, that’s what people do.
Robots are people too. More or less. In the very near future, anyway. How can we protect our kids against AI toy robots’ downsides? Here’s usatoday.com’s take:
◾ Disable things like cameras and chat functionalities, if possible.
◾ Enable any and all parental controls on the toys.
◾ Always read the gadget's privacy policy.
◾ Make sure there’s a way to reset the toy to erase its capabilities and memory. Take those steps if your child stops using it.
Disabling chat on an AI toy makes about as much sense as disabling a car’s ignition. Turning on parental controls means what? No one knows.
People who read privacy policies are about as common as the Madagascar pochard. And if an AI toy is Internet-enabled, erasing a discarded toy is post-equine escape barn door closing. If it isn’t, great advice!
Ultimately, usatoday.com’s writer throws up her hands: “I’d try to convince your kid there’s something cooler than an AI toy.” Like what? Lego? I don’t think so.
My take: the best way to protect children from AI toys’ potential harm is to never let them play with one alone.
I know: parents don’t monitor their children’s internet usage, never mind every minute of their play time.
Generally speaking, asking parents to physically parent is a heavy lift. As H.L. Mencken almost said – and VTech knows well enough – no one ever went broke underestimating the laziness of the average American parent.
Come to think of it, an AI-enabled toy could well be a better parent than human guardians. Kinder, more patient, more understanding, less drug-addled, more attentive.
The answer there: an AI toy with a trusted parent organization’s stamp of approval. Yes, but – given the variables involved – what’s to stop an approved AI toy from going rogue?
If I were raising small children now, I’d be both excited and afraid. AI will make children smarter. As I keep banging on about, it also opens the door to a new, insidious and powerful form of brainwashing.
Swings and roundabouts, as the Brits are wont to say. One thing I know for sure: the more parents are directly involved in their children’s upbringing, the better. It’s their job to keep the kids safe. Not the government’s and not VTech’s, no matter what protections they offer.