AI is coming for our babies — putting their brains at risk
A baby giggles in her crib, eyes locked on the voice calling her name.
She babbles, and a soft plastic robot with blinking eyes responds instantly — mimicking her sounds, reflecting her emotions, keeping perfect time.
She smiles.
And keeps smiling.
Except there’s no one there.
This isn’t a scene from “Black Mirror,” but the next frontier of artificial intelligence. And we’re sleepwalking into it.
As policymakers weigh age-verification laws and schools debate tech rules, we’re ignoring the very start of the developmental timeline.
Meanwhile, the AI market is sprinting toward our youngest citizens.
OpenAI recently announced a partnership with Mattel to bring “age-appropriate” AI toys to market, and xAI has introduced Baby Grok, a chatbot for 6-year-olds. Infant-directed versions may be next.
From birth, the human brain is primed for social interaction.
Thousands of daily exchanges with caregivers shape lifelong systems for bonding, language, emotional regulation and cognitive growth.
These emotionally rich interactions are not optional; they are the biological foundation of learning.
In the caregiver-infant dyad — whether that caregiver is a parent, grandparent or childcare provider — babies develop in a cradle of complex duets linking touch, eye contact, words and coos.
Beneath it all, neurons fire and oxytocin receptors activate, forming crucial connections.
These interactions literally build the brain.
If we derail them, we risk disrupting the foundation of human potential.
AI bots may sound and act human.
But they aren’t.
And baby brains might not know the difference.
Babies arrive with an innate drive to engage socially — recognizing their mother’s voice as early as the third trimester and seeking connection immediately after birth.
Generative AI agents are built on large language models, and while they may simulate emotional content, they lack the physical and physiological hallmarks of human beings.
Claude can’t touch. Gemini doesn’t have oxytocin receptors.
And while a chatbot’s pitch may mimic a parent’s voice, it lacks the multisensory input babies’ brains are wired to expect — from scent and temperature to micro-expressions.
We have no idea how emotionally hollow interactions with bots will shape a developing brain.
But we’re about to find out.
One key concern is timing.
Temporal contingency — the back-and-forth rhythm of communication — is vital to development.
A caregiver smiles, and the baby smiles back. As the rhythm evolves, children learn how to navigate language and social interaction.
AI interactions, however, often rely on mechanical precision or optimized rhythms, not the “just-right” variability infants need.
Will children raised with bots be equipped to handle the unpredictability of real human relationships?
Another concern is emotional complexity.
Human caregivers offer emotional depth.
They fumble.
They recover.
Children need to encounter difficult emotions — their own and those of others — to learn how to manage them.
This early emotional scaffolding predicts later academic and social success.
A chatbot, endlessly patient and consistent, doesn’t offer that kind of education.
I am not opposed to AI, which, used wisely, holds real promise for early diagnosis of developmental delays, personalized learning, reduced burdens for parents and educators, and more.
But letting our youngest children form attachments to emotionally vacant machines — before we can even comprehend the consequences — is a risk society cannot afford to take.
We need immediate transparency standards and dedicated research into AI’s impact on infants and toddlers before these rapidly developing products take hold.
Child-development organizations like Children and Screens have long called for evidence-based guardrails to protect kids’ health in the digital age.
Now governments must draw clear boundaries around AI use with children under age 3, who are undergoing the most rapid and sensitive brain development of their lifetimes.
And no AI-powered product should be allowed to enter a nursery or daycare without rigorous safety testing and oversight.
This must stop before it starts.
One of the most powerful examples of the critical need for early relationships came not from a lab, but from tragedy: Romania’s orphan crisis, where thousands of children were raised in institutions without stable caregivers.
The result?
Long-term deficits in cognitive, social and emotional development.
Food and shelter weren’t enough.
Nurture mattered.
Earlier this month, I joined more than 150 scientists to issue a global warning: AI threatens the fundamental social processes that shape healthy humans.
And as the United Nations this week launches its first-ever international panel on AI governance, we must ensure babies aren’t left out of the conversation.
Babies don’t need better bots; they need better boundaries.
Let’s draw the line before it’s too late.
Kathy Hirsh-Pasek of the Brookings Institution is a professor of psychology at Temple University.