PITTSBURGH — For the two weeks that the Hackneys’ baby girl lay in a Pittsburgh hospital bed weak from dehydration, her parents rarely left her side, sometimes sleeping on the fold-out sofa in the room.
They stayed with their daughter around the clock when she was moved to a rehab center to regain her strength. Finally, the 8-month-old stopped batting away her bottles and started putting on weight again.
“She was doing well and we started to ask when can she go home,” Lauren Hackney said. “And then from that moment on, at the time, they completely stonewalled us.”
The couple was stunned when child welfare officials showed up, told them they were negligent and took away their daughter.
“They had custody papers and they took her right there and then,” Lauren Hackney recalled. “And we started crying.”

Jessie Wardarski, Associated Press
The Hackneys’ daughter looks at her reflection in a bedroom mirror during a supervised visit Nov. 17 at her parents’ home in Oakdale, Pa. At 8 months old, the young child was taken from her parents’ custody after arriving at the hospital severely dehydrated and malnourished.
More than a year later, their daughter, now 2, remains in foster care and the Hackneys, who have developmental disabilities, struggle to understand how taking their daughter to the hospital when she refused to eat could be seen as so neglectful that she’d need to be taken from her home.
They wonder if an artificial intelligence tool that the Allegheny County Department of Human Services uses to predict which children could be at risk of harm singled them out because of their disabilities.
The U.S. Justice Department is asking the same question. The agency is investigating the county’s child welfare system to determine whether its use of the influential algorithm discriminates against people with disabilities or other protected groups, The Associated Press has learned. Later this month, federal civil rights attorneys will interview the Hackneys and Andrew Hackney’s mother, Cynde Hackney-Fierro, the grandmother said.
Lauren Hackney has attention-deficit hyperactivity disorder that affects her memory, and her husband, Andrew, has a comprehension disorder and nerve damage from a stroke suffered in his 20s. Their baby girl was 7 months old when she began refusing her bottles. Facing a nationwide shortage of formula, they traveled from Pennsylvania to West Virginia looking for some and were forced to change brands. The baby didn’t seem to like it.
Her pediatrician first reassured them that babies can be fickle with feeding and offered ideas to help her get back her appetite, they said.
When she grew lethargic days later, they said, the same doctor told them to take her to the emergency room. The Hackneys believe medical staff alerted child protective services after they showed up with a dehydrated and malnourished baby.

Jessie Wardarski, Associated Press
Andrew and Lauren Hackney play with their 1-year-old daughter during a supervised visit at their apartment Nov. 17 in Oakdale, Pa. The Hackneys’ daughter was taken from their custody at 8 months old when the couple brought her to the children's hospital in Pittsburgh after having difficulty feeding her. They believe hospital staff alerted the Allegheny County Department of Human Services.
That’s when they believe their information was fed into the Allegheny Family Screening Tool, which county officials say is standard procedure for neglect allegations. Soon, a social worker appeared to question them, and their daughter was sent to foster care.
Over the past six years, Allegheny County has served as a real-world laboratory for testing AI-driven child welfare tools that crunch reams of data about local families to try to predict which children are likely to face danger in their homes. Today, child welfare agencies in at least 26 states and Washington, D.C., have considered using algorithmic tools, and jurisdictions in at least 11 have deployed them, according to the American Civil Liberties Union.
The Hackneys’ story — based on interviews, internal emails and legal documents — illustrates the opacity surrounding these algorithms. Even as they fight to regain custody of their daughter, they can’t question the “risk score” Allegheny County’s tool may have assigned to her case because officials won’t disclose it to them. And neither the county nor the people who built the tool have explained which variables may have been used to measure the Hackneys’ abilities as parents.
“It’s like you have an issue with someone who has a disability,” Andrew Hackney said. “In that case … you probably end up going after everyone who has kids and has a disability.”
As part of a yearlong investigation, the AP obtained the data points underpinning several algorithms deployed by child welfare agencies, including some marked “CONFIDENTIAL,” offering rare insight into the mechanics driving these emerging technologies. Among the factors they have used to calculate a family’s risk, whether outright or by proxy: race, poverty rates, disability status and family size. They include whether a mother smoked before she was pregnant and whether a family had previous child abuse or neglect complaints.
What they measure matters. A recent analysis by ACLU researchers found that when Allegheny’s algorithm flagged people who had accessed county mental health and other behavioral health services, it could add as many as three points to a child’s risk score, a significant increase on the tool’s 20-point scale.
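To make those mechanics concrete, here is a minimal, purely hypothetical sketch of how a point-based screening score can turn family data into a number on a 20-point scale. Every feature name and weight below is an assumption for illustration; neither the county nor the developers have disclosed the actual variables or weights inside the Allegheny Family Screening Tool.

```python
# Hypothetical point-based screening score, for illustration only.
# All feature names and weights are assumptions; they are NOT the
# Allegheny Family Screening Tool's actual inputs or coefficients.
from dataclasses import dataclass

@dataclass
class Referral:
    prior_neglect_complaints: int          # count of past referrals (assumed feature)
    used_behavioral_health_services: bool  # county mental/behavioral health record
    household_size: int                    # family size (assumed feature)
    receives_public_benefits: bool         # poverty proxy (assumed feature)

def risk_score(r: Referral) -> int:
    """Map referral attributes to a 1-20 screening score using toy weights."""
    points = 1
    points += min(r.prior_neglect_complaints * 2, 6)  # capped contribution
    if r.used_behavioral_health_services:
        points += 3  # mirrors the up-to-three-point effect the ACLU described
    if r.household_size >= 5:
        points += 2
    if r.receives_public_benefits:
        points += 2
    return min(points, 20)  # scores are reported on a 1-20 scale

# Example: one prior complaint, a county mental health record, family of three on benefits
print(risk_score(Referral(1, True, 3, True)))  # -> 8 under these toy weights
```

Even in a toy version like this, the policy question is visible: seeking help from a county behavioral health program mechanically raises the score, which is precisely the pattern the ACLU researchers flagged.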
Allegheny County spokesman Mark Bertolet declined to address the Hackney case and did not answer detailed questions about the status of the federal probe or critiques of the data powering the tool, including by the ACLU.
“As a matter of policy, we do not comment on lawsuits or legal matters,” Bertolet said in an email.

Jessie Wardarski, Associated Press
Lauren Hackney feeds her 1-year-old daughter chicken and macaroni during a supervised visit Nov. 17 at their apartment in Oakdale, Pa. Lauren and her husband, Andrew, wonder if their daughter’s own disability may have been misunderstood in the child welfare system. The girl was recently diagnosed with a disorder that can make it challenging for her to process her sense of taste, which they now believe likely contributed to her eating issues.
Justice Department spokeswoman Aryele Bradford declined to comment.
The tool’s developers, Rhema Vaithianathan, a professor of health economics at New Zealand’s Auckland University of Technology, and Emily Putnam-Hornstein, a professor at the University of North Carolina at Chapel Hill’s School of Social Work, said that their work is transparent and that they make their models public.
“In each jurisdiction in which a model has been fully implemented we have released a description of fields that were used to build the tool,” they said by email.
The developers have started new projects with child welfare agencies in Northampton County, Pennsylvania, and Arapahoe County, Colorado. The states of California and Pennsylvania, as well as New Zealand and Chile, also asked them to do preliminary work.
Vaithianathan recently advised researchers in Denmark and officials in the United Arab Emirates on technology in child services.
Last year, the U.S. Department of Health and Human Services funded a national study, co-authored by Vaithianathan and Putnam-Hornstein, that concluded that their overall approach in Allegheny could be a model for other places.
HHS’ Administration for Children and Families spokeswoman Debra Johnson declined to say if the Justice Department’s probe would influence her agency’s future support for algorithmic approaches to child welfare.
Especially as budgets tighten, cash-strapped agencies are desperate to focus on children who truly need protection. At a 2021 panel, Putnam-Hornstein acknowledged that Allegheny’s “overall screen-in rate remained totally flat” since the tool had been implemented.

Jessie Wardarski, Associated Press
Andrew Hackney hands his 1-year-old daughter back to the Office of Children, Youth and Families at the end of one of their twice-weekly supervised visits Nov. 17 in Oakdale, Pa. The Hackneys and their lawyer believe the county’s artificial intelligence tool, the Allegheny Family Screening Tool, may have flagged the couple as dangerous because of their disabilities.
Meanwhile, family separation can have lifelong developmental consequences for children.
The Hackneys’ daughter already has been placed in two foster homes and spent more than half of her life away from her parents.
In February, she was diagnosed with a disorder that can disrupt her sense of taste, according to Andrew Hackney’s lawyer, Robin Frank, who added that the girl still struggles to eat, even in foster care.
“I really want to get my kid back,” Andrew Hackney said. “It hurts a lot. You have no idea how bad.”