Scientists develop A.I. to predict why children do badly at school

Researchers have used machine learning to more accurately identify children with learning difficulties who, until now, have either been misdiagnosed or have slipped under the radar of education authorities.

Scientists at the Medical Research Council (MRC) Cognition and Brain Sciences Unit at the University of Cambridge said by using data from hundreds of children who struggle at school, they were able to identify new clusters of learning difficulties that did not match the previous diagnoses some children had been given.

The study, published in Developmental Science, recruited 550 children who were referred to a clinic – the Centre for Attention Learning and Memory – because they were experiencing problems at school.

The team trained a machine learning algorithm on a range of cognitive testing data from each child, including measures of listening skills, spatial reasoning, problem-solving, vocabulary, and memory. Based on this data, the algorithm suggested that the children best fitted into four clusters of difficulties.
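The clustering approach described above can be sketched in miniature. The example below is purely illustrative: the article does not name the algorithm the Cambridge team used, so a plain k-means is shown as a stand-in, the five cognitive measures are taken from the article, and all scores are synthetic random data, not the study's.

```python
import numpy as np

# Synthetic scores for 550 children on five cognitive measures
# (listening, spatial reasoning, problem-solving, vocabulary, memory).
# The sample size and measures follow the article; the data does not.
rng = np.random.default_rng(0)
scores = rng.normal(loc=100, scale=15, size=(550, 5))

def kmeans(X, k=4, iters=50, seed=0):
    """Plain k-means: assign each row of X to one of k clusters."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Distance from every child's profile to every centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of the profiles assigned to it.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(scores, k=4)
```

Each child ends up with a cluster label, so the groupings emerge from the test profiles themselves rather than from a prior diagnosis, which is the key contrast the researchers draw with diagnosis-first studies.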

Scientists said that previous research into learning difficulties has focused on children who had already been given a particular diagnosis, such as attention deficit hyperactivity disorder (ADHD), an autism spectrum disorder, or dyslexia.

Using artificial intelligence, they were able to include children with all difficulties regardless of diagnosis, and better capture the range of difficulties within, and the overlap between, different diagnostic categories.

An important landmark

According to Dr Duncan Astle from the MRC Cognition and Brain Sciences Unit at the University of Cambridge, who led the study, receiving a diagnosis is a critical moment for parents of children with learning difficulties, because it recognises their problems and opens up access to support.

However, in some cases that diagnosis and support fail to capture the specific challenges that the children face.

“Parents and professionals working with these children every day see that neat labels don’t capture their individual difficulties – for example one child’s ADHD is often not like another child’s,” said Dr Astle.

He explained that the study was the first of its kind to apply machine learning to a broad spectrum of hundreds of struggling learners.

In previous research, children’s poor reading skills have been linked to difficulty in processing the sounds in words. “But by looking at children with a broad range of difficulties, we found – unexpectedly – that many children who have difficulties processing sounds in words don’t just have problems with reading, they also have problems with maths,” he said.

“As researchers studying learning difficulties, we need to move beyond the diagnostic label and we hope this study will assist with developing better interventions that more specifically target children’s individual cognitive difficulties.”

Internet of Business says

We are at the beginning of AI and machine learning’s journey deep into the worlds of health and social care. Already, by linking the technology with wearable devices, for example, AI has been able to identify common ailments such as heart problems and diabetes, and it has promising applications in cancer research.

It may also be that, as in this educational research, AI offers a new approach to medical diagnosis by identifying new or commonly misdiagnosed problems, opening up a world of more personalised care.

Plus: AI could lead to discrimination and human rights abuses

In further AI news, a report by the University of Toronto’s Citizen Lab has uncovered less positive news about how the technology is being applied.

It found that the Canadian government’s use of AI to process immigrants’ files could lead to discrimination as well as to privacy and human rights abuses.

The report said that automated decisions involving immigration could have “life-and-death ramifications” for immigrants and refugees.

“The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues,” said the report’s authors.

They called for greater transparency and oversight on the government’s use of AI and predictive analytics.