Photo: Jenny Donoghue

Women in AI & IoT: Why it’s vital to Re•Work the gender balance

The third Re•Work Women in AI dinner covered algorithmic fairness, advanced image recognition, and the challenges and opportunities of machine learning in wearables. Joanna Goodman, a regular IT correspondent for The Guardian, was there for Internet of Business and explains why such events are so important for our industry.

Re•Work’s third Women in AI dinner was held in London on 20 February. This regular networking event celebrates women in artificial intelligence and showcases their achievements. But although the speakers are women, these are not women-only events. This is important, because diversity is about inclusivity, not segregation.

There are not enough women working in tech, let alone in AI. In the UK, for example, 83 percent of people working in science, technology, engineering and maths (STEM) careers are men, according to figures presented at UK Robotics Week 2017. Anecdotally, it has been reported that fewer than ten percent of coders are women, despite Ada Lovelace being widely considered to be the first computer programmer. (For more on these issues, see Internet of Business says at the foot of this article.)

Re•Work is attempting to improve the gender balance in the burgeoning AI community by organising a series of dinners that feature female expert speakers who talk about their work at the cutting edge of emerging technology.

Attendees are from tech giants, corporates, start-ups, and academic research institutions. Last week’s event was sponsored by Royal Bank of Canada and Borealis AI, RBC’s Institute for Research, which blends academic research into machine learning with practical applications.

Speakers were selected to reflect key themes in AI. Re•Work founder Nikita Johnson and her team are careful not to dwell on traditional ‘women’s challenges’. Instead, Re•Work is focusing sharply on technology and research, showcasing women in AI in a way that overrides traditional preconceptions.

However, the challenge is that bias and preconception are deeply ingrained in society, which means they are also ingrained in the data that AI applications work with. Accordingly, the first presentation by Silvia Chiappa, senior research scientist at DeepMind, was about innovating towards algorithmic fairness. But why is this important?

Curing the bias virus

Machine learning is already used to make and support decisions or processes that affect people’s lives: in hiring, education, lending, and in policing and law, where judges and parole officers use algorithms to predict the likelihood that a defendant or prisoner will reoffend.

It is therefore critical to ensure that the algorithms are not biased toward or against individuals from particular social or racial groups, as was found with the COMPAS system in the US, which exhibited bias against black Americans.

The big challenge is that it is impossible to take the bias out of historical and precedent data (which reflects the preconceptions that existed in society at the time), so DeepMind is developing ways to increase algorithmic fairness.

In AI terms, it is ineffective to disregard sensitive factors like race or gender, or give them a negative weighting, because this can have a negative impact on system performance. And it may not increase fairness because these factors are correlated with other attributes. For example, there is commonly a positive correlation between race and neighbourhood.
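This proxy effect is easy to demonstrate with a toy simulation (not from the talk; an illustrative sketch only, with invented numbers). Even when a model is never shown the sensitive attribute, a decision rule based on a correlated proxy such as neighbourhood still produces sharply different outcomes for each group:

```python
import random

random.seed(0)

# Hypothetical population: group membership is never given to the
# decision rule, but a correlated proxy (neighbourhood) is.
data = []
for _ in range(10_000):
    group = random.random() < 0.5
    # Neighbourhood is strongly correlated with group membership
    # (matches the group 90 percent of the time in this sketch).
    neighbourhood = group if random.random() < 0.9 else not group
    data.append((group, neighbourhood))

def predict(neighbourhood):
    # A "group-blind" rule: approve anyone outside the flagged
    # neighbourhood. Group is deliberately not an input.
    return not neighbourhood

# Measure approval rates per group.
approval = {True: [], False: []}
for group, nb in data:
    approval[group].append(predict(nb))

for g in (False, True):
    rate = sum(approval[g]) / len(approval[g])
    print(f"group={g}: approval rate {rate:.2f}")
```

Despite the rule never seeing the group label, the two groups end up with very different approval rates, because the proxy leaks the sensitive information. This is why simply deleting sensitive attributes does not make a system fair.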

This underlines the importance of contextualising problems: identifying conscious and unconscious bias and looking for solutions. In other words, we can’t eliminate biases, but we can use them to work towards a fairer society, said Chiappa.

The second presentation came from Cecilia Mascolo, professor of mobile systems at the University of Cambridge and The Alan Turing Institute. Her talk covered potential applications for built-in computational units on smartphones and wearables, particularly in developing countries that may have limited or slow access to cloud platforms.

These include using the smartphone’s built-in AI capabilities to support healthcare applications, such as the use of voice recognition for mood monitoring and early diagnosis – for example, of Alzheimer’s disease.

However, constant monitoring and the collection of detailed location data have privacy implications. These are analogous to the side effects of a drug, suggested Mascolo, who added that more localised computations could reduce privacy concerns while maintaining the benefits of personalised healthcare monitoring.

Self-diagnosis in wind turbines

The third and final presentation was from Fujitsu’s lead deal architect, Marian Nicholson, who discussed the application of deep learning in advanced image recognition. Examples include teaching wind turbines to recognise a defective blade.

Photos: Jenny Donoghue

Fujitsu’s work starts from the premise that humans are predominantly visual conceptualisers – for example, babies recognise images and relate them to what’s happening around them. Today, image recognition is particularly important for autonomous and semi-autonomous vehicles, delivery drones, and so on.

Nicholson referred to recent headlines about the dangers of AI and highlighted the need for organisations to choose to use technology for good. Fujitsu’s own mission is to build technology that will benefit society, she said.


But for society to accept AI, there must be transparency about the data, how the system works, and – critically – why it was designed, along with the ability to identify and minimise bias. The power and potential of AI are balanced by our responsibility to ensure that it is used in a safe and fair way, she said.

The speeches and the roundtable discussions demonstrated the value of the work being undertaken by women in AI, and the importance of encouraging more women to work in this space.

• Last week, I also attended Professor Alan Winfield’s lecture, AI Futures: The Societal Impact of Robotics and Artificial Intelligence, at the Ismaili Centre in London. Professor Winfield has increased the percentage of women in his own team at the Bristol Robotics Lab from zero to 40 percent. He said that the secret of his success in increasing gender diversity is to take on more senior women. Others will then follow.

The next Women in AI dinner in London will be on 11 July 2018.

• Joanna Goodman is a freelance journalist who writes about business and technology for national publications, including The Guardian newspaper and the Law Society Gazette, where she is IT columnist. Her book Robots in Law: How Artificial Intelligence is Transforming Legal Services was published in 2016.

Internet of Business says

We should acknowledge that although there is an appalling gender imbalance in technology careers, a growing number of senior industry figures are women, including IBM CEO Virginia Rometty, who has presided over IBM’s refocusing on cognitive services, HPE CEO Meg Whitman, and many more.

In the UK, along with the experts mentioned in Joanna’s article, prominent women include Dr Joanna Bryson, Associate Professor in the Department of Computing at the University of Bath, Lucy Martin, head of robotics at the Engineering and Physical Sciences Research Council (EPSRC), Prof Dr Kerstin Dautenhahn, Research Professor of Artificial Intelligence in the School of Computer Science at the University of Hertfordshire, and Dr Sue Black, OBE, who has done so much excellent work on stressing the importance of women in IT.

Yet these and other world-leading women in their fields are in a tiny minority. The figures speak for themselves: technology, coding, AI, engineering, and science are overwhelmingly dominated by men – male voices, male panellists at industry conferences, and so on. Meanwhile, only one country in the world – Rwanda – has 50 percent or more female representation in parliament. These are among the challenges that women face.

At school, girls need more positive role models, and women need a more prominent platform, if girls are to want to pursue STEM careers when they leave school. For this reason, Internet of Business is committed to featuring women in IT as often as possible in stories and on the home page, and to creating as diverse and representative a pool of writers as possible.

While diversity is about inclusion, not segregation, the risk of this imbalance is very real, particularly in AI research, because these technologies need to reflect all of human society and not simply produce facsimiles of centuries-old social problems.

The more that these technologies are developed in closed groups of (usually white) males, the more unconscious bias is likely to be reproduced in systems, including in so-called black-box solutions. At last year’s World Economic Forum in Davos, Joichi Ito, head of MIT’s Media Lab, made this very observation about his own (male) students, whom he described as “oddballs”.

Internet of Business strongly supports the work of Re•Work and other organisations to redress the balance, by focusing on expertise and insight, and leading by positive example.