AI technologist Kriti Sharma has the ambition of bringing greater diversity and accountability to the algorithms that guide our decisions and sift through our data.
Since starting at UK software company Sage, she has been working on a gender-neutral virtual assistant, Pegg, which is designed to manage customers’ business finances. She has also published a set of core ethical principles for designing AI systems.
Sharma, 29, is now VP of AI at the Sage Group, and is one of a growing number of women with high-profile roles in the artificial intelligence sector.
For example, the UK’s new Office for AI is jointly run by Gila Sacks, director of digital and technology policy at the Department for Digital, Culture, Media, and Sport (DCMS), and Dr Rannia Leontaridi, director of Business Growth at the Department for Business, Energy and Industrial Strategy (BEIS).
Read more: Top priest shares ‘The Ten Commandments of A.I.’ for ethical computing
AI and bias reinforcement
Driving Sharma’s work at Sage is her fear that AI and the fourth industrial revolution will entrench inequality rather than provide solutions to it. Instead of emerging technologies easing problems such as gender, race, and age inequality, she believes that they risk perpetuating them by cementing biases that already exist in human society.
• This issue is explored in this external report by Internet of Business editor Chris Middleton.
Speaking to Middleton last year at the Rise of the Machines summit in London, Sharma described herself jokingly as “a token millennial” who had been brought into Sage to shake things up. She explained her belief that the technology industry’s efforts to create human-like software are a strategic error. Instead, AI should “embrace its botness”, she said.
Sharma went on to make the point that many domestic AIs tend to be given feminine personas with female voices, and are designed to respond to routine commands. Meanwhile, some industry-specific systems – in legal services and banking, for example – are often designed to be ‘male’. In this way, she suggested, we risk “projecting workplace stereotypes onto AI” and, in doing so, reinforcing them.
Sharma expanded on that view in an interview this week. “Despite the common public perception that algorithms aren’t biased like humans, in reality, they are learning racist and sexist behaviour from existing data and the bias of their creators. AI is even reinforcing human stereotypes,” she told PRI.
• The interview coincided with the establishment in the UK of the new Ada Lovelace Institute, which is intended to create an effective framework for ethical practice in AI. It is named after the 19th-century mathematician Ada Lovelace, who is widely regarded as the world’s first computer programmer.
Sharma shared an example of recent research from Boston University, in which technologists developed an AI program using input from Google News. When the system was asked, “Man is to computer programmer as woman is to X,” it responded “homemaker.”
Unchecked bias like this, she said, both reflects the mass of data that human society has stored to date and highlights the care that programmers need to take when designing software intended for everyone.
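To see how such bias can be surfaced in practice, consider the sketch below. It is an illustrative example only, not the Boston University team’s actual code: it assumes the gensim library and its downloadable copy of the Google News word2vec vectors, and solves the same analogy with simple vector arithmetic.

```python
# Illustrative sketch: surfacing analogy bias in pretrained word vectors.
# Assumes gensim is installed; "word2vec-google-news-300" is the public
# Google News embedding (a large download on first use).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# "Man is to computer programmer as woman is to X", solved as
# vector arithmetic: computer_programmer - man + woman.
results = vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=3,
)
for word, score in results:
    print(f"{word}\t{score:.3f}")
# The study reported "homemaker" among the top answers, showing how
# occupational stereotypes in news text are encoded in the embeddings.
```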
Read more: AI regulation & ethics: How to build more human-focused AI
Developing a gender-neutral virtual assistant
Sharma’s gender-neutral AI assistant Pegg symbolises her attempt to ensure that technology helps to tackle deeply embedded social and cultural stereotypes. Unlike the domesticated Amazon Alexa or the down-to-business IBM Watson, Pegg is designed to be a sidekick without obvious stereotypes, she explained.
“Pegg is proud of being a bot and does not pretend to be human. Initially, there was a lack of awareness within the company and the outside world of stereotypes in AI, but I found it very encouraging that I got a very welcoming response to my efforts.”
Read more: IBM launches new Watson Assistant AI for connected enterprises
Accountability and transparency
According to Sharma, the two key components in developing AIs that reflect social diversity, rather than existing prejudices, are accountability and transparency. Only by understanding the full end-to-end development process that any artificial system goes through can we check for inherent bias and hold its designers accountable.
“AI needs to reflect the diversity of its users,” she told the Financial Times earlier this month. This means using data sets that are as diverse as possible and making software that’s applicable to everyone.
For example, the problem of racial bias has been identified many times across a whole range of AI and other systems: imaging technologies have repeatedly been optimised to identify light skin tones, while an MIT facial recognition system was unable to identify a black woman because its training data had been compiled by, and drawn from, a closed group of young white males.
The latter example was shared by MIT Media Lab chief Joichi Ito at the 2017 World Economic Forum in Davos, where he called his own students “oddballs”.
Ito suggested that many coders prefer the binary world of computers to the messier and more complex world of human beings. Most coders are young, white males, he added, and this lack of diversity in the tech community is often reflected in the systems that developers design, test, and release.
Some AI systems have also been shown to be better at identifying men than women – again, because of biases in the training data.
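One concrete form that the transparency Sharma calls for can take is disaggregated evaluation: reporting a model’s error rates per demographic group rather than as a single headline figure. The snippet below is a hypothetical sketch with invented data, not the MIT audit itself, but it shows how an overall accuracy number can hide a large gap between groups.

```python
# Hypothetical sketch: disaggregating accuracy by demographic group.
# y_true, y_pred, and the group labels are invented example data.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1])
groups = np.array(["A"] * 6 + ["B"] * 6)  # one label per sample

for g in np.unique(groups):
    mask = groups == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: accuracy {acc:.2f} over {mask.sum()} samples")

# Overall accuracy looks respectable (0.67 here), but group A scores
# 0.83 while group B scores only 0.50: the kind of gap that a single
# headline number conceals.
print(f"overall: {(y_true == y_pred).mean():.2f}")
```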
Despite all this, Sharma remains positive. “AI is a fascinating tool to create equality in the world,” she said. “When I’ve worked with people from diverse backgrounds, that’s where we’ve had the most impact.
“AI needs to be more open, less elite, with people from all kinds of backgrounds: creatives, technologists, and people who understand social policy… getting together to solve real-world problems.”
Plus: The five pillars of AI
In related news, analyst Ray Wang of Constellation Research today published an opinion on AI ethics, in which he suggested that there should be five pillars of development.
Wang said that AI should be:

• Transparent, so that algorithms, attributes, and correlations are open to inspection by all participants;
• Explainable, so that humans can understand how AI systems reach their contextual decisions;
• Reversible, so that organisations can reverse what a system has learned and adjust as needed;
• Trainable, so that systems can learn from humans and other systems;
• Human-led, so that all decisions begin and end with “human decision points”.
But he added, “Prospects of universal AI ethics seem slim. However, the five design pillars will serve organisations well beyond social fads and fears.”
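As a rough illustration of what the “human-led” pillar might look like in practice, the sketch below (our own invention, not Wang’s) keeps the model’s output as a recommendation plus a rationale, and nothing takes effect until a human reviewer signs off.

```python
# Illustrative sketch of a "human-led" decision point: the model only
# recommends; a person makes the final call. All names are invented.
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    recommendation: str   # what the model suggests
    rationale: str        # the "explainable" part: why it suggests it
    approved: bool = False

def model_recommend(subject: str) -> Decision:
    # Stand-in for a real model; returns a suggestion plus a rationale.
    return Decision(subject, "approve", "score above agreed threshold")

def human_decide(decision: Decision, reviewer_approves: bool) -> Decision:
    # The human decision point: nothing takes effect without sign-off.
    decision.approved = reviewer_approves
    return decision

pending = model_recommend("application-1042")
print(human_decide(pending, reviewer_approves=True))
```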
Internet of Business says
We salute Sharma’s work and her commitment to both addressing these problems herself and raising awareness of the issues. It’s notable, too, that this was her personal choice, proving that one person can make a big difference if they set out to do so.
The underlying problem is easy to express. It is not necessarily that developers are knowingly biased or prejudiced – it would be a mistake to label the technology community as inherently racist, for example – but that most AI systems rely on human beings to train them with data.
Any existing bias in that data – for example, in legal systems that have exhibited bias against particular ethnic or social groups over decades of case law – will be picked up by the system. Equally, any lack of diversity in the technology community itself – which is known to be overwhelmingly white and male – risks finding its way into the systems that the community designs.
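The mechanism is easy to demonstrate. In the hypothetical sketch below (synthetic data, not a real legal dataset), two groups are equally qualified, but the historical decisions used for training penalised one of them; a model fitted to those decisions learns the penalty along with everything else.

```python
# Hypothetical sketch: a model trained on biased historical outcomes
# reproduces the bias. All data here is synthetic and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)        # protected attribute, 0 or 1
skill = rng.normal(0.0, 1.0, n)      # identical distribution per group

# Historical decisions: skill matters, but group 1 was penalised.
history = (skill - 1.5 * group + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, history)

# The coefficient on the group feature comes out strongly negative:
# the model has encoded the historical penalty, not real qualification.
print(dict(zip(["skill", "group"], model.coef_[0].round(2))))
```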
Last year, UK-RAS, the UK’s umbrella organisation for robotics and AI research, quoted figures suggesting that 83 percent of people working in science, technology, engineering, and maths (STEM) careers are male. Among coders, the split is closer to 90 percent male to 10 percent female, with an even stronger bias towards white employees.
The systems they produce must not be allowed to reflect those biases.
Read more: Women in tech: the £150bn advantage of increasing diversity
Read more: AI regulation & ethics: How to build more human-focused AI
Read more: Women in AI & IoT: Why it’s vital to Re•Work the gender balance
Additional reporting: Chris Middleton.