IoB Insiders Andy Yeoman, CEO at Concirrus, discusses the role of human/machine collaboration in the workplace of the future.
It’s just one month since International Workers’ Day (1 May), and given the recent and continuing hype around technologies such as artificial intelligence (AI) and, in particular, machine learning, we’ve been thinking about the role of these technologies in the workplace and their relationship with the IoT.
In January of this year, 32 employees of Japanese life insurance company, Fukoku Mutual, lost their jobs because their employer had installed a new artificial intelligence system. According to a report in The Guardian, “Fukoku Mutual Life Insurance believes it will increase productivity by 30 percent and see a return on its investment in less than two years. The firm said it would save about 140 million yen (£1 million) a year after the 200 million yen (£1.4 million) AI system is installed this month. Maintaining it will cost about 15 million yen (£100,000) a year.”
This is interesting, and at the same time, unnerving. The system, based on IBM’s Watson Explorer, is said to possess “cognitive technology that can think like a human” and is being used to process structured and unstructured information (think payment data on one hand and images and video footage on the other).
For the 32 employees of Fukoku, this is obviously an unwelcome development, and indeed, some estimates anticipate that up to half of all jobs in Japan will be performed by robots by 2035.
Read more: Morrisons uses artificial intelligence to stock its stores and drive sales
Cognitive potential
Yet in a world of increasing connectivity, where huge amounts of data are being created daily, it is worth bearing in mind the positives. Put simply, human beings can only process so much information before we reach the limits of our cognitive potential. At this point, our brains are unable to draw meaningful conclusions from the information before our eyes, and as a result, we are unable to make intelligent decisions.
For decades now, we have relied on computers to help us make sense of large volumes of information – an obvious example being the calculator. But what happens when an insurance or financial organisation is faced with 3 million lines of unstructured data from the IoT? How to make sense of it all?
Perhaps a better place to start is to ask another question: why would you need to make sense of this data? There may be many answers to this question, but one of the most pertinent has to do with behaviour. As the ‘things’ in our world (cars, buildings, ships, factories, planes and so on) become connected, the data they produce tells us a story about their behaviour: where they go, how fast they travel, how well they perform, any faults that develop.
This behavioural insight is like gold dust. To an operator or owner, it represents performance and bottom-line impact in terms of profit margins and efficiency. At its most basic level, it allows them to ensure that the ‘thing’ is still in working order. To an insurer, it can represent risk, loss and the means of preventing both.
Read more: JGC and NEC collaborate on Industrial IoT and artificial intelligence
Coping with complexity
Yet information of this kind, such as complex time series data, cannot be processed without the help of computers and algorithmic technology. Machine learning algorithms that identify patterns of risk can help to turn this new insight into automated business processes (and the decisions made within them).
If you wish to know when your onshore and offshore aggregated exposure reaches a certain limit within the port area of Tianjin, for example, an automated alert can be set. And there would be no sense in employing a human to carry out that task – as well as being extremely time-consuming, it would also be mind-numbingly boring. Imagine being sent one thousand emails a day and trying to make sense of them all, sorting the valuable ones from those that waste your time. You’d need a filter to help you spend less time sorting through them and more time engaging with the people who sent them. AI provides that filter.
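The exposure alert described above can be sketched in a few lines of code. This is a minimal illustration only – the names (`Asset`, `aggregate_exposure`, `EXPOSURE_LIMIT`) and figures are hypothetical and not drawn from any real product:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    location: str    # e.g. a port area such as Tianjin
    exposure: float  # insured value at risk, in GBP
    offshore: bool

# Illustrative alert threshold
EXPOSURE_LIMIT = 50_000_000

def aggregate_exposure(assets, location):
    """Sum onshore and offshore exposure for all assets in one location."""
    return sum(a.exposure for a in assets if a.location == location)

def check_alert(assets, location, limit=EXPOSURE_LIMIT):
    """Return an alert message if aggregated exposure reaches the limit."""
    total = aggregate_exposure(assets, location)
    if total >= limit:
        return f"ALERT: aggregated exposure in {location} is {total:,.0f}"
    return None

assets = [
    Asset("Hull A", "Tianjin", 30_000_000, offshore=True),
    Asset("Warehouse B", "Tianjin", 25_000_000, offshore=False),
]
print(check_alert(assets, "Tianjin"))
```

In practice such a check would run continuously against live IoT feeds rather than a static list, but the principle is the same: the machine watches the threshold so a person doesn’t have to.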
Similar use cases can be seen elsewhere. In healthcare, neural networks can be trained to diagnose patients using visual information and pattern recognition (for example, eye scan images). In manufacturing, organisations such as Nanotronics Imaging use software and imaging technology to detect nanoscale defects in silicon wafer chips, thus removing defective products from the production line – something that would otherwise require constant human vigilance (as well as the inevitable human error).
Read more: IBM Watson’s artificial intelligence to help solve complex medical cases
Creative control
Yet humans still possess a level of creativity that machines have yet to master, and it is this combination of skills that represents huge future potential. Known as ‘augmented intelligence’, it combines the heavy-lifting processing power of computers (the drudgery) with free-thinking, personable, human creativity. In this scenario, humans are free to pursue other areas of work, spending more time with customers, exploring new areas of business, and more. In the example of insurance, staff can use ML to accurately calculate risk at the click of a button, receiving automated alerts before large losses are likely to occur and engaging in more data-driven conversations (and stronger relationships) with their customers. As Disruption magazine stated this month, AI-powered systems can free up “far more of each day to spend with clients and team members in order to provide better, more tailored services.”
So the level of disruption caused by IoT, AI and ML in changing corporate environments really depends on the organisation itself. Business processes (and the models behind them) will shift and job roles will need to adapt to take full advantage of these changes. Managerial prudence and improvisation will be required to help steer companies through a changing environment, and those that do this well will reap the benefits.
Finally, consideration must be given to how AI systems learn over time. As humans, we come to learn that new information sometimes overrides all previous knowledge. Learning in this way can shift paradigms of perception and change the way we act and think. AI systems will need to be able to do this in order to become truly useful within a working environment. The same goes for heuristics (rules of thumb), such as guidelines for commercially sensitive decisions and pricing for customers: “I’m going to give them a discount this time, because they’ve been a good customer”. Humans, it seems, still have a role to play.
Read more: Customers unconvinced by insurance providers’ IoT ambitions