Stanford develops AI-powered optical computer for driverless cars, drones

A new AI-enabled camera system could drastically reduce the need for autonomous vehicles to carry bulky computers, opening up the possibility of AI-powered handheld devices too, according to researchers at Stanford University in the US.

One of the challenges in developing driverless cars and autonomous drones is that they need large amounts of onboard computing power, sensors, and other systems, all of which add weight and drain scarce battery resources.

The image recognition technology alone in an autonomous car or high-spec drone depends on artificial intelligence (AI) systems that can teach themselves to recognise objects in their path. On the road, these may include pedestrians, bicycles, animals, or other vehicles – all of which are merely undifferentiated pixels to a digital vision system, until it can understand what they represent and how different objects typically behave.

Driverless or pilotless vehicles need to make split-second decisions in order to avoid collisions or deal with other unexpected events. While some of that processing can be carried out at the edge, a lot of it will remain onboard. In either case, speed will be essential.

One of the many issues that arose with Uber’s fatal crash in March, for example, was that the car’s onboard systems failed to recognise a human being wheeling a bicycle across the highway until it was too late.

Another challenge is that many computers running complex AI systems are too large and slow for the future applications that might emerge for smart imaging and analysis technology, such as handheld devices that could diagnose a range of medical conditions.

Now, researchers at Stanford have devised a new type of artificially intelligent camera system that can classify images faster and with less energy than conventional digital processors, and that could one day be embedded in such devices – something that is not currently possible.

The work was published this month in Scientific Reports, a Nature Research journal.

Trunk full of intelligence

“That autonomous car you just passed has a relatively huge, relatively slow, energy-intensive computer in its trunk,” said Gordon Wetzstein, an assistant professor of electrical engineering at Stanford, who led the research.

Future applications will need something much faster and smaller to process the stream of images, he explained.

Wetzstein and Julie Chang, a graduate student and first author on the paper, have taken a step toward that technology by marrying two types of computer to create a hybrid optical-electronic processor designed specifically for image analysis.

The first layer of the prototype camera is a new form of optical computer, which does not require the power-intensive mathematics of digital computing, according to Stanford. The second is a traditional digital processor.

The optical layer operates by “physically preprocessing image data, filtering it in multiple ways that an electronic computer would otherwise have to do mathematically”, said the university.

Since this filtering process happens naturally as light passes through the custom optics, the layer operates with zero input power, claim the researchers. This saves the hybrid system a lot of time and energy that would otherwise be consumed by computation.

“We’ve outsourced some of the math of artificial intelligence into the optics,” explained Chang.

“The result is a lot fewer calculations, fewer calls to memory, and far less time to complete the process,” said the university in a published statement. “Having leapfrogged these preprocessing steps, the remaining analysis proceeds to the digital computer layer with a considerable head start.”

“Millions of calculations are circumvented and it all happens at the speed of light,” added Wetzstein.
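In conventional terms, the system behaves like a convolutional neural network whose first layer is computed by light passing through optics rather than by a chip. The sketch below is a minimal illustration of that idea in PyTorch, not the team's actual design: the frozen first convolution stands in for the optical layer, and every layer size is hypothetical, chosen for illustration only.

    import torch
    import torch.nn as nn

    class HybridOptoElectronicNet(nn.Module):
        """Minimal sketch of a hybrid optical-electronic classifier.

        The first convolution stands in for the optical preprocessing
        layer: its weights are frozen because, in the real system, the
        filtering is performed passively by custom optics as light passes
        through, with zero input power. Only the layers that follow would
        run on a digital chip. All layer sizes here are hypothetical.
        """

        def __init__(self, num_classes: int = 10):
            super().__init__()
            # "Optical" layer: fixed filters applied to the incoming image.
            self.optical = nn.Conv2d(3, 16, kernel_size=5, padding=2, bias=False)
            for p in self.optical.parameters():
                p.requires_grad = False  # computed by the optics, not the chip

            # Digital layers: the smaller electronic back end.
            self.digital = nn.Sequential(
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Flatten(),
                nn.Linear(16 * 16 * 16, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.digital(self.optical(x))

    model = HybridOptoElectronicNet()
    logits = model(torch.randn(1, 3, 32, 32))  # one hypothetical 32x32 RGB frame
    print(logits.shape)  # torch.Size([1, 10])

Because the first layer's output arrives pre-filtered from the optics, the digital back end begins its work with the "considerable head start" the university describes.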

Rapid decision-making

In both speed and accuracy, the university claims, the prototype rivals existing processors programmed to perform the same calculations, but at substantially lower computational cost – and therefore with reduced power consumption overall.

In both simulations and real-world experiments, the team successfully used the system to identify airplanes, automobiles, cats, dogs, and more, within natural image settings, according to the university.
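The scale of the savings is easy to estimate roughly. The arithmetic below is illustrative, not taken from the paper: it counts the multiply-accumulate operations (MACs) that a conventional first convolution layer would cost per frame – work that, in a design like this, the optics perform for free.

    # Back-of-envelope count of first-layer multiply-accumulates (MACs)
    # that an optical layer would perform for free. All figures hypothetical.
    H, W = 32, 32        # output feature-map size (stride 1, same padding)
    C_in, C_out = 3, 16  # input channels (RGB) and number of filters
    K = 5                # kernel size

    macs_per_frame = H * W * C_out * C_in * K * K
    print(f"{macs_per_frame:,} MACs per frame")        # 1,228,800
    print(f"{macs_per_frame * 30:,} MACs per second")  # 36,864,000 at 30 fps

Even for this small hypothetical layer, tens of millions of operations per second never touch the digital processor at video frame rates – which squares with Wetzstein's "millions of calculations" remark.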

However, the researchers are still some way from miniaturising the technology so that it can be deployed in a handheld camera or autonomous drone.

In driverless cars, in particular, having a miniature, AI-powered vision system onboard – rather than the equivalent of a heavy suitcase – would be a boon in terms of weight and energy efficiency.

In addition to shrinking the prototype, Wetzstein, Chang, and their colleagues at the Stanford Computational Imaging Lab are now looking at ways to make the optical component do even more of the preprocessing.

Plus: Will AI make medical errors a thing of the past?

In related news, AI could reduce or even eliminate medical errors, according to researchers from Université Paris-Saclay. The graduates have developed Neosper, an augmented reality and AI programme designed to stop orthopaedic surgeons from making life-threatening mistakes.

After interviewing more than 50 surgeons to find out which issues they most often face, the team designed a personalised 3D software simulation tool that allows surgeons to rehearse a procedure before an operation. The software can also assist surgeons in real time using sensors – which is particularly useful during prosthetic implant surgery, they said.

Internet of Business says

The Stanford research reveals that the route towards autonomous transport and other AI and computer-vision-enabled applications is very much a collaborative one.

While companies such as Waymo, Uber, GM, Ford, Tesla, and Apple are testing driverless systems, others are working towards better optics, improved AI, faster communications, and new battery concepts and safety protocols.

The road is indeed paved with good intentions.

