11 April 2017 13:01

Artificial intelligence is pushing the envelope of possibility: NVIDIA

Dr Simon See, Director and Chief Solution Architect for the NVIDIA AI Tech Center and Professor at Shanghai Jiaotong University (SJTU) and King Mongkut's University of Technology Thonburi (KMUTT), speaks at the IoT Asia keynote, titled Leading Intelligence with Imagination.

The 'what now' and 'what next' are more interesting, said Dr Simon See, Director and Chief Solution Architect for the NVIDIA AI Tech Center, during a keynote at IoT Asia 2017.

“Over the last few decades we've seen tremendous improvement and advancement in technology, from computers to the Internet and the Internet of Things (IoT), and of course now we have artificial intelligence (AI),” he said. “Everyone has projections about the large numbers of devices going to be connected... all of these devices are going to be more intelligent and they are going to be connected one way or another. My concern is how they are going to be connected; how they are going to interact with one another and what they are going to achieve.”

According to Dr See, the technology popularised by science fiction movies such as Iron Man could well become reality soon as achievements in the field of AI accelerate. In Iron Man, the J.A.R.V.I.S. AI delivers what hero Tony Stark asks for – something we can already do to some extent with Siri, Cortana or Alexa today – but also makes suggestions of its own.

“We could have machines able to advise me, generate ideas for me, and at the same time provide suggestions to me,” Dr See said. “Extend that concept to being a lawyer, a nurse, a doctor, an accountant and so on. An (AI) assistant could help you do your work.”

While the first neural network dates all the way back to 1943, it could not deliver today's results because “the technology wasn't available at that point in time and the data wasn't available (for training) at that point in time to deliver what AI has promised”, Dr See said. “We see that over the last couple of years there has been an amazing rate of improvement.” A case in point is AlexNet, an image-recognition AI that made waves in 2012 at the 'Olympics' of computer vision, the ImageNet Large-Scale Visual Recognition Challenge, where it achieved far better image recognition accuracy than had been possible before.

“The technology has become more mature. If you go to Pinterest, you can take a picture and then find out where you can buy the object. (Similar technology) is being used in self-driving cars. You want to recognise whether it's a car, a human, a cat, rubbish on the road, or open space so a car can move intelligently on the road without hitting anything,” he said.
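
Under the hood, that kind of recognition is a single forward pass through a trained convolutional network. The sketch below illustrates the idea in PyTorch with a pretrained ResNet-50 and ImageNet labels – an illustrative stand-in, not the models Pinterest or carmakers actually deploy, and the image file name is a placeholder.

```python
# A minimal sketch of image recognition with a pretrained convolutional
# network. The model choice (ResNet-50) and the image file name are
# assumptions for illustration only.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT            # ImageNet-trained weights
model = resnet50(weights=weights).eval()      # inference mode
preprocess = weights.transforms()             # resize, crop, normalise

img = Image.open("street_scene.jpg")          # placeholder input image
batch = preprocess(img).unsqueeze(0)          # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top = probs.topk(3)                           # three most likely classes
for p, idx in zip(top.values[0], top.indices[0]):
    print(f"{weights.meta['categories'][int(idx)]}: {float(p):.2%}")
```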

Voice recognition and translation technologies have also seen improvements with AI techniques. Baidu's DeepSpeech 2 speech recognition platform, powered by NVIDIA GPUs, recognises both English and Mandarin accurately, while machine-based simultaneous translation capabilities have been demonstrated at conferences, Dr See said. “The next step is natural language processing – we want the context to it,” he said.

AIs can perform anomaly detection as well, a godsend for use cases such as cancer diagnosis. Hospitals are developing new applications based on machine learning or deep learning to help doctors find medical cures faster. PathAI, for instance, is dedicated to cancer diagnosis using AI technology. NVIDIA is involved as well, working with the National Cancer Institute, the US Department of Energy and several US laboratories on the Cancer Distributed Learning Environment (CANDLE) project.

“AI can accelerate discovery of cancer therapies, predict drug responses of cancer patients, and automate the analysis of treatment effectiveness,” Dr See said.

Anomaly detection is also useful with machinery, to predict or prevent catastrophic failures. GE has used machine learning to detect combustion anomalies within gas turbines, and used the data to predict the probability of failure. “With the advance of neural networks we are able to actually train those networks and detect those anomalies easily,” Dr See explained.
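
GE's exact pipeline was not detailed in the keynote, but a common neural-network approach is an autoencoder: train it to reconstruct only healthy sensor readings, then flag inputs it reconstructs poorly. The sketch below uses synthetic data and an arbitrary threshold purely for illustration.

```python
# A sketch of neural-network anomaly detection: train an autoencoder on
# "healthy" sensor readings only, then flag inputs it reconstructs poorly.
# The synthetic data, architecture and threshold are all assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.randn(1000, 8)            # stand-in for healthy sensor vectors
anomalous = torch.randn(20, 8) * 3 + 5   # shifted, "faulty" readings

model = nn.Sequential(
    nn.Linear(8, 3), nn.ReLU(),          # squeeze through a bottleneck
    nn.Linear(3, 8),                     # reconstruct the input
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(500):                     # train on normal data only
    loss = nn.functional.mse_loss(model(normal), normal)
    opt.zero_grad(); loss.backward(); opt.step()

def score(x):
    # per-sample reconstruction error; high error suggests an anomaly
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

threshold = score(normal).max()          # crude threshold: worst healthy case
print("flagged:", (score(anomalous) > threshold).float().mean().item())
```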

The field is also moving from passive to generative AI, and the sky is the limit on what can be achieved. Neural networks have been trained to take in artistic styles and can generate art in those styles from real-world photographs, Dr See noted. “Generative design creates complex forms that wouldn't be possible otherwise,” he said.
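
The keynote did not name a specific implementation, but the best-known technique is the Gram-matrix style loss of Gatys et al.: optimise an image so the channel correlations of its features match a reference artwork's, while staying close to the photograph. A rough PyTorch sketch, in which the layer indices, loss weights and random stand-in images are all assumptions:

```python
# A rough sketch of Gram-matrix style transfer (after Gatys et al.), using
# torchvision's pretrained VGG-16 as the feature extractor. Layer indices,
# loss weights and the random stand-in images are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

vgg = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)              # the network itself stays frozen

STYLE_LAYERS = {3, 8, 15, 22}            # ReLU activations at several depths

def features(x):
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            out.append(x)
    return out

def gram(f):
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)   # channel correlations

content = torch.rand(1, 3, 224, 224)     # placeholders for real photographs
style = torch.rand(1, 3, 224, 224)
canvas = content.clone().requires_grad_(True)    # the image being optimised

style_grams = [gram(f) for f in features(style)]
opt = torch.optim.Adam([canvas], lr=0.02)

for _ in range(100):
    fs = features(canvas)
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(fs, style_grams))
    content_loss = F.mse_loss(canvas, content)   # simplified: pixel space
    loss = content_loss + 1e4 * style_loss
    opt.zero_grad(); loss.backward(); opt.step()
```

In practice the content loss is computed on deep features rather than raw pixels, and inputs are normalised with ImageNet statistics; the sketch simplifies both.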

StackGAN can even generate photo-realistic pictures from a text description, Dr See shared – demonstrated on descriptions of birds, for instance. The next stage in the evolution of design would be solutions such as the Autodesk Dreamcatcher project. Given initial requirements, the AI generates different options satisfying the requirements, allowing designers and manufacturers to pick those which are most relevant for them.

“You could simulate molecules to bind to a peptide,” Dr See suggested. “It takes a long time for a human to do this, but it is pretty easy for machines to generate different ideas on how a molecule can fit into a peptide.”

Ultimately, AI technology, supported by all the connected devices in the Internet of Things (IoT), could become even more helpful. “J.A.R.V.I.S. is intuitive and self-learning,” Dr See pointed out. “It can ask Tony 'what are you trying to do?'” It has been demonstrated that AIs can learn by themselves, and achieve more than humans, too. In 2013, Google's DeepMind showed how an AI could learn to play the Atari game Breakout. At the time Google said, “We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.”

“The algorithm plays Atari Breakout. It has never played it before; it just knows rules and objectives,” noted Dr See, playing a video that showed the AI making mistakes in the first minutes, but progressing to expert level and then surpassing human capabilities within a few hours.
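
DeepMind's system learned a deep Q-network from raw pixels; stripped to its essentials, the same trial-and-error principle is the Q-learning update sketched below, shown on a made-up one-dimensional corridor world rather than Atari. Because Q-learning is off-policy, even purely random play is enough to learn the optimal behaviour here.

```python
# A toy sketch of the trial-and-error principle behind the Breakout demo.
# DeepMind's DQN learned a deep network from raw pixels; this tabular
# Q-learner on a made-up 10-state corridor shows only the core update rule.
import numpy as np

rng = np.random.default_rng(0)
N = 10                        # states 0..9; reward for reaching state 9
Q = np.zeros((N, 2))          # value table for actions 0=left, 1=right
alpha, gamma = 0.1, 0.95      # learning rate and discount factor

for episode in range(300):
    s = 0
    while s != N - 1:
        a = int(rng.integers(2))                   # explore randomly
        s2 = max(0, s - 1) if a == 0 else s + 1    # move along the corridor
        r = 1.0 if s2 == N - 1 else 0.0            # reward only at the goal
        # Q-learning: nudge the estimate toward reward + discounted future value
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# Learned greedy policy: action 1 ("right") in every non-terminal state
print(Q.argmax(axis=1)[:-1])
```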

The techniques are not complicated, Dr See said, noting that a neural network basically has to be trained in an optimal way to learn. The results can surprise, he noted, referencing Google's AlphaGo Go-playing AI, which resoundingly beat a world champion last year - a feat long thought impossible because Go is extremely complicated, with an enormous number of possible moves.
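
In its simplest form, that training is a loop: predict, measure the error against known answers, and nudge the network's weights downhill via backpropagation. A minimal sketch on a toy curve-fitting task (the task and architecture are assumptions for illustration):

```python
# A minimal sketch of what "training a neural network" means: predict,
# measure the error, and move the weights downhill via backpropagation.
# The toy task (fitting y = 2x + 1 from noisy samples) is an assumption.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 2 * x + 1 + 0.05 * torch.randn_like(x)    # noisy training targets

model = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(1000):
    loss = nn.functional.mse_loss(model(x), y)  # how wrong are we?
    opt.zero_grad()
    loss.backward()                             # gradients via backprop
    opt.step()                                  # gradient-descent update

print(float(loss))   # should approach the noise floor (~0.0025)
```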

“In game number 3, AlphaGo made a move that all experts at that point thought (was) an extremely silly move. After AlphaGo had won the game, experts analysed it, and found that it was a profound move that no expert had (seen before),” he said. “AlphaGo had become imaginative.”

Real-world interactions will go a long way towards training AIs to be imaginative. Dr See concluded with food for thought for the audience by running a clip of Nadia, an AI from New Zealand firm Soul Machines which will be trained by real-world conversations from the Australian public.

A February 2017 blog post by Louise Glanville, Deputy CEO of the National Disability Insurance Agency (NDIA), Australia, explains the Nadia project: “The plan is for Nadia to be released in a trial environment on the myplace portal in the next few months. Nadia will start as a 'trainee'. It will take 12 months and a great deal of interactions with NDIS stakeholders for Nadia to become fully operational. The agency will hold information sessions to inform people how they can engage with and use Nadia over the next couple of months. We hope that you will start using Nadia as soon as she is available, and help build her knowledge base, making it easier for all stakeholders to have their questions answered quickly and clearly.”

The crucial piece of the puzzle is the ecosystem required to make AI a ubiquitous reality. NVIDIA can provide the compute horsepower, Dr See explained. Training AIs can involve a great deal of experimentation and a lot of data – much more than conventional computing requires – and NVIDIA graphics processing units (GPUs) can cut training times down.
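
The speed-up comes from the dense linear algebra at the heart of neural network training, which parallelises well on a GPU. A quick, hardware-dependent timing sketch (the matrix size is an arbitrary choice):

```python
# Why GPUs cut training times: the dense matrix maths at the heart of
# neural networks parallelises well. Timings depend on hardware; the
# matrix size here is an arbitrary choice.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.perf_counter()
a @ b                                    # matrix multiply on the CPU
print(f"CPU: {time.perf_counter() - t0:.3f}s")

if torch.cuda.is_available():            # needs an NVIDIA GPU + CUDA build
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()             # GPU kernels run asynchronously
    t0 = time.perf_counter()
    a_gpu @ b_gpu
    torch.cuda.synchronize()
    print(f"GPU: {time.perf_counter() - t0:.3f}s")
```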

In AlphaGo's case it took several weeks to train the network, using a few hundred million training steps on 50 GPUs. “A lot of compute power is needed. We will need new AI data centres,” Dr See said. “We know AI frameworks are available. We can develop neural networks easily right now. We're building systems that run those networks very quickly.”
