Dr. Ramsey Faragher has a distinguished and varied resume.
As an expert in navigation and positioning, his work has touched on everything from the Mars Rover to COVID-19 tracing apps, something that earned him a comparison to James Bond’s Q in a story by Top Gear magazine.
Between commercial projects, he still finds time to teach AI and machine learning at Queens’ College, Cambridge, and says he has definitely noticed a change in the focus of undergraduates.
“We’ve kind of moved through an era of everyone writing on their interview stuff that they’re interested in quantum computing: everyone now writes that they’ve cobbled together a neural net that does something cool. The next generation of computer scientists are all focused on being data scientists now.”
Queens’ College, in particular, has a special affinity for AI enthusiasts, being the alma mater of Demis Hassabis, the founder of AI success story DeepMind, who still comes back for talks.
AI is popular with students then, but what about the public at large? “I think public perception varies greatly,” Faragher said.
Some are excited by the potential, while others worry about an eventual doomsday scenario.
“I think one thing that the people who really pioneer it, like Demis, would be the first to note, is that when training an AI to do a very specific task well – like playing Go – [the system] can appear to be superhuman.
“In actual fact, all of the systems that we use at the moment are very stupid from the point of view of a human being.”
Faragher noted that while AlphaGo can trounce the best human player at Go, it would lose to a child at chess.
“They’re excellent pattern matching machines, and that’s it. If you’d label it as advanced pattern matching, people would be less scared: if we use different language, we wouldn’t have people worried about the Terminator scenario.”
“Once you rephrase it as ‘advanced pattern matching,’ you can start to carve out all of those things which it has done, and will continue to be brilliant at, like pattern recognition, grammar correction, facial recognition.”
The risks of algorithms
I was talking to Dr. Faragher in August, just as the UK government was forced to perform a screeching U-turn on the use of algorithms to predict A-Level results, for exams never taken because of COVID-19.
The algorithm was fed historic data, which meant that schools in deprived areas would invariably produce lower grades, leading to claims the system often downgraded exemplary students and led some to miss out on university places.
“It was always going to be a bad idea to just try to apply curve fitting to deciding whether people could go to university or not,” Faragher said.
“You can’t use things like machine learning tools to consider on a case-by-case basis, because the whole point of trying to develop those sorts of tools is that they’re trying to fit data through a model. And data in this instance is your own child and their future success.”
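Faragher’s point can be made concrete with a toy sketch. The model, school names, and weights below are all invented for illustration; the real Ofqual algorithm was far more involved, but the failure mode is the same: a prediction anchored to a school’s historical results can never fully credit an individual student who outperforms that history.

```python
# Hypothetical sketch (invented schools and weights): why "curve fitting"
# historical data fails individuals. A predictor anchored to a school's past
# results cannot award a grade that history doesn't support, no matter how
# strong the individual student is.

historical_school_average = {
    "school_a": 78,  # historically high-performing school
    "school_b": 52,  # school in a deprived area
}

def predict_grade(school: str, teacher_assessment: int) -> int:
    """Naive 'curve fit': pull every prediction toward the school's history."""
    prior = historical_school_average[school]
    # Weight the school's past results far more than the individual signal.
    return round(0.8 * prior + 0.2 * teacher_assessment)

# Two equally exemplary students, both assessed at 90 by their teachers:
print(predict_grade("school_a", 90))  # 80 -- close to their assessment
print(predict_grade("school_b", 90))  # 60 -- downgraded by the school's past
```

Identical students, twenty marks apart, purely because of where they studied. That gap is exactly what the model was designed to produce, which is why no amount of tuning could make it fair on a case-by-case basis.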
The same is true, but less obvious, in other areas of human life where AI could play a decisive role – something that academics like Dr. Faragher have long been aware of.
Take an AI tasked with finding the next CEO, based on historic records.
“Well, obviously, you can end up with AIs recommending white males because historically, we’ve had this terrible problem with sexism and racism. And it’s inevitable that if you simply train AIs on data that we’re not proud of, you’re going to end up with AIs that are racist, sexist, or worse.”
This potential weakness need not be a problem if one of the most exciting developments in the field comes to fruition: explainable AI.
As the name suggests, it’s an approach to building AI systems that can not only produce decisions but also explain how they were reached.
Here, Faragher gives the example of an AI explaining its reasoning for identifying a cat in an image: “‘I’ve seen ears, I’ve seen a tail and I’ve seen four legs. I’ve seen the context of the scene, which involves, like, a ball on a string and a scratching post. And I’ve seen a cat food can in the corner.
“Based on all of this information that I’ve now presented to you, I’m going to make the decision that this is a cat. And if you don’t agree with me, you can see all the stages I’ve made in the decision.’”
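A minimal sketch of that idea, with invented feature names and a made-up decision threshold (not how any real explainable-AI system works): the classifier hands back its evidence alongside the label, so a human can audit every cue behind the decision.

```python
# Toy sketch of the "explainable" style Faragher describes: the classifier
# returns not just a label but the evidence it used. Cue names and the
# three-cue threshold are invented for illustration.

def classify(features: set) -> tuple:
    cat_evidence = {"ears", "tail", "four legs", "ball on a string",
                    "scratching post", "cat food can"}
    seen = sorted(features & cat_evidence)
    # Decide "cat" only when enough independent cues agree.
    label = "cat" if len(seen) >= 3 else "unknown"
    explanation = ["I saw: " + cue for cue in seen]
    return label, explanation

label, why = classify({"ears", "tail", "four legs", "scratching post"})
print(label)          # cat
for line in why:      # one auditable line of evidence per cue
    print(line)
```

If a reviewer disagrees with the verdict, they can challenge any individual cue rather than argue with an opaque score, which is the trust property Faragher is pointing at.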
The example may be simple, but this kind of justification could be crucial.
“I think explainable AI is going to be the big thing that means we can trust AI systems for certification, and then we’ll have AI pilots in aircraft, and be able to get to ‘level five’ autonomous vehicles,” he said.
“I’d say that’s going to be the biggest development in the next ten years.”
In the here and now, alongside teaching and consultancy, Faragher is applying his own knowledge to Focal Point Positioning, a company he founded in 2015 with the aim of improving GPS through ‘supercorrelation,’ with an eye on smart cities and autonomous vehicles.
The company has been using deep learning methods, smartphones, and wearables to model how humans move through space.
“We can actually track you through space accurately just by determining what sort of motions your body must be going through in order for your phone to jiggle in a certain manner,” he explained.
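To illustrate the principle only (this is emphatically not Focal Point Positioning’s actual method, and all thresholds are invented): a phone’s accelerometer “jiggle” carries a dominant frequency tied to body motion, roughly around 2 Hz of vertical bounce when walking and faster when running, so even a crude spectral peak gives a hint about what the body is doing.

```python
# Illustrative sketch only -- not Focal Point Positioning's technique.
# Guess the motion type from the dominant frequency of a simulated
# accelerometer trace. The 2.5 Hz walking/running cutoff is invented.
import numpy as np

def dominant_frequency(signal: np.ndarray, sample_rate: float) -> float:
    """Return the strongest non-DC frequency component of the signal."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def guess_motion(signal: np.ndarray, sample_rate: float) -> str:
    return "running" if dominant_frequency(signal, sample_rate) > 2.5 else "walking"

# Simulated 10 s of accelerometer data at 50 Hz with a 2 Hz walking cadence:
t = np.arange(0, 10, 1 / 50)
walk = np.sin(2 * np.pi * 2.0 * t)
print(guess_motion(walk, 50))  # walking
```

A real system would fuse far richer motion models with the satellite signals themselves, but the sketch shows why the phone’s jiggle alone is informative.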
This kind of refined positioning could impact everything from drone deliveries to driverless cars.
But while a certain amount of experimentation with in-app AI is expected, with driverless cars, it’s a whole different kettle of fish.
“In one of them, crashing is normal; in the other, crashing is very serious,” Faragher said.
To find out more about the exciting potentials that AI could bring to smartphone applications, download our eBook: 'AI in your pocket: Better mobile computing with smarter apps'