Jonny Ainslie

October 1, 2018


LONDON - Trust is a strangely symbiotic thing, even by emotional standards. We usually choose to invest trust in another only after evaluating their trustworthiness; such a reciprocal arrangement vests both the trust-er and the trustee with significant power. It is unwise to trust too freely: to do so risks the total destruction of whatever trust one had in the first place. Perhaps trust is actually impossible without some evidence of trustworthiness – a question of psychology bordering on philosophy, but one that business and AI leaders are acutely aware of.

Trust is essential to the global financial system, the fabric of democratic accountability, and the vast majority of marriages on the planet. Consumer trust can make or break a brand, and challenge even one as mighty as Facebook, which recently shed $119bn from its share price in a single day in the wake of the Cambridge Analytica scandal.

The irresponsible or Machiavellian misuse of client data is a subject that will define our future and our relationship with personalised services and businesses. AI will be at the forefront of the fight for our data, and is already embedded in algorithms that police our streets, assess our mortgage applications, and evaluate our health insurance claims.

Business leaders the world over are now playing with our trust in the pursuit of profit; how much of our data are they willing to gamble with? Because to risk it all may be to lose their ever-more fickle customers entirely.


This is why recent investigations surrounding Facebook really matter for AI. Zuckerberg, a highly recognisable frontman of big-data tech, allowed himself to face EU regulators and the US Congress alike, placing himself in the way of what could have been very direct inquiry.

The AI industry must have thanked its lucky stars that it was not – the lifeblood of AI is data: big, and frequently shared. Had the media giant been eviscerated by global lawmakers on the issue of betraying public trust, rather than that task being left to famously ignorable journalists, the entire AI sector might now be under fire.

That’s how trust works. British doctor Andrew Wakefield’s egregious 1998 announcement that the MMR vaccine caused autism is still having repercussions for modern families: there have been over 21,000 cases of measles this year, four times as many as in 2016, as the anti-vaxxer trend rears its head once again. The world is also less trusting of authority figures than it has ever been – Fake News and corruption scandals spread so fast online that retracting erroneous statements is akin to pushing toothpaste back into the tube.

Consumers need to remain confident that providing their information so freely to behemoth algorithmic systems is a decision worth taking. And as AI technology is often so poorly understood, making waves with Facebook has the potential to sink a lot of smaller ships.

Is it likely that AI industries will scare the public’s representatives into spiking the flow of big data?

Doubtful. There’s a lot of money and campaign funding riding on the host of AI-driven tech out there – although companies will have to remain warier about their public image than ever before, in protection of their bottom line if not their moral character. After all, we live in an age where populist reactionaries hold high office the world over.

About the Author(s)

Jonny Ainslie

Jonny is an editorial and content executive for AI Business. He is also the editor-in-chief of Journalists On Truth.
