by Dr Clément Chastagnol 20 August 2019
Another week, another ‘AI hype’ story in the media. This time it was about a firm called Engineer.ai, which the Wall Street Journal reports has been exaggerating its AI capabilities in order to secure investors’ money.
According to the Wall Street Journal, Engineer.ai says it uses AI to help clients create 80 percent of a mobile app in around an hour. The firm has secured around $30 million in funding. Last year, its CEO Sachin Duggal said the “majority of the work is done by AI, and then we have over 26,000 engineers around the world who add little pieces that are missing.”
Critics say the company’s claims are untrue – the company has employed engineers to do the work it claims AI is doing. The Wall Street Journal says it has seen documents and interviewed former employees who support the criticisms being made.
Such ‘AI hype’ stories come with other consequences, too. One writer at a well-known tech publication wrote of natural language processing and decision tree AI used by Engineer.ai: “Neither of those really qualify as the type of modern AI that powers cutting-edge machine translation or image recognition.”
That is a rather subjective claim to make, and it approaches the topic of AI with a pre-defined set of criteria about what counts as cutting edge. And why should AI powering those two types of activity take precedence over others? Do all mobile apps require machine translation and image recognition, or does an app need to meet its users’ needs, first and foremost, regardless of what sort of AI is used?
It’s not the first ‘AI hype’ story to get wide media coverage. Several weeks back, Microsoft announced it was investing $1 billion in OpenAI, the San Francisco-based AI research firm, in pursuit of artificial general intelligence (AGI), the ‘holy grail’ of AI.
AGI refers to an AI system that is as flexible and generally intelligent as a human being. Currently, AI software can do specific tasks very well, like playing certain board games or analyzing medical scans, but these AI algorithms cannot transfer that competency from one task to another.
Promising the creation of AGI is a bold step, and the AI community is still debating whether it is even possible, and if it is, how far away we are from it. In Architects of Intelligence, writer and futurist Martin Ford interviewed 23 of the most prominent men and women working in AI. He asked each of them when we might have even a 50 percent chance of seeing AGI. Answers ranged from 2029 to 2220, with the average being 2099.
It’s certainly an appealing topic for some investors at least, with such a technology offering considerable prestige and returns on investment.
The topic of companies making claims about their use of AI previously came up in May, when MMC Ventures, a venture capital fund investing in early-stage, high-growth tech companies, published a report that media outlets covered with headlines claiming 40 percent of AI start-ups in Europe weren’t using AI.
Crunching the claims
A closer look shows a few things that should make readers cautious about MMC Ventures’ claims. The report actually only says that, “In approximately 60% of the cases – 1,580 companies – there was evidence of AI material to a company’s value proposition.” MMC also says that it reviewed 2,830 purported AI start-ups in the 13 ‘most active’ EU countries.
MMC counted only one recent family of AI methods, namely deep learning, excluding a wide range of other techniques, such as expert systems and decision trees. Some of these techniques have been around for over 20 years, and many companies have been using them effectively without making any particular claims about AI.
The 40 percent angle was repeated, without challenge, by the tech publication mentioned above, which wrote that MMC Ventures had told the Wall Street Journal it suspects 40 percent or even more of AI start-ups “don’t use any form of real AI at all.”
Elsewhere, Olivier Ezratty, a recognised expert in deep tech, noted that the definition of ‘most active countries’ is subjective, and highlighted that British firms were over-represented while French firms were noticeably under-represented compared to other, more comprehensive surveys (a joint study between French newspaper Les Echos and Sidetrade identified 333 French AI start-ups, compared to 217 in the MMC Ventures study).
Of course, sometimes it pays to tell customers that you are at the forefront of artificial intelligence. AI fascinates people, and new developments attract attention. This is fine, as long as we distinguish between hype and actual results.
In fact, there’s no point in looking for something called ‘pure AI’. Let’s recall that, ever since its birth in 1956, the concept of AI has covered a range of theories and technologies promising to enable machines to imitate human intelligence. In other words, the promises of the field have always been wonderfully ambitious and terribly vague.
The MMC Ventures survey is symptomatic of many investors’ nervousness about AI. This is entirely understandable but misses the point. The real question is not whether a start-up ‘really’ does AI, but whether its technology (whatever it’s called) creates added value.
Six good questions to ask about AI
In practice, business leaders will turn to AI if they are convinced that it can solve a problem better than human intelligence alone, and that it can provide added value to their customers.
Complicated technology is not necessarily the answer. A ‘basic’ AI system may create value whereas an avant-garde technology might not produce the desired results. On top of this, it is risky to invest in the latest technological wonder if users will have trouble adapting, and if the sustainability of the new system is unknown.
To decide if AI is worth it and will be properly implemented, ask these six questions.
1. Do my teams or customers have any processes that AI could accelerate, simplify or improve?
The main benefit of AI is freeing staff from repetitive tasks (which the machine can do much faster) so that they can concentrate on value-added (and more interesting) tasks requiring human intelligence, judgement, and creativity. A well-conceived AI project will boost growth by augmenting staff productivity in a business setting. For instance, manually sorting emails before processing is a tedious task that can be, at least in part, automated, freeing up time for workers.
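As a toy illustration of the email-sorting example above, a pre-sort might begin as simple routing rules, with humans handling whatever the rules miss. The categories and keywords here are invented for illustration; a production system would replace the fixed rules with a trained text classifier:

```python
# Hypothetical keyword-based pre-sorting of incoming emails.
# Queues and keywords are illustrative assumptions, not a real product's.

ROUTING_RULES = {
    "invoice_query": ("invoice", "payment", "overdue"),
    "complaint": ("refund", "dissatisfied", "complaint"),
}

def pre_sort(subject: str) -> str:
    """Route an email to a queue, falling back to manual review."""
    lowered = subject.lower()
    for queue, keywords in ROUTING_RULES.items():
        if any(word in lowered for word in keywords):
            return queue
    return "manual_review"  # humans keep handling the ambiguous cases
```

Even a partial pre-sort like this removes a share of the repetitive clicks, and the `manual_review` fallback keeps humans in the loop for everything the automation cannot confidently handle.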
2. Where do we get the right input, and how do we process it?
The most useful input comes from exclusive data, regularly renewed to reflect the latest trends in your field, cross-referencing internal and external sources. In addition, the data must be painstakingly purged of biased or aberrant records that could skew the algorithm’s analysis.
Generally speaking, invest in good data. It makes better sense to feed high-quality data into simple algorithms than faulty data into complex algorithms. Using the right data gives you a better understanding of the issues and helps you ask the right questions, leading to better performance.
Simple algorithms with suitable data give you a handle on complex systems. Better understanding of data reveals opportunities for improving pre-processing. This has been pointed out by experts such as Pedro Domingos (Professor of Computer Science & Engineering at University of Washington), Andrej Karpathy (Director of AI & Autopilot Vision at Tesla), and Pete Warden (Lead of the TensorFlow Mobile/Embedded team at Google).
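A minimal sketch of the “purge aberrant records” step described above, using a median-based screen (robust to the very outliers it is hunting). The threshold `k=3.0` and the sample readings are illustrative assumptions, not a recommended universal rule:

```python
import statistics

def purge_outliers(values, k=3.0):
    """Drop points more than k median-absolute-deviations from the median.
    A deliberately simple screen for aberrant records; k=3.0 is an
    illustrative choice and should be tuned to your data."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:  # all points identical (or nearly so): nothing to drop
        return list(values)
    return [v for v in values if abs(v - med) <= k * mad]

readings = [10.1, 9.8, 10.3, 9.9, 500.0]  # 500.0 is an aberrant entry
clean = purge_outliers(readings)
# the mean falls from ~108 to ~10 once the aberrant point is purged
```

A cleaning pass this simple, applied before training, often does more for model quality than swapping in a more sophisticated algorithm on the raw data.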
3. Could a different AI technology do better?
New and improved off-the-shelf algorithm libraries are appearing all the time. While you can’t know them all, some might save you time, improve accuracy, or simplify computation. It is therefore important to stay abreast of the latest developments and consider whether your existing system can continue to meet your needs.
Regular audits of your AI systems can, for instance, show that it’s possible to switch to simpler models by leveraging the additional data collected since the initial training, without any drop in performance. That is a net positive for the maintainability of your systems.
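One way to run the audit just described is to compare the incumbent model against a simpler candidate on held-out data and flag when the simpler one is good enough. The models below are stand-in callables, and the one-point tolerance is an illustrative assumption:

```python
# Hypothetical audit sketch: is a simpler model now good enough to deploy?
# Model interfaces, holdout format, and tolerance are illustrative.

def accuracy(model, examples):
    """Fraction of (features, label) pairs the model predicts correctly."""
    hits = sum(1 for features, label in examples if model(features) == label)
    return hits / len(examples)

def audit(complex_model, simple_model, holdout, tolerance=0.01):
    """Recommend the simpler model unless it loses more than `tolerance`."""
    gap = accuracy(complex_model, holdout) - accuracy(simple_model, holdout)
    return "keep complex model" if gap > tolerance else "switch to simple model"
```

Run on a fresh holdout after each retraining cycle, a check like this turns “could we simplify?” from a hunch into a routine, measurable decision.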
4. Will the proposed AI system meet users’ needs?
AI boils down to this: automating decisions by teaching machines to imitate human judgement. The machines learn from trial and error as we feed them more and more data. Success depends on two factors: (a) giving the machines the right data, and (b) telling them what information is of value to users. It is therefore fundamental to run multiple tests with users to refine algorithms so that AI generates increasingly helpful output. Otherwise the machine will churn out nonsense. People have common sense, but machines don’t.
5. How will the output of my system be presented to users?
This is more of a UX/UI question, but it can affect the trust users put in your app or services. Say you are selling ad space, and you use an algorithm to estimate the number of impressions that a particular ad would get based on its characteristics.
If you’re deploying an early version of your algorithm that is still not very accurate, it makes sense to display the estimate differently from other numbers, to convey the fact that it’s not written in stone.
It may also be a good idea to add an automatically generated explanation for the number (e.g. “the chosen time slot increases the estimated number of impressions by 1,500”). Explanations of this type build user trust by making the underlying estimation mechanism more apparent.
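Both ideas above can be sketched in a single formatting function: show a range while the model is still inaccurate, a point estimate once it is not, and attach a short auto-generated explanation. The accuracy threshold, rounding, and wording are invented for illustration, not a real product’s behaviour:

```python
# Hypothetical display logic for an ad-impression estimate.
# The 10% error threshold and the explanation text are illustrative.

def display_estimate(estimate, model_error, slot_effect):
    """Render an estimate for end users, hedged when the model is rough."""
    explanation = f"the chosen time slot adds ~{slot_effect:,} impressions"
    if model_error > 0.10:  # early, low-accuracy model: show a range
        low = int(estimate * (1 - model_error))
        high = int(estimate * (1 + model_error))
        return f"Estimated {low:,}-{high:,} impressions ({explanation})"
    return f"Estimated {estimate:,} impressions ({explanation})"
```

For example, `display_estimate(10000, 0.20, 1500)` yields a hedged range, while `display_estimate(10000, 0.05, 1500)` shows the bare point estimate, matching the advice not to overwhelm users once the algorithm is accurate.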
On the other hand, if the algorithm is already very accurate, you don’t want to overwhelm the user with information that would be very rarely useful.
6. Do I have the capability to monitor the activity of my AI system, and can I override it if it makes erroneous decisions?
Once you have an AI system in place, it’s crucial to understand that the inputs it uses to make decisions, or even the very nature of those decisions, are usually not set in stone. The inputs could slowly drift over time, rendering your system less accurate or completely outdated.
One extreme example of this phenomenon is fraud detection: malicious actors are always coming up with new attacks that don’t resemble the old ones, so the systems need constant monitoring and updating.
This all supposes that errors can be detected and tracked, which is not always straightforward. Baking manual override mechanisms into the system is also critical: they let you correct an unwanted behaviour immediately, while your team works on a new version of the system that takes the problem into account.
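Monitoring and override can live in a thin wrapper around the deployed model. In the sketch below, the drift check, the baseline statistic, the 25% threshold, and the override table are all illustrative assumptions, not a specific product’s design:

```python
import statistics

class MonitoredModel:
    """Hypothetical wrapper: logs inputs, applies manual overrides,
    and flags when recent inputs drift from the training baseline."""

    def __init__(self, model, baseline_mean, drift_threshold=0.25):
        self.model = model
        self.baseline_mean = baseline_mean      # mean input seen at training time
        self.drift_threshold = drift_threshold  # relative shift that raises an alert
        self.overrides = {}                     # manual corrections keyed by input
        self.recent_inputs = []

    def predict(self, x):
        self.recent_inputs.append(x)
        if x in self.overrides:                 # a human decision wins outright
            return self.overrides[x]
        return self.model(x)

    def drift_alert(self):
        """True if recent inputs have drifted away from the training baseline."""
        current = statistics.fmean(self.recent_inputs)
        shift = abs(current - self.baseline_mean) / abs(self.baseline_mean)
        return shift > self.drift_threshold
```

The `overrides` table gives operators an immediate lever when the model misbehaves, and `drift_alert` turns the slow, silent degradation described above into an explicit signal that retraining is due.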
To answer these six questions, find out how well the proposed AI solution can solve practical problems. If you target concrete business processes and practices, algorithms can make an enormous difference in productivity. Matching an AI solution with needs requires thorough testing and careful analysis. And stay on the lookout for technological innovations that can make a difference.
The point is to take a results-driven approach, and not get lost in a futile debate on ‘real’ vs ‘fake’ AI.
Dr Clément Chastagnol is head of data science at Sidetrade, a company developing a variety of AI-based business applications, headquartered in France.