Gerald Law, chief executive at Innovation DB, explains why human intelligence can never be entirely replaced by AI in the laboratory environment

July 30, 2021

In 1902, five businessmen in Minnesota founded the Minnesota Mining and Manufacturing Company.

Their goal was to mine corundum, a crystalline form of aluminum oxide commonly used as an abrasive, which they would then use to manufacture sandpaper.

However, during the Great Depression of the 1930s, manufacturing plummeted and the company, now known as 3M, needed a new venture.

Thanks to the existing skills of its workforce, 3M transitioned from producing sandpaper to producing tape. It stopped selling into the manufacturing industry and started selling to corner shops and supermarkets. Today, you’ll find 3M’s Scotch Tape brand in households all over the world.

If a machine were tasked with overcoming the challenge 3M faced during the Great Depression, its solution would probably be to find thousands of different ways of sticking sand to paper.

This is, naturally, because AI programs extrapolate only from the dataset they are given, producing the correlations or other analyses their code has been written to generate.

In such a situation, the insight that sticky paper could be used to, say, stick other bits of paper together for a homework project would never come from code written to perform sophisticated analyses of data on sticking sand to paper for industrial uses.

However, that’s not to say that AI isn’t a useful tool to have alongside humans in a scientific setting.  

IBM Watson Health is currently being trialed in several countries to support medical professionals with patient diagnoses. IBM states that, with the amount of medical information available doubling every five years, physicians don’t have time to read every journal necessary to keep up to date with the latest advances.

IBM Watson uses natural language capabilities, hypothesis generation, and evidence-based learning to support medical professionals in making decisions, and the results of such collaborations look promising.

However, we can’t forget the role of the human in this situation — talking to the patient and investigating symptoms in the first place. The situation is very similar in the laboratory environment. Without a human’s investigative nature, sophisticated technology would prove useless.

For now, the main role of AI in the laboratory is to interpret the large datasets generated during research and development.

But there are technologies in the making that could move the laboratory industry further along this path, some of which we’ll be able to see at Lab Innovations.

In the future, it would be great for industries like pharmaceuticals and chemicals to follow in the footsteps of sectors such as automotive, where supervisory control and data acquisition (SCADA) systems have been collecting data for decades, allowing companies to move towards an Industry 4.0 business model.

However, I believe that the laboratory industry will remain experimental and human-centric for some time yet.

Gerald Law is the chief executive of Innovation DB. The company develops databases for manufacturers, R&D departments, and venture capitalists to find relevant technologies, opportunities, and innovation partners.
