An opinion piece from the chief technology officer of Dell Technologies UK

Artificial Intelligence (AI) and Machine Learning (ML) are undoubtedly two of the most exciting technologies used in businesses today. AI and ML opportunities are only increasing, with developments in processing power now improving the ability of businesses to leverage the technology for new applications, including the enhanced management and analysis of important data. Still, there is much to learn about the possibilities presented by AI and ML, as well as how best to tackle some of the ethical questions associated with their application.

For AI and ML to achieve their potential within businesses, closing this knowledge gap is critical. Many stakeholders are still not fully aware of the potential innovative power of AI and ML and therefore struggle to understand where best to focus and invest appropriately. So what can leaders do to change this?

Making AI and ML everyone's responsibility

Some business leaders assume that AI and ML sit siloed in specialized IT and technology departments, and they struggle to connect the dots when leveraging new technology to solve entrenched business problems. For AI and ML to reach their full potential, a business-wide focus with committed team collaboration is needed. With that in place, it becomes easier to identify opportunities and work together on solutions with AI and ML at their core. Having a dedicated AI/ML champion within each business unit is often an effective way of achieving this and bridging the gap frequently felt between business units and data scientists.

Using AI to improve ROI

Leaders can also look to examples of other businesses that have used the technology to improve an outcome. Nearly every department in almost every business has a use case for AI/ML, and it is difficult to imagine a scenario where a cohesive AI/ML and data strategy would not increase ROI.

Taking an example from Dell Technologies, we apply AI to measure product performance and its correlation to profitability. We use AI to predict future user experience and trends by uncovering relationships between technical components, model numbers and service requests. Over time, this helps us to improve our understanding of which components in a product lead to a better user experience, serving as helpful insight for future product development. 
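To make this concrete, the sketch below shows the kind of component-level analysis being described, in a minimal form. It assumes a hypothetical table of shipped units, the key component each contains, and whether a service request was later raised; the column names and figures are invented for illustration and do not represent Dell's internal tooling.

```python
# Minimal sketch: rank components by how often units containing them
# generate service requests (a rough proxy for user experience).
# All data and column names below are hypothetical.
import pandas as pd

units = pd.DataFrame({
    "model_number": ["M100", "M100", "M200", "M200", "M300", "M300"],
    "storage_component": ["hdd_a", "ssd_b", "ssd_b", "hdd_a", "ssd_b", "ssd_b"],
    "service_request": [1, 0, 0, 1, 0, 1],  # 1 = a service request was raised
})

# Service-request rate per component: higher rates flag parts worth a
# closer look in future product development.
rates = (
    units.groupby("storage_component")["service_request"]
    .agg(request_rate="mean", units="count")
    .sort_values("request_rate", ascending=False)
)
print(rates)
```

In practice this would feed a predictive model rather than a simple table, but the principle is the same: relate components and model numbers to downstream service requests and act on the patterns.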

Additionally, the strategic analysis offered by AI can improve short-term decision-making and provide a valuable focus for investment. For instance, by using ML to assess annual customer reports more quickly, we can more easily identify trends on a case-by-case basis to generate more customized strategies for each customer.

Suppose ML has identified that enhancing sustainability credentials by reducing overall carbon dioxide emissions is a focus for one of our customers. In that case, we can tailor our solutions and products to assist with these CSR initiatives and help the customer achieve its goals.
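As a simple illustration of how a report-level signal like this might be surfaced, the sketch below scores annual reports against a small set of sustainability-related terms and flags the customers that emphasize them. The report snippets, term list, and threshold are all hypothetical.

```python
# Minimal sketch: flag customers whose annual reports emphasize
# sustainability, so account teams can tailor proposals accordingly.
# The report text, term list, and threshold are hypothetical.
SUSTAINABILITY_TERMS = {"sustainability", "carbon", "emissions", "net zero", "renewable"}

reports = {
    "customer_a": "Our priority this year is reducing carbon emissions across the supply chain.",
    "customer_b": "We are focused on expanding into new regional markets and improving margins.",
}

def sustainability_score(text: str) -> int:
    """Count how many sustainability-related terms appear in the report."""
    lowered = text.lower()
    return sum(term in lowered for term in SUSTAINABILITY_TERMS)

for customer, text in reports.items():
    score = sustainability_score(text)
    focus = "sustainability-focused" if score >= 1 else "other priorities"
    print(f"{customer}: score={score} -> {focus}")
```

A production system would use richer language models rather than keyword counts, but the output is the same: a per-customer signal that shapes the strategy.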

Understanding data governance

When handling vast amounts of data, its management and governance are vital. At the foundation of responsibly handling data, IT and security departments must continue prioritizing security by protecting data, ensuring its accuracy, and shoring up vulnerabilities.

However, the next step is ensuring each business unit takes ownership of, and accountability for, the data in the company's possession; everyone needs to share that responsibility. Running internal AI/ML data governance awareness courses and development plans is one effective way to help all employees understand their roles and responsibilities in keeping their company's data safe.

There are already examples of organizations that have developed these in-house or have partnered with industry-recognized organizations for deep expertise.

Tackling the issue of ethics

Ethics is an issue that continually crops up when dealing with AI and ML. The first ethical concern is usually related to the privacy of the sensitive data used to train the technology. Often, a trusted third-party approach can help add a layer of reassurance that data is stored securely and protected around the clock.

A good example is the Sheltered Harbor approach, created to protect customers, financial institutions, and public confidence in the financial system if a catastrophic event like a cyberattack causes critical systems — including backups — to fail.

Institutions back up critical customer account data each night in the Sheltered Harbor standard format, either managing their own vault or using a participating service provider. The data vault is encrypted, unchangeable, and completely separated from the institution's infrastructure, including all backups.

Second, there are often concerns about the possibility of bias in AI/ML's judgement. Although some organizations use an approach centered on 'protected characteristics', whereby information such as gender, race, ethnicity, or age is removed from the algorithm, this can have detrimental, unintended consequences.

The machine is still susceptible to producing biased outcomes based on the patterns it finds, and without the key characteristics, that bias is more difficult to spot and correct. Instead, keeping all characteristics in the data, combined with regular monitoring of the model, is a safer way of identifying and correcting bias.

Simply removing sensitive data is not the answer, even if that might seem the most obvious solution. AI/ML needs a certain degree of human-guided intervention to ensure companies still meet the required ethical standards.
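To show what 'regular monitoring' can look like in practice, here is a minimal sketch that keeps a protected characteristic alongside the model's decisions and checks outcome rates per group. The group labels, decisions, and tolerance threshold are hypothetical, not a prescribed standard.

```python
# Minimal sketch: monitor model outcomes per protected group instead of
# removing the characteristic. Data and the 20-point tolerance are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "gender": ["f", "m", "f", "m", "f", "m", "f", "m"],
    "approved": [1, 1, 0, 1, 0, 1, 1, 1],  # model decision: 1 = approved
})

# Approval rate per group; a large gap is a prompt for human review.
group_rates = results.groupby("gender")["approved"].mean()
gap = group_rates.max() - group_rates.min()
print(group_rates)

if gap > 0.20:
    print(f"Approval-rate gap of {gap:.0%} exceeds tolerance; escalate for human review.")
```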

The workforce as quality control

As with issues over governance and bias, employees will need to be on hand to ensure that any AI/ML models created and used by the business are applied for the right reasons and not in ways that may unintentionally influence other aspects of the organization.

They will also need to ensure the machine is fit for purpose at any given time and not affected by 'drift', which occurs when the conditions a model was built for change and the model cannot adapt to the new circumstances without human intervention.

A machine built in one month may not make accurate predictions six months later if it is not checked by the people working with it. This again highlights the need for leaders to develop their people and add new roles, which will benefit both the organization and the industry as a whole.
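A periodic statistical check is one common, lightweight way to catch this kind of drift before it erodes accuracy. The sketch below compares a feature's distribution at build time with recent values and flags a significant shift for human review; the data is synthetic and the significance threshold is an assumed choice.

```python
# Minimal sketch: detect drift in one input feature by comparing its
# current distribution with the distribution seen when the model was built.
# The data is synthetic and the 0.01 threshold is an assumed choice.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature at build time
recent = rng.normal(loc=0.4, scale=1.0, size=1_000)    # feature six months later

# Kolmogorov-Smirnov test: has the feature's distribution shifted?
statistic, p_value = ks_2samp(baseline, recent)
if p_value < 0.01:
    print(f"Possible drift (KS={statistic:.3f}): schedule human review and retraining.")
else:
    print("No significant drift detected in this feature.")
```

Checks like this do not replace the workforce; they tell people where to look.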

Businesses will need to become AI and ML capable to remain competitive in the market. AI and ML will soon become crucial for every successful business practice, so understanding the technology and its potential is vital and should be a priority.

Now more than ever, businesses should take the necessary steps to ensure that they have an AI- and ML-competent team with the right skills and knowledge to bring that potential to life. The first step is to start small, making sure all business units and employees are on board. From there, AI and ML can be extended gradually to minimize disruption. Those already taking these steps are reaping the rewards.

About the Author

Elliott Young, Dell UK CTO

Elliott Young is the chief technology officer of Dell Technologies UK.
