Key takeaways


  • Algorithmic autonomy requires CxOs to adopt a new approach to role delegation.
  • Jason Sturman offers four possible models for accomplishing this, from implementing strategic AI advisors to augmenting employees.

By Jason Sturman

EPWORTH, UK – Charismatic business leaders find fulfillment in inspiring people, so they don’t easily welcome the idea of letting smart algorithms make important business decisions. And it’s not just them; nobody wants to be bossed around by clever code. However, that is the future of business management. Autonomous algorithms are increasingly working alongside talented people to help run some of the top global companies, including Facebook, Alibaba, Amazon, Netflix and Google.

Business leaders who are committed to data-driven performance have come to terms with the reality that increasingly autonomous smart algorithms are the key to success. Modern organizations value empowered algorithms as much as they value empowered people. However, without a clear delineation of authority and accountability, dual empowerment leads to unending human-AI conflict.

Algorithmic autonomy requires that C-level leaders take a new approach to delegation. For example, CEOs must make it clear when smart algorithms, rather than talented staff, are to be consulted. This can be difficult: some of the most agonizing board decisions about machine learning concern the extent of authority that highly intelligent software should have. Business leaders who would unflinchingly automate a factory now recoil at the idea of letting deep-learning AI determine their entire marketing strategy. Oddly enough, it is the job-loss ramifications of the algorithms' success, rather than the risk of failure, that make executives anxious.

In practice, this means that supply chain, procurement, and data science teams will be driven by algorithmic entities that save their companies hundreds of millions of dollars. Compared with current processes, the algorithms will also respond many times faster to market changes, all with minimal human intervention. Executives will have to rely on their companies' computationally exceptional autonomous AI units. This is such a big challenge that many CEOs have yet to make the switch to autonomous algorithms.

Business leaders who are committed to taking on the risks and opportunities of autonomous algorithms should consider the following four well-documented AI implementation approaches, each of which has been shown to work in practice. A note of caution, however: algorithmic innovation and the exponential growth of new data guarantee that the rise of autonomous AI will incessantly challenge human oversight.




1. Autonomous AI Advisors

This borrows from the BCG, Bain, and McKinsey management models. Executives view and treat autonomous algorithms as they would their most talented strategic advisors. However, unlike human advisors, the autonomous algorithms will never leave the company. The algorithms are constantly carrying out data-driven checks and recommending optimal courses of action. In addition to taking the initiative on analyses, the algorithms provide management with briefings on the findings. However, the human oversight committee is solely responsible for approving which decisions are deferred to the algorithms and how the decisions are implemented.
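The division of labor this model describes, in which algorithms surface findings and propose actions while a human committee retains sole approval authority, can be sketched in a few lines. This is a minimal illustration only; the class names, the inventory scenario, and the threshold logic are all hypothetical, not drawn from any real deployment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """A briefing item produced by an algorithmic advisor."""
    finding: str
    proposed_action: str
    approved: bool = False  # nothing is acted on until a human flips this

class AdvisorAlgorithm:
    """Toy advisor: runs a data-driven check and recommends an action."""
    def __init__(self, inventory_threshold: int):
        self.inventory_threshold = inventory_threshold

    def run_check(self, inventory_level: int) -> Optional[Recommendation]:
        # Data-driven check: brief management only when a threshold is crossed.
        if inventory_level < self.inventory_threshold:
            return Recommendation(
                finding=(f"Inventory at {inventory_level}, below threshold "
                         f"{self.inventory_threshold}"),
                proposed_action="reorder stock",
            )
        return None  # nothing to brief management about

class OversightCommittee:
    """Human gate: no recommendation takes effect without explicit approval."""
    def review(self, rec: Recommendation, approve: bool) -> Recommendation:
        rec.approved = approve
        return rec
```

The key design choice is that `run_check` only ever returns a `Recommendation`; the advisor has no code path that executes a decision itself, which mirrors the oversight committee's sole authority over what is deferred to the algorithms.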

In theory, the organizational challenges of algorithmic autonomy should be confined to the systems or processes being made autonomous. In practice, however, the transition, or hand-off, has presented major operational challenges as well. Inter-process and interpersonal conflicts inevitably arise from the top-down approach.

For example, at one US retailer, the whole merchandising department was replaced by an autonomous unit of algorithms. Top executives instructed store managers and employees to adhere to the directives and honor the requests of their new AI colleagues. As expected, there was palpable resistance and resentment. To ensure compliance, human monitors and audit software had to be put in place.

In this model, data scientists mediate between the oversight committee and the departments targeted for implementation. In many cases, the technologies prove easier to manage than the people, so the data scientists become a buffer between the oversight committee and the staff. They're also tasked with ensuring staff members don't hack the algorithms.

2. Autonomous AI Outsourcing

In this model, algorithms take over work that would otherwise be handled through business process outsourcing. The same factors that inform outsourcing decisions become the management principles for computational autonomy.

For this model to work, you need clear task descriptions and goals. Avoid ambiguity by providing detailed service-level agreements and key performance indicators. Managers and employees responsible for various decisions and processes determine how resources are allocated and whether the algorithms will offer improved optimization and innovation. Among the important benefits that autonomous algorithms will offer are high-level reliability and predictability.
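A detailed service-level agreement for an algorithmic unit can be made as concrete as one for a human outsourcing vendor. The sketch below shows one possible shape for such a contract; the KPI names, targets, and thresholds are invented for illustration and would differ in any real agreement.

```python
# Hypothetical SLA for an autonomous forecasting unit: each KPI carries a
# target plus a hard floor or ceiling, mirroring an outsourcing contract.
SLA = {
    "forecast_accuracy": {"target": 0.95, "floor": 0.90},
    "response_time_ms":  {"target": 200,  "ceiling": 500},
}

def check_sla(measured: dict) -> list:
    """Return the KPIs on which the algorithmic unit breached its SLA."""
    breaches = []
    for kpi, terms in SLA.items():
        value = measured[kpi]
        if "floor" in terms and value < terms["floor"]:
            breaches.append(kpi)      # fell below the contractual minimum
        if "ceiling" in terms and value > terms["ceiling"]:
            breaches.append(kpi)      # exceeded the contractual maximum
    return breaches
```

For example, `check_sla({"forecast_accuracy": 0.88, "response_time_ms": 250})` flags only `forecast_accuracy`: the response time misses its target but stays within the hard ceiling, which is exactly the ambiguity a detailed SLA is meant to resolve.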

Autonomous outsourcing shares traditional outsourcing's challenges around interoperability, responsiveness and flexibility. An excessive focus on defined deliverables can subvert the very initiatives that drive AI-driven opportunity exploration and value creation, so a business may end up with an excellent collection of autonomous algorithm units that don't work well together. It is therefore imperative that business leaders keep interoperability in mind when formulating an autonomous outsourcing model for their companies.

In this model, data scientists play the role of project managers, outlining quality standards for algorithms and data as they bring consistency to service-level agreements. The data scientists provide support for managers and employees who are responsible for AI-driven outcomes.

3. Autonomous AI Employees

Even the most talented employees have their limits. Compared to these employees, autonomous AI algorithms are like eccentric geniuses. Therefore, the question on business leaders’ minds is whether the average manager and employee can effectively work together with exceptionally intelligent but limited autonomous entities.

Modern enterprises use AI wherever computational autonomy can help achieve business outcomes, and they train their staff to onboard and work with top-level algorithms. Decades from now, HR managers will likely have direct control over the entire HR system, thanks to brain-machine interfaces.

Employees learn to treat the software as valued colleagues who typically provide the right answers. Variations of this model are already in use at companies such as Alibaba and Netflix, and Google is rumored to be gearing up to become an AI-first environment. Rather than operating as static code, a machine-learning model needs to be constantly fed with data, updated and tweaked.

On the one hand, blending the autonomy of people and machines can substantially weaken accountability within the organization, partly because in rapidly evolving learning environments it may not be clear to project managers whether the people or the algorithms need retraining. On the other hand, a culture of collaboration is more likely to nurture success than one that discourages it.

In this model, data scientists act as facilitators, adopting AI interfaces that encourage collaboration and helping to minimize human-AI conflict. Business leaders rely on them to make sense of the vast cultural transformation that accompanies pervasive autonomy.

4. Inclusive Autonomy

This model is popular with many Wall Street hedge funds. These firms allow AI full autonomy in steering the organization to new levels of risk, profitability and innovation, and the algorithms' results would humble anyone who still prefers having humans in the driver's seat. Executives at these funds have deferred an enormous portion of their decision-making to the algorithms.

Investment funds use autonomous AI to gain a strong competitive edge in the market. Machine-learning software is already being used to train other machine-learning software, and we may well reach a point where humans have nothing more to teach the algorithms.

In this model, autonomous algorithms are at the center of the company's innovation and growth, and recruitment is based on a candidate's ability to push the boundaries of the technology. Inclusive autonomy requires humility and a readiness to place real faith in the machines. Numerous fund managers and finance researchers report that the algorithms often make trades based on decision-making processes beyond their own cognitive capacity.

Among the most active machine-learning research areas is the development of meta-intelligence software that generates narratives and rationales to explain data-driven machine decisions to humans.

For enterprises that employ this model, data science is dominated by risk management and the need to attain an accessible, human-level understanding of complex algorithms.
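The simplest form of such rationale generation can be illustrated for a linear scoring model, where a "narrative" is just a ranking of weighted feature contributions rendered as a sentence. This is a toy sketch of the idea, not any fund's actual software; the feature names and weights are invented for illustration.

```python
def explain_decision(weights: dict, features: dict, top_n: int = 2) -> str:
    """Generate a plain-language rationale for a linear scoring decision
    by naming the features that contributed most to the score."""
    # Each feature's contribution is its weight times its observed value.
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    # Rank features by the magnitude of their contribution, largest first.
    ranked = sorted(contributions, key=lambda k: abs(contributions[k]),
                    reverse=True)
    drivers = ", ".join(f"{name} ({contributions[name]:+.2f})"
                        for name in ranked[:top_n])
    score = sum(contributions.values())
    return f"Score {score:+.2f}, driven mainly by: {drivers}"
```

For example, with hypothetical trading signals `{"momentum": 2.0, "volatility": -1.5, "volume": 0.1}` as weights and observed values `{"momentum": 1.0, "volatility": 0.8, "volume": 2.0}`, the function reports the overall score and names momentum and volatility as the dominant drivers while omitting the negligible volume term. Real explainability tooling is far more sophisticated, but the goal is the same: an accessible, human-level account of a machine decision.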

Final Thoughts

Granted, these four models of incorporating autonomous algorithms into management anthropomorphize the AI systems: the software is viewed as a set of accountable agents rather than inanimate code. In each case, top executives justifiably agitate for greater transparency in their companies' management principles. As the capabilities of autonomous algorithms continue to advance, increased oversight will lead to additional insight.

CEOs and board members need to monitor the algorithms closely, promoting simulations to determine the boundaries of the technology. Business leaders should also be careful about blending multiple approaches, as doing so may have unforeseen implications for responsibility and accountability. They must ensure clarity in deference, delegation and direction.


Jason Sturman writes about HR, data, and technology. He often contributes to publications such as the People HR blog, and has co-authored a number of guides and white papers for HR professionals.