Hacking The Capital Markets With AI To Save The Economy—And The Planet

Ciarán Daly

September 21, 2017

10 Min Read


For over eight decades, the Ford Foundation has sought to reduce poverty and injustice, strengthen democratic values, promote international cooperation, and advance human achievement. 

Graham Macmillan serves as Senior Program Officer for Impact Investing at the Ford Foundation. In this role, he is responsible for developing the team’s grant-making strategy and coordinating it closely with the foundation’s program-related investments.

Prior to joining Ford, Graham was Director of Corporate Citizenship Partnerships at Citigroup. As director, Graham worked with Citi’s myriad businesses to drive economic and social impact with clients and other key stakeholders while contributing to Citi's broader citizenship reporting requirements.

In our interview, Graham provides a dose of much-needed critical analysis. As well as noting the massive potential AI has for profitability, he points out the potential AI has to incentivise environmental and social change through improved modelling of risk. With honesty that is as refreshing as it is insightful, he calls on executives, tech firms, and the public alike to face up to the challenges of bias, power, and responsibility that must be contended with in order to ensure the Fourth Industrial Revolution benefits the planet—as well as the economy.

AI Must Address Humanity’s Challenges

Amid the grand promises and claims underlying its adoption, commentators and companies alike have sidestepped the very real issues facing society as AI is adopted en masse. The technology is so revolutionary that, in many cases, it has been treated as a fix-all: for productivity, for profitability, for social and environmental challenges.

We live in a time of great change. Just as there are myriad technological advances that nobody could have predicted even ten years ago, there is a rapidly growing number of new risks and challenges that we have yet to truly address. Graham is right to remind us that they still must be addressed, and that AI could help make that happen.

“The planet faces a number of significant challenges; from climate change, to inequality in all its forms. These challenges, if left unchecked, threaten to undermine societies and cause significant disruptions to communities, governments, and corporations,” he argues.

He believes that investors and companies are increasingly failing to see these challenges as fundamentally systemic risks. Citing Professor Eugene Fama’s Efficient Market Hypothesis, which states that ‘at any given time and in a liquid market, security prices fully reflect all available information’, Graham argues that, “among the $100T invested in the global capital markets, we do not at present have all the available information, and as a result are fundamentally mispricing risks—especially those related to social and environmental issues—and thus creating a distortion in the capital markets. This is a systemic, material risk—not only for investors but operating companies as well.”

Stranded Assets Pose Systemic Material Risks

This is a mistake because, “if mitigated appropriately, these risks can potentially be converted to opportunities.” Attractive returns and company growth are within reach, he says. “In fact, according to the Global Sustainable Investment Alliance, there is roughly $23T of assets under management with a focus on responsible investment. This represents a 25% increase from just two years prior. The institutions that recognize this trend and position themselves appropriately will help to drive innovation in more sustainable business models that are increasingly proving themselves to be better forms of investing with attractive returns.”

Take stranded assets, which arise from the effort to limit global warming to no more than 2°C in accordance with the Paris Climate Agreement. “In order to not exceed this threshold, the amount of burnable carbon in the ground would need to be limited. This limitation, however, has significant impacts on energy companies that rely on proven reserves to value themselves, seek financing, and identify share prices. If these companies are unable to burn these resources, either by regulatory mandate or other enforcement mechanisms, they are left with stranded assets—assets that will be defined as non-monetizable.”
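To make the mechanism concrete, here is a deliberately simplified sketch in Python. The reserve size, per-barrel value, and burnable fraction are hypothetical figures chosen purely for illustration, not numbers from the interview; the point is only that a cap on burnable carbon flows straight through to a reserve-based valuation.

```python
# Illustrative sketch of the stranded-asset mechanism described above.
# All figures are hypothetical assumptions chosen for clarity, not real data.

proven_reserves_bbl = 10_000_000_000   # proven reserves, barrels of oil equivalent
value_per_bbl = 20.0                   # assumed net value per barrel, USD
burnable_fraction = 0.65               # share of reserves usable under a 2°C carbon budget

full_valuation = proven_reserves_bbl * value_per_bbl
burnable_valuation = proven_reserves_bbl * burnable_fraction * value_per_bbl
stranded_value = full_valuation - burnable_valuation

print(f"Valuation assuming all reserves are burnable: ${full_valuation / 1e9:,.0f}B")
print(f"Valuation under the carbon budget:            ${burnable_valuation / 1e9:,.0f}B")
print(f"Value at risk of being stranded:              ${stranded_value / 1e9:,.0f}B")
```

Under these made-up assumptions, roughly a third of the valuation becomes non-monetizable, which is exactly the kind of gap that reserve-based financing and share prices would have to absorb.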

The material risk stranded assets pose is extremely real, Graham argues—but the Ford Foundation believes that AI could play a major role in providing the solution. “It was estimated recently that European investors alone are exposed to potentially $1T in stranded assets. To determine those stranded assets and the risks of climate change took decades of scientific analysis by thousands of researchers.”

The Big Idea

“The big idea for us now is to dramatically reduce the time it takes to model complex systems to better understand cause and effect, and then apply artificial intelligence tools to rapidly identify and price previously unknown social and environmental risks. All one has to do is imagine the ability to price water risk, food security risk, or even poverty risk for a planet that will soon have over 9 billion people. At present, we simply cannot do it accurately enough with the given available techniques.”

In other words, AI could be used as a kind of hack within the capital markets: a way of producing risk valuations accurate enough to incentivise fossil fuel divestment over the long term. This could protect both the investors in those assets, by exposing the underlying risks to greater oversight, and, of course, the environment itself. “The fundamental idea here is to better use rapidly advancing technologies to understand these risks in order to put a more accurate price on these risks, so investors are not exposed over the long run. These decisions will influence trillions of dollars in assets under management.”
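As a very rough illustration of what pricing such risks with machine learning might look like, the sketch below trains a standard regression model on synthetic data to map environmental indicators to a risk discount on asset value. The features, data, and model choice are assumptions for illustration only; they do not represent the Ford Foundation’s approach or any production system.

```python
# Generic sketch: learn a mapping from environmental/social indicators to a
# risk discount on asset value. Data and features are illustrative assumptions;
# a real model would need vetted datasets and careful validation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical indicators: water stress, carbon intensity, regulatory exposure (0..1)
X = rng.uniform(0.0, 1.0, size=(n, 3))

# Synthetic "ground truth" risk discount (share of asset value at risk), plus noise
y = 0.4 * X[:, 0] + 0.35 * X[:, 1] + 0.25 * X[:, 2] + rng.normal(0.0, 0.05, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.3f}")

# Estimated discount for one hypothetical asset: high water stress, moderate carbon intensity
asset = np.array([[0.8, 0.6, 0.3]])
print(f"Estimated risk discount: {model.predict(asset)[0]:.1%} of asset value")
```

The design question Graham raises is less about the model itself than about the inputs: the value of such a tool depends entirely on whether the social and environmental data it learns from are reliable and comprehensive.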

Graham believes that this is a unique moment in time, claiming that AI has a “potentially pivotal role” to play in advancing complex models, organizing massive sets of data, and dramatically reducing analytical time by training machines to read and learn.

Addressing Bias, Inclusion, And New Opportunities

While Graham touts AI’s potential to sharpen capital market pricing and the understanding of risk, he cautions that this optimism assumes there are only positive outcomes from the use of these technologies. “What happens if an analysis shows that laying off hundreds of workers and dumping tons of pollutants into a stream is actually going to be better for the company or the investor? What process is in place that makes the determination of what is right and what is wrong? Whose judgement or value system is it anyway?”

“Increasingly, we have seen a number of private sector, frequently investor-led, decisions that create profoundly negative externalities. Oftentimes, it is the government and the taxpayer that bears the burden of these decisions. How these externalities are considered, and what role AI played in creating them, is critical to ensuring a license to operate.”

This links to fundamental issues of accountability and bias. Training data bias in AI has become a hot issue on social media lately, with many commentators expressing concern over potentially biased machine vision and claims that neural networks can infer an individual’s sexuality, political leanings, and more.

How can we address this? Graham believes we need to ask important, critical questions at every stage of AI implementation: “Machines need to be taught and, generally, humans do the teaching. Who are the humans writing the rules and whose rules are they using? Who is determining what is equitable and what is not? How are we accounting for implicit biases inherent in who we are? Who is to say some designer or coder in California will be able to train a machine to be empathetic and non-discriminatory to a Kenyan Maasai tribesman, or a baker in Buffalo, for that matter?”

This extends to the technology sector itself, which Graham believes provides important lessons for firms developing AI. He believes that, given the pivotal role AI is going to play in enabling the new economy, issues of inclusion and discrimination must be recognised and addressed.

“The tech sector has been pretty dreadful at enabling an inclusive economy. By this, I mean that there is a huge missed opportunity for technology firms, and especially AI firms, to better understand the value and values in building a more inclusive economy. There is ample evidence that businesses with more diverse and inclusive teams, across socioeconomic, gender, and racial dimensions, tend to perform better and see greater economic growth. The tech sector, for all its innovation, can also be one of the most obfuscated industries in the world. How can we ensure that the public can be a meaningful check on innovations in AI? How do we design accountable algorithms that can be transparently tested for bias? We need meaningful, collaborative partnerships between corporations, government, and civil society to determine the ground rules for designing equitable AI systems.”
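One small, concrete answer to the question of transparently testing for bias is to publish simple disparity checks alongside a model’s decisions. The sketch below uses made-up group labels and outcomes to compute per-group approval rates and a demographic parity gap; it is an illustrative audit pattern, not a complete fairness framework and not a method endorsed in the interview.

```python
# Minimal sketch of a transparent bias check: compare a model's positive-decision
# rate across demographic groups. Group labels and outcomes here are hypothetical.
from collections import defaultdict

decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
for group, rate in sorted(rates.items()):
    print(f"{group}: positive-decision rate {rate:.0%}")

# A large demographic parity gap is a flag for further review, not proof of discrimination.
gap = max(rates.values()) - min(rates.values())
print(f"Demographic parity gap: {gap:.0%}")
```

The appeal of checks like this is precisely that they can be reproduced by the government, civil-society, and corporate partners Graham wants at the table.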

"Who are the humans writing the rules and whose rules are they using? Who is determining what is equitable and what is not? How are we accounting for implicit biases inherent in who we are? Who is to say some designer or coder in California will be able to train a machine to empathetic and non-discriminatory to a Kenyan Maasai tribesman, or a baker in Buffalo, for that matter?”

On a broad macroeconomic scale, he argues that the public are right to be concerned about what AI means for work, even as some argue that AI will augment rather than replace human workers. “Additionally, we are well-aware of the dialogue concerning the impacts of AI, automation, and other technological advances on people, especially as it relates to the ‘Future of Work’. Technology executives, investors, civil society, and government must all be engaged in a rigorous and honest debate about what the future holds and make a determination of how we create real opportunities and not just risks.”

The Ford Foundation at The AI Summit San Francisco

“The Ford Foundation has myriad interests in AI, but our participation in this year’s summit is anchored in the perception that AI has a powerful and positive role to play in shifting the allocation of financial assets within the capital markets away from negative social and environmental actions to more positive ones. We recognize and are actively engaged in broader efforts to address some of the potentially more negative consequences of AI and technology more broadly, including algorithmic discrimination and privacy rights, among other challenges.” This is reflected in the Ford Foundation’s support for the AI Now initiative, a research initiative at NYU working across disciplines to understand the social and economic implications of artificial intelligence, as well as in research into the ethical design of machine learning at Princeton’s Center for Information Technology Policy.

“I am here to plant a flag—to raise awareness. And to learn. If just 10% of the attendees were interested and inspired to further explore how these technologies could play a vital role in not only saving our planet but also creating business opportunities, then we would have accomplished a great deal. I have no doubt that in the next five years, advanced modelling technologies, coupled with better datasets and faster machines, will be an essential part of empowering more and more people while ensuring our planet thrives.”

Graham Macmillan will deliver a keynote speech, entitled ‘How the Capital Markets and AI Can Profitably Save the Planet’, at next week’s AI Summit.

