Ethical AI: More than a marketing ploy?
SAS's data ethics head on the realities of handling AI responsibly
Ethical AI. Socially acceptable AI. Responsible AI. However you phrase it, the concept of a fairer, more representative artificial intelligence is gaining traction.
Some big names in tech have already made strides. Microsoft has pledged to overhaul the usage of its facial and emotional recognition tools. IBM has an internal AI ethics board. And Meta tasked its AI researchers with auditing its OPT language models to remove some harmful behaviors.
But do these considerations come from a genuine, heartfelt place? Or are they a cynical marketing gimmick designed to capitalize on something customers increasingly value?
To find out, AI Business sat down with Reggie Townsend, a member of the U.S. National AI Advisory Committee and director of data ethics practices at SAS.
Townsend suggested that for SAS, approaching AI responsibly is not a competitive differentiator in the market, but rather a matter of employee retention. “We've taken a people-over-profit approach. And this is not to say that every single potentially ethical dilemma we just completely shy away from. No. Wisdom has taught us to evaluate the dilemmas.”
From SAS with love
Townsend admitted skepticism around the concept of a company presenting its AI systems as ethical purely for marketing purposes. “That is a phenomenon that I hear about more in the popular press than I've actually witnessed,” he said.
Instead, the data ethics director said he has worked with colleagues at other organizations that approach AI in the same way as SAS, with a focus on ensuring the greatest benefit for the greatest number of people.
“Are there actors that may not be as thorough? Sure. Are there folks who have different risk profiles and pursue levels of risk that others may not? Certainly. Are there evil actors out there who just want to slap a label saying, 'we are acting like the best?' I don't know if that is the case, at least that's not my experience. I haven’t seen that for myself.”
Rather than questioning whether a company's ethical AI approach is genuine, Townsend's focus was on taking stock.
He said that at SAS, the main focus is ensuring the company’s software “doesn’t hurt people.”
And to achieve that goal, the stalwart vendor has adopted what he described as a human-centric approach.
“One of the benefits of being around for 46 years is that we've seen fast cycles and slow cycles and everything in between," Townsend explained. "And with a little bit of age comes some wisdom and you come to know what you care most about. And one of the things that we care most about is to make sure that we take a human-centric approach to our technology."
“We aren't doing [ethical AI] for the purpose of competitive differentiation, because wisdom has told us that this is the best thing to do in terms of long-term sustainability for a company. But also, that people inside of the company care about this sort of action and activity.”
We’re all in this together
Townsend’s comments on considering employees' views come at a time when tech workers generally still have a multitude of potential offers in an ever-growing market, despite some recent well-publicized job cuts at Big Tech companies.
Companies are increasingly investing in remote and hybrid work setups to meet the demands of IT workforces. And with CompTIA projecting some 178,000 new tech jobs in 2022, aligning with the values of staff is an important consideration for vendors like SAS.
To align with that view, Townsend described SAS’s approach as one with “everyone in the boat” when it comes to ethical AI.
“If the devs are the ones who bring the situation, great, but now it needs to be evaluated by a collection of folks across several functions. If the salespeople or higher-ups bring the situation, the same is true.”
“Operationally, we put an AI oversight committee in place that is responsible for making sure that we evaluate matters that are a potential dilemma. We have and are continuing to work through our methods by which we escalate that activity to the highest levels of organizations for certain decisions. It's still a work in progress, but at least we've got a framework for how to operationalize that.”
Raising awareness
When asked what the biggest issue facing AI was, Townsend pointed to awareness among the general public.
Fear of AI is becoming increasingly common, with what he described as a “predominant” view among the general public that AI and related technologies are going to take jobs and enslave humanity.
“It gets extravagantly bizarre,” he said. “It's really important for the general public to just become aware of what is and what is not as [it] relates to AI.”
“When people don't know much about a subject, it's easy to fear it. And when people don't trust, they don't trust because they are afraid and then they don't adopt. And if they don't adopt, is that the end of the world? Maybe not, but AI can beneficially impact our lives. Then it would be such a shame for us to not adopt (it).”
“If the greater part of the public doesn't adopt and if we drive the tech underground, the people that will take advantage will be those with malintent, and we all lose in that scenario.”
The idea that AI is going to take over humanity is not a new one. A pop culture staple, the malevolent robot or computer system is a misconception that existed long before Townsend joined SAS back in 2015. It’s one he’s keen to quash — but it’ll need everyone’s help.
“It's all of our responsibility (to inform the public). Those of us in the industry … have a duty to care. And part of caring means sharing,” he said. “We have a unique responsibility to share what we know with the general public. And one of the things that I tell them is now that you know you've got to go do better.”