Wipro Global Chief Privacy Officer on Implications of the EU AI Act

Ivana Bartoletti joins the AI Business podcast to discuss AI regulations around the world

Deborah Yao, Editor

July 27, 2023

19 Min Read

With the EU's AI Act close to becoming law, Wipro's Global Chief Privacy Officer Ivana Bartoletti spoke with AI Business to discuss what makes it different from AI regulations in other countries. She also explained why the changing nature of AI makes it more complex to regulate than nuclear energy.

What follows is an edited transcript of the conversation.

AI Business: The EU AI Act is probably the most sweeping AI regulation of its kind in the world, for now. So what do you think of its stipulations?

Ivana Bartoletti: AI is already regulated even without the European AI Act. Actually, the first regulation of its kind in the employment sector comes from the U.S. − from New York, for example. And the FTC in the U.S. has been very strong in the financial arena in saying, ‘you have to be fair; we can't have algorithms that discriminate against people.’ So to an extent, AI is being regulated by existing legislation − data protection and non-discrimination laws, human rights, consumer laws − all of which already applies to artificial intelligence.

What the European AI Act does is focus on the product. It identifies products that are high risk − and by high risk it means they are a threat to the fundamental values binding Europe together, or they are part of products that are already high risk, for example, machinery used in health care. All of these higher-risk systems have to undergo some due diligence. And there are also systems that are simply not allowed; they're banned in the EU.

So the approach of the European Union here is very much about the product, and it is a little bit like the GDPR − creating shared definitions across the EU and a shared market across the EU, trying to combine the values of the European Union with innovation.

But what is really interesting is what is happening with the U.S. at the moment. The most interesting development in recent times is the taxonomy that the U.S.-EU Trade and Technology Council has just published around AI. What does AI mean? What does machine learning mean? Words are very important. So the fact that the EU and the U.S. have agreed on a taxonomy − which means they've agreed on the basic vocabulary that defines the terms − is really important. AI has been regulated for a while already because it is part of systems; it does not exist in isolation. The European AI Act focuses on the product.

But there is also an alignment between the U.S. and the EU on defining responsible, trustworthy artificial intelligence. The European AI Act is part of the wider development and relationship between the U.S. and the EU on AI.

AI Business: Why is having the same taxonomy important and key here?

Bartoletti: It's very important because there's a lot of confusion about what AI is and what generative AI is. Having the same definition means we can have common ground to act together. What we have seen in recent times, especially coming from the U.S., is a call for like-minded countries and nations to align behind a benevolent, responsible and trustworthy use of artificial intelligence. … It's crucial to bear in mind that creating the vocabulary means creating the basis for collaborations that we can build on.

AI Business: What do you think about the EU AI Act’s approach to looking at the product and also assessing it via the different risk levels?

Bartoletti: A risk-based approach is pragmatic, because it means you don't treat all AI the same. Some products have a bigger impact on, for example, our dignity or our civil liberties − say, an algorithm that could discriminate in deciding whether you can access a job or not − and those things carry more weight, to an extent.

However, this approach comes with risks of its own, because some areas, like privacy, are more closely related to human rights, and sometimes I don't feel they completely fit the risk paradigm. There is something more to it. The language of risk, to an extent, means that you accept the product, you accept the AI, and the only thing you do is tinker at the edges to mitigate it. But you don't question whether you should have the product in the first place.

AI Business: Do you have any thoughts on the approaches of other countries, like the U.S., the U.K., and elsewhere?

Bartoletti: We are seeing an alignment in many ways, in the sense that every single country is doing something about it. Some of the strictest legislation around algorithms actually comes from China. LLMs (large language models) and the entire area of generative AI are challenging for China, obviously, because they worry … about it being very strong. (However,) the controls that China has put in place are stronger than the algorithms themselves.

The Cabinet in India has (recently) approved the Privacy Bill, but it has also said very clearly that there will be regulation to harness the value of generative AI, and to do so in a responsible manner. Then you have the U.K., which has gone for a third way since Brexit, basically saying, ‘we don't want to regulate, but we will empower existing regulators to adapt the current systems to the new challenges that AI is bringing in.’ Then you have the U.S., where the approach is less product-based, unlike the EU, where the European AI Act applies to a product regardless of the sector − the Act applies to a high-risk product because it's high risk, not because it's in health care. The U.S. approach is a bit more sectoral. You've seen it in financial services, in housing, in employment − different sectors approaching artificial intelligence in different ways.

(However,) we are aligned across the globe on the understanding that we need to regulate generative AI. It has brought a new dimension to the risks around our democratic viability, because of the potential spread of AI-generated fake news, how easy it is to produce, and the erosion of public trust. With the current geopolitical situation − the invasion of Ukraine and all of that − the risk of political turmoil is obviously adding a lot of pressure on countries to assess these risks around generative AI. We're aligned on recognizing the issues, but with different approaches.

AI Business: Do you really need special rules for generative AI? Or can it be regulated under more general AI rules?

Bartoletti: There are specific considerations in generative AI. For example, you really need to reinforce the fact that AI-generated images or news have to be labeled as such. Also, it is really important to realize that there are issues around where the data is coming from. This data may be scraped from the web, from all sorts of sources, and may be replicating stereotypes that are present in society. … If these models are trained on the large data sets that are available in the world, then the language is definitely biased, because most language − especially in countries with a more patriarchal background − will inevitably replicate stereotypes.

AI Business: Recently, around 150 executives from Europe signed an open letter saying that the EU AI Act would restrict them from developing AI more robustly and that, as a result, the EU stands to lag behind other countries in innovation. What do you think about that?

Bartoletti: It's a complex issue. First of all, I honestly do not believe that the European AI Act can hinder innovation. Also, I have never believed that (the choice is) either innovation or the preservation, protection and even enhancement of human dignity and the protection of human beings. I cannot see this. And if there is a dichotomy between these two things, that is not innovation, in my view. We have to have the courage to bring the two together. And this is important − in financial and economic terms, too − because the more people benefit, the more they can trust these products and use them. And by using these products, they generate data, and this data can be used to train the systems.

To me, it makes sense to bring together innovation and the safeguarding of human dignity, privacy and all of that. That is one point. The other point is that this is what Europe does: It's called the Brussels effect. It is the idea that a group of 27 countries can negotiate and legislate in a democratic way − which is what they do − set some requirements and have the ambition to act as a global regulator. The Europe Agenda 2020-30 is about technological sovereignty, and it can be argued that the way to achieve it is through this Brussels effect, because Europe at this stage doesn't have Google and doesn't have Alibaba; it doesn't have the big companies that America has, and it doesn't have the big companies that China has.

So obviously, this is what Europe does. I would encourage business leaders to really embrace this opportunity to bring together innovation and the safeguarding of rights. Of course, the AI Act is not perfect. Of course, it requires a lot to happen at the national level. We haven't yet seen the standards that are being developed. Companies are intervening now because the act is going through the tripartite dialogue, which means there is a little space for lobbying.

AI Business: With different AI regulations in different countries, how can companies navigate this landscape? Surely some of those regulations could conflict?

Bartoletti: For sure. To an extent, it's a little bit like what happened with the GDPR. At the beginning, GDPR was the flagship regulation from Europe. Now, there are so many pieces of privacy legislation around the world. Even in the U.S., you have different state laws. That's a challenge for regulatory compliance.

First of all, focus on what's in common, which is fairness. The idea is that algorithms should not discriminate − not just for ethical reasons, but also to produce a correct output. That is true across every single piece of legislation, or proposal for legislation, that I've seen. Then meaningful transparency − such as about data provenance and whether it should be a requirement − is discussed everywhere.

Next is interpretability, which does not mean that you have to show the code, because it wouldn't be useful to a normal person. It would also be very difficult for companies, especially where the code is their IP. Then privacy. I wouldn't underestimate privacy, because privacy is already regulated. I always say to companies, ‘we've got to get it right from the privacy standpoint,’ because then half of the work will be done: if we do privacy by design in AI, we solve the problem of provenance and transparency, we solve the problem of fairness, and we solve the problem of data subject rights and information rights, which are present in legislation all around the world.

People have a right to access their data. Now, how you do that in AI is not easy. You may have to go and create a machine that unlearns, rather than learns, because people still have a right to be forgotten. So I think it's really important to focus on what's in common, and start from there.

AI Business: There's talk about establishing a global AI watchdog similar to the International Atomic Energy Agency or IAEA, given concerns by OpenAI CEO Sam Altman about the potential existential threat of AI. So what are your thoughts on establishing this global AI watchdog?

Bartoletti: It would be too easy and wrong to say we just want something like the European atomic agency, because the European atomic agency was created at a specific time with specific concerns. The parallel with nuclear is very useful to us − in my book, “An Artificial Revolution: On Power, Politics and AI,” I made that parallel in 2018, before the AI godfather (Geoffrey Hinton) made it. What I'm saying is the parallel with nuclear is very important in the sense that, academically, you could say that nuclear has great potential and great risks.

This is the same with AI. What did we do with nuclear energy? We created international avenues that worked − or didn't really work, if you think about the Iran nuclear deal, which took a long, long time. But the point is that we recognized globally that we needed to regulate this sector.

The problem is that when it comes to AI, there is something different. What's different is that AI evolves, and its capabilities become stronger and bigger. With nuclear, we know what the risks are. With AI, we do − but to an extent, we don't. When I say we need a global body to oversee all of this, I don't mean that I want to replicate what the European atomic agency does, because that was a different time, a different global political situation. And also, it can't really deal with the evolving capabilities of artificial intelligence.


(Beyond creating) an agency like that, where there is a genuine recognition of the potential harms of AI, it is important to recognize that there is also a potential to distract us from the harms of AI that we know already − privacy, disinformation, misinformation, security, the exploitation of the most vulnerable − all of which could happen already with artificial intelligence. So by talking about dramatic risks, even the risk of the extinction of the whole of humanity, I feel that is very much a distraction from sitting down around the table and dealing with the nitty-gritty of the real harms that we know, that we've already seen in action, and that we need not just to mitigate but to eliminate.

(A global agency should be created) not because AI is leading us to the destruction and extinction of humanity, and therefore we need an international body, (but because) we need a global place, an avenue where we can look at what different countries are doing, where there could be some investigative powers − for example, publishing indexes of what is happening in different countries, a little bit like what Transparency International does now. You would have reports from countries where you can look at the level of protection of individuals and the level of protection of the environment, because artificial intelligence needs to be sustainable − all of this.

That's what an international body should do: bring together the best minds at the global level. But what it cannot do is replace the laws that we need to enact. This is why I get a little bit suspicious when I hear calls for this body to be set up, (perhaps) as a way to avoid doing what needs to be done now. These bodies may take five or 10 years before they go into action.

AI Business: Do you have any thoughts on how we deal with this evolving AI threat? With nuclear, the threats are known and they don't change, but AI gets stronger and smarter. Is the solution regulating for this evolving strength of AI? Or do you think the solution is in engineering?

Bartoletti: There's no one solution. There are a lot of things that need to happen together. There is, of course, regulation, which is very important. But regulation is no panacea, partly because most of AI is already regulated.

For businesses, before a product hits the market, the product has to undergo due diligence. That is very important; it needs to be done to ensure businesses are accountable for what they're doing. There is also governance, which is a business responsibility: businesses are held accountable for the products they put on the market, the systems they use and the due diligence they apply.

Then there is the responsibility of governments to enforce legislation; there is the responsibility of countries to enhance the digital literacy of individuals and ensure that we have a more diverse workforce. We still have a male-dominated sector, and a very homogeneous workforce, especially in the West, is not going to help in democratizing and creating better AI. So it's multi-layered. And I obviously do believe that we need an international approach. … But we also need, especially right now, an alliance of countries aligned behind democratic values that gets behind some principles around good AI. …

I do think we are in a crucial moment in the relationship between humanity and technology right now. We've got to get it right. I was a little bit surprised when I saw this (recent) U.N. conference in Geneva on the state of AI, where they had invited robots to speak and journalists asked them questions. I thought it was a clever idea to do that.

But I was also looking at these robots and thinking, ‘why do they look so human?’ The female-looking robots were very slim, very stereotypical. I'm thinking, ‘is this the future that we want?’ This is what I mean when I say this is a crucial time in the relationship between humanity and technology. This is what I mean in terms of defining the big questions: Do we want these robots to be so human-looking? Do they reiterate the stereotypes and prejudices that we have in society?

AI Business: Do you think AI poses an existential threat? Do you have a position on that?

Bartoletti: No, I don't think it necessarily does. AI poses risks that are not new, in the sense that they've been highlighted for a long time. Joy Buolamwini highlighted the risk of facial recognition failing to recognize women of color years ago. … Meredith Whittaker and a lot of American women have been leading on this for a very long time. Some of them also lost their jobs because of it. So these risks are not new. We've known them, and we also know some of the answers. So I'm not concerned about AI bringing catastrophic risks. I'm concerned about humans not accelerating regulation and governance so that we can make the most of AI. That's what I'm concerned about.

AI Business: You're one of the co-founders of Women Leading in AI. When you started this organization, what was your goal and what does it do?

Bartoletti: International Women's Day was approaching, and we − some colleagues and academics, businesswomen − were having a conversation about what happens with artificial intelligence. That was a time when (certain) stories were starting to emerge. For example, women were being given less credit because women traditionally earn less than men; and COMPAS, the algorithm in the U.S., brought to public attention how an algorithm was assigning Black people a higher risk of recidivism regardless of what crime they had committed. So these stories were starting to come out, and we were thinking, ‘wow, we all consider ourselves feminists in that group, and we've been fighting so much for all of this, and now we've got AI and all these things are going to come back.’

We were also thinking − and this was very much a personal feeling that I had − that a lot of these tools coming up in AI were not really benefiting us. Instead of doing three things at the same time, as women often do, I was doing 10 things at the same time with the IoT, with AI … because it was technologically possible. So instead of freeing up my time, I was actually having to do more.

When I started to share these feelings with other women, they felt the same. So we realized that ‘actually one of the reasons why this is happening is because we are not in power. We're not deciding which tools we're going to produce. We're not looking at the data sets.’ Because if we were looking at a data set, we would probably identify a problem and say, ‘if you use these parameters, if you use this data, the result is going to be biased.’

We need more women in AI policy. We need more women running companies and product development. We need more women coding. So it's not just more women in tech; we need more women where the decisions are taken around the policies surrounding AI. That is simply how everything came about. I wrote an article in The Guardian (newspaper) in England, and I just convened a meeting. I said, ‘let's meet up at the London School of Economics.’ We got a room and 160 women showed up. That's how we started. It's been a fantastic experience so far.

Now, Women Leading in AI has partnered with Equality Now, a big equality-based organization that works very closely with the UN. Together, Equality Now and Women Leading in AI founded a campaign called the Alliance for Universal Digital Rights, where we have put together proposals for what human rights mean in the digital age. We're working with the UN Tech Envoy on the Digital Compact, the new tool that the UN will issue, most likely in 2024, on what human rights mean in the digital age, which will go together with the new Universal Declaration of Human Rights.

AI Business: What's next for Women Leading in AI?

Bartoletti: We will be at the Internet Governance Forum to work on what the Digital Compact means for women and how we can advocate for a tool that can be useful − one that can be used and leveraged by courts at a national level in cases that, for example, require redress for people when they've been subjected to AI-generated harm. This is the focus at the moment.

Then obviously we do a lot of convening, supporting, mentoring, helping women find their next role in transition, for example, going from maternity leave to wanting to learn how to code. There's a lot of sharing of knowledge as well; it's a safe space.

We've done a lot of work on the European AI Act. I've just written a report for the Council of Europe − a large organization that is not just European but also includes observer members from all over the world − on artificial intelligence and gender, the potential risks and opportunities. So the work that we do is very much about supporting women and campaigning for more women in policy.


About the Author

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.
