Tech Mahindra's Chief Digital Services Officer on Scaling AI in the Enterprise

Kunal Purohit joins the AI Business podcast to talk about the opportunities and challenges of scaling AI in the enterprise

Deborah Yao, Editor

December 1, 2023

15 Min Read

Kunal Purohit joins the AI Business podcast to talk about scaling AI for the enterprise. He cites the four stages of digital transformation and the factors, both technical and cultural, that hinder faster deployment. He also names the most common mistakes enterprises make in adopting AI.

Listen to the podcast below or read the edited transcript.

Tell us what you do at Tech Mahindra.

In my current role at Tech Mahindra, I wear two hats. Hat number one is where I bring to bear the power of emerging technologies to create new solutions that our clients then deploy in their environment, and drive business benefits from there. We help customers move from a traditional operating model to a digital operating model. And now more so, moving them into a cognitive operating model and using more and more insights and the power of AI to be able to deliver outcomes faster for higher client satisfaction.

I'm also on the Executive Council for initiatives where every time there is a new idea within the company, within tech, to create new businesses of our own − and where we can set up a platform and scale it and monetize it over a period of time − we try and enable that for the organization. So, over the last three to four years, we have launched about four such initiatives, which will hopefully create value for Mahindra as a company over the lifespan of these ventures. So those are the two things that I do.

Digital transformation has been a buzzword for a while. But AI really gives new life to that term. How do you combine AI with digital transformation to drive revenue growth?

Enterprises over the years have realized that they always had data, but they had never really looked at that data the way they are starting to look at it now, in terms of being able to draw insights and to trigger actions based on those insights. Much of the effort over the last decade or so has been on automating processes, creating applications that are more scalable as enterprises went global, creating more customer satisfaction through better engagement, and creating value at that zone between the customer and the enterprise.

And now they're realizing that with the power of AI, they can use data and not just monetize some of it, but also create insights that can get into a higher level of personalization, higher level of customer service, and higher level of predictability − whether it is for operations, customer service and what have you.

So the ability to infuse AI across the digital stack is starting to become all-pervasive, whether it is in infrastructure, application support, network operations, client engagement, customer service, or sales and marketing. Across the stack, the ability to infuse AI has been scaling over the last three to four years and has become quite significant, with some of these newer techniques around generative AI and discriminative AI becoming very consumerized over the last month or so.

Where are your clients on their AI journey today?

We engage with enterprises all the time to assess their AI journey maturity. At a very high level, the way we have seen enterprises traverse this journey is that they've always had some automation CoE (Center of Excellence). Typically, enterprises have put in efforts, and sometimes a dedicated team, to bring in new solutions around automation. Over the years that became a team supporting Intelligent Automation − bringing in slightly more AI-enabled products and leading to Process Automation, enabling automation across various aspects of the business value chain.

So the transition has been from automation to Intelligent Automation. This transition typically was the RPA, and then the intelligent RPA, side of the world, and that traversed over to, ‘hey, can AI use cases now be done through that particular effort?’ Therefore, extending Intelligent Automation to help build use cases that incorporated AI/ML techniques in the enterprise, created models, and tested them out. After successful tests, efforts were made to launch across the enterprise. So that's the third phase.

Now, the latest phase of maturity is where enterprises that have gone deeper with AI are extending those efforts to explore the possibilities of generative AI − not just traditional AI techniques − to explore new use cases and also to increase the productivity of the organization, or of the teams that are doing coding and engaged in various activities. So those are the four steps of maturity that we have seen.

If you look at where most of the enterprises are, they are between steps two and three. We have a one to five scale across these four parameters, and we believe that most enterprises today are between Intelligent Automation and the use of AI techniques for the majority of their effort. Some companies that are slightly more forward-looking, and have the foresight and the ability to spend, are taking those significant steps into generative AI techniques as well.

That’s the approach we see most enterprises taking, and many of them struggle to scale AI. Many have tried it successfully, internally, for one division, one warehouse, one business unit, but sometimes the ability to manage that and use the same model across the enterprise becomes a bit challenging, and that is where the various frameworks we bring to bear come in as support.

Why are companies struggling to scale AI? And what are some possible solutions?

There are various parameters across which you could measure a company's efforts and their struggle to scale AI. They do not end at the level of technology. Yes, there are some aspects of technology that are a challenge, but data culture can be an issue. How pervasive is data-driven thinking in the organization? Sometimes that is also a crucial factor, right? Is top management making decisions based on gut feeling or are they making decisions based on data? … That's one area of challenge which is not necessarily tech-driven, but behavior-driven. How do you find some champions in the organization who can … show success? Then that whole culture will grow across the organization.

The second area where enterprises struggle is fear: not doing it because something might go wrong. We have seen many, many customers who are not even starting to experiment. With some of these technologies like generative AI today, the sooner enterprises get on board, the sooner they can experiment, the sooner they learn and the sooner they can start to drive benefits as the technology itself matures.

Yes, there are new factors and there are risks associated every time a new technology or product is launched, but this fear that stops enterprises from taking that step is a very critical issue. We have seen that enterprises first want to finalize all the business use cases and figure out what the benefit and the outcome will be if they implement all of this at scale, and only then do they start taking those first steps. Whereas if you do a few experiments, and if you get a few people working on areas that can create high impact and where the tech is viable, then you can get some confidence going across the organization.

Third is the various levels of challenges on the tech side. Sometimes you build a model for a warehouse: that warehouse had a certain level of technology, you went with vision tech, and that had a certain amount of infrastructure, with devices from which data was labeled and captured. Then when you try to use the same model across other warehouses, the technologies and the various parameters they support don't integrate well, and the outcome and the efficacy are found wanting. At the same time, there is the platform to manage data operations: how do you continuously keep the data updated? How do you keep the model updated? So model management and all of that also becomes a bit of a task.


At most enterprises, data scientists want to work on the best part, which is building the algorithm, building the model. But 90% of the grunt work is in making it successful and sustaining it. Many enterprises are still struggling to understand the magic of how to scale it. Quite a few of them have achieved success in one or two areas at some level, but there is a continuous striving to find people, to have platforms and to be able to scale model X across the enterprise, or across multiple use cases, for that matter. How do you systematically − whether it's technique-based AI or process-based AI − use these to scale across the enterprise? So sometimes tech and scalability are seen as a challenge.

And then of course, you have the people side of the world. How many people do you have who understand architecture thinking, who understand some of these newer techniques? How many of them understand the ability to cross-link, the benefits and the use case and create an outcome that is successful, and not just an academic outcome? A lot of times you end up looking at academic outcomes, which are not deployable in production. And then you kind of lose confidence.

And lastly, there is one other factor, which is significant. … The cost of training and the cost of building versus the cost of outcome, and how much money enterprises have today to deploy AI, becomes a big, big question mark. So the ability to have a view on cost, and yet be able to take those first steps and those second steps − from experiments to POVs (proofs of value) to scale, if that is the journey − how do you consistently do that? That’s where a lot of enterprises struggle.

Those are really legitimate concerns. So what do you say to these executives who bring up these worries?

We were perhaps the first in the industry amongst our peers to launch something called the Generative AI Studio … that enables enterprises to take those first baby steps in understanding the power of generative AI and doing some experiments … without putting a lot of money behind building new models. You look at 30+ capabilities that we put in the studio including code generation, text, content generation, vision, video and image − these six variables, and we tell them to experiment.

At the time that we launched it, several enterprises had challenges with understanding the critical elements of generative AI: Will it create data privacy issues? How secure is the platform? Am I going to send everything back into the cloud? Will it send back some phishing, ransomware, etc.? Will there be copyright issues? Who is responsible if somebody raises a question on the copyright? And the code that may be generated − is the quality of that code good? So these factors were top of mind for all our customers.

(We told them that) while all of the technology evolves, if you could test it with some of your use cases, then you will start to increase your maturity levels. That is where we had a tremendous amount of traction. Quite a few wins since then were regarding these specific use cases.

Can you share some examples of use cases your clients have launched? And how was the performance?

There are many horizontal use cases, which are generally applicable across industries, and then there are very nuanced use cases that are specific to a particular vertical or industry. To give you an example of horizontal use cases, we are seeing almost all enterprises looking at knowledge search or knowledge management for generative AI. How can I engage my employees, my customers, my channel partners better? How can I manage the knowledge in whichever format it is in, and use techniques to provide answers that are slightly more empathetic to the person who is asking, and more accurate? We have seen chatbots not being accurate enough. And then that goes into individual verticals.

So for one of our group companies, a resorts company that provides vacation resorts to consumers, we created a model. There are many queries around, ‘what does the facility have? Does it have a heated swimming pool?’ And in the user experience, whether they are asking questions on the website or the mobile app, there is a bit of to and fro, and sometimes they would not get accurate answers. The efficacy of that engagement was close to 63%. We went ahead and did a pilot to bring in more generative AI techniques, and the efficacy of the model in responding to questions − of course, we trained it with some detail − is significantly higher, at 91%.

At the same time, we are also now involved in upgrading their data infrastructure and hardware, because initially there were some performance issues. And this is the journey of many customers: the model performs, but the performance of the platform and the application goes down, because there are latency issues, and that creates disruption. So that is one example where aggregating documents that have answers to queries, aggregating various forms of insights, and then being able to sift through and provide answers is one big horizontal use case we are seeing.

If you look at verticals, we have had oil and gas companies come to us and say, ‘we spend a tremendous amount, top dollar, in drawing up contracts. A very significant amount of money goes into engaging high-cost resources, like lawyers. Can a base contract be generated by using some of these techniques?’ So a very large oil and gas pipeline company is now engaged in extending their automation … to say, ‘let's build some of this’, so they can save money and not have to hire law firms to draw up contracts from scratch. That is an example nuanced to that particular industry. You have models helping them build 60% of the contract upfront, and you do need to bring in human effort for accuracy and the final closure.

… We believe that enterprises have been putting a lot of money into just aggregating data in a tech platform and paying subscription license fees for that platform − that will also go away. They will end up saving money on platforms that only provide integration or collation of documents in one place, and they can use this to provide insights and trigger actions. We are seeing some of that happen in the contact center agent use case, for example, where the agent has multiple applications, multiple listings to go through.

What are some mistakes you have seen clients make, as they get deeper into AI or get started with generative AI?

One mistake is they take too much time to start because if something goes wrong, … (they ask) who will be responsible? … To a certain extent, there is significant development in that area. You probably heard that Microsoft said they will indemnify customers if GitHub Copilot has some copyright issues. … The second one is not fully understanding how much effort goes into creating the desired outcome versus (just seeing the first step in the process). You do need a lot of training, you need models to be fine-tuned.

… (We tell customers,) ‘Here's the start, and here's how we would do the first few things properly.’ And then implement a few things very quickly immediately after it. That's very crucial as well. We have seen that it comforts clients. … Otherwise, the customers end up having a bad experience, and suddenly you are engaging with technology that has not been trained well.

You oversee Garage4.0, your startup incubator at the company. How do you pick startups to join this program? And is the goal an IPO or you want to absorb them into Tech Mahindra?

Being a technology company that provides technology and digital services to our clients, we also realized that there is a market to create: software that creates new platforms to solve newer problems. Considering the way India as a country is growing, our chairman had a vision: Is there a possibility of employees with new ideas coming forth at Tech Mahindra, with the Mahindra Group providing seed funding? These ventures will sit outside of the Tech Mahindra fold and operate as independent startups. And then, as they scale, they will go to external funding rounds. The vision is to have, over the next seven to 10 years, anywhere between three and five such ventures operating in segments where we believe capabilities can be created externally, and then, over a period of time, either brought inside or the group exits to create valuation benefits.

So both options are true. In some cases they are driven by the investment thesis, where we believe that an area is going to expand with India as a target population or target market, and therefore, let's build and then go to external funding. That is the fundamental premise − create value … and we could either monetize it or bring it back into the Tech Mahindra fold to help existing customers.


About the Author(s)

Deborah Yao


Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.

