Schneider Electric CDO: Merits of an AI Hub-and-Spoke Structure

Peter Weckesser joined the AI Business podcast to discuss how his company put generative AI through its paces

Deborah Yao, Editor

October 11, 2023

18 Min Read

Peter Weckesser, chief digital officer of Schneider Electric, joins the AI Business podcast to talk about his company's hub-and-spoke AI configuration and shares its generative AI ROI. He also offers advice for other executives looking to deploy AI in the enterprise.

Listen to the podcast below or read the edited transcript.

Can you tell us about your role at Schneider Electric?

My role is called chief digital officer. Schneider Electric is a world-leading company in energy management, industrial automation and also industrial software. At Schneider, I have the pleasure to oversee a couple of things. One is I have all internal IT at Schneider. We have given ourselves the ambition that we want to drive more productivity than any of our competitors, across all of our business processes. And here, IT works very, very closely with our business process owners to identify where we can improve our business processes and how to deploy best-in-class tools to really drive this productivity improvement.

The second responsibility is around our product portfolio. The product portfolio of Schneider is really divided into three layers. One layer is what we call connected products. These are products that are really on the field level of our customers in industrial applications, in infrastructure, and also in buildings. The next layer up is called edge. This is where we have edge control and edge compute products that collect the information from the layer below and process it. And then the next layer up is called cloud and digital services, where information gets aggregated, gets provided to service organizations, and where we have software applications that generate insights from that data.


Now, as you can imagine, across these three layers there needs to be a seamless connection, a seamless flow of data. And my organization owns this technology backbone. That also means my organization works with all other product owners in Schneider to integrate their products with this backbone, no matter whether they are on the connected products level, the edge level or the cloud level. This backbone is actually called the infrastructure.

How does AI play into digital transformation at your company? What are the key pillars?

AI plays a major role and this role is rapidly increasing. We started our journey on AI with a very dedicated and focused activity about two and a half years ago. We actually created an AI organization; we nominated a chief AI officer in Schneider Electric. We also gave ourselves a structure for how we want to work on AI.


Before we created this organization, we did a survey in Schneider of what we already do. We found around 30 initiatives, which were all quite small and very fragmented, and they didn't know of each other. But we had a good number of activities around AI already. They were all small, as I said, so we decided we really needed to build more critical mass for AI and bring these activities together.

The first discussion for us was, what does it really take to create business success? We decided our approach to AI should not be a technology-driven approach, but a business and use case driven approach. Then we thought about what organization we should put in place that is best suited to drive these use cases. And we came up with a model that we call hub-and-spoke.

We created a hub, a central organization in Schneider that has all the technology expertise around AI, an organization that works with all the spokes. Who is a spoke? A spoke can be every function or every business. Actually, the use case owners are the spokes. And you can look at the AI hub as the enabler: providing the technology and data platforms, providing the expertise on how to set up an AI use case, and providing the technical expertise of data scientists to help the spokes build, and then operate, the use cases.

Besides this use case-driven approach, we also thought about organization, and talent and skills. In the early phase, at least in the first year of building this AI organization, one of our biggest challenges was to find the skills and the people in the market that we could attract to Schneider. This took some effort. We really had to build a dedicated recruitment organization to recruit the necessary talent and skills in the four key locations of our AI hub: North America, Europe, India and China.

We started our journey assessing these use cases. The use case assessment starts with a business case description, which means a description of the business problem that should be solved in this use case. It doesn't start with a technology discussion. Once you have that, the hub and spoke work together on how to solve this problem. The spoke owns the business case; the hub is the supporting function that helps deliver it.

What the hub does is provide the right technology platforms and the right expertise to then deliver these use cases. The hub also provides a bit of oversight and governance, because in a use case approach we really want to measure the impact of what we do around AI.

For more stories like this, subscribe to the AI Business newsletter.

We look at this impact in three dimensions. One dimension is an internal one, where we use AI across all of our functions, like finance, supply chain and sales, to drive productivity. Productivity means how we get better and more efficient in executing our tasks. We want to measure that. We strongly believe that the outcomes need to be measured, as we are investing quite a bit of money and resources. In this internal part, productivity is the key measure.

Then there's a second dimension, where AI becomes more and more part of our products. You can think of AI as additional functions and features of our products. That means a building controller all of a sudden has an AI-based algorithm to optimize the heating systems in buildings in a better way. The product owners in Schneider, what we call our lines of business, own the product portfolio and they also own the additional functionality that goes into the product.

The AI hub, again, works with these lines of business to create that additional product functionality, train the algorithms and provide the right technology platforms. Let me zoom in for one second on technology platforms, because this is an important aspect: here the AI hub also has a clear governance responsibility. Technology platforms evolve very rapidly. You could be using different technologies in every product, and we didn't want that. We wanted a certain governance where we standardize on some key technology platforms that we become experts on, that we know how to use and deploy quickly, and that can scale across many, many use cases. This is one responsibility of the AI hub: to govern the technology that's being used.

Let me give you some insight into the third dimension of that AI hub. We learned from our customers that they need more support from us across their solutions. We help our customers optimize their whole business processes: we provide them the whole solution, which actually consists of Schneider products, but we also integrate them and provide an AI-based optimization across full processes. This started as a bit of an experimental approach; we are now seeing it generate more traction, and a lot of customers are asking for this kind of support.

The hub-and-spoke approach is interesting. How do you personally work with this framework and what is your relationship with your chief AI officer?

Organizationally, the hub or the chief AI officer is part of my organization. The AI hub is the central entity providing expertise, providing process guidance, providing a common approach across these AI use cases and projects, providing a common technology platform. The AI hub works with a very, very large number of internal stakeholders, and also external stakeholders. I can say today literally every function, and every business, of Schneider is working with the AI hub. So the hub is a little bit the spider in a very large web within Schneider. And we were able to establish a very clear understanding of the roles and responsibilities between the hub and spokes, and this works extremely well.

We came to the conclusion that the initial model we created really works well for us at Schneider. It requires a little bit of discipline, because that discipline means that the roles between hub and spoke need to be understood very well. Everybody has to play by the rules that we have given ourselves.

What we already see today, about two years into this, is that we were able to generate significant value in the various work streams. Internal AI has delivered a significant value contribution of productivity improvement across these business processes. We also have a large number of products that are enhanced by AI functionality, and a relevant business volume that has AI-enabled functionalities today. And the third work stream, where we do some consulting and support, is starting to pick up and get traction.

Let me zoom in on the role of the hub as providing governance and also a controlling function in a use case approach. The use case starts with an idea. So we start with an ideation phase where we do a use case assessment, then we decide if we want to move forward with this use case or not. There are a good number of use cases where we said the idea is just not attractive enough from a value-generation perspective. The ones that passed our assessment went into a pilot implementation phase to see if it is technically feasible. This phase ends with a milestone where a very conscious decision is made: Are we really able to deliver the value that we're expecting?

The ones that pass the milestone go into a phase that we call industrialization. This is where we have basically validated the use case and the technical solution, and where we start to industrialize and commercialize it. We measure both the investments that we are making and the outcome. The outcome is additional product sales, particularly on the external side. On the internal side, it is the measurement of productivity that we can generate.
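The stage-gated process described above (ideation, pilot, industrialization, with a go/no-go decision at each gate) can be sketched as a simple state machine. The names and structure here are illustrative only, not Schneider's actual tooling:

```python
# Illustrative sketch of a stage-gated use-case pipeline:
# ideation -> pilot -> industrialization, with a go/no-go gate between stages.
from enum import Enum


class Stage(Enum):
    IDEATION = "ideation"                    # business case assessment
    PILOT = "pilot"                          # technical feasibility check
    INDUSTRIALIZATION = "industrialization"  # scale and commercialize
    REJECTED = "rejected"                    # failed a gate review


def advance(stage: Stage, gate_passed: bool) -> Stage:
    """Move a use case to the next stage if its gate review passes."""
    if not gate_passed:
        return Stage.REJECTED
    if stage is Stage.IDEATION:
        return Stage.PILOT               # value assessment passed
    if stage is Stage.PILOT:
        return Stage.INDUSTRIALIZATION   # feasibility milestone passed
    return stage                         # already industrialized


# A use case that passes both gates:
s = Stage.IDEATION
s = advance(s, gate_passed=True)
s = advance(s, gate_passed=True)
```

The point of the gates is that a conscious decision is recorded at each transition; ideas that fail the value or feasibility review exit the pipeline rather than lingering.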

Cost is a factor companies are concerned about for AI deployment. I know it depends on the project, but can you give me a sense of how costly it is?

It is a significant investment that we are making. There was a conscious decision two years ago (I can't share detailed numbers because we don't disclose this) but that's a significant investment. And this investment is going to increase now with the emergence of generative AI, or large language models.

I cannot share numbers, but I can share a ratio. What we are seeing across our internal and external use cases, and also what we are seeing across the latest developments around large language models and generative AI is a return on investment of 1-to-3. For every dollar that we invest, we see a productivity improvement of $3 that is possible. This is a huge opportunity that we cannot miss. From my viewpoint, no company can ignore this because there is so much potential being made available through these technologies.
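The 1-to-3 ratio can be illustrated with a trivial calculation. The dollar figures below are invented for the example, since Schneider does not disclose its actual numbers:

```python
# Illustrating the 1-to-3 return-on-investment ratio with hypothetical figures.

def roi_multiple(investment: float, productivity_gain: float) -> float:
    """Dollars of measured productivity improvement per dollar invested."""
    return productivity_gain / investment


# e.g. a hypothetical $2M program yielding $6M in measured productivity
ratio = roi_multiple(2_000_000, 6_000_000)  # 3.0
```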

That's astounding – 1-to-3 ratio. Can you share how you incorporated generative AI in an industrial company like Schneider Electric? And should you?

The answer to ‘should you?’ is clearly yes, you have to. But you have to be selective. For me, generative AI is more than the trend of the year. I believe it is the most disruptive innovation since the invention of the internet in the '90s. Schneider believes, and I personally deeply believe, that generative AI will totally change the way we work. It might not change Schneider's product portfolio that quickly, but it will change very quickly how we work in operating Schneider.

What we did when generative AI came up earlier this year was we decided to take a very close look at it to understand where generative AI can really help us. We asked our AI hub to work with all the functions and all the businesses to assess what is the potential of generative AI for us as a company.

The disciplines that will see the biggest benefit out of generative AI are software R&D, and sales and marketing. Let me explain why that's the case. Software R&D is very much about generating code and generating test cases, and generative AI is extremely well suited to support software engineers and the software engineering process to drive productivity. We started pilots very early to use tools such as GitHub Copilot. This generates somewhere between a 15% and 20% productivity increase in software engineering. This is the result from early pilots. So we made the decision this is a productivity boost that we can't miss.

We have roughly 10,000 people in Schneider who are in software engineering and write either code, product requirements or test cases. If you can generate this amount of productivity with this very large number of people, then this is a pretty significant improvement for the company. Now, our goal at Schneider is clearly not to reduce the number of our R&D people, but to have a faster time-to-market with the products that we're generating. So this is one of the most promising application areas of generative AI.


Secondly, generative AI will replace our classical chatbots. We are using chatbots in customer service to support our customers, to get answers to their questions around our product portfolio, around pricing, and many other questions. Now, with the capabilities of generative AI, I strongly believe this will totally replace classical chatbot approaches because it's simply much better, particularly if you train the generative models with your own proprietary information and the knowledge that you have in the company.

We are now starting to augment the large language model with Schneider-specific information to build large language model-based chatbot technologies, which are significantly better than in the past. We already see today that this helps us provide better service to our customers: better information, quicker answers to our customers’ questions, and a solution to the problem that we couldn't find and hire enough knowledgeable people for our customer service.

Then there's an additional use case in sales and marketing, which is very much around creating content: product descriptions and collateral around our products and systems. What do we need that for? We, of course, have a strong web presence. Our customers are searching for our products on the web; they shop online. It is our ambition to provide a better experience and provide more up-to-date, accurate product information in a better way. Very clearly, we can see that these generative AI technologies can significantly help us do a much better job in this area. This requires not only the off-the-shelf generative AI; you need to augment the large language model with company-proprietary information.
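The pattern of augmenting a model with proprietary information is commonly implemented as retrieval-augmented generation: relevant internal documents are retrieved and prepended to the prompt before it is sent to the model. A toy sketch follows, with naive keyword-overlap retrieval standing in for a real embedding search; the documents and names are entirely hypothetical:

```python
# Toy retrieval-augmented generation (RAG) sketch. A production system would
# use embedding-based retrieval and an enterprise-grade model endpoint; here
# naive keyword overlap illustrates the prompt-assembly idea.

def score(question: str, doc: str) -> int:
    """Count question words appearing in the document (naive relevance)."""
    q_words = set(question.lower().split())
    return sum(1 for w in doc.lower().split() if w in q_words)


def build_prompt(question: str, docs: list[str], top_k: int = 2) -> str:
    """Prepend the most relevant internal documents to the user question."""
    ranked = sorted(docs, key=lambda d: score(question, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"


# Hypothetical internal knowledge-base snippets:
docs = [
    "Model X breaker supports 240V circuits and ships in 2 weeks.",
    "Our returns policy allows exchanges within 30 days.",
    "Model X breaker pricing starts at list price tier B.",
]
prompt = build_prompt("What circuits does the Model X breaker support?", docs)
```

The assembled `prompt` would then be sent to the chosen LLM; because the proprietary context travels inside the prompt, the enterprise-grade hosting Weckesser describes later matters for keeping it private.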

Are you using an open source large language model or what LLM are you using?

We are experimenting with multiple large language models. We played with OpenAI and Bard in the early phases. Now we believe that we need to go to an enterprise-grade large language model, where anything we ask the model, including any Schneider proprietary information we put into the prompt, is not shared with the outside world. So this means we need to go with an enterprise-grade model; for the most part, we are working with Azure OpenAI, the Microsoft solution. But we are looking at all the other solutions in the market as well. … Also, we are looking at China. China is a very relevant market for us and we are looking at local solutions out of China.

How do you handle hallucinations? I don't think you can take it out entirely for generative AI. And also cybersecurity.

I don't have the full answer for that. I think first you need to acknowledge that any of these technologies also has some challenges that come with it. If you think of AI risks, and particularly generative AI, it is hallucinations, it is cybersecurity, it is biases that might be in the models, it is copyright infringement. We are aware of these challenges; we can't rule them out. But we have put the right responsibilities in place in Schneider around cyber, the ethical use of AI and the right governance models. This doesn't mean you will never have a problem, but you have the people who deal with this in a professional way.

Does that mean you have a separate governance board, or is it something that sits in the AI hub?

It actually doesn't sit in the AI hub. We have a governance organization, and now with the emergence of generative AI, we have made the decision that we need to empower this governance organization to also deal with these challenges of AI. So we have a separation of church and state. The AI hub would always have the ambition to deploy more AI and generate more AI use cases that make sense for Schneider, and they should not be governing themselves. This is why we made the very conscious decision that governance for AI will not sit in the AI hub but within the governance organization. But it is a dedicated role in the governance organization.

Can you tell me about some of the use cases that are not suitable for generative AI or other forms of AI?

Within our product portfolio, we mostly use non-generative AI technologies. I'll give you examples on this in a second, but we have at least two areas where we are looking at generative AI. We have a portfolio of products which we call PLCs, or programmable logic controllers. These are basically embedded devices that control a manufacturing line or a machine. They need to be programmed in a programming language that is very specific to these PLCs. We are now looking at whether we can use generative AI to help our customers get more efficient in how they generate code for these controllers, and that means we need to train the large language models with our own code repositories.

One other example: we have engineering tools for our customers to build electrical schematics. Here we are looking at how generative AI can support our customers to more efficiently generate these schematics and single-line diagrams. The first results are very promising, though we have not released this as a product feature yet. These are the two very specific areas where generative AI most likely will become part of our portfolio.

If I look at other applications in our portfolio, we actually use many other AI technologies which are not generative AI technologies. Generative AI is not really the solution for everything. When it comes to control algorithms, for example a building controller that regulates the temperature in a building and optimizes energy consumption while keeping that temperature stable, generative AI is clearly not the solution for such a use case.
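The contrast can be made concrete: a temperature controller is classical, deterministic control logic rather than a generative model. A minimal sketch of a proportional controller, with purely illustrative gains and setpoints:

```python
# Toy proportional controller: deterministic control logic of the kind used
# for temperature regulation (not generative AI). Gains are illustrative.

def control_signal(setpoint: float, measured: float, kp: float = 0.5) -> float:
    """Proportional control: the output scales with the temperature error."""
    error = setpoint - measured
    return kp * error


# Room is 2 degrees below the 21C setpoint -> positive heating command
u = control_signal(setpoint=21.0, measured=19.0)
```

The same inputs always produce the same output, which is exactly the predictability a building controller needs and a generative model cannot guarantee.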

How does sustainability play into your role at Schneider Electric?

Sustainability is one of our strategic pillars in Schneider and has been for, I think, 17 years already. That was long before many companies even thought about sustainability. It's really part of our DNA and we have improved and evolved our sustainability program and ambition year over year. We have a very clear sustainability commitment for Schneider as a company. We have also created and built a sustainability business where we work with our customers to help them through their sustainability journey.

Our sustainability business really starts with a consulting-based approach where we help our customers analyze their business processes and identify the potential of reducing energy consumption, reducing carbon emissions in their business processes, and then helping them in measuring these and their improvements. We then go in with our product portfolio to help them improve energy consumption and carbon emissions. Now we have a pretty strong product portfolio that starts with the whole energy management portfolio, our metering portfolio, and so forth.

We have built strong domain expertise and knowledge of our customers' business processes so that we really can consult with them. Part of that portfolio is now enhanced with more and more AI-based algorithms. Sustainability is a very important strategic value proposition of Schneider in our go-to-market in working with our customers, and it's equally important when it comes to the sustainability ambitions that we have set for ourselves.

What advice would you give other executives in your position at other companies in terms of approaching AI and generative AI?

My advice would be to approach AI as a business opportunity and not as a new technology in the first place. Bundle your AI activities: if they are fragmented across the organization, defragment them and build critical mass around AI. Always use a business case-driven approach. I would also recommend standardizing on not one but a few technology platforms. It is not only one discipline of AI, so you need multiple platforms. But do not introduce into your organization everything that is being offered in the market. Make conscious choices on the technology platforms, and build around this use case- and business-driven approach a governance that allows you to drive your AI program as a business improvement program.

About the Author(s)

Deborah Yao


Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.

