Tim Russell joins the AI Business Podcast to discuss generative AI security, FOMO and confusion around the tech

Tom Taulli, Contributor

February 28, 2024


Tim Russell, the U.K. chief technologist of CDW, a Fortune 500 integrated IT solutions provider, joins the AI Business Podcast to discuss customer concerns around generative AI, including security, FOMO, and confusion about its use.

Listen to the podcast below or read the edited transcript.

Tell us what you do at CDW.

I'm the chief technologist for the modern workspace in our U.K. organization. I lead our U.K. modern workspace engagement, covering everything from productivity and collaboration through to IoT (Internet of Things) and smart technology.

CDW has over 100,000 products and services. It's a massive organization with 250,000 customers all over the world. So given that you have this big base, what do they talk about when it comes to generative AI? What are some of the themes that are emerging?

It is a hot topic. A lot of people have their own ideas, but quite often they ask what everyone else is doing – that is logical. With the U.K. subset of customers I speak to, it really falls into three areas. One, security of data: Where is my data? Who has access to it? How do I prevent inappropriate use or release of that data?

Second is FOMO. (Generative AI) is such a big buzzword (and there is a sense that) everyone is doing it. (The questions that arise are) ‘Are we behind the curve? Are we late?’ The answer is that just because everyone is talking about it does not actually mean everyone is doing it, or is in any way ready to do it. And if they did start doing it, it would probably be a bad idea, because the background work has not been done.

The third one is confusion. Generative AI has been at the forefront of awareness because of ChatGPT, but many businesses could not fully grasp why they needed a generative capability. There are a lot of … notable conversations around time to release. When should we and how quickly should we get there? Which engine do we use? How do we know it is the right one? These conversations, I think, are going to continue for some time, especially as development continues.

The last thing is we ask our customers: ‘How do you benchmark AI effectiveness? How do you know when you are doing it well enough to see that AI made a difference?’ We want to drive wider adoption of things like RPA (robotic process automation) without creating a shadow IT environment. If something has been going on for a long time, and we want to drive efficiencies and improve individual quality, how do we know we are being successful? How can we be objective about that? So those are the three main topics − but then also, if you are on that journey, understanding how you measure your level of success.

Those are all excellent points, especially the one about confusion. Every day there is something new when it comes to generative AI. When it comes to CDW's own journey in AI or generative AI, what are the lessons that you have learned?

Technology for the sake of it is not a healthy business decision. You need to be really clear on what you are going to get out of AI, before you even start looking at the who, the what, the where, and the why.

CDW has a long storied history that goes back to the PC revolution of the 1980s. But just like any company, there are legacy systems that have been built up over the years. So what does that mean when it comes to implementing AI?

You are never going to fully remove legacy, or you are going to find that it has been moved into new areas. It should not stop your journey. It is not always going to be a case of rip and replace for the sake of a new tech. Just doing it for the sake of it is not really the right business decision. And I always go and look at things like contact centers; they are not things you change quickly, because they are the absolute cornerstone of your business, particularly where you are having customer interactions, either internally or externally.

So those legacy systems have been born out of necessity, but they have remained because of a reliance on them. So when it comes to the journey to AI, first ask the why and how you can reduce risk to your business from changing a legacy environment. How can you implement or include levels of AI? And AI is not something you buy and it is red or blue; it is a level of integration that you decide on, that is relevant to your business to deliver against your business goals.

When that is put up against a legacy system, we have to understand what we are going to get out of combining these two technologies, or in some cases (when) we cannot make these two work together, we have to accept that we are going to start a new journey. But take the experience, the knowledge and the reason for the existence of the legacy system to actually improve our level of AI adoption and development in our next level, technical solution on that journey.

Samsung has recently announced that they are implementing generative AI in their phones. So it will do real time translation for foreign languages, which actually seems like a pretty neat feature. It can help write texts and can help with images and so forth. How could this impact smartphones and is it a new avenue for growth for that market?

I was actually in Japan recently, with an Android device with the capability to offer real-time translation. … I was in a foreign country, not connected to the network, and the device still had the capability to translate. The NPU, CPU and GPU could do all of this on-device. It could translate, in real time, the language being spoken around me. Not only that, I could look at a menu and use the device in real time to tell me what that menu was offering. And if I wanted, I could also use the device to take my voice and speak in the native language to the people there, telling them what I wanted or what I needed. I could not judge the accuracy, but considering people were not confused, it was amazing. …

(But) it is not just about translation. The AI capability in that device can also look at you as an individual and how you use the device: which applications to offload, which to keep in the background, how you use your phone throughout the day. For example, how can it protect the battery charge so that on your commute home you can watch the Netflix film you have downloaded? AI will start looking and adapting. Neural learning comes in to change how your device provides its services to you. That is going to start being a game changer in how these devices are used.

You have on-chip capabilities – the CPU, GPU and NPU (neural processing unit) within a single chip, within an extremely small, powerful form factor that people are taking for granted in their hand. … All of that capability will start delivering a massive impact to us as end users. And as they start getting better and better, we are going to find more and more use cases for this power that is now in our pocket.

AI PCs are coming down the road and Intel is building generative AI capabilities into its own chips. AMD and Qualcomm are doing the same. What are your thoughts on the changes in the PC market especially laptops?

It is going to go beyond the hardware. Take a look at how Microsoft is positioning Windows 11 and Copilot, which is Microsoft's (AI assistant). Windows 11 has become synonymous with Copilot; it is baked in, and that gives us an indication of where the PC market is going. Because the OEM providers use Microsoft software, they are going to align their development to this.

Now, if we encompass what I have said about smartphones (intelligent power management, application optimization and so on), I believe we can expect a more overarching, AI-supported interface for users. AI in a lot of cases is application-focused or specific. … I believe we are going to move to an application-independent approach, where your interface, your operating system, your mobile device, whatever it might be, becomes your overarching AI capability that looks across your applications.

It looks at how you are operating across multiple platforms, and can deliver productivity improvements, productivity enhancements and advice on your way of working. Imagine you are using your mobile device, your laptop or your desktop PC, and you have a repetitive action: you copy an email into a calendar invite across two applications. You open Gmail, copy an email, and put it into a calendar invite in Outlook.

AI could find ways to automate that process to make you more productive because it sees an email with a subject line addressed in a certain way. It can apply this level of capability that says, ‘Hey, I can do this for you.’ That capability being baked into the device, be it a PC, tablet or phone, I believe that is where we are going to start seeing the biggest change.  

Humane is a buzzy startup that created this AI pin that is a wearable and there are other companies developing these new form factors. What are your thoughts on this convergence of AI with these new devices and rethinking the form factor?

We are going to see generational change as we move through the next five to 10 years. People are going to become more used to not having a physical phone in their hand … or not actually having to look at a screen, but having it omnipresent, always available. And Humane is playing quite well to that. … I honestly think they are on to a good thing. It is just getting the level of adoption up.

For AI, it is about adoption, readiness and security. If I look out a little bit further, I see NPUs becoming personalized. It is a different way of learning and processing, and very important to productive AI. What if the NPU became personally identifiable with you? What if it was virtualized across all of your interaction devices? What if it became synonymous with you, regardless of where, how or on what device you interact? Like a blockchain-style NPU profile that is linked to you and absolutely specific to you? From its first point of interaction, it has learned who and what you are and has been molded into a second you.

What if the NPU gained that capability across all of those devices? How much more powerful would we as an individual be if we had that level of capability? For some, this may be a little bit science fiction, a little bit out there. But we are looking at how the NPU learns as a neural unit and the capabilities that come with that. And we do not want to lose it on one device and not have it on another.

So a lot of the progress we are going to see in the short to medium term is building up to this capability of an identifiable NPU profile that will follow you across multiple devices, giving you the same look and feel. This could also extend to interactions with government bodies and other organizations, because you become known for your preferences and how you like to work − and as such, I believe that is honestly going to make a difference.
