Any tech leader looking to explore potential AI use cases must first assess their current data and infrastructure estate
As we head into 2025, CIOs face a growing list of priorities, many of which focus on creating value from their organization's data and on AI-led digital transformation. Yet taking full advantage of that data, and getting value from strategic AI investments, depends on understanding their data and technology estates from a business perspective. This is especially critical as AI becomes integrated with more IT systems: processing large datasets with AI requires the right hardware, networking and cloud-based storage solutions.
The global AI infrastructure market size was estimated at $5.42 billion in 2023 and is projected to grow at a CAGR of 30.4% from 2024 to 2030. Looking to 2025, Gartner predicts that “the surge of building out AI-related infrastructure by technology providers is driving high levels of spending on data center systems in Europe”. Any tech leader looking to explore potential AI use cases must first assess their current data and infrastructure estate.
The best way for CIOs to take a reality check on their architecture and delivery technology is a tech architecture review. Taking a comprehensive view of every element ensures the estate is streamlined and can serve current and future business needs, and clarifies what must be in place to take full advantage of the data and the AI being invested in.
For AI investments in particular, tech leaders need a full understanding of their data and its storage requirements, or planned AI projects risk failing as a result. An effective review assesses IT applications, data, integration and infrastructure. Looking at elements in isolation isn't enough; everything should be considered together to understand the full technology estate.
Tech environments have become much more complex, with more software and applications meaning more things to maintain and to keep secure. The more servers and components in your environment, the larger the surface exposed to cyberattacks and the harder it is to secure your data. This increases the risk of a cyber security incident or data loss.
Organizations should retain and classify their data consistently across their estate. This helps drive innovation and security across the business as a whole; without it, you can end up with a data swamp, where the management, governance and quality of data are lacking or entirely absent.
A technology architecture review often reveals the same problem being solved in more than one place in the tech environment, wasting effort on maintaining duplicate systems. This kind of complexity is a particular problem because it leads to errors that can significantly hold back AI projects.
Another issue with unchecked tech estates is the high cost of hardware and the spiraling cost of cloud storage, which can stop new projects getting off the ground. There is also the need for the right framework to successfully build an AI platform; otherwise dozens of developers could spend months doing the wrong thing because the data isn't in order.
While many organizations are exploring or implementing AI in business functions, there are key considerations tech leaders must address to get their infrastructure in shape for AI:
Senior tech leadership knowledge: Data is an area senior leaders could always know more about. It's a legal requirement to know all the personally identifiable information (PII) in your platform: where it is, how long you're holding it and why you're holding it. Classifying and labeling data is important, and this extends to information you might consider commercially sensitive, such as price lists, what you pay staff, what you charge customers and where contracts are held. There's a good chance not all senior tech leaders know the answers to those questions.
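As an illustration, the questions above can be answered from a simple data inventory that records each asset's location, classification, retention period and purpose. The sketch below uses hypothetical asset names and labels; real classification schemes vary by organization and regulation.

```python
from dataclasses import dataclass

# Hypothetical classification labels; real schemes vary by organization.
LABELS = ("public", "internal", "commercially_sensitive", "pii")

@dataclass
class DataAsset:
    name: str
    location: str        # where the data lives
    classification: str  # one of LABELS
    retention_days: int  # how long it is held
    purpose: str         # why it is held

    def __post_init__(self):
        if self.classification not in LABELS:
            raise ValueError(f"unknown label: {self.classification}")

# Illustrative inventory entries.
inventory = [
    DataAsset("customer_emails", "crm.eu-west", "pii", 730, "support history"),
    DataAsset("price_lists", "erp.finance", "commercially_sensitive", 1825, "sales"),
]

# A question a senior leader should be able to answer instantly:
# which assets hold PII, and where are they?
pii_assets = [(a.name, a.location) for a in inventory if a.classification == "pii"]
print(pii_assets)
```

Even a spreadsheet serves the same purpose; the point is that location, retention and purpose are recorded per asset, so the "legal requirement" questions have a single authoritative answer.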
Data process and compliance: To meet regulations, organizations should know what data needs storing, whether they are allowed to store and process it, who can see it, how it is classified and where it should reside. Just as you know that an email application runs locally on users' devices as well as in the cloud, and you know its terms and conditions and what it's compliant with, the same should apply to AI. It is important to know where data flows from and to, so that it flows seamlessly across all systems and is secure in transit. Organizations can spend a lot of money training AI on their data, yet many overlook whether it's available, backed up and stored properly, or haven't even calculated storage capacity requirements.
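The storage-capacity point can be made concrete with back-of-the-envelope arithmetic. The figures below are illustrative assumptions for the sketch, not benchmarks: a raw dataset, multiplied out by replication for durability, overhead for cleaned and feature-engineered copies, and projected data growth.

```python
# Illustrative capacity estimate for an AI training dataset.
# All figures are assumptions for this sketch, not real measurements.
raw_data_tb = 40             # raw source data, in terabytes
replication_factor = 3       # copies kept for durability and backup
working_copy_overhead = 1.5  # cleaned + feature-engineered versions
annual_growth = 0.30         # assumed 30% data growth per year
years = 2                    # planning horizon

current_need_tb = raw_data_tb * replication_factor * working_copy_overhead
projected_tb = current_need_tb * (1 + annual_growth) ** years
print(round(current_need_tb, 1), round(projected_tb, 1))
```

Under these assumptions, 40 TB of raw data already implies 180 TB of provisioned storage today, and roughly 304 TB within two years; a project budgeted only for the raw figure would stall well before training begins.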
Creating effective agile teams: The team implementing AI must also be invested in the other areas of the tech architecture, to ensure they're working cohesively. The creative work of designing and training AI must link back to the other systems. An Agile, iterative team approach ensures people communicate well and are all heading in the same direction; starting small and building iteratively and incrementally maximizes the chances of success.
A team that understands the implications of its development work beyond just writing and shipping code will be most effective. The IT team should be briefed on the business's vision and strategy for the next five years and the rationale for the project, rather than working in isolation. An effective tech architecture review involves and engages everyone, from the CEO to developers and everyone in between. Building an AI platform without identifying how it meets business objectives and the expected ROI is a waste of investment.
Security: Most importantly, the AI platform must be built securely, which means having the right DevSecOps practices in place, with security embedded throughout the development lifecycle. This involves data governance, secure data practices and cybersecurity, and extends to ensuring staff are trustworthy. Working with a digital partner that is ISO 27001 certified gives reassurance that the AI tool being built is secure.
Responsible software development: Having well-defined, controlled and risk-managed software development processes is essential to meeting customer and regulatory requirements whilst mitigating risk. An ISO 9001-compliant digital partner can ensure quality management systems (QMS) and continuous improvement in processes and products.
If a business is planning to implement new technology such as AI, it is essential to perform a comprehensive assessment of how everything currently works before building this new layer.
A technology architecture review will answer the question of "Where are we?" and provide the best foundation for the next question of "Where do we want to get to?" Ideally, this review should be performed at least annually, or whenever the business makes, or is considering, large changes to the system, such as introducing AI.
When carried out effectively, a review leaves the business with a highly performant system, high-quality data and less technical debt, and empowers its tech team to move cohesively in the right direction for successful AI implementation.