With enterprise data volumes continuing to increase at an exponential rate, the need to better understand and utilise the insights drawn from a range of structured and unstructured data sets has never been greater.

Organisations across a range of industries – including finance, legal, healthcare, energy, and even the public sector – are striving to draw insights from their data that can better support decisions on investment opportunities, document processing and medical treatments, or simply help them to be better informed.

Hewlett Packard Enterprise has been a pioneer in this space, enabling over 80% of the global Fortune 500 to draw actionable insights from their data, and with Haven OnDemand it is taking this capability to the Cloud.

We spoke with Dr. Sean Blanchflower, VP of Product Development at Hewlett Packard Enterprise, to find out more about the platform's capabilities and the opportunities it opens up for businesses.

Sean, I understand you were involved in the development of IDOL in the early days of analyzing unstructured data. Can you tell us a little about this background and how it all started?

I joined the industry when a lot of emphasis was starting to be put on creating new technology, and I worked on the creation of a system that brought pattern-matching techniques to human information and consolidated them into a scalable, versatile server. That's when IDOL was born. I then joined HPE in 2011 when it acquired the company I worked for. Since then, plenty has changed and the scale and expertise of HPE has helped significantly. We're now in a position where we can call on parts of HPE that would never have been accessible to us before, such as the huge HPE Labs team, or the expertise in the rest of HPE Software. However, we've also ensured that we've stuck to our roots in the IDOL team, keeping the same setup that we used to create the technology in the first place; we'd be crazy to change that!

How have you seen the Enterprise appetite evolve over the past couple of years in terms of adopting AI enabled solutions to support their data analysis?

AI is certainly hot at the moment, and is only going to grow further! But I also think AI is largely misunderstood, as for many people it brings to mind the image of robots doing most of what humans can do. At HPE, we don't see it like that and instead talk about Augmented Intelligence. Our aim has been to equip humans with the information to determine the best course of action themselves – not to take the decision away from them. Speaking for myself, I don't want a computer to decide where I'm going on holiday this year, but I'd love a system that knows the kind of places I like going and throws up-to-date suggestions at me. By approaching it this way, the technology helps me make a smarter decision, but I'm still part of the process. And judging by the reaction we get when we show such systems to enterprises using IDOL, the market agrees with this approach.

Haven OnDemand is the Cloud platform that incorporates both the IDOL and Vertica products – could you explain a little more about how they work?

IDOL and Vertica make the perfect team, in that Vertica – HPE's massively scalable columnar database – is able to ingest and analyse machine and structured data, while IDOL handles the rest – the video files, audio files, images and so on: the unstructured data. The two combined created Haven, the only platform that can bring its huge range of analytic capabilities to 100% of the world's data. With Haven OnDemand we've made all of this available in the Cloud, meaning the lead time to getting a project underway drops to minutes and your system always keeps up with demand.

Which verticals are you gaining most traction with in the push toward Cloud versus on-premises deployments?

There are far too many to mention! The main common thread we see between our Cloud engagements is that we deal far more with the technical users – the application developers, the data scientists, the machine-learning experts. More and more, these individuals are getting a say in their company’s choice of platform.

Are there any use cases that you can reference for how certain businesses are using the platform already?

We're working with a major international bank that is revamping its website search to improve customer experience; it found that Haven Search OnDemand allows it to create things like promotions without the hassle of having to host the infrastructure. And one of the world's largest media companies wants to use our speech capabilities to make the audio on its radio stations searchable, again without the need to set up the servers.

What are the barriers to entry for this level of deployment – are they centred on data privacy and security?

The first thing to say is that we really don't encounter privacy and security concerns; companies have been archiving their emails in the cloud for over a decade, and thousands of the biggest companies store their HR records with third parties. If anything, our main barrier is one of educating potential customers, as many don't realise that this level of analytics is now available, or the benefits it can bring.

Sean, can you give some background on how the Yad Vashem partnership came about and what the Hackathon has enabled for The Yad Vashem centre?

It began over a year ago, when a partnership was set up between the centre and HPE to work on ways to make 125,000 testimonies gathered over 70 years more accessible. Even digitising the content only solved part of the problem, as the data contains a lot of unconnected facts. Often these facts relate to the same person even though the name may appear quite different; other times, facts appear to refer to the same name but actually describe a different person. In technical terms, this is an entity resolution problem, and with over 100,000 sources for the facts it may be the most challenging entity resolution problem out there. But it's looking good, as we're currently getting 96% correct identification. Yad Vashem are thrilled, and on that basis they expanded the relationship and trusted HPE to run a hackathon.
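To make the entity resolution idea concrete, here is a minimal, purely illustrative sketch in Python. The threshold, the record fields, and the matching rule are assumptions invented for this example – this is not HPE's IDOL implementation, which works at a vastly larger scale – but it shows the core trade-off: names that look different can belong to the same person, while identical names with conflicting attributes must be kept apart.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two names, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def same_person(rec_a: dict, rec_b: dict, threshold: float = 0.8) -> bool:
    """Toy entity-resolution rule: names must be similar enough, AND any
    attribute both records share (here, birth year) must not conflict."""
    if name_similarity(rec_a["name"], rec_b["name"]) < threshold:
        return False
    ya, yb = rec_a.get("birth_year"), rec_b.get("birth_year")
    if ya is not None and yb is not None and ya != yb:
        return False  # similar names, but provably different people
    return True

# Invented sample records: a spelling variant and a name collision.
a = {"name": "Jakob Kowalski", "birth_year": 1921}
b = {"name": "Jakub Kowalsky", "birth_year": 1921}  # variant spelling, same year
c = {"name": "Jakob Kowalski", "birth_year": 1935}  # same name, different person

print(same_person(a, b))  # spelling variants resolve to one person: True
print(same_person(a, c))  # identical names, conflicting years: False
```

A real system layers many more signals (places, dates, family links) and source-reliability weights on top of this kind of rule, which is what makes the Yad Vashem case so hard.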

For the hackathon, Yad Vashem took their testimonies – which make up the best part of a petabyte of data – and fed them into IDOL, with APIs to our functionality that hackers could use for retrieval and analytics. The hackathon projects demonstrated the type of applications and user experiences that can be created with this data. And, just as importantly, we saw that there's a community of top-class developers who are engaged with the material and want ongoing access in order to create even more complex projects.

How can this use case be applied in other settings?

Yad Vashem is certainly not the only organization out there with an archive of information whose full potential is not being realised! More and more organizations are realizing that if they make their data available, someone will do something amazing with it, and we've already run dozens of hackathons using our Haven OnDemand platform in partnership with other companies.

What's next for the winning project of the Hackathon, 'Testimonies Become Searchable'?

The winning project dealt with access to the many audio and video testimonies collected by Yad Vashem, parts of which were transcribed. The team created technology that uses IDOL to align the text of the transcription to the media file so that a search will be able to jump to the correct location inside the media and the user can listen to the relevant part of the testimony. We’re already working with Yad Vashem to create permanent online access to their digital archive, and the best projects from the Hackathon will certainly be part of that.
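The core idea behind the winning project – aligning transcript text with timestamps in the media file so that search results can seek straight to the relevant passage – can be illustrated with a toy sketch. The data, segment structure, and matching logic below are invented for illustration and are not the team's actual system:

```python
# Toy index of a transcribed testimony: each entry pairs a start time
# (seconds into the recording) with the transcript text of that segment.
# In a real alignment system these offsets come from matching the
# transcription against the recognised audio.
segments = [
    (0.0,   "I was born in Lodz in nineteen twenty one"),
    (42.5,  "In 1941 we were moved into the ghetto"),
    (118.0, "After liberation I emigrated to Israel"),
]

def seek_offsets(query: str, segments: list) -> list:
    """Return the media offsets of every segment mentioning the query,
    so a player can jump straight to the relevant part of the testimony."""
    q = query.lower()
    return [start for start, text in segments if q in text.lower()]

print(seek_offsets("ghetto", segments))      # [42.5]
print(seek_offsets("liberation", segments))  # [118.0]
```

A player front-end would then pass the returned offset to the media element's seek function, which is exactly the user experience the winning team demonstrated.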


Looking ahead, there's a large amount of activity among the major technology providers in developing new enterprise AI-enabled products – where is HPE currently investing most heavily in R&D?

We've already set 2016 as a year of innovation across HPE, but in IDOL there are a number of AI areas in which we're particularly active at the moment. The first of these is core video processing, where CPU and GPU advances are enabling techniques that were previously impossible, and we're exploiting that with a lot of work in areas like Deep Neural Networks and SLAM. We're also seeing real interest in setting more of our image-processing functionality up for real-time mobile processing, as cameras move away from static CCTV locations to mobile devices. For example, UK police forces are already wearing lapel cameras and want to process the feeds out in the field, and the explosion in drone numbers makes this an interesting market too.

2015 has been a breakthrough year, with new tech start-ups entering the scene, M&A activity, and real-world Enterprise use cases across a range of verticals – what does 2016 have in store for HPE?

We're living in what we call the Idea Economy – the ability to turn an idea into a new product, a new capability, a new business or a new industry has never been easier or more accessible. This presents both an opportunity and a challenge for most enterprises, and thriving in this environment requires a new style of business, which in turn demands a new style of IT. With our customers looking for our support in this transition, our focus for 2016 and beyond is on helping them transform to hybrid infrastructures – such as the cloud – protecting their business from cyber threats, enabling workforce productivity through mobility, and empowering them to embrace the data they have. It's definitely going to be a big year for Cloud, both on-demand cloud services and private clouds within the enterprise. Overall, I would say expect more of what we're best at: new technology that fits data to the needs of the human user, not the other way round.