
Unleashing Intelligence from Latent Data Assets

November 07, 2017

A consistent theme across modern market leaders is their ability to cut through the clutter of a connected, competitive world by leveraging the power of data. As product and service markets become more saturated and once-differentiated offerings become commoditized, successful organizations have deployed focused processes and technologies to tap into the transformative potential of data and drive competitive advantage.

With the number of devices, and the connectivity between them, increasing rapidly, companies have enormous data sets at their disposal but are struggling to manage them. Global mobile data traffic is projected to increase sevenfold between 2016 and 2021, and global IP traffic will see a threefold rise over the same timeframe.

According to Forbes, most companies use only a fraction of the data they collect and store. Only 1% of global data is being analyzed, and it is questionable whether that 1% is truly the ‘right’ 1% for improving the business. Organizations are clearly spending vast amounts of time, money, and resources to store and manage these data sets, yet often lack a well-defined plan for extracting value from that data tied to business outcomes.


Consequently, enterprise Data Lakes run the risk of becoming Data ‘Swamps’ as volume from disparate sources increases exponentially. Users are often left wondering, “We’ve got all this data in the lake, but there’s so much I can hardly get a handle on what’s in there, let alone make use of it.” A Gartner report warned about the risks associated with Data Lakes, highlighting the need for effective governance.


It therefore comes as no surprise that the quality of insights has suffered despite the abundance of data. There is a clear disconnect between the investment and the return on it; even basic data accessibility is becoming a major pain point.

The issue is not the volume of data alone; at the root of the problem lies a siloed organizational and technology landscape. As organizations have evolved over the years, staff and operations have often split across technical and functional disciplines. This is a poor fit for data work, because converting data into a value-added asset requires the following functions to be performed in a unified way across business and IT:

  • Aggregate data from multiple sources
  • Evaluate and manage its quality with direct ties to outcomes
  • Enable users to easily search and subscribe to data
  • Run advanced or predictive models on data tied to business function
  • Enable business intelligence (BI) and advanced visualization

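To make the unified model above concrete, here is a minimal, hypothetical sketch in Python. All names (`Dataset`, `DataCatalog`, and so on) are illustrative inventions, not part of any real platform; the sketch covers only the first three functions: registering data from multiple sources, tracking a quality score tied to an outcome, and letting users search and subscribe.

```python
from dataclasses import dataclass


@dataclass
class Dataset:
    """A cataloged data set. Names and fields are illustrative only."""
    name: str
    source: str           # e.g. "sales", "finance", "weather"
    quality_score: float  # 0.0-1.0, ideally tied to a business outcome metric


class DataCatalog:
    """Toy catalog: register, search, and subscribe to data sets."""

    def __init__(self):
        self._datasets = {}       # name -> Dataset
        self._subscriptions = {}  # user -> set of dataset names

    def register(self, ds: Dataset) -> None:
        """Aggregate a data set from some source into the catalog."""
        self._datasets[ds.name] = ds

    def search(self, source=None, min_quality=0.0):
        """Find data sets by source and/or minimum quality score."""
        return [
            d for d in self._datasets.values()
            if (source is None or d.source == source)
            and d.quality_score >= min_quality
        ]

    def subscribe(self, user: str, name: str) -> None:
        """Record a user's subscription to a cataloged data set."""
        if name not in self._datasets:
            raise KeyError(f"unknown dataset: {name}")
        self._subscriptions.setdefault(user, set()).add(name)

    def subscriptions(self, user: str):
        """List the data sets a user has subscribed to."""
        return sorted(self._subscriptions.get(user, set()))
```

In a real platform these responsibilities would be backed by metadata stores, lineage tracking, and access control; the point of the sketch is only that catalog, quality, and subscription concerns belong in one coherent interface rather than in separate organizational silos.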
A fragmented approach leaves these initiatives resting with different departments within IT itself, to say nothing of business units and shadow IT organizations across the enterprise.

The core tenet of digital transformation is the conversion of data sets into business insights and assets. The emphasis is on driving intelligence rather than churning out information. This necessitates a collaborative approach for a cross-section of users, from developers to analysts to data scientists and visual designers. Collaboration is best augmented by user-friendly, integrated tools that help demystify data, letting users focus on outcomes rather than on the technical means.

To close this gap and more effectively address real-world problems, HCL has designed and executed the Enterprise Intelligence Hub (EIH) on a big data lake as a single platform that enables a diverse set of users to work together with diverse data sets. Users can browse online catalogs organized by function, add data to a shopping cart, and subscribe to what they require from internal sources (e.g. sales/marketing, finance, and supply chain) and external sources (e.g. social, weather, and syndicated data). With its focus on user-friendly tools, EIH lets users set up initiatives that break data into business-relevant, bite-size chunks. The aim is to demystify available data and make it more easily accessible.

Once subscribed, data is readily available for use in advanced models, simplifying a data scientist’s ability to derive actionable insights iteratively and collaboratively with the business. That data can also be fed to visualization tools to enable self-service dashboards in a flexible framework that accommodates both open-source and licensed technologies.

Across many industries today, available data assets are precariously poised: organizational data is flush with potential but has a limited lifespan. Nor does data get a pass on the need to be easily accessible and user-friendly; time has proven that if it is not, it simply will not get used. The most successful organizations will make laser-focused efforts to ensure that data assets are leveraged while they are still relevant, with strong collaboration between the data and the people, processes, and technology that link it to business differentiation and competitive advantage.