There is a new awareness of the need for service assurance, and as 2016 draws to an end I wanted to talk about this global need. Because converged services are the new service standard, communication service providers face multiple challenges in supporting new digital services with the legacy tools they have relied on for more than a decade. The problem is that service providers are still deploying silo-based tools that each manage a specific function; rather than simplifying operations, these tools actually add complexity and cost when managing a new or converged digital service. The fundamental challenge is this: how can a service provider ensure optimal service, i.e. service assurance across its customers and markets, whether consumer or commercial, while also driving operational efficiency? Many large enterprises and managed service providers certainly face this same challenge of silo-based tools.
It is clear that the network monitoring space has in effect atrophied, as I noted while reviewing the white paper Operational Support System – Transforming Businesses into Thrivers. I recently read that the market leaders do not innovate; they acquire a portfolio of disjoint products, and then atrophy sets in. That atrophy becomes evident when customers are hit with increased maintenance fees and new pricing plans that add costs for software that is basically a decade old or more.
To address these challenges, a service provider needs a next-generation solution, one that supports end-to-end service assurance for converged services while also driving operational benefits. The key to this next-generation solution is end-to-end, cross-domain correlation of resources to services to customers. To do so, a solution must support topology management across physical, logical, and virtual environments. It must also be protocol agnostic, in other words, support everything from the transport layer to the application layer, including for topology. And it must be integrated with the IT and OSS environment, including CRM for SLA requirements, inventory for accurate resource views, ticketing for incident management, and more. Historically, the fault management space could support any protocol; this is what made Micromuse the standard in that space. When one looks at the performance management market, it is clear that it is still protocol dependent, like the legacy root cause analysis vendors for IP networks. In addition, a larger problem in my opinion is how one does cross-domain correlation across diverse software instances from multiple vendors, all of which report differing timestamps for each "event occurrence" that may be affecting a service.
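To make the timestamp problem concrete, here is a minimal sketch of one common approach: normalize each vendor's locally-stamped events to UTC, then group events that fall within a short time window. The event fields, vendor names, offsets, and the 30-second window are all illustrative assumptions, not a description of any product's actual correlation engine.

```python
from datetime import datetime, timedelta

# Hypothetical raw events from two vendor tools, each reporting
# timestamps in its own local time zone and format.
raw_events = [
    {"source": "vendorA", "ts": "2016-12-20 14:05:03",
     "utc_offset_hours": -5, "msg": "link down"},
    {"source": "vendorB", "ts": "2016-12-20T19:05:07",
     "utc_offset_hours": 0, "msg": "BGP session lost"},
]

def to_utc(event):
    """Parse a vendor-local timestamp and normalize it to UTC."""
    local = datetime.strptime(event["ts"].replace("T", " "),
                              "%Y-%m-%d %H:%M:%S")
    # local time = UTC + offset, so UTC = local - offset
    return local - timedelta(hours=event["utc_offset_hours"])

def correlate(events, window_seconds=30):
    """Group events whose normalized timestamps fall within one window."""
    normalized = sorted(((to_utc(e), e) for e in events),
                        key=lambda pair: pair[0])
    groups, current = [], [normalized[0]]
    for ts, e in normalized[1:]:
        if (ts - current[-1][0]).total_seconds() <= window_seconds:
            current.append((ts, e))
        else:
            groups.append(current)
            current = [(ts, e)]
    groups.append(current)
    return groups

groups = correlate(raw_events)
# The two events normalize to 19:05:03Z and 19:05:07Z, four seconds
# apart, so they land in the same correlation group.
```

Without the normalization step, the raw timestamps appear five hours apart and the two symptoms of the same outage would never be correlated.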
To effectively support true service assurance, a solution must be unified across silos and domains, able to normalize data from any source, able to support any form of topology regardless of the domain, and able to deliver real-time visualization in a multi-tenant manner. For the record, I have just described our Assure1 solution.
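The "normalize data from any source" requirement can be sketched in a few lines: map each source's vendor-specific field names onto one common schema and coerce severities onto a single scale. The source names, field mappings, and severity scale below are illustrative assumptions for the sketch, not Assure1's actual data model.

```python
# Per-source mappings from vendor-specific keys to a common schema
# (node, severity, summary). Field names here are hypothetical.
FIELD_MAPS = {
    "snmp_trap": {"agent_addr": "node",
                  "trap_severity": "severity",
                  "trap_oid": "summary"},
    "syslog":    {"host": "node",
                  "priority": "severity",
                  "message": "summary"},
}

# One numeric severity scale for all sources; unknown values default low.
SEVERITY_SCALE = {"critical": 5, "major": 4, "minor": 3,
                  "warning": 2, "info": 1}

def normalize(source, raw):
    """Map a raw event dict from a known source into the common schema."""
    mapping = FIELD_MAPS[source]
    event = {common: raw[vendor] for vendor, common in mapping.items()}
    # Coerce vendor severity strings onto the shared numeric scale.
    event["severity"] = SEVERITY_SCALE.get(str(event["severity"]).lower(), 1)
    return event

trap = normalize("snmp_trap", {"agent_addr": "10.0.0.1",
                               "trap_severity": "Major",
                               "trap_oid": "linkDown"})
log = normalize("syslog", {"host": "core-rtr-1",
                           "priority": "critical",
                           "message": "Interface down"})
# Both events now share the same keys and severity scale, so downstream
# correlation and visualization code can treat them uniformly.
```

The design point is that all source-specific knowledge lives in the mapping tables; adding a new source means adding one mapping, not touching the correlation or visualization layers.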
Happy Holidays! We look forward to helping you in 2017 to assure new and existing services, meet your customer experience management goals, and drive new levels of operational efficiency.