This week I am at the TMForum’s Digital Transformation World in Nice. Having attended carrier events throughout most of my career, this is nevertheless my first opportunity to get hands-on with a TMForum Catalyst project.
The Catalysts, which develop over time and mature in deliverability by being showcased at multiple TMForum events, bring together companies to collaborate on innovative solutions to industry challenges. This year, Federos was eager to participate alongside AT&T, BT and Orange on the AI LEAP Catalyst project. Along with these champions of industry, we were joined by Galileo Software, Arago and Wavelength Communications to deliver a robust, holistic solution for issue discovery and resolution using artificial intelligence, machine learning, event analytics, automation and prediction technologies.
With each of these unique capabilities driving essential elements within the solution, you can understand why we selected the name “AI LEAP” to represent our collective work at this year’s event. The project itself was designed to find solutions using cognitive computing and automation for the discovery and resolution of service issues, as part of a zero-touch operations process using TM Forum data models. So what can you expect to see from us at this week’s Catalyst showcase?
Based on real-world use cases provided by our champions, we focused on several challenges that they are currently seeing, or foresee, in their service operations environments, specifically:
- The quality and accuracy of inventory data. This becomes even more critical when using data to feed event analytics to enable automation and improve customer experience.
- Detecting anomalies and deviating signals. With the ever-increasing amount of data being produced, the ability to process all of it to find anomalies and deviations from the expected ‘norm’ is critical to the success of ongoing digital transformation initiatives.
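To illustrate the second challenge, here is a minimal sketch of one common approach to detecting deviations from an expected ‘norm’: flagging values that fall outside a rolling statistical baseline. This is an illustrative example only, not the technique used by any of the products mentioned in this project; the window size and threshold are assumptions chosen for the sketch.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Flag values deviating more than `threshold` standard deviations
    from the rolling mean of the previous `window` samples."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            # flag the point if it sits far outside the recent baseline
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalies.append((i, value))
        history.append(value)
    return anomalies
```

For example, a steady metric hovering around 10 with a single spike to 50 would have just that spike flagged, while the normal fluctuation passes untouched.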
We also looked at two areas of specific interest to the TMForum itself:
- Service Management standards. This area is of particular interest to many organisations, and we explore which changes and inputs to the AI applications should be tracked. Tracking is important to ensure that an AI application can be rolled back, withdrawn or retrained if anything unusual is detected, such as the AI suddenly making bad predictions.
- AI Data Model. We have investigated using the latest TMForum data model for AI by representing the data supplied by one of our Champions. We have provided feedback to the TMForum on a number of areas where we believe that the model can be improved.
Using solutions from Federos and Arago, with data analysis supported by algorithms developed by Galileo Software, we have designed and built proofs of concept focused on meeting these use cases and areas of interest. With Wavelength Communications, we have then investigated how the solutions could be used in a 5G network.
To cover what we have achieved in one blog article would be a challenge, so this is part 1 of a few articles which I hope will provide some insight into the TMForum Catalyst project. First up, let’s look at Use Case 1:
Topology and Inventory – Detection and Reconciliation
First, regarding the quality of inventory data, we have prototyped an end-to-end process using AI, event analytics, machine learning and automation to detect possible undocumented topology relationships and, where needed, automatically validate and update the data in the topology database.
- Step 1: Events are received by the Federos Assure1 platform from 2 separate network elements.
- Step 2: Assure1 Machine Learning detects that these events have occurred 3 times within a 5-minute period (we implemented this logic to improve confidence that the events are likely to be linked in some way).
- Step 3: Federos Assure1 automatically raises an incident.
- Step 4: Arago HIRO detects the incident and determines the context, situation, network elements involved and what information is needed for processing further.
- Step 5: HIRO uses Machine Reasoning AI to evaluate how to deal with the issue. In this case, HIRO will gather additional device and topology data from the Federos platform and the devices themselves.
- Step 6: HIRO analyses the information gathered and, based on this, performs the next steps. In this case, it will confirm that an undocumented relationship appears to exist in the topology and a Change Request (CR) may need to be raised to link the elements in the inventory system.
- Step 7: HIRO checks whether a CR already exists (open or recently closed) and opens a new CR if no conflict is found.
- Step 8: The CR is processed (in the prototype, we have added a manual approval step, but it could be totally automated if needed) and once approved, HIRO automatically interfaces with the Federos platform to execute the transactions required to update the inventory and topology.
- Step 9: Once the changes have been made, they are automatically verified to confirm they were applied successfully.
- Step 10: Once verified, the incident and change request are updated and closed, including a full audit trail of the actions taken and the data updated.
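The confidence logic in Step 2 can be sketched in a few lines. This is not the actual Assure1 implementation, just a minimal illustration of the idea: record each co-occurrence of a pair of related events, discard occurrences older than the 5-minute window, and raise an incident once the pair has been seen 3 times within that window. The event names and timestamps below are hypothetical.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # the 5-minute correlation window from Step 2
MIN_OCCURRENCES = 3    # confidence threshold before raising an incident

def should_raise_incident(history, event_pair, timestamp):
    """Record a co-occurrence of two related events; return True once the
    pair has been seen MIN_OCCURRENCES times within the window."""
    times = history[frozenset(event_pair)]
    times.append(timestamp)
    # drop occurrences that have aged out of the window
    while times and timestamp - times[0] > WINDOW_SECONDS:
        times.popleft()
    return len(times) >= MIN_OCCURRENCES

# per-pair occurrence history, keyed by the (unordered) event pair
history = defaultdict(deque)
```

With this sketch, three co-occurrences at 0, 100 and 200 seconds trigger an incident, while three spread across 20 minutes do not.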
Based on the small dataset we received from one of our Champions, we successfully detected over 600 potential undocumented relationships, over 40 of which triggered the above process (matching the confidence logic in Step 2).
So why is this so important? Having accurate and up-to-date data on which to perform analysis is critical to being able to perform root cause and impact analysis. Incorrect or missing data impacts the ability of machine learning and other analysis techniques to identify causes and trigger automated actions – increasing effort for investigation, delaying fix times and impacting on services and customer experience.
The figure below shows a service topology and indicates two separate incidents and their root causes as the data linking the two network elements in the topology or inventory database is missing. By showing the inferred topology relationship between the two elements, you can see that the actual root cause (shown by the green area) can be identified. Without this relationship, time and effort would be expended investigating both incident 1 and incident 2 separately without identifying the actual root cause of both issues.
The business driver for this use case is all about data accuracy. Without accurate data on which to perform analysis and drive automation, the risk of basing decisions on invalid analysis results increases. With the ever-increasing quantity of data available, especially with the arrival of SDN, IoT, virtualisation and 5G, the accuracy of data will be more critical than ever. Managing this manually is impossible, so AI, machine learning and automation must be used to improve accuracy.
Please check back here for Part 2 of this blog, where we’ll look at Event Analytics and Machine Learning.
For more information on the project, please visit us at our kiosk in the AI Catalyst Zone at Digital Transformation World in Nice, May 14-16th, or contact us here.