User Guide

The FLOWS dashboard

The FLOWS Dashboard lives alongside the other dashboards in the Gekkobrain cloud solution. Locate it by clicking “Dashboards” in the upper left.

Then click “FLOWS”.

The Dashboard is divided into 4 sections:

  1. Explore
  2. Scenario Detection
  3. Classification
  4. Processes

The Search Area starts the search, and the query details are listed directly below it.

The Process Information Area contains labels assigned to the dataset, either by process definition, by classification, or by automatic identification of a problem area for a process, known as a “scenario”.

The Business Dimension & Metric Area contains the document values extracted from the source system, including computed metrics for the values and the degree to which they modify or control the nature of a process.

The List of Documents in Graph Nodes Area contains all the documents under the node that has been clicked in the graph below.

All values displayed in the UI are clickable; clicking a value starts a new, narrower search.

Explore

The “Free search” is where system-wide searching for a given event type happens. The events are displayed in the context of their document flow. An example of an event is the creation of a Purchase Order.

It is important to underline that all matches to the event “Purchase Order” will be returned without any additional filtering, which usually produces a significant result set. This is why a date span is required (more about dates later).

Any document creation or change constitutes a system event for FLOWS regardless of where in a document flow it belongs.

This also means that searching for “Invoice”, i.e. the creation of an invoice, will display all flows, in aggregated and abstracted form, that contain this type of event.

As mentioned earlier, when a free search for events is conducted, a span of dates must also be provided. The search defaults to the first and last dates in the database.

It is also possible to narrow the result set based on enterprise values, for example Sales Organization, Document Type or Material Number. Drilling into the dataset, and thereby narrowing the search, is done by clicking the values presented: either the arrows in the graph, or the values in the information boxes above the graph, which are then populated into the query details of the search area.

The result set will produce a graph of flows with a great deal of variance. If the chosen event is Invoice, for example, the invoice event touches almost all types of flows. This is a robust and flexible capability of FLOWS, but if a certain scope of flows is of interest, the tools for that are Business Processes and/or classifications.

To understand the data that makes up the graph, several pieces of information are available. The fields captured for a flow are called “Event Modifiers”. By selecting one, the values for that dimension are displayed, and the graph changes from monochrome to colors, reflecting that the values of a dimension have a great deal of control over how the path and sequence of events unfold from START to FINISH of a document flow. Any document flow is a business process in SAP, but organizing these flows is the next step.

Business Processes

Creating a filter for a business process is important. The more business processes are created, the more value FLOWS will provide in further analysis.

Classification Sets

The dataset can be limited to one or more classifications. Think of a classification as a label fixed to a certain event in a flow. This is particularly useful if a certain collection of events is under investigation. Classifications are not a tool for marking only unwanted flows; they are simply a way of annotating certain events (mostly documents), and by extension the particular document flow, as belonging to a certain type of flows.

When you as a domain expert add classification sets to flows, you help inform the learning algorithms that identify sources of variation in your business processes. It is the ability to annotate events or observations in a dataset which allows the algorithm to look for similar objects in the entire dataset and extract patterns that are characteristic of the classification set, identifying other parts of the dataset that exhibit a similar structure.
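
One way to picture how an annotated set steers pattern extraction is the simplified sketch below. This is not the actual FLOWS learning algorithm, and the data is invented: values that are over-represented among the labelled flows are candidate characteristics of the classification set.

    from collections import Counter

    # Invented dataset: each flow carries a plant value and a set of labels.
    flows = [
        {"plant": "1000", "labels": {"Paid late"}},
        {"plant": "1000", "labels": {"Paid late"}},
        {"plant": "2000", "labels": set()},
        {"plant": "2000", "labels": set()},
        {"plant": "1000", "labels": set()},
    ]

    label = "Paid late"
    tagged = [f for f in flows if label in f["labels"]]

    # Which plant values are over-represented among the labelled flows?
    overall = Counter(f["plant"] for f in flows)
    in_label = Counter(f["plant"] for f in tagged)
    for value, count in in_label.items():
        lift = (count / len(tagged)) / (overall[value] / len(flows))
        print(f"plant {value}: lift {lift:.2f}")  # > 1.0 suggests a characteristic value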

As an example, a set of documents which are not compliant with a payment schedule may have been paid too late. Another example could be all documents belonging to a certain variant found in the flow explorer view; in the example below, that is “Parked invoices”. Any variant can be classified, i.e. labelled, by the user.

Annotations do not corrupt the dataset, but they are powerful signals that steer the model's learning in a certain direction. This is why the content of a classification dataset should be carefully chosen. Think of an example where only some of the invoices classified as paid late actually were paid late. This would introduce wrong weights into the model of the data describing flows that contain invoices.

Scenarios

A common scenario that is often monitored is the “Order to Cash” business process. Although the definition seems obvious, it actually isn’t. This is why FLOWS treats Scenarios as a combination of Classifications and filters on business dimensions, presented in the tool as (potential) event modifiers. Scenarios can easily overlap, and one Scenario can also be almost fully contained inside another. Consider the examples below, where each builds on common assumptions that lead to an over-simplified functional specification: the assumptions may hold in and of themselves, but they are challenged when they interconnect.

Scenario A = “Webshop Orders Scenario” AND “Cancelled Orders Scenario”

Scenario B = “Webshop Orders Scenario” AND NOT(“Order to Cash Scenario”). (Note that “Completed Orders” and “Cancelled Orders” are disjoint entities.)
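
To make the combination logic concrete, here is a minimal sketch which assumes that each classification can be reduced to a set of flow ids; the ids and set contents are hypothetical, purely for illustration.

    # Hypothetical sets of flow ids carrying each classification.
    webshop_orders   = {"F1", "F2", "F3", "F4"}
    cancelled_orders = {"F2", "F4"}
    order_to_cash    = {"F1", "F3"}

    # Scenario A = "Webshop Orders Scenario" AND "Cancelled Orders Scenario"
    scenario_a = webshop_orders & cancelled_orders

    # Scenario B = "Webshop Orders Scenario" AND NOT("Order to Cash Scenario")
    scenario_b = webshop_orders - order_to_cash

    # Scenarios can overlap, and one can be (almost) contained in another:
    print(scenario_a & scenario_b)       # overlapping flows: {'F2', 'F4'}
    print(scenario_a <= webshop_orders)  # True: fully contained in "Webshop Orders"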

Scenarios are maintained manually, either through exploration in the FLOWS user interface or through manual uploads. Oftentimes, uploading a classification offers a good starting point for an exploration, perhaps leading to further sub-classification, as we saw with Scenario A and Scenario B, or to exploring the heat-map, which will be explained shortly.

The heat-map as a means of doing Root Cause Analysis

To understand a heat-map is to first understand the underlying computations. In this case, FLOWS uses heat-maps in two steps:

  1. To display the average mutual informativeness score between two dimensions, meaning event-modifiers, flow metrics or classifications.
  2. To display the point-wise mutual informativeness score between the discrete values of one event-modifier and the same dimensions (the remaining event-modifiers, flow metrics or classifications).

The mutual informativeness score is based on a calculation of the likelihood that a dimensional value in a flow increases the probability of another value in that same flow also being present.
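
As a rough illustration of the idea, not the actual FLOWS implementation, the point-wise score for a pair of values can be estimated from co-occurrence counts, and the average score is the expectation of the point-wise scores over all value pairs. The flow data below is invented.

    import math
    from collections import Counter

    def informativeness(flows, dim_a, dim_b):
        """Estimate point-wise and average mutual informativeness
        between two dimensions from flow co-occurrence counts."""
        n = len(flows)
        joint = Counter((f[dim_a], f[dim_b]) for f in flows)
        left = Counter(f[dim_a] for f in flows)
        right = Counter(f[dim_b] for f in flows)

        # Point-wise: how much does seeing value a raise the
        # probability of also seeing value b in the same flow?
        pmi = {(a, b): math.log((c / n) / ((left[a] / n) * (right[b] / n)))
               for (a, b), c in joint.items()}

        # Average: the expectation of the point-wise scores over the
        # joint distribution (classical mutual information).
        avg = sum((c / n) * pmi[(a, b)] for (a, b), c in joint.items())
        return pmi, avg

    # Invented flows: does the supplier inform on-time payment?
    flows = [
        {"supplier": "V100", "on_time": "no"},
        {"supplier": "V100", "on_time": "no"},
        {"supplier": "V200", "on_time": "yes"},
        {"supplier": "V200", "on_time": "yes"},
    ]
    pmi, avg = informativeness(flows, "supplier", "on_time")
    print(avg)  # a high average score shows as a dark heat-map cell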

A real-life example would be that changing the Supplier No seems to determine (inform) whether the invoice is paid on time. The heat-map will display a darker color indicating this high mutual informativeness score, and clicking on it reveals the vendors for which this holds most strongly. Further exploring the heat-map, simply “looking to the sides”, will reveal that these vendor numbers also inform greatly on purchasing organization. Looking further at that purchasing organization reveals that quantities are frequently updated for this purchasing organization after the MRP has created the PO.

This root cause analysis has revealed that one purchasing organization needs adjustments to its Material Requirements Planning. Furthermore, it has illuminated that this is a general problem.

The FLOWS Graph Area

The graphical part of FLOWS displays a directed acyclic graph of the flows found matching a given query.

It thickens the edges/lines between the objectified events to illustrate the frequency/commonality of a certain path in the flow. The colors on the edges correspond to the path that a given event-modifier/business dimension takes.

An example here is that plant 1000 dominates one variant/path in the graph while visiting other flow variants much less frequently.
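
The thickness weighting can be pictured as simple edge counting over the event sequences of the matching flows. Here is a minimal sketch with invented event sequences; the actual FLOWS pipeline may differ.

    from collections import Counter

    # Invented event sequences, one per document flow (START to FINISH).
    flows = [
        ["Purchase Order", "Goods Receipt", "Invoice", "Payment"],
        ["Purchase Order", "Invoice", "Payment"],
        ["Purchase Order", "Goods Receipt", "Invoice", "Payment"],
    ]

    # Count each directed edge; the count drives the edge thickness.
    edges = Counter()
    for events in flows:
        for src, dst in zip(events, events[1:]):
            edges[(src, dst)] += 1

    for (src, dst), weight in edges.most_common():
        print(f"{src} -> {dst}: weight {weight}")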

Overlaying Multiple Systems

If data has been sent to the Gekkobrain Cloud from multiple sources, it is possible to overlay the graphs by selecting “(All)” in the dropdown in the upper right corner.

The graphs will now be multiplied across the different systems. By first defining a business process which is comparable on both systems (e.g. using UART), the process can be compared across the two systems.

Secondly, by adding a descriptive event modifier (such as MANDT), you can now see the systems overlaid and analyze the discrepancies.

When the business processes in the various systems are overlaid, each system has its own color coding, making it easy to compare.
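
Conceptually, the overlay amounts to keeping one edge-weighted graph per source system and drawing them in the same picture, one color per system. The sketch below is hypothetical; the system names and counts are invented.

    from collections import Counter

    # Hypothetical edge counts extracted per source system.
    system_edges = {
        "System A": Counter({("Order", "Delivery"): 120,
                             ("Delivery", "Invoice"): 118}),
        "System B": Counter({("Order", "Invoice"): 95,
                             ("Invoice", "Payment"): 90}),
    }

    # Overlay: the union of edges, each tagged with its system, so the
    # UI can color-code them and expose discrepancies between systems.
    for system, edges in system_edges.items():
        for (src, dst), weight in edges.items():
            print(f"[{system}] {src} -> {dst}: weight {weight}")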

This view allows you to assess and analyze the possibilities for merging multiple systems in an S/4HANA scenario and is an important tool in understanding greenfield vs. brownfield transformation scenarios.

Explaining the Metrics calculated for each dimension

The metrics given attention in the UI are largely computations of a “LEAN” nature.

These metrics are calculated for all the event-modifier dimensions, which means that vendors, materials, sales organizations and any other enterprise dimension can be sorted and subsequently chosen for further exploration.

The UI displays all the classifications which match the returned flow set. This is a valuable tool for further limiting the flow set; if, say, a single document id has been entered, the UI will present the user with the classifications matching that particular flow.

One example could be that a purchase order number is entered and the UI returns the dimension values for that flow, i.e. the material numbers, vendors, customers, cost centers and users involved, along with the metrics. In addition, it will also identify that this flow has been classified as belonging to a certain sub-scenario of procure-to-pay flows.

The Data foundation & daily operation of FLOWS

FLOWS uses all the update and database events in the system as a heuristic for extracting the proper data. This means that FLOWS is also capable of extracting any custom field in a table not belonging to the SAP standard. This is important because custom tables can be very descriptive when it comes to understanding business document flow variants and behavior in general.

The extractions done by FLOWS are organized and planned by algorithms rather than by static field lists.

This is a great strength of the tool, as it is capable of extracting these fields without human interaction.

Data which is sensitive, e.g. usernames and amounts, can be omitted.

New data, whether from the SAP backend, another source system, or newly computed labels and PMI tables, requires system downtime. This is a plannable event, but it otherwise defaults to night-time.