

Installation

Import the components into your SAP system

The components are delivered as a Transport of Copies, consisting of one data file and one cofile. The files are zipped and can be downloaded from the menu item “Download Extractors” in your account in the Gekkobrain cloud solution; it is located towards the bottom of the menu, just above “Account Information”.

Please always download the latest version. Refer to the installation note text file included in the zip archive for the exact instructions, as they might change with each release. The solution consists mainly of a number of data extractors for the various SAP module areas, along with some extractors written for cross-application purposes. The solution lives in the /GEKKOBR/ namespace and is protected from any changes once installed. The software can run all of its extractions via RFC, so transporting it to production is only needed if your security policy does not allow RFC connections from a development system to a productive system. If needed, also import the transport into your productive environment as you would any other transport.

Deciding whether to install the “Transaction aware recorder” feature

The zip archive also contains instructions on how to enhance your change document function group. For the tool to reach its full potential you must allow an implicit enhancement of an SAP standard function module. You can choose not to install this feature and FLOWS will still be able to display an equal number of process flows; the difference is that without the enhancement FLOWS is limited to treating every update as an individual update. With the enhancement FLOWS can detect whether many updates are conducted within the same transaction. This ability is helpful in understanding context in addition to what was updated (content). FLOWS has other techniques to detect context, but the transaction-aware enhancement is very effective in understanding your particular system behavior.

The enhancement targets change documents, i.e. updates, which means it will primarily understand the relationship between updates; but since the majority of document flows are subject to change, that relationship maps out the majority of concurrency-related scenarios in your SAP environment. This component is optional and, as mentioned, can only tell context from the day it was activated in the system.
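For orientation, an implicit enhancement of this kind is simply a named block of code added at the start or end of a standard function module via the enhancement framework. The fragment below is purely illustrative and is not the enhancement shipped by Gekkobrain (the actual code and its target function module are described in the instructions in the zip archive); it only shows the general shape of such a block, here remembering the calling transaction so that updates written within the same transaction could later be grouped:

ENHANCEMENT 1 zgekko_txn_aware.    " hypothetical name, illustration only
* Remember which transaction triggered this update so that change
* documents written within the same transaction can later be grouped.
  DATA lv_calling_tcode TYPE sy-tcode.
  lv_calling_tcode = sy-tcode.
ENDENHANCEMENT.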

Setting up your cloud instance

If you are already a Gekkobrain customer, your productive system is likely already set up. Otherwise, please create the system in the cloud: from the left-hand menu, select “New System”.

Provide the system ID and a short description, for example “Logistic Prod System”, and since this is a productive system, select “OPS” as the system type.

After creating the system please navigate to “System List”.

This is where you will find the API key which you will need later when sending data to the cloud.

Setting up the Cloud Daemon for automatic extractions

The Cloud Daemon is an optional component. All data extractors can be set up manually, but it is advisable to let the Gekkobrain Cloud control what data is needed and when. The daemon schedules extractions as regular batch jobs, viewable in SM37, and serves as the cloud connector, making sure that larger extraction jobs are not started if your SAP system cannot “see” the cloud. It also monitors the health of the batch jobs scheduled in earlier runs and communicates this to the Gekkobrain Cloud, allowing the cloud to hold back additional concurrent extractions or to reschedule extractions if data was missed in the transmission.

Go to SE38 and run report /GEKKOBR/DAEMON.

Fields on the selection screen:

RFC Destination: Only required if data is fetched via RFC, i.e. the job runs remotely against another system. If left blank, the daemon reverts to “local mode” and the system ID defaults to sy-sysid. (A connectivity check for the destination is sketched below.)
SSL Id: Running the transfer over https requires that the SSL certificate of our Cloud has been imported.
Proxy Host: HTTP proxy IP address, if your network department enforces a firewall on outbound HTTP.
Proxy Service: The HTTP port that the proxy is listening on.
API Key: Find this API key under Systems in your Cloud (refer to the online help).
SAP System ID: Defaults to the current SAP system ID. If running remotely, enter this manually according to your RFC destination.
Account Number: Find this in the left-hand menu in the cloud under “Account Information”.
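If you intend to run the daemon in RFC mode, it can be worth verifying that the RFC destination entered above actually responds before scheduling the job. A minimal check, assuming a hypothetical destination named PRD_EXTRACT maintained in SM59:

REPORT zgekko_check_rfc.    " hypothetical helper report, not part of the delivery

PARAMETERS p_dest TYPE rfcdest DEFAULT 'PRD_EXTRACT'.    " hypothetical destination

* RFC_PING answers from the remote system; an unreachable destination
* raises a communication failure.
CALL FUNCTION 'RFC_PING' DESTINATION p_dest
  EXCEPTIONS
    communication_failure = 1
    system_failure        = 2
    OTHERS                = 3.

IF sy-subrc = 0.
  WRITE: / 'Destination', p_dest, 'is reachable.'.
ELSE.
  WRITE: / 'Destination', p_dest, 'could not be reached.'.
ENDIF.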

In the automatic scenario the daemon asks the Cloud whether data is required. If so, the daemon simply schedules an extractor to run once. The SM37 job log reflects whether any extractor batch job was triggered.

The extractor receives information from the Gekkobrain Cloud on which specific table and which fields are needed. The SM37 job log of the extractor reflects exactly what data was extracted.
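Conceptually, such a generic extraction is a dynamic selection: the table name, the field list and the date restriction arrive as data rather than being hard-coded. The sketch below only illustrates this principle and is not the /GEKKOBR/ implementation; the table name VBAK, the field list and the selection on ERDAT are placeholders:

* Illustration of a generic, field-list driven read; not the actual extractor.
DATA lv_table  TYPE tabname VALUE 'VBAK'.          " placeholder table
DATA lv_fields TYPE string  VALUE 'VBELN ERDAT'.   " placeholder field list
DATA lv_where  TYPE string.
DATA lr_data   TYPE REF TO data.
FIELD-SYMBOLS <lt_data> TYPE STANDARD TABLE.

* Build an internal table with the structure of the requested table.
CREATE DATA lr_data TYPE STANDARD TABLE OF (lv_table).
ASSIGN lr_data->* TO <lt_data>.

* Select only the requested fields for the requested date.
lv_where = |ERDAT = '{ sy-datum }'|.
SELECT (lv_fields)
  FROM (lv_table)
  INTO CORRESPONDING FIELDS OF TABLE <lt_data>
  WHERE (lv_where).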

Choosing the “landscape location” for the Cloud Daemon and Extractors

The Cloud Daemon should run for every environment that data is needed from. Whether you choose a central deployment using RFC or a local non-RFC deployment makes no difference to how many system setups are needed for the Daemon.

Example: There are two SAP systems that need to send data to the Gekkobrain Cloud. One is devoted to Retail and one is devoted to Wholesale. In this case the Daemon must be activated for both systems, meaning that it will ask the Cloud, for each system, whether the Cloud needs data from that system or not.

The Cloud Daemon has two modes, RFC and local, just like the extractors.

It is a matter of opinion which mode is preferred. Some customers choose to deploy one copy of Gekkobrain “centrally”. Others choose to deploy to all the SAP environments that are going to send data. Others choose not to deploy to production at all.

Example: There is only one system that needs to send data: the productive client of an ECC system. The customer chooses to deploy the Transport of Copies initially to only the Dev system and opens an RFC connection to the Productive system. This way the solution can start extracting data as soon as the Transport of Copies is imported.

Example: There are several ECC systems that need to send data and the customer does not wish to open RFC connections between these systems. The Gekkobrain software is imported into all systems and runs in local mode in each system.

Choosing the job frequency for the Cloud Daemon

The daemon should run at least every hour; the recommended frequency is every 5 minutes.
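Scheduling is normally done in SM36, but the recurring job can also be created programmatically. A minimal sketch, assuming the daemon selection values have been saved as a variant named GEKKO_DAEMON (hypothetical name), releasing the job to repeat every 5 minutes:

DATA lv_jobname  TYPE tbtcjob-jobname VALUE 'GEKKOBRAIN_DAEMON'.  " hypothetical job name
DATA lv_jobcount TYPE tbtcjob-jobcount.

* Open the job definition.
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount
  EXCEPTIONS
    OTHERS   = 1.
CHECK sy-subrc = 0.

* Add the daemon report as a job step, using the saved variant.
SUBMIT /gekkobr/daemon
  USING SELECTION-SET 'GEKKO_DAEMON'    " hypothetical variant name
  VIA JOB lv_jobname NUMBER lv_jobcount
  AND RETURN.

* Release the job: start immediately and repeat every 5 minutes.
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobcount  = lv_jobcount
    jobname   = lv_jobname
    strtimmed = 'X'
    prdmins   = 5
  EXCEPTIONS
    OTHERS    = 1.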

Stopping data transmission via the Cloud Daemon

If you wish to sever the connection from one of your SAP systems to your Gekkobrain Cloud instance, simply cancel or postpone the Daemon batch job for that system.

Example: There is a 12-hour maintenance break for the productive system on a Sunday. The batch job start time is adjusted so that the job runs only after the maintenance window has closed.

Manual setup of extractions

The instructions in this section are only required if you do not run the Cloud Daemon.

Running the extractor initially

Depending on the planned use of FLOWS, it can be of benefit to load, for example, last year’s data.

Running the extractor periodically

Once the initial data has been loaded, the extraction can be set up to run every hour, day, week or month.

Data load time for initial load

The initial load from your SAP system can take up to 24 hours, but will usually take only a few hours. The duration depends on the hardware specifications of your system, the transaction volume and the number of months you have decided to load, but, most importantly for many customers, on the upload bandwidth, as many hosting providers restrict upload speeds. The recommended retrospect is at least 366 days of data. After the data has been loaded, please allow for another 36 to 48 hours of cloud processing time.

Defining Extractor batch-job parameters manually

Go to SE38 and enter: /GEKKOBR/FLOWS_EXTR_DATA

Gekkobrain does not monitor jobs when the extractor is set up manually; it only discovers data as it arrives in the cloud.

Please reach out to your system integrator before you manually set up the generic data extractor. All values are explained in the table below, but please do not attempt to run this without assistance.

Fields on the selection screen:

RFC Destination: Only required if data is fetched via RFC, i.e. the job runs remotely against another system. If left blank, the extractor reverts to “local mode” and the system ID defaults to sy-sysid.
Date for first select: Either set this manually or use dynamic date selection for a nightly run. Dynamic date selection allows this date to be set dynamically each time the job runs (see the scheduling sketch after this table).
Number of days: The number of days, starting from and including the above date, to be extracted.
Send at every date pass: To conserve system resources, this flag transmits data for every day as it is extracted, effectively breaking the extraction into as many LUWs as there are days.
SSL Id: Running the transfer over https requires that the SSL certificate of our Cloud has been imported.
Proxy Host: HTTP proxy IP address, if your network department enforces a firewall on outbound HTTP.
Proxy Service: The HTTP port that the proxy is listening on.
Cloud Job ID: Not relevant for a manual scenario. This value is submitted by the Cloud Daemon in automatic scenarios and corresponds to a unique queue number in the Gekkobrain Cloud server. By not providing the Job ID, the extractor knows that the job variant is manually configured.
Cloud API Key: Find this API key under Systems in your Cloud (refer to the online help).
SAP System ID: Defaults to the current SAP system ID. If running remotely, enter this manually according to your RFC destination.
Cloud Account Number: Find this in the left-hand menu in the cloud under “Account Information”.
Number of cloud-transfer retries: If the Cloud is unavailable or the network is congested, an HTTP timeout can occur. This setting lets the extractor wait and try again X number of times per message.
Allowed # errors per 1000: After the retries have been exhausted, the extractor can be allowed to abandon a particular message (a day’s worth of data) and try the next one. The tolerance for skipped messages per 1000 is specified here; once the level is reached, the batch job exits without further processing.
Header table: SAP table name of the first table to be extracted.
Header table Field list: The list of fields to be sent to the Gekkobrain Cloud. If left empty, only fields from the joined table will be sent.
Fields joining Header and Item: Specify the fields that join the Header table with the Item table.
Item table: SAP table name of the secondary, joined table to be extracted. Optional.
Item table Field list: The list of fields from the secondary table to be sent to the Gekkobrain Cloud.
Filename prefix: Indicates the content of the messages.
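For a recurring nightly run, the extractor can be scheduled in the same way as the daemon, pointing at a variant in which “Date for first select” uses dynamic date selection. A minimal sketch, assuming the selection values above have been saved as a variant named GEKKO_NIGHTLY (hypothetical name):

DATA lv_jobname  TYPE tbtcjob-jobname VALUE 'GEKKOBRAIN_FLOWS_EXTR'.  " hypothetical job name
DATA lv_jobcount TYPE tbtcjob-jobcount.
DATA lv_start    TYPE sy-datum.

* Open the job definition.
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount
  EXCEPTIONS
    OTHERS   = 1.
CHECK sy-subrc = 0.

* Add the extractor as a job step; the variant should use dynamic date
* selection so "Date for first select" moves forward on every run.
SUBMIT /gekkobr/flows_extr_data
  USING SELECTION-SET 'GEKKO_NIGHTLY'    " hypothetical variant name
  VIA JOB lv_jobname NUMBER lv_jobcount
  AND RETURN.

* Release the job: start at 02:00 tomorrow and repeat daily.
lv_start = sy-datum + 1.
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobcount  = lv_jobcount
    jobname   = lv_jobname
    sdlstrtdt = lv_start
    sdlstrttm = '020000'
    prddays   = 1
  EXCEPTIONS
    OTHERS    = 1.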