
User Guide#

Messages, Modules, And Flows#

The ADP Edge node is a real-time engine designed to process Messages. A message can be pushed from an external source, such as an MQTT or HTTP client, or it can be pulled by the ADP Edge node using connectors for different protocols/services. Messages can also be generated internally, for example, by a data generator. All messages are handled by modules. Getting data from external sources, processing messages, and delivering processed data to external receivers are all handled by different modules available in the ADP module library.

Processing of messages is defined by combining modules into Flows, where the output of one module is sent to the input of one or several other modules. In this way, a sequence of processing events can be configured. Flows are designed in the Flow Studio editor with a graphical drag-and-drop interface. A flow starts with a data source module, typically an input module or a time trigger. Processing in a module starts as soon as a message is received, and the result is delivered to the next module as soon as it is available. A flow ends with one or several output modules where the result is delivered to an external system. No data is stored in the ADP Edge node; data is processed as soon as it is received and the results are delivered as soon as they are available. There is one exception to this rule, and that is the Memory Buffer module, which can be used to temporarily hold messages until external connectivity or services become available.

The messages sent between modules are dynamic C# objects that can handle any data type and structure. Message formats can also be changed within a flow using modules such as the Property Mapper. This allows for normalization of formats on the input and adapting results to the formats expected by the receiving ends. The internal format should not be mistaken for JSON, even though it may look similar. If JSON data is received, it must be converted to the internal format before any processing is applied. If the external receiver expects JSON formatted data, it must be converted on the output. These conversions are in most cases handled by the input/output modules, but explicit conversion can also be done using the JSON module, which can convert back and forth between JSON and the internal format.
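
As an analogy only (inside the node the data lives as dynamic .NET objects, not Python dictionaries), the conversion on input and output works much like parsing JSON text into native objects and serializing it back:

```python
import json

# Illustration only: what the JSON conversion conceptually does.
incoming = '{"sensor": "temp-01", "value": 21.5}'   # JSON text from an external source

message = json.loads(incoming)                       # convert to a structured object before processing
message["value_f"] = message["value"] * 9 / 5 + 32   # processing works on the structured data

outgoing = json.dumps(message)                       # convert back to JSON text for a receiver that expects JSON
print(outgoing)
```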

Flow

The Flow Studio#

The Flow Studio is the design tool to build and test flows. It's a drag-and-drop editor where you combine modules from the library into processing flows to describe the sequence of operations you want to apply to your data.

Flow Studio Editor

1 MONITOR:
- Dashboard:
Opens the ADP Cloud dashboard where you find a summary about the messages, the traffic, the ADP Edge node status, and the log events.
- Events:
Opens the Events page. Here you can see the events created automatically by the ADP Cloud or the ADP Edge node.
MANAGE:
- Flows:
Opens the Flows page. Here you manage your flows.
- FlowApps:
Opens the FlowApps page. Here you can manage your FlowApps.
- Resources:
Opens the Resources page. Here you can add resources from the library to your flow or create new resources.
- Parameters:
Opens the Parameters page. Here you can add flow parameters to use the same flow on multiple ADP Edge nodes or to use the same setting in multiple flows.
- Credentials: Opens the Credentials page. Here you can add credentials needed by flow modules when accessing external services, for example, user name/password or API keys
- Modules:
Opens the Modules page. Here you can explore the module library and display more information about the module.
- Nodes:
Opens the Nodes page. Here you can manage the ADP Edge nodes.
2 Left-side menu:
- Modules:
Opens the module library. Here you can view all modules or just modules of a specific type (Input, Analytics, or Output) by using the tabs to the right of the module list. You can also search for modules by entering parts of the name at the top of the list.
- Flows: Opens the flow library. Here you can display a subset of the available flows based on your favorites. You can find any flow by using the search function or by selecting a category.
- Flow Apps: Opens a FlowApp (template) to use as a starting point for a new flow.
3 Drawing canvas:
In the center of the screen you have the drawing canvas where you build your flow.
4 Right-side menu:
- Test & Debug: See the output of any module where debugging is enabled.
- Notifications: See errors and warnings from the flow when running a debug session.
- Settings: Here you can change the description of this flow version, add notes, and specify settings that are common for the whole flow, such as the ADP Edge node version required.
- Resources: Add resources from the library to your flow or create new resources.
- Annotations: Displays the added annotations for that flow.

Module Settings#

Each module has settings that in most cases need to be configured for each use. To access these settings, hover the mouse over the module to show the action menu and click the settings button to open them. You can move the settings windows by dragging them, and you can have settings panels open for two modules at a time (when you open a third, the first panel will be closed).

The Drawing Canvas#

The drawing canvas is the large space in the middle of the Flow Studio page. This is where you build your flow by adding modules from the library and connecting them together to specify the sequence of operations you want. You add modules by dragging them from the library and dropping them anywhere on the canvas. The modules are designed with inputs on the left side and outputs on the right side. Therefore, it's natural to start with input modules to the left and then add all the modules needed step-by-step until you reach an output module where the messages leave the flow. With one exception (the Split module) modules have one input and one output. You can connect any number of inputs to the output of each module, for example, if you want to send the same data to multiple destinations.

Panning And Zooming#

When working with large flows, you can use panning and zooming to focus on a specific part. There are several ways you can navigate in the flow:

  • Pan the flow using the mouse by clicking and holding on some empty space. If you have zoomed in, you can also pan using the mini-view in the lower-right corner of the canvas. If you don't want to use the mini-view, click to close it.
  • Zoom by using the mouse scroll wheel, the zoom controls in the lower-right corner of the canvas, or the zoom menu in the right-side menu.

Selecting And Moving Modules#

Individual modules on the canvas can be moved around by clicking, holding, and dragging. You can also move multiple modules by first selecting them and then holding and dragging somewhere inside the selection. You select modules either by clicking on them, by shift-clicking to add modules to the selection, or by dragging a rectangle around them. Rectangle selection is done either by holding down the shift key while clicking and dragging in some empty space on the canvas, or by switching to Select mode using the button at the top of the canvas. When in Select mode, click-and-drag will select instead of move.

Adjusting The Layout#

Sometimes the automatically drawn connections between modules don't look the way you would like. You can then change the layout by first clicking on a connection to select it and then using the handles to adjust its path. There is also a button to delete the connection.

Annotations#

You can document your flows by adding annotations to the canvas. An annotation is a text area that can be freely placed anywhere on the canvas. Markdown syntax is supported to style your comments.

You add annotations by using the button at the top of the canvas and then dragging the annotation to the desired location. Annotations can be set to always show their content (pinned) or to only show the content when hovered over.

In the Annotations panel on the right-hand side, you can see all your annotations, control how they are displayed, and delete them.

The order of the annotations in the Annotations panel can be changed by changing the sequence number assigned to each annotation, using the menu on each annotation.

Module Actions#

When hovering over a module with the mouse, you will see some action buttons and get access to the module menu (see picture below). The quick actions are, from left to right:

  • (Remote debug)
    Use this button to toggle the debug status of this module. When enabled, the output will be shown in the debug window when running the flow in a remote session.
  • (Module settings)
    Use this button to open the settings panel for the module. You can open settings for two modules at the same time. If you open settings for a third module, the first panel will be closed.
  • (Delete module)
    Use this button to delete the module.

Click to open the module action menu that provides additional actions:

  • (Duplicate)
    Creates a copy of the module with the same settings. The name will be reverted to the default name, and if you have multiple modules of the same type and name, a number will be added to the name to make all module names unique.
  • (Redraw input path)
    Resets the layout changes of the input path.
  • (Redraw output path)
    Resets the layout changes of the output path.
  • (View module)
    Use this button to open the module details panel to get more information about the module and its versions. You can also install a new version of the module via the "Administration" tab.

Module Notifications#

The status of a module is indicated using icons and colors on the outer circle. The following symbols are used (see picture below):

  • Debugging is enabled. Output from this module will be shown in the debug window when running remote sessions.
  • The module has parameter overrides. You can manage parameters through the flow action menu, see Flow Actions below.
  • There are notifications from the module. The color indicates the most severe type of notification: red for errors, yellow for warnings, and blue for information. Click on the symbol to see a list of notifications. Warnings and errors generated during remote sessions will also be shown in their own panel, and the outer ring will change its color and start to pulsate. Large messages can be opened in a pop-up window by clicking on the symbol. Closed messages can be opened again from the notifications list. Notifications from remote sessions are also shown in the Notifications panel accessed from the right-side menu.

Creating A Flow From Scratch#

Let's start with a short end-to-end tour of creating a flow to get an overview. For an example, see Create Flow.

To create a flow from scratch, you have three choices:

  1. Click FLOW STUDIO to open the Flow Studio. You will then get an unnamed empty flow in the editor that you can start to work on. When you save it, you need to give it at least a name and a description.
  2. On the Flows page, click . This will open a dialog where you have to provide a name and a description for the flow. You will then end up in the Flow Studio editor with an empty drawing canvas.
  3. If you are already in the Flow Studio and have another flow open but want to start working on a new flow, click to the right of the tabs above the canvas. This will open a new tab with an empty flow, just like option 1 above.

You start building your flow by adding modules from the library browser (left-hand side) by dragging or double-clicking the modules you want to use. You then connect the modules to each other to specify the sequence of operations you want to perform. To connect two modules, start by clicking and holding on the output of the first module. All possible inputs that you can use will now be highlighted in green. Drag the mouse over the module you want to connect to and release the mouse. A line will now be added to show this connection. To remove a connection line, click . In many cases it’s a good approach to start with just a few modules and verify this part of the flow before adding more modules. Otherwise, you risk ending up with lots of errors that may be hard to locate.

Creating A Flow From A FlowApp#

Instead of starting with an empty flow, you can use one of the existing FlowApps (templates) as a starting point. You get to these by switching to the FlowApps tab on the Flows page. Here you will find examples for different use cases. By selecting a FlowApp in the list, a description will be shown. If you want to use a FlowApp as a starting point, click the branch icon in the Actions column to the right of the FlowApp. This will create a new flow based on this FlowApp and open the Flow Studio editor. You can now adjust the flow to your specific needs by changing settings or adding/removing modules.

It is also possible to open FlowApps from within the Flow Studio. Open the FlowApps panel using the menu on the left side of the canvas.

Test A Flow#

Once you have completed a new or modified flow, it’s time to test it. You do this by connecting the Flow Studio tool to an ADP Edge node. There are two options here:

  • Sandbox - If your flow only accesses systems that can be reached from the internet or has no external connectivity, you can use an ADP Edge node hosted by Balluff to test your flow. We call these "Sandbox".
  • Local node - If your flow connects to on-premise systems, for example, PLCs or databases, you have to test the flow on an ADP Edge node that has access to these systems.

To connect the editor to an ADP Edge node for testing, you click in the right-side menu bar. This will open up the Test & Debug panel which has two tabs: Connect and Debug. On the Connect tab, you will see a list of ADP Edge nodes that are available in your organization. Some ADP Edge nodes may be grayed out because they are not compatible with the version requirements for the current flow (see Flow Settings). Click on the ADP Edge node you want to use or expand the sandbox section and click Connect. When you are connected, the ADP Edge node will get a green checkmark and the name is shown in the top-right corner above the canvas. Next to the name of the ADP Edge node, click to start the flow. The flow will now be downloaded by the ADP Edge node and then start executing. The panel then switches to the Debug tab and the area with the ADP Edge node name will get a green background to show that the flow is running. Messages from any modules that have debug enabled will now show up in the debug panel. You can freeze the debug window, which will stop new messages from being added, if you want to look more carefully at the messages already shown. If you have complex messages, you can also open them in a separate pop-up window using the menu in the upper-right corner of each message. Here you can also copy the message as JSON text. You stop the flow using the stop button next to the start button.

Switching Tabs During Remote Sessions#

A remote session is always tied to a specific flow and you can only have one remote session at a time. Once you start a flow, the session will be tied to that flow. If you switch to another flow tab, your session will remain active with the previous flow, but you will not see any debug messages and the Start/Stop buttons are disabled. When you go back to the tab with the flow connected to the remote session, it will be active again. If you want to switch your remote session to another flow, go to that tab and then connect the ADP Edge node again.

Flow Settings#

Each flow has some settings that control its behavior. You find the flow settings by clicking in the right-side menu bar. These settings are tied to the specific version you are working on, that is, each version can have its own flow settings. There are also settings that are common to all versions, like the name of the flow. These settings can only be accessed from the Flows page. The following settings are available for flow drafts and versions:

  • Short description
    This is a description that will be shown when listing flow versions, for example, in the Flow panel inside the Flow Studio and on the Flows page. Our recommendation is to use this to describe what has been changed in this version. It will help your users to track changes.
  • Notes
    Here you can add more detailed information on changes or list specific instructions for running the flow and so on. These notes are only visible within the Flow Studio.
  • Node Version
    Specify the minimum ADP Edge node version required to run this flow. This information will control which versions of modules can be used as well as the list of ADP Edge nodes available for remote sessions. All new flows will get the latest ADP Edge node version by default. If your ADP Edge nodes are not upgraded to a matching version, you will not see any ADP Edge nodes in the remote session Connect panel and you will not be able to deploy your flow to any of your ADP Edge nodes. If you have modules in your flow that are not compatible with the selected ADP Edge node version, you will get validation errors, indicated with a notification symbol on the modules.
  • Halt On Error
    This setting controls what happens with this flow when there are errors. If enabled, any error from a module or the flow itself will cause the flow to stop. If disabled, the flow will continue to run unless the error is so severe that the flow crashes. It will then be restarted automatically. [Only in Node +2.6] If you don't want the flow to continue to run when there are module errors but still want the flow to be restarted, you can use the Halt On Error setting available on each module. If enabled, an error in the module will cause the flow to stop, but since Halt On Error on the flow is disabled, it will be restarted by the ADP Edge node host.
  • Disabled Queues
    If enabled, all queueing of messages in modules will be disabled and modules will not accept new messages until the current message has been processed and delivered to the next module (back pressure). This can be useful when debugging flows with high message rates and a mix of fast and slow modules, for example, when iterating over an array and processing messages with some modules and an external connector. With queues enabled, all messages in the array will be queued in the modules before the external connector module, which typically processes messages much more slowly. It can then be hard to follow the exact sequence of processing through the modules. With queues disabled, one message at a time will be processed through all the modules.
  • Update Modules
    This is a convenience function you can use when you want to update a flow that was built earlier. With this function, you can update all modules to the latest version currently available. It is a good practice to always do this when you start working on a new version of a flow that was created some time back, since modules are updated continuously with bug fixes and new features.

Resources#

The Resources library allows you to store additional files that may be needed by your flows. These files can be of specific types, like Python scripts or OPC tag lists, but you can also store generic files, for example, ML models or XML templates. Once a file has been uploaded, you can add it as a reference to any flow. When the flow is downloaded to an ADP Edge node, the ADP Edge node will also pull down any referenced resources into local storage.

You use the Resources panel to add resources to your flow version. To open this panel, click in the right-side menu bar. Here you can select from the available resources and add them to your flow. Once added, you can also view the content of each resource. To add a resource, click ADD RESOURCE, select the type, and then pick one of the available resources from the list. If the resource you want to use is not yet available in the library, you can also create new resources by clicking inside the Add panel. This will open up a separate window where you can create a new resource, either by entering the data in the UI or by uploading a file.

Notifications#

All notifications reported while running the flow in a remote session will end up in the Notifications panel, which you open by clicking in the right-side menu. This is in addition to the display inside the canvas, where module notifications are shown on the respective modules and flow notifications pop up as a toaster at the bottom of the canvas.

Flow Actions#

In addition to designing and testing flows in the Flow Studio, you also have access to the most common flow management actions through the menu in the tab for each flow. These are also available from the Flows page. Before you can use any of the actions in this menu, the flow must be saved. You will be notified if that is not the case.

  • Manage Deployments
    This will open up the flow deployment tool where you can deploy the flow you just built to any of your ADP Edge nodes. Note that if you deploy a flow, it will be converted into a read-only version, so no further edits can be made.
  • Manage Parameters
    Opens up the parameters tool where you can add or change parameter overrides on any of the modules in your flow.
  • Create FlowApp
    Create a new FlowApp from your flow. The new FlowApp will be added to the FlowApps library for later re-use.
  • New Flow from Draft/Version
    Use this to create a copy of your flow, as a new flow.
  • New Draft from Version
    This action is only available on read-only versions where there is no editable draft. Use it to create a new editable draft.
  • Delete Draft/Version
    Delete the current flow draft or version. Note that a flow version cannot be deleted if it is deployed on ADP Edge nodes. If you want to delete the whole flow including all its versions, this can only be done from the Flows page.

Using Flows built with the old Flow Studio#

The new Flow Studio has been designed to be compatible with existing flows and module versions. There are a few things to consider, though, and if you plan to make major changes to a flow, we strongly recommend that you upgrade all modules to the latest version.

In the following sections we will go through the most important things you need to consider.

Layout#

The layout engine that draws connections between modules has been redesigned. We have tried to get as close as possible to the previous layout when you open an existing flow, but there may be minor differences. Modules will most likely end up in the same place, but the connections may have changed paths. You now have the option to manually adjust any connection to get the layout you want, see Adjusting The Layout.

Module Settings#

In the old Flow Studio, some modules had the settings UI generated automatically from a schema, while others had custom-built UIs, resulting in inconsistent layout and UI functionality. In the new Flow Studio, the module settings panels have a new look-and-feel which is automatically generated for all modules. If you open the settings of an old version of a module, we will try to generate a UI based on the information available, but the result may not always be perfect. Therefore, we strongly recommend that you upgrade all modules to the latest version to get the best possible user experience, at least if you plan to continue working on a flow. If you just want to make minor changes to some settings, you can stick with the existing versions. There is now a convenience function in the Flow Settings panel that you can use to update all modules to the latest version.

Node installation#

The ADP is delivered with a global registration key (.env file). When the ADP compose file is executed, a node is automatically registered with this key. But beware, every time the compose file is executed, a new node is registered. You can also create a so-called single node, which can only be used once. To register a single node:

  • Navigate to ‘Nodes’ in the menu and select the tab ‘Register Nodes’
  • Enter a name for your node and click ‘Add’
  • The node is now added. By clicking ‘Show Credentials’ you can get the Id and Access Key for the node (keep them; both the .env and the docker compose file must be adapted accordingly!)

Flow Deployment And Versions#

Flow Deployment#

Once you have built and tested a new flow in the Flow Studio, it’s time to deploy it on some ADP Edge nodes for production usage. There are two ways you can do this:

Deploying Flows From The Flows Page#

To deploy a flow, open the Flows page. Click the 3-dot menu for the version of the flow you want to deploy and select Manage Deployments. This action will open up a dialog that lists all ADP Edge nodes currently registered in your organization. You can select one or more ADP Edge nodes from the list and then click on DEPLOY THIS VERSION. The flow will then be installed permanently on the selected ADP Edge nodes. Alternatively, you can select a label from the Labels list instead and then click on DEPLOY THIS VERSION. The flow will then be deployed to any ADP Edge node that has the selected label, which can be any number of ADP Edge nodes. When you click on DEPLOY THIS VERSION, the deployment process will start, and you will see the progress of the deployment.

You can access an overview of all deployments of the flow on the DEPLOYMENTS tab. When completed, the status of all nodes will show Started. From this tab you can also manage your deployed flows by selecting one or more nodes from the list. The bottom of the panel will then show some actions, like Stop (temporarily stop the flow), Start (start a stopped flow), and Delete (remove a flow from the selected nodes).

Deploying Flows From The Nodes Page#

Another option is to deploy flows from the Nodes page. You start by selecting an ADP Edge node from the list or filtering out a group of ADP Edge nodes using the Labels column filter. In the detailed view to the right of the ADP Edge nodes table, you can then click + Add Flow. This will open a list with all your flows where you can select a flow and a version of that flow. Finally, you click Add flow and the flow will be deployed to the selected ADP Edge nodes.

Flow Versions#

ADP Cloud has integrated version control of flows. When a flow is deployed on some ADP Edge nodes, the current version is locked (read-only). To make changes, you have to create a new version. You do this by clicking the 3-dot menu of the desired previous version and then selecting New Draft from Version. This will create a new draft which is opened in the Flow Studio editor. This version remains editable until you deploy it to an ADP Edge node; then this flow will be versioned and locked as well.

To upgrade a deployed flow to a new version, you can either deploy the new version using any of the above methods, or select the ADP Edge nodes you want to change on the DEPLOYMENTS tab; this way you can both upgrade and downgrade flow versions. In the list you see all nodes that run the previous version; if you want to upgrade them all, just select all nodes and then click CHANGE TO THIS VERSION.

Analytics Modules Overview#

This section presents the analytics modules available in the ADP module library. Analytics modules are used to transform and harmonize message formats from different sources as well as operate on the actual message content. The modules are presented in groups with related functionality.

Transformation#

Data And Property Mappers#

The Data Mapper and Property Mapper modules are the Swiss army knives for modifying message structure. Use them to:

  • Rename properties
  • Move properties to new hierarchy levels
  • Remove properties
  • Add new properties with static values
  • Copy existing property values to new properties

You can basically do the same thing with both of these modules. The difference is that the Data Mapper uses payload templates for input and output formats and then provides a visual interface for defining the mappings. The Property Mapper has a more configuration-oriented interface where each individual mapping is defined explicitly in the module settings.

Working With Arrays#

Arrays are common both on inputs and outputs but may not be the optimal internal format when applying streaming processing. For example, to apply processing on individual array elements in a streaming pipeline, you need to split up the array into individual messages. Another example is when you want to write data into columns in a database. Then you need an object with key/value pairs that can be mapped against the database columns.

There are several modules in the ADP module library that help you convert back and forth between these different formats. In this environment, an array is typically an array of objects, that is, each element contains multiple values and even hierarchical message structures.
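
As a plain-Python illustration of the two most common conversions (the data and property names are made up for the example; this is not the modules' actual implementation): splitting an array into individual messages, and turning key/value pairs into a flat object that maps onto database columns. The modules listed below provide these and related conversions.

```python
# Illustration only; property names are made up for the example.
readings = [
    {"key": "temperature", "value": 21.5},
    {"key": "pressure", "value": 1013},
]

# Array Split: one message per array element
messages = [element for element in readings]

# Array to Object: key/value pairs -> one object that maps onto database columns
row = {element["key"]: element["value"] for element in readings}

print(messages)
print(row)   # {'temperature': 21.5, 'pressure': 1013}
```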


| Name | Description | Input | Output |
|---|---|---|---|
| Array Split | Breaks up an array into individual messages. | Array | Messages |
| Array Join | Combine a stream of messages into an array by time or message count (opposite of Array Split). | Messages | Array |
| Array to Object | Convert an array into an object with key/value pairs. Each item in the array must have a property holding the key value and another property with the value for that key. | Array | Message |
| Object to Array | Convert an object with key/value pairs into an array. Each element in the array will have a property containing the name of the key and another with the value belonging to that key (opposite of Array To Object). | Message | Array |
| Array Regex | Filter out some elements from an array by matching the value on a specified property with a regular expression. | Array | Array |
| Array Property Pick | Selects some of the properties available in each array element. A list of properties to pick must be provided. | Array | Array |
| Array Property Omit | Remove some of the properties available in each array element. A list of properties to remove must be provided. | Array | Array |
| Array Sort By List | Sort array elements by comparing the values on a specified property against an ordered list of values provided. Typically used together with the Array Property Get module to create lists of values where the position in the array is used to identify a source. | Array | Array |
| Array Property Get | Select the values from a selected property. The output is an array with only values. Useful when feeding, for example, ML models where the position in the array is used to identify a source rather than key/value mappings. | Array | Array of Values |
| Join | This is a variant of the Array Join module that ensures that data for a specified number of sources is always available in each output. If no data has been received for a specific source within the given time period, an output will be generated based on the strategy selected. This module is more complex to set up but is useful in cases where you must ensure that each output has a value for each source, for example, when feeding an ML model. | Messages | Array |

Working With Strings#

| Name | Description | Input | Output |
|---|---|---|---|
| Text Template | Generates text messages based on a template provided in the settings. The template syntax can be used to insert data from the message together with static text. | Message | String |
| String Replace | Replaces a substring with another substring. | String | String |
| String Generator | Appends or writes over a property with a randomly generated string of certain conditions. | String | String |
| String Substring | Select a substring based on start and stop positions. | String | String |
| CSV Line Parser | Breaks up a line of text (string) into substrings based on a delimiter, for example, “,”. The output is an object with one property per substring. These properties are named based on the position in the string, for example, col1, col2… | String | Object with substrings |
| CSV Text Parser | Same operation as the CSV Line Parser module but operating on multi-line strings. The output is an array with one element per line. Each element has the same format as with the CSV Line Parser module. | String | Array of objects with substrings |
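
For illustration, the CSV Line Parser's behaviour corresponds to the following plain-Python sketch (example data; the col1, col2… property names follow the description above):

```python
# Plain-Python sketch of what the CSV Line Parser does with a single line.
line = "2024-05-01,21.5,OK"

parts = line.split(",")                                      # split on the configured delimiter
parsed = {f"col{i + 1}": value for i, value in enumerate(parts)}
print(parsed)   # {'col1': '2024-05-01', 'col2': '21.5', 'col3': 'OK'}
```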

Format Conversions#

Messages sent between modules use a custom .NET data type (FlowMessage) that supports dynamic structures with hierarchies of objects and arrays which can contain basic .NET data types. Sometimes, for example, in the debug window, messages may look like JavaScript objects (JSON) but that is only for presentation purposes. JSON is never used internally. When communicating with external systems, you may get data in JSON or XML format, or the output is expected to be in one of these formats. To work with these types of data, you must convert between the internal format and the external formats. This can be done with conversion modules.

| Name | Description |
|---|---|
| JSON | Converts to and from JSON strings: if the input is a string, it will be interpreted as JSON and the output is a FlowMessage. If the input is a FlowMessage, it will be converted into a JSON string. This module operates on a selected property of the incoming message and the result can be assigned to a new property. |
| XML | Same as the JSON module but working with XML text. |
| Time Stamp | Converts DateTime values into another format. Predefined formats are ISO8601 and Unix timestamp in seconds or milliseconds. Custom output formats can also be defined. If no input is specified, the module will add the current system time on the output. |
| Base64 Encode | Converts a binary array or a string into a base64 encoded string. For string inputs, the output can be set to use the base64url format, to create URL-safe strings. |
| Base64 Decode | Decodes a base64 encoded string into a byte array or string. |
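
For reference, here is the same instant expressed in the formats mentioned for the Time Stamp module (a plain-Python sketch, independent of the module's settings):

```python
from datetime import datetime, timezone

t = datetime(2024, 5, 1, 12, 0, 0, tzinfo=timezone.utc)

iso8601 = t.isoformat()                         # '2024-05-01T12:00:00+00:00'
unix_seconds = int(t.timestamp())               # 1714564800
unix_milliseconds = int(t.timestamp() * 1000)   # 1714564800000
print(iso8601, unix_seconds, unix_milliseconds)
```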

Condition Logic#

Message Filters#

In addition to the modules below, each module has a configurable message filter, which can be found on the Common tab. By default, modules will process every message they receive. By adding a message filter, the module will only process messages matching some criteria. Messages not matching the filter criteria can either be dropped or passed on untouched.

Message filters can be set to match conditions on numeric property values (=, <, >), string property values (contains, does not contain, equal/not equal, isNull, isNotNull, regex), and boolean values (isTrue, isFalse). Multiple conditions can be used.
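
Conceptually, a message filter evaluates such conditions against each incoming message, and the module then either processes the message, drops it, or passes it on untouched. A minimal sketch in plain Python with made-up property names (not the filter's actual implementation):

```python
# Sketch of message-filter semantics: numeric condition combined with a string condition.
def matches(message):
    return message.get("value", 0) > 20 and "OK" in message.get("status", "")

message = {"value": 21.5, "status": "OK"}
if matches(message):
    pass   # the module processes the message
else:
    pass   # the message is dropped or passed on untouched, depending on configuration
```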

Filtering Modules#

In addition to the message filters above, the following modules can be used to control which messages are passed on to the next module(s), based on different criteria.

| Name | Description |
|---|---|
| Range Filter | Only lets through messages where a selected value is either within or outside of a specified range. The same effect can be achieved by using message filters. |
| Report By Exception | Only lets through messages when the value of a specified property changes. Can be used to remove duplicates. |
| Split | Splits a stream of messages into multiple paths by defining conditions on message data. Multiple conditions can be specified for each path. Each path will get a separate output on the module. |
| Throttle | Limits the rate of messages, for example, only let through a maximum of one message per second. Can also be used to spread out messages in time by enforcing a minimum delay between messages, which can be used to smooth out bursty traffic. |

Intelligent Logic#

Operating On Message Data#

The following modules will create new data by processing the values in messages. They all work on numerical values and the result can either be assigned to a new property or the original value can be overwritten.

| Name | Description |
|---|---|
| Aggregate | Calculates average, min, and max on a selected property from a group of messages. The group of messages to use can either be selected by specifying a time period or a number of messages. Calculations can be performed independently for messages coming from different sources by specifying a property that indicates the source. At the end of each interval, a message is delivered for each source with the calculated statistics. |
| Math Expression | Executes generic mathematical expressions on message data. The expression is specified using template syntax where message data can be inserted into the mathematical expression, for example, ‘Abs({data.cur_temp} - {data.prev_temp})’. |
| Range Classifier | Adds a text classification based on ranges of values on a selected property, for example, ‘Low’, ‘Med’, ‘High’. |
| Scale | Scales a value on a selected property ‘(value*scale)-offset’, for example, to convert temperature values from Fahrenheit to Celsius. Note: This operation can also be performed with the Math Expression module. |
| Smooth | Applies an exponential smoothing filter on a property value, for example, to remove noise. |
| Statistics | Same as the Aggregate module but operating on a rolling window of messages, that is, an output message is generated for each input message. This module will also calculate the standard deviation of the values. |
| Toggle | Toggles the value of a Boolean property or toggles the value of an internal variable and adds it to the output message. |
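
For reference, exponential smoothing as applied by the Smooth module follows the standard recurrence s = alpha * x + (1 - alpha) * s_previous; the module's exact parameter names may differ. A minimal Python sketch:

```python
# Standard exponential smoothing (the module's parameter names may differ).
def smooth(values, alpha=0.3):
    smoothed = []
    s = None
    for x in values:
        s = x if s is None else alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

print(smooth([20.0, 25.0, 40.0, 22.0]))

```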
Code Modules#

With code modules, you can apply a custom processing of messages using either C# or Python code. The code can either be entered directly in the settings UI or code files can be uploaded to the Resource library and then referenced from within the modules.

| Name | Description |
|---|---|
| Csharp | Executes C# code. The code is compiled at runtime. Standard .NET libraries can be used but not 3rd party libraries. |
| IronPython | Runs Python code using the IronPython interpreter, which runs in .NET. Standard libraries can be used but not 3rd party libraries. Supports Python 2.7 code. |
| Python Bridge | Runs Python code in a standard Python 3.9 environment outside of .NET. On Windows, the Python environment must be installed separately and the local node configuration must be updated accordingly (see installation docs). Standard and 3rd party libraries can be used, including ML frameworks. |
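
The logic you place in a code module is ordinary per-message processing. The sketch below is generic Python with made-up property names; the exact entry point (how the hosting module passes the message to your code and collects the result) is defined by the respective module's documentation and is not shown here.

```python
# Generic example of per-message logic you might place in a Python code module.
# The hosting module's actual entry-point convention is not shown here.
def process(message: dict) -> dict:
    fahrenheit = message.get("temperature_f", 0)
    message["temperature_c"] = round((fahrenheit - 32) * 5 / 9, 2)
    return message

print(process({"temperature_f": 72}))   # {'temperature_f': 72, 'temperature_c': 22.22}
```
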
Counter Modules#

| Name | Description |
|---|---|
| Message Counter | Counts the values seen in a selected message property over a time period. At the end of the period, the number of occurrences of each value is listed together with the relative count of each value. Useful for KPI calculations, such as "Yield". If no source is specified, the total number of messages received is reported. |
| Time Counter | Counts the time spent in different states by looking at the values received on a specified property and the time between value changes. At the end of the period, the total time spent in each state (value) is reported together with the relative times. Useful for KPI calculations such as "Availability". |
| Timeout | Measures the time since the last update on a selected property. If no update is seen within a specified timeout period, a message is sent out. Use this module to monitor expected traffic patterns. For example, if you know that data should arrive every second, this module can be used to trigger an alert if no update is seen within 2 seconds. By keeping track of individual property values, the module can operate on multi-source streams and trigger an alert as soon as a value is missing from one of the sources. |
Storage Modules#

Sometimes you need to store message data temporarily and then these modules will come in handy.

| Name | Description |
|---|---|
| Memory Buffer | Keeps messages until acknowledged by an external signal. It is designed to work with any output module to keep messages until successfully delivered. Output modules produce a success value indicating if the external delivery was successful or not. This value can be fed back to the Memory Buffer module. The size of the buffer as well as retry strategies can be defined. |
| State | Stores any message data including complex structures and adds the stored value to all messages it receives. This can be used to store data that is updated infrequently but is needed on every message from another source, for example, to add configuration data to each new sensor message. |

Universal Connectors#

Introduction#

Universal Connectors are generic modules used to connect to REST APIs. Using the Universal Connector (UC) wizard, you can build your own reusable modules to simplify getting or sending data from/to external APIs when building flows in the Flow Studio. A UC can use settings provided by the user or data from flow messages when building the API calls. You can also provide your own custom icon and documentation for UCs. Once published, a UC will look just like any other module in the Flow Studio library. UCs also support versioning so that you can update the module over time and then select the version to use when configuring the module in the Flow Studio editor.

Note 1: You must have permission to create your own UCs or modify existing ones. However, all users can use the UCs when creating flows.

Note 2: This tool is intended for users with basic understanding of REST APIs and API documentation.

Parameters#

A key concept when building a UC is the parameter syntax. When you enter a name inside curly braces somewhere in your API request configuration like {param1}, "param1" will be treated as a configurable parameter and the actual value will be assigned at runtime. Parameters can be used in URLs, in headers, in query parameters, and in the message body for POST/PUT requests. There is no limit on the number of parameters that can be used. For each parameter, you then decide from where to get the actual value. The options are:

  • User setting - The parameter will be presented in the module UI so that you can enter a value. You can provide a default value. Note that user settings are not allowed to be left empty when using the module in the Flow Studio.
  • Message parameter - The value will be taken from an incoming flow message. The property on the message must have the same name as the parameter and be located at the root of the message.
  • User setting if set - The parameter will be presented in the UI. If no value has been entered, it will be taken from the incoming message, same restrictions as above.
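
Conceptually, the parameter syntax works like simple string substitution: each {name} placeholder in the request configuration is replaced at runtime with the value resolved from a user setting or from the incoming message. A plain-Python illustration with made-up parameter names:

```python
# Illustration of the {parameter} substitution concept (hypothetical names and values).
url_template = "https://api.example.com/{resource}/{deviceId}"

values = {
    "resource": "measurements",   # e.g. taken from a user setting
    "deviceId": "sensor-42",      # e.g. taken from the incoming flow message
}

url = url_template.format(**values)
print(url)   # https://api.example.com/measurements/sensor-42
```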

The Wizard#

To simplify the setup of UCs, a wizard is available that will take you through seven steps to create your own module. The wizard can be found on the Universal Connectors page. The table shows all currently defined UCs. To build a new UC, click on +Add Connector in the upper-right corner. This will open the wizard. Follow the seven steps described below to complete your new module. Inside the wizard, you can use the Next/Previous buttons or click on any of the icons at the top to jump between the steps.

UC Table

1. General#

In this step, you define the following:

  • Name - The name of the module. This is used in all references to this module, like in the UC table, in the module library, in the Flow Studio, and so on. Since this name will be shown together with the icon in the Flow Studio, you should preferably keep the name short. Only characters, numbers, and spaces are allowed in names.
  • Version - The version number of the module in x.y.z syntax. When creating a new version of an existing UC, the version number must be higher.
  • Module Type - Here you specify the connections on the module depending on whether it’s an input or output module. All modules have an input connection since an API request from an input module needs to be triggered by an incoming message. If this is a UC that will get data from an external API, check Has Output.
  • Icon - Select a custom icon for your UC. Load a PNG file from your local drive using the Select icon button. The image will be scaled to fit on the module in the Flow Studio, but for best results, your image should be square and not have a higher resolution than 100x100 pixels. It is optional to specify a custom icon. If no icon has been loaded, the ADP icon will be used. However, using your own icon is definitely recommended since it will make it a lot easier to find your module when building flows.
  • Description - This is an optional field where you can add a description of your UC for future reference. It is not used outside the wizard.
  • Categories - Currently not used.

2. Authentication#

If your API requires authentication, it is set up in this step. If no authentication is needed, you can skip this step. If your API uses basic, bearer, or OAuth2 authentication, you have the option to get the credentials from the central credentials store, see Test for more information. The only thing you need to do then is to select API key for bearer authentication, Username and Password for basic authentication, or one of the supported OAuth methods in the Credential drop-down list. When a user uses this module in the Flow Studio, the available credentials will be presented in a drop-down list and added to the API request.

If your API uses some other kind of authentication, you can add the required headers or query parameters by filling out the relevant fields and then clicking the + button on the right. Any number of headers and query parameters can be added, and the parameter syntax described above can be used to fill in any part of the authentication settings when the module is used.

If your API uses data from one of the supported credential types but in a non-standard way, use the option Use credential as setting. The tool will then not automatically apply the credentials with the standard methods, but the credential values are available so that you can use them as parameter values to create custom authentications. Build the request as required by your service by using parameters. Then, on the Configuration step, select User setting as source, Credential as type, and then the value you want to assign to this parameter from the list of available values in the credential from the Requirement drop-down list. The Credentials drop-down list appears when using the module and the available credentials of the type you have selected will be shown. The content of the selected credential will, however, be used in a customized way, based on your configuration.

If you are connecting to a service that uses, for example, self-signed certificates, you can enable Allow untrusted certificates to disable certificate validation.

3. Configuration#

In this step, you configure the main parts of your API request. The following settings are available:

  • URL - The URL for your endpoint. This must be a complete URL including path, for example, https://api.example.com/ is a valid URL, while https://api.example.com is not. URLs can be configured using parameters, for example, https://api.example.com/{resource}, where "resource" will be replaced at runtime depending on how this parameter has been configured (see next step).
  • Action - The HTTP verb (action) to use. For actions where a payload body can be used, like POST/PUT, a "Body" field is shown as well.
  • Headers and Query parameters - Add any headers or query parameters needed by filling out the relevant fields and clicking the + button. Multiple headers and query parameters can be added, and the parameter syntax described above can be used to make parts of the request adjustable when the module is used.
  • Body - For actions that require a body payload, the "Body" field is used. You can enter any text here and also combine it with configurable parameters. If you want the whole body to be taken from the incoming message, just enter for example {body} and set the "Body" parameter to Message parameter. Then any data assigned to the body property on the incoming message will be inserted as the body payload.
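
At runtime, the UC assembles and sends the configured request for you. As a rough illustration of what an equivalent hand-written call would look like for a POST with one header, one query parameter, and a body taken from the incoming message, here is a sketch using the Python requests library (all names and values are made up):

```python
import requests

# Roughly what a UC configured with action POST, one header, one query
# parameter, and {body} as the payload would send (illustrative values only).
message = {"body": {"deviceId": "sensor-42", "value": 21.5}}

response = requests.post(
    "https://api.example.com/measurements",        # URL, possibly built with {parameters}
    headers={"Content-Type": "application/json"},  # configured header
    params={"site": "plant-1"},                    # configured query parameter
    json=message["body"],                          # body taken from the incoming message
)
print(response.status_code)
```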

4. Usage settings#

In this step, you configure any parameters you have used when configuring your API request in the previous step. If you haven’t used any configurable parameters, you can skip this step. If your UC has an output, there will always be one default parameter called "targetPropertyParameter", so that the user can choose the message property to write the result of the API request to. The only thing you can change on this parameter is the default value and, if there are more parameters, its position in the UI (see below). For each parameter, you must specify the following:

  • Source - Where to get data from for this parameter. Available options are "User setting", "Message parameter", and "User setting if set"; see the Parameters section above for more information on these choices.
  • Display name - [User settings only] The text displayed above the parameter input field in the module UI.
  • Type - [User settings only] The expected data type. Available options: String, Number, or Bool.
  • Requirements - [String and Number, User settings only] Specify min/max lengths for strings or min/max values for numbers. These settings are optional.
  • Default value - [User settings only] Default value for the parameter. May be left empty but note that a value must be entered when using the module. It is not allowed to have empty input fields on a UC module in a flow.
  • Help text - [User settings only] Text written under the input field in the UI. Use this to help the user enter the expected data. This setting is optional.
  • Purpose - [User settings only] Text used to describe the parameter in the Settings table on the Info tab in the module UI (see also the Documentation step below).

The order of the settings in the UI can be changed by using the up/down arrows to the right of each setting.

5. Documentation#

The documentation step is used to add in-module documentation, that is, documentation available to the users in the Flow Studio. The structure is set up in the same way as for modules in the standard library, so that users can easily recognize the documentation. The left-hand side of this page is the input for markdown text and the right-hand side shows the result. There are two sections for the documentation:

  • Connector description - This is the text shown when hovering the module in the Flow Studio module browser.
  • Module documentation - This is the text shown on the Info tab of the module settings window. It is divided into three sections, where the first is the introductory text. Here, you typically describe what the module does and when to use it. The second part is the Settings table. It is generated automatically based on information already entered. The last section is used to describe what input messages this module expects, what the output will look like, and preferably some examples showing the use of the module.

6. Release notes#

The release notes will be displayed for all versions when the new draft is published.

7. Summary#

The Summary step combines all information entered and shows what the module will look like in the Flow Studio, with the module icon, hover text, settings and documentation UI. If you are happy with the result, click Create and the module will be built.

Publishing Versions#

When you have created a UC, it must be published before it can be used in the Flow Studio. In the UC table, hover over the UC version you want to publish and click . The UC will now be published and you will see it in the list of modules on the Modules tab, listed as a "Custom" module, which means that it is only available to users within your organization.

It is not possible to modify a published UC but you can update it by creating a new version. Click . A new version will be created and opened in the wizard. When you are done with updating the new version, it must be published as above, before it shows up in the Flow Studio. When adding a UC module to a flow, it will by default get the latest version. If you already have a UC in a flow and want to update to the latest version, open the settings UI and use the drop-down list at the top to select the new version. When you create a new version of a UC, the following settings cannot be changed:

  • The name of the module
  • The type (Input/Output) of the module.

If you want to change any of these settings, you can create a new UC and use a version of an existing UC as the starting point. To do so, click . To edit a version, click .

Credentials#

Note: Permission to manage credentials is needed to use the functionality as described. However, all users can use credentials when configuring modules in flows.

The credentials library is a central repository for storing credentials needed by flow modules when accessing external services, for example, user name/password or API keys. Once a credential has been added to the library, it can be selected in the module settings by referencing the name of the credential. In this way, only the credential managers will ever see the actual data used. Regular users will only access credentials by name referral. Credentials are stored encrypted in the ADP Cloud and are also delivered to and stored in the ADP Edge nodes encrypted. When configuring a module, only credentials supported by that module will be available. A module can support multiple credential types.

Credentials

The Credentials page lists all currently available credentials and their type. To add a new credential, click the + Add Credential button on the Credentials page or click the New button next to the credential selector in the settings UI on modules that use credentials. In the form that is opened, start by giving the credential a name. This is the name shown to users when configuring a module, so you should preferably use a name that helps the users to select the appropriate credential. Then select a type and enter the actual credential settings, see Credential Types below for information on supported types. There is also a description field that can be used to enter any relevant information about the credential, for example, expiration times, endpoints supported, and so on. The description can be seen by expanding the row for a credential in the table on the Credentials page.

Credential Types#

The following credential types are currently supported:

  • API Key - Used for HTTP Bearer authentication. Just enter the key here. The authorization header and the bearer keyword are added by the modules when using this credential.
  • AWS Credential - Credential for access to AWS services using access keys. Enter Access key, Secret access key and Region. The region is specified using the short-form format found here https://docs.aws.amazon.com/general/latest/gr/rande.html.
  • Azure access key - Credentials for accessing Azure services using an access key.
  • Azure Device credential - Credential for devices connecting to the IoT hub or IoT edge. Provide the Device ID and Device Key.
  • Azure Shared Access Signature - Credentials for accessing Azure services using a shared access signature. These must be generated in the Azure portal or using the Azure CLI. If using the portal, copy the SAS token but without the initial question mark (it is provided as a query parameter).
  • Certificate - Upload a certificate file either from local storage or by copying the data into the editor window. Provide a password to use with the certificate.
  • Connection string - Credentials for services using a connection string.
  • Data - This type allows you to upload any type of credentials data, either as a file from local storage or by entering data in the editor. ADP Cloud will not try to interpret the data and it is provided as is to the modules. It is then up to each module to use this information appropriately. For example, it can be used with custom modules that use an authentication method not covered by any of the other types.
  • InfluxCredential - Credential for accessing the InfluxDB. Provide Token = BallufSecretToken, Organization Id = balluff.
  • OAuth Authorization code grant - Provide: ClientID, ClientSecret, Refresh token, and Token renewal URL. Scope is optional. See the example below for how to get these.
  • OAuth Client credential grant - Provide: ClientID, ClientSecret and Authorization URL. Token renewal URL and Scope are optional.
  • Username and Password - Credentials for services using user name and password authentication. Also used for HTTP Basic authentication.

OAuth Code Grant example - Google#

The following example shows how to create credentials for accessing Google services, such as Docs, Drive and Gmail, using the Web server variant. To get the right credential, you need to give user consent by following the steps below. You can skip steps 1-3 if you’ve already completed them for another Google module in ADP.

Prerequisites#

  • Google Workspace
  • Access to the Google API Console for your organization
  • Access to a Google Account within your Google Workspace organization
  1. If you don’t already have one, create a project in the Google API Console and enable the APIs you want to use, for example, the Gmail API for emails or the Drive API for working with documents.
  2. Create an OAuth Client ID from the Credentials page. Choose Application type: Web application and add https://adp-cloud.balluff.com as an authorized redirect URI. Click Create.
  3. Save the "Client ID" and the "Client Secret" in a safe location.
  4. Open this URL in the browser, but fill in the "Client ID" you just saved and the access rights you want to enable in the scope, for example, gmail.readonly for reading emails or drive to access documents: https://accounts.google.com/o/oauth2/v2/auth?client_id=YOUR_CLIENT_ID&redirect_uri=https://adp-cloud.balluff.com&response_type=code&access_type=offline&scope=https://www.googleapis.com/auth/[YOUR_ACCESS]. You can add multiple access rights by combining full scope URLs with “+”.
  5. Sign into your Google Workspace account on the landing page opened above and grant Balluff the permissions you have specified.
  6. When you are redirected to adp-cloud.balluff.com, copy the Code parameter from the URL and save it in a safe location.

By the end of step 6, you should have a Client ID, Client Secret, and Code saved in a safe location.
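
If you prefer to assemble the consent URL from step 4 programmatically instead of editing it by hand, the following Python sketch builds it; the client ID and scopes are placeholders, and the redirect URI must match the one authorized in step 2.

from urllib.parse import urlencode

params = {
    "client_id": "YOUR_CLIENT_ID",
    "redirect_uri": "https://adp-cloud.balluff.com",  # must match the authorized redirect URI
    "response_type": "code",
    "access_type": "offline",  # required to receive a refresh token
    # Space-separated scopes are encoded with "+" in the final URL.
    "scope": "https://www.googleapis.com/auth/gmail.readonly https://www.googleapis.com/auth/drive",
}
print("https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params))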

Create Credential In ADP Cloud#

  1. Create a new credential in the ADP Cloud Platform. Select type: OAuth Authorization Code Grant.
  2. Enter {refresh_token} as the refresh token.
  3. Enter your Client ID and Client Secret.
  4. Enter https://oauth2.googleapis.com/token as the Token Renewal URL and the same scope that you used above, for example, https://www.googleapis.com/auth/drive.
  5. Create a request by clicking the + sign above the settings. In the URL field, enter: https://oauth2.googleapis.com/token.
  6. Click Action POST.
  7. Select Content-Type: application/x-www-form-urlencoded.
  8. Under Body, paste the following and provide the code you saved above (the same token exchange is sketched in the example after these steps): code=[YOUR_CODE]&client_id={clientId}&client_secret={clientSecret}&redirect_uri=https://adp-cloud.balluff.com&grant_type=authorization_code.
  9. Click Execute to run your request and make sure you get a status of 200.
  10. Exit the request popup.
  11. Click Test Credentials and make sure you get a status 200.
  12. Click Add Credential.
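
The request configured in steps 5-9 can also be reproduced outside ADP Cloud, for example to verify your Client ID, Client Secret, and Code before creating the credential. The following is a minimal sketch using Python's requests library; all values are placeholders, and the response should contain the refresh_token and access_token.

import requests

resp = requests.post(
    "https://oauth2.googleapis.com/token",
    data={
        "code": "YOUR_CODE",  # the Code saved in step 6 of the consent flow
        "client_id": "YOUR_CLIENT_ID",
        "client_secret": "YOUR_CLIENT_SECRET",
        "redirect_uri": "https://adp-cloud.balluff.com",  # same URI as in the consent request
        "grant_type": "authorization_code",
    },  # requests sends this body as application/x-www-form-urlencoded
)
resp.raise_for_status()  # expect HTTP 200
tokens = resp.json()
print("refresh_token:", tokens.get("refresh_token"))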

Resources#

The ADP Cloud service supports the use of centrally managed resources. A resource is something that is either used when configuring modules in flows, such as OPC UA tag lists or Modbus register mappings, or a file that needs to be downloaded into the ADP Edge nodes when deploying flows. File resources can then be accessed from within modules when they run on an ADP Edge node, for example, Python or C# scripts that are imported in the respective code modules, machine learning models that are used inside a code module, or a CSV file with static data that is accessed with the CsvReader module. The Resources page in the ADP Cloud web UI is used to manage resources. The main page shows all currently available resources, and new resources can be uploaded from here. A resource can be created by uploading a file from local storage or by entering the information directly in the UI.

Some general things to note about resources:

  • All resources have a name. This name is used when referring to the resource in the UI, for example, on the main resource page and when selecting the resource for use in a flow. Resources that are downloaded into the ADP Edge nodes such as file and script resources also have a local name. This is the local file name used to refer to the file from inside a module. See below for more details on how this is used for the different resource types.
  • Once successfully created, a resource cannot be changed or removed. However, it can be archived. The reason is that an uploaded resource may be used in a flow and changing or removing the resource may have undesirable effects on the running flows.
  • An archived resource cannot be used when configuring a new flow version. Existing flows that use the resource will still get access though.
  • Resources can be downloaded to your local computer, for example, to modify them and upload a new version. There is however a size limitation: files larger than 10 MB cannot be downloaded.
  • Small resources (<1 MB) can be viewed in the UI and the content can be copied into a new resource, which is useful for small changes. Note: You must have permission to manage resources to be allowed to add resources. However, all users can use the resources when creating flows.

Add A New Resource#

To add a new resource, click the + Add Resource button and fill in the necessary information in the dialog. All resources must have a unique name and a type. Except for the Modbus and OPC types, a local name must also be provided.

Add new resource

Alternatively, you can add resources on the Resources page by clicking Add Resource.

Resource Types#

The following sections describe how to create and use the different types of resources.

File Resources#

A file resource is any type of file that is needed locally on an ADP Edge node by the flow running on that ADP Edge node. Examples are machine learning models that are loaded into some ML framework in a PythonBridge module or a CSV file accessed by the CsvReader module. The service makes no assumptions on the format or content of these files; it will just make sure that they are downloaded to any ADP Edge node that runs flows referencing these resources. File resources must have a local file name, which can contain a path. A local name can only include alphanumeric characters plus . and /. For example, bug.csv and a/b/c/test.csv are valid names while bug_2.csv is not (underscores are not allowed). When accessing a file from within a module in a flow, for example, the CsvReader, it should be referenced using a base path of data/flowresources/, that is, the second file above will be available as data/flowresources/a/b/c/test.csv. To use a file resource in a flow it must be added on the Resource tab in the flow settings:

File resource

Start entering the name of the resource to display a list of available resources. Select the right resource and then click the + button to add the resource to the flow. The Type drop-down list can be used to limit the choices shown. A resource that has been added here will be downloaded to any ADP Edge node where this flow is deployed. The local name to use when referencing the resource from within a module is shown in the Path column.
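
From inside a code module, a file resource is simply a file under the data/flowresources/ base path. The following is only an illustration, reusing the a/b/c/test.csv example from above; the exact execution context of a PythonBridge script may differ.

import csv

# File resources are downloaded below this base path on the ADP Edge node.
RESOURCE_PATH = "data/flowresources/a/b/c/test.csv"

with open(RESOURCE_PATH, newline="") as f:
    rows = list(csv.DictReader(f))

print(f"Loaded {len(rows)} rows from the CSV resource")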

Python And C# Script Resources#

Python/C# script resources are very similar to file resources, that is, they will be downloaded to the ADP Edge nodes where they are referenced and the cloud service will not validate their content. The difference is that these resources are intended for use in a specific module, namely the PythonBridge and Csharp modules respectively. It is therefore possible to set up these resources directly from within each module, that is, they don’t need to be added as flow resources.

The UI is the same as described above for file resources, but now available directly from within the code modules. Also, the proper import statement needed to use these files is automatically added to the code window if a script file is added. It is also possible to load generic file resources, for example, ML models, from within the code modules. In this way, all resources needed by code modules can be managed from within the modules.
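
As an illustration of the pattern only: assume a script resource uploaded with the local name helpers.py that defines a function normalize. Both the file name and the function are hypothetical, and the handler shown is a placeholder, not the actual PythonBridge contract.

# An import of this kind is added automatically to the code window
# when the script resource is attached to the module.
import helpers  # hypothetical script resource with local name helpers.py

def handle(msg):
    # "normalize" is an invented helper; use whatever your script actually provides.
    msg["value"] = helpers.normalize(msg["value"])
    return msg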

OPC Resources#

OPC resources are tag lists that can be used when configuring the OPC UA Subscriber and Reader modules instead of typing each tag in the module UI. When using an OPC resource, all the tags in the resource will be used by the OPC UA modules. Any tags entered manually in the UI or loaded dynamically from a flow message will be added to the list of tags in the resource file. The format of OPC resources is a JSON array with an object per tag:

[
  {
    "nodeId": "ns=4;i=1244",
    "path": "/4:Boilers/4:Boiler #1/4:PipeX001/4:FTX001/4:Output",
    "customType": "tempSensor",
    "customScale": 1.5
  },
  {
      ...
  }
]

The nodeId property is mandatory and should follow the OPC UA syntax for node IDs.

Any other properties added will be included in the output message for the tag. Custom properties can, for example, be used to add metadata to allow filtering/grouping of tags in other modules in the flow or apply per tag specific scaling/conversion parameters that can be used in other modules to change the values.

This format is also used when loading tags dynamically from flow messages and matches the output produced by the OPC UA Browser module. Note that the output of the OPC UA Browser module contains several properties in addition to nodeId and these will be treated as custom properties by the Subscriber and Reader modules. If you don’t want to have these properties in the output, they must be removed with a Property Mapper module between the Browser and Subscriber/Reader module.
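
For large address spaces it can be convenient to generate the tag-list JSON programmatically rather than editing it by hand. A minimal Python sketch; the node IDs and custom properties are invented for the example.

import json

# Build a tag list in the OPC resource format described above.
tags = [
    {
        "nodeId": f"ns=4;i={1244 + i}",  # mandatory, OPC UA node ID syntax
        "customType": "tempSensor",      # optional custom metadata
        "customScale": 1.5,
    }
    for i in range(3)
]

with open("boiler-tags.json", "w") as f:
    json.dump(tags, f, indent=2)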

Modbus Resources#

Modbus resources are used to specify the register mappings for different PLCs. When the resource is used in the Modbus Reader module, all of the listed tags will be retrieved. Additional tags can be added in the module UI. A JSON file is used to define these mappings. This is an example of such a file:

{
  "name": "PLC1",
  "unitId": "1",
  "tags": [
    {
      "id": "tag1",
      "modbusDataType": "Int",
      "modbusFunction": "ReadHoldingRegisters",
      "address": "0001"
    },
    {
      "id": "tag2",
      "modbusDataType": "Byte",
      "modbusFunction": "ReadDiscreteInputs",
      "address": "0101"
    }
  ],
  "byteOrder": {
    "twoByte": "01",
    "fourByte": "0123",
    "eightByte": "01234567",
    "string": "01"
  }
}

A device (PLC) is specified as an object which must have at least a name and at least one tag in the tags list. Optionally, the unitId can be specified if something other than the default value of 1 is needed. A tag (register) is defined by the following parameters:

  • id - Unique name of the tag that will be included in the message.
  • name - Optional name of the tag that will be included in the message.
  • modbusDataType - The type of data to read. Could be one of Byte, Int, UInt, String, Short, UShort, Float, and Double. This setting will affect the number of bytes to read and how they should be interpreted by the module.
  • modbusFunction - Defines what function code to use (address space). Could be one of ReadDiscreteInputs, ReadHoldingRegisters, ReadCoils and ReadInputRegisters.
  • address - The starting address of the data.
  • length - Integer (>=1), only used with String types to specify the number of characters to read.

In addition, the byteOrder object specifies how to deal with multi-byte data types, such as Short and UShort (twoByte), Int and Float (fourByte), Double (eightByte) and String. Accesses to Modbus registers are always 16 bits/2 bytes, but the byte ordering is not specified (little or big endian). With data types that combine data from multiple registers, for example, Int, the word ordering also needs to be defined. The byteOrder object adds the information needed to convert data from a specific PLC into the corresponding data types by specifying in which order the bytes/words shall be used. The following settings are available (the sketch after this list illustrates how the strings are interpreted):

  • twoByte - [Default: 01] How to interpret the two bytes of a single 16-bit register. Used by data types such as Short and UShort. Set to "01" for little-endian (LSB stored in bits 0-7) or "10" for big-endian (MSB stored in bits 0-7). For example, the hex value 4F52 is stored as 52 4F in the register if little-endian is used and as 4F 52 if big-endian is used.
  • fourByte - [Default: “0123”] How to interpret the four bytes of two 16-bit registers, used by data types such as Int and Float. Set to "0123" for little-endian (LSB stored in bits 0-7 of the first 16-bit register) or "3210" for big-endian (MSB stored in bits 0-7 of the first 16-bit register). For example, the hex value 4F5235AB is stored as AB35 524F in the registers if little-endian is used and as 4F52 35AB if big-endian is used.
  • eightByte - [Default: “01234567”] How to interpret the eight bytes of four 16-bit registers, used by the 64-bit float data type. Set to "01234567" for little-endian (LSB stored in bits 0-7 of the first 16-bit register) or "76543210" for big-endian (MSB stored in bits 0-7 of the first 16-bit register).
  • string - [Default: “01”] Strings can be of ‘any’ length and hence span several 16-bit registers. This setting specifies how to order the two bytes of a register into two characters. Set to "01" if the PLC uses big-endian ordering and "10" if little-endian is used.
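
To make the byteOrder strings concrete, the sketch below reorders the raw register bytes according to the setting and then interprets them as a value. This is not the module's implementation, only an illustration of the mapping described above: digit i of the setting tells which byte of the logical value (counted from the least significant byte) is stored at position i.

import struct

def decode_registers(raw_bytes: bytes, byte_order: str, kind: str = "uint"):
    # raw_bytes: the bytes as read from the Modbus registers, in register order.
    # byte_order: a setting such as "0123" or "3210" from the byteOrder object.
    assert len(raw_bytes) == len(byte_order)
    value_bytes = bytearray(len(raw_bytes))
    for stored_pos, value_pos in enumerate(byte_order):
        # The byte at stored position i holds value byte number byte_order[i],
        # counted from the least significant byte.
        value_bytes[int(value_pos)] = raw_bytes[stored_pos]
    if kind == "float":  # a fourByte value interpreted as IEEE 754 single
        return struct.unpack("<f", bytes(value_bytes))[0]
    return int.from_bytes(value_bytes, "little")

# 0x4F5235AB stored little-endian ("0123"): registers AB35 524F
print(hex(decode_registers(bytes([0xAB, 0x35, 0x52, 0x4F]), "0123")))  # 0x4f5235ab
# The same value stored big-endian ("3210"): registers 4F52 35AB
print(hex(decode_registers(bytes([0x4F, 0x52, 0x35, 0xAB]), "3210")))  # 0x4f5235ab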

S7 Resources#

S7 resources are used to specify the register mappings for different Siemens S7 PLCs. When the resource is used in the S7 Reader module, all of the listed tags will be retrieved. Additional tags can be added in the module UI. A JSON file is used to define these mappings. This is an example of such a file:

{
  "name": "Conveyor belt controller",
  "tags": [
    {
      "id": "tag1",
      "name": "ValveTemp",
      "s7DataArea": "Input",
      "s7DbAddress": 0,
      "s7StartAddress": 0,
      "s7Type": "Int"
    },
    {
      "id": "tag2",
      "name": "ValveOpen",
      "s7DataArea": "Input",
      "s7DbAddress": 0,
      "s7StartAddress": 2,
      "s7BitAddress": 5,
      "s7Type": "Bool",
      "tagCount": 1
    },
    {
      "id": "tag3",
      "name": "Machine",
      "s7DataArea": "DataBlock",
      "s7DbAddress": 0,
      "s7StartAddress": 10,
      "s7Type": "String",
      "length": 25
    }
  ]
}

A device (PLC) is specified as an object which must have at least a name and at least one tag in the tags list. A tag (register) is defined by the following parameters:

  • id - Unique name of the tag. Will be included in the message.
  • name - Optional name of the tag. Will be included in the message.
  • s7DataArea - One of Input, Output, Memory or DataBlock.
  • s7DbAddress - Integer selecting the Db (only used when s7DataArea is DataBlock).
  • s7StartAddress - Integer setting the start address within the selected data area.
  • s7BitAddress - Integer (0-7), only used with Bool types to specify the bit to use within the selected byte (see the sketch after this list).
  • s7Type - The type of data to read. Could be one of Bool, Byte, Word, DWord, Char, SInt, Int, DInt, USInt, UInt, UDInt, Real, or String. This setting will affect the number of bytes to read and how they should be interpreted by the module.
  • length - Integer (>=1), only used with String types to specify the number of characters to read.
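
To make the Bool addressing concrete: s7StartAddress selects a byte in the chosen data area and s7BitAddress selects one bit within that byte. A tiny illustrative sketch, not the module's actual implementation:

def read_bool(data_area: bytes, start_address: int, bit_address: int) -> bool:
    # Pick bit `bit_address` (0-7) out of the byte at `start_address`.
    return bool((data_area[start_address] >> bit_address) & 1)

# Example: tag2 above reads byte 2 of the Input area and tests bit 5.
print(read_bool(bytes([0x00, 0x00, 0b0010_0000]), 2, 5))  # True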

Rockwell Resources#

Rockwell resources are used to specify the register mappings for different PLCs. When the resource is used in the Rockwell Reader module, all of the listed tags will be retrieved. Additional tags can be added in the module UI. A JSON file is used to define these mappings. This is an example of such a file:

{
  "name": "Conveyor belt controller",
  "tags": [
    {
      "RockwellTagName": "xyz123",
      "RockwellType": "real"
    },
    {
      "RockwellTagName": "xyz123",
      "RockwellType": "real",
      "name": "ValveTemp",
      "scaleFactor": 1.5
    }
  ]
}

A device (PLC) is specified as an object which must have at least one tag in the tags list. A tag (register) is defined by the following mandatory parameters:

  • RockwellTagName - The name of the tag as defined in the PLC. Will be included in the message.
  • RockwellType - Data type of the tag as defined in the PLC. One of sint, int, dint, lint, real, lreal, bool, string. Will be included in the message.

Any other properties added will be included in the output message for the tag. Custom properties can, for example, be used to add metadata to allow filtering/grouping of tags in other modules in the flow or to apply per-tag scaling/conversion parameters that can be used in other modules to change the values.

Flow Parameters#

Flow Parameters address the following two use cases:

  1. You want to use the same flow on multiple ADP Edge nodes, but some settings differ between nodes.
  2. You use the same setting in multiple flows, for example, credentials for access to some external service, and this setting sometimes changes. An example is a user name/password credential where you have to update the password every 6 months. Instead of changing every flow that uses that credential, you can use a parameter and update all flows by changing only the parameter.

You first need to create a parameter. This is done on the Parameters page by clicking + Add Parameter. You then give the parameter an ID and specify what data type the parameter has. The data type will control which module settings can be overridden by this parameter. If you want, you can also assign a default value. This is how you implement the second use case above.

Once a parameter has been defined, it can be used to override module settings. On the Flows page, click the 3-dot menu for the flow whose parameters you want to change, and click Manage Parameters.

The Manage Parameters window shows all modules used in this flow. Go to a module where you want to override a setting and expand the table using the arrow (>) to the left of the module name. Below the module name, you will now see a list of settings that can be overridden and their current values based on the module configuration. To override a setting, you click on the "pen" icon to the right of the setting and then select one of the available parameters from the list.

If the parameter you selected has a default value, this value will now be assigned to this setting whenever you deploy this flow. If you want to reuse the same setting in multiple flows, that is, the second use case above, this is all you need to do. Just add the same parameter to some other settings in other flows.

If you want to assign different values depending on which ADP Edge node the flow is deployed to, that is, the first use case above, you have to open the Nodes page. Click in the Parameters column on the ADP Edge node for which you want to assign new parameter values. In the panel on the right-hand side, you add a parameter to this ADP Edge node by clicking + Add Parameters. A popup will show all parameters that are currently defined in the system together with their data type and possible default value. Go to the parameter you want to use and click on the "pen" icon to edit the value for the parameter. You can add several parameters at once by editing each of them before closing the popup. When you close the popup, these parameters will be added to the ADP Edge node with the values that are unique for this ADP Edge node. Repeat this process for any ADP Edge nodes where you want to modify some settings.

When parameter values have been added on ADP Edge nodes, these values will replace the corresponding module settings when flows are deployed to the ADP Edge nodes. The mapping between module settings and ADP Edge node specific values is done through the parameter name.

Debugging With Parameters#

When you run a flow in a remote session in the Flow Studio, the settings that are set in the module will be used by default, that is, the settings you see in the UI. If you want to test your flow with the settings that will be used when deploying the flow to a specific ADP Edge node, there is an option on the flow settings tab: Use FlowParameters in remote sessions. When checked, the ADP Edge node specific values will be applied based on which ADP Edge node the Flow Studio is currently connected to.

Labels#

Creating And Using Labels#

Labels are used to create groups of ADP Edge nodes. A label is just a text string that you attach to an ADP Edge node. On the Nodes page, you can add a label to an ADP Edge node by clicking Add. You will then get a list of existing labels you can select from or you can add a new label by typing in some text.

When deploying flows from the Flows page, you can deploy to multiple ADP Edge nodes by selecting one of these labels on the Labels tab in the deployment dialog.

Removing Labels#

Labels can be removed from individual ADP Edge nodes by clicking on the label in the Nodes table. Labels can also be removed completely on the Labels tab of the Nodes page. If you delete a label here, it will be removed from all ADP Edge nodes and from the list of available labels. By selecting a label on the Labels tab, you can also get a list of ADP Edge nodes with that label attached.

Using Labels To Control Node Access#

When defining roles on the Organization page, labels can be used to limit access to certain ADP Edge nodes. If you expand the Nodes group in the role editor, labels can be added to all of the ADP Edge node permissions. Then a user with this role will only get the specific permissions for ADP Edge nodes with the selected labels. Note that you need to limit label management if you intend to use labels to control ADP Edge node access. Otherwise, someone could just add the labels needed to get access to any ADP Edge nodes.

Events and Jobs#

Events#

Events generated either in ADP Cloud or on ADP Edge nodes are listed on the Events page. This page serves as a log of both changes to the system and detected problems, for example, a user has logged in or a flow running on an ADP Edge node has reported an error. The main table shows high-level information, and when selecting an event, a side panel opens to show more details. Events can be filtered on severity as well as on text in the message subjects.

E-Mail Notifications#

Both users and system administrators can enable e-mail notifications for error events. A user can enable e-mail notifications on the Settings page, reached from the User menu (three vertical dots in the top-right corner); these e-mails will be sent to the same e-mail address the user uses to log in. Administrators can enable e-mail notifications to any e-mail addresses on the Organization page.

Jobs#

Whenever a change is made in ADP Cloud that affects one or several ADP Edge nodes, such as deploying a flow, upgrading a flow to a new version, or starting a remote session, a job is created for the affected ADP Edge nodes. These jobs reside in a queue that is checked by the ADP Edge nodes each time they connect to ADP Cloud. Jobs end up on the Jobs tab on the Nodes page, where you can follow the progress of each job. When an ADP Edge node has completed a job that relates to a deployed flow, it will also send the new status to the event log. That is, if you deploy a flow, a job is first created and, once the flow has been successfully started, the ADP Edge node will report an event.