Create Workload Job Definition

Description of process

This is the process of creating a Workload Job Definition. A Workload Job Definition consists of Data Sources and Workflows (Actions). It can be used to load the necessary Data Sources into the relevant Data Lake folder and start a Data Pipeline via the Data Pipeline service, or it can be created to load only a set of Data Sources into the relevant Data Lake folder.
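As a rough mental model only (the class and field names below are illustrative and do not reflect the actual IFS Cloud data model or API), a Workload Job Definition can be pictured as a set of Data Sources plus the Workflows (Actions) to run against them:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: these classes are hypothetical and are not
# part of the IFS Cloud data model or API.

@dataclass
class DataSource:
    name: str               # e.g. a Parquet Data Source name
    data_lake_folder: str   # the Data Lake folder the source is loaded into

@dataclass
class WorkloadJobDefinition:
    name: str
    data_sources: List[DataSource] = field(default_factory=list)
    workflows: List[str] = field(default_factory=list)  # Actions, e.g. starting a Data Pipeline

# A definition that loads two Data Sources and then starts a Data Pipeline ...
full_job = WorkloadJobDefinition(
    name="LoadAndRunPipeline",
    data_sources=[DataSource("CustomerOrders", "/datalake/orders"),
                  DataSource("Invoices", "/datalake/invoices")],
    workflows=["StartDataPipeline"],
)

# ... and one that only loads a set of Data Sources, with no pipeline step.
load_only_job = WorkloadJobDefinition(
    name="LoadOnly",
    data_sources=[DataSource("Invoices", "/datalake/invoices")],
)
```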

Workload Job Definitions can be either scheduled or explicitly triggered for a Run. Once a Run is created and its execution is completed, the Workload Run and Workload Log facilitate monitoring each Run by returning the status and relevant information. Existing Workload Job Definitions can also be managed (modified or deleted).
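Purely as an illustration of the scheduled-or-triggered execution model described above (the function names and status values below are hypothetical and not the actual Workload service API; in practice the Run is created from the UI or a schedule and monitored via the Workload Run and Workload Log pages), monitoring a triggered Run until completion could be sketched as:

```python
import time

# Hypothetical stand-ins for the Workload service.
def trigger_run(job_definition_name: str) -> str:
    """Pretend to trigger a Run and return a run identifier."""
    return f"run-{job_definition_name}-001"

def get_run_status(run_id: str) -> str:
    """Pretend to read the Run status (as shown on a Workload Run record)."""
    return "Finished"  # example values: "Scheduled", "Executing", "Finished", "Failed"

run_id = trigger_run("LoadAndRunPipeline")
# Poll until the execution is completed, then report the final status.
while (status := get_run_status(run_id)) not in ("Finished", "Failed"):
    time.sleep(30)
print(f"{run_id} completed with status {status}")
```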

A Workload Job Definition can be either:

1. System Defined 

2. User-created.   

This process explains how a user can create a Workload Job Definition from the Workload Job Definitions page in IFS Cloud Web. 

The Data Sources section of the Workload Job Definition page enables adding Data Sources to the relevant Workload Job Definition. Once a Data Source is added, the Details option is available against it to view further details by navigating to the Parquet Data Source Details page (which displays the Parquet Data Source Details, Refresh History, and Oracle Source columns). This is helpful in identifying all Parquet Data Source related attributes, i.e. defined columns, Max Age, load type, file name template, etc. Parquet Data Sources can also be edited as required (e.g., column selection changes or Max Age modification).
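To make the attribute list above concrete (the field names, units, and values here are illustrative only and are not taken from the Parquet Data Source Details page itself), a Parquet Data Source's key attributes and a typical edit could be summarised like this:

```python
from dataclasses import dataclass
from typing import List

# Illustrative only: a hypothetical summary of the attributes visible on the
# Parquet Data Source Details page (columns, Max Age, load type, file name template).
@dataclass
class ParquetDataSourceDetails:
    name: str
    columns: List[str]        # the columns defined for the Data Source
    max_age_hours: int        # Max Age before the source should be refreshed (units assumed)
    load_type: str            # e.g. full or incremental load
    file_name_template: str   # naming pattern for the generated Parquet files

orders_source = ParquetDataSourceDetails(
    name="CustomerOrders",
    columns=["ORDER_NO", "CUSTOMER_NO", "ORDER_DATE"],
    max_age_hours=24,
    load_type="Incremental",
    file_name_template="customer_orders_{date}.parquet",
)

# Editing, e.g. a column selection change or a Max Age modification, would
# correspond to updating these attributes from the details page.
orders_source.columns.append("STATE")
orders_source.max_age_hours = 12
```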

Both the Workload Job Admin and Data Services Administrator permissions are needed to perform all the activities under this sub-process.