
Talend Data Integration

Studio User Guide

8.0
Last updated: 2022-06-27
Contents

Copyright........................................................................................................................ 4

What is Talend Studio?................................................................................................5

Managing licenses.........................................................................................................6
Checking/changing the license for the Studio........................................................................................................ 6
Licenses and perspectives in the Studio................................................................................................................... 6

Managing features in Talend Studio.......................................................................... 7


Talend Studio features..................................................................................................................................................... 7
Installing features using the Feature Manager.................................................................................................... 10
Activating/Deactivating installed features............................................................................................................. 12

Working with projects................................................................................................ 14


Creating a project............................................................................................................................................................ 14
Creating a sandbox project.......................................................................................................................................... 15
Importing a demo project.............................................................................................................................................17
Importing local projects................................................................................................................................................ 19
Opening a remote project.............................................................................................................................................23
Exporting a project..........................................................................................................................................................25
Working collaboratively on project items...............................................................................................................26
Working with project branches and tags................................................................................................................ 32
Defining project references..........................................................................................................................................44

Data Integration........................................................................................................ 46
Talend Data Integration functional architecture................................................................................................46
Designing Jobs...................................................................................................................................................................47
Designing a Joblet......................................................................................................................................................... 145
Managing Jobs.................................................................................................................................................................156
Mapping data flows...................................................................................................................................................... 231
Change Data Capture (CDC).......................................................................................................................................283

Centralizing metadata for Data Integration..........................................................318


Objectives..........................................................................................................................................................................318
Centralizing database metadata.............................................................................................................................. 318
Centralizing JDBC metadata...................................................................................................................................... 328
Centralizing SAP metadata........................................................................................................................................ 332
Centralizing File Delimited metadata....................................................................................................................353
Centralizing File Positional metadata................................................................................................................... 359
Centralizing File Regex metadata...........................................................................................................................365
Centralizing XML file metadata............................................................................................................................... 368
Centralizing File Excel metadata.............................................................................................................................389
Centralizing File LDIF metadata.............................................................................................................................. 394
Centralizing JSON file metadata.............................................................................................................................. 398
Centralizing LDAP connection metadata..............................................................................................................414
Centralizing Azure Storage metadata.................................................................................................................... 419
Centralizing Data Stewardship metadata.............................................................................................................423
Centralizing Google Drive metadata...................................................................................................................... 424
Centralizing Marketo metadata................................................................................................................................ 425
Centralizing Salesforce metadata............................................................................................................................429
Centralizing Snowflake metadata............................................................................................................................433
Setting up a generic schema.................................................................................................................................... 437
Centralizing MDM metadata......................................................................................................................................443
Managing a survivorship rule package..................................................................................................................460
Centralizing Embedded Rules (Drools)..................................................................................................................466
Centralizing Web Service metadata........................................................................................................................468
Centralizing a Validation Rule..................................................................................................................................484
Centralizing an FTP connection...............................................................................................................................494
Centralizing UN/EDIFACT metadata....................................................................................................................... 496
Exporting metadata as context and reusing context parameters to set up a connection.................. 500
Importing metadata from a CSV file......................................................................................................................508
Using centralized metadata in a Job......................................................................................................................513

Using routines........................................................................................................... 515


Managing routines.........................................................................................................................................................515
System routines.............................................................................................................................................................. 525

CommandLine............................................................................................................544
CommandLine overview.............................................................................................................................................. 544
Operating modes............................................................................................................................................................544
Updating your license using the CommandLine................................................................................................546
CommandLine API......................................................................................................................................................... 546
CommandLine examples.............................................................................................................................................561

Appendices.................................................................................................................564
Customizing Talend Studio and setting Studio preferences......................................................................... 564
Using SQL templates....................................................................................................................................................618
SQL template writing rules....................................................................................................................................... 626
Talend Studio shortcuts.............................................................................................................................................. 629

Copyright
Adapted for 7.3.1. Supersedes previous releases.
Copyright © 2021 Talend. All rights reserved.
The content of this document is correct at the time of publication.
However, more recent updates may be available in the online version that can be found on Talend Help Center.
Notices
All brands, product names, company names, trademarks and service marks are the properties of their respective owners.
End User License Agreement
The software described in this documentation is provided under Talend's End User Software and Subscription Agreement
("Agreement") for commercial products. By using the software, you are considered to have fully understood and
unconditionally accepted all the terms and conditions of the Agreement.
To read the Agreement now, visit http://www.talend.com/legal-terms/us-eula?utm_medium=help&utm_source=help_content.


What is Talend Studio?


Talend provides you with a range of open source and subscription Studios you can use to create your projects and manage
data of any type or volume.
Using the graphical user interface and hundreds of pre-built components and connectors, you can design your Jobs through
drag-and-drop and benefit from native code generation.
The key capabilities of Talend Studio are accessible from different perspectives. The availability of the perspectives depends
on your license in the case of a local project, or on the type of your remote project.


Managing licenses
With any Talend subscription product, you receive a license that authorizes you to use the Studio.

Checking/changing the license for the Studio


About this task
You can check the expiration date of the license you are using at any time in the Studio, and replace it with a new one, if
needed. Proceed as follows:

Procedure
1. From the Studio menu bar, select Help > About License.
The About License dialog box is displayed, showing the license expiration date.
2. To change the license, choose whether to import it from the remote server or from your local file system by selecting the
corresponding option in the dialog box and following the instructions in the wizard that opens.
The license expiration date is updated according to the imported license and is displayed at the bottom of the dialog
box.
3. Click Next and in the Confirm dialog box that is displayed, click OK to restart the Studio so that your new license takes
effect.
The license file is automatically updated in the root directory of the Studio.

Licenses and perspectives in the Studio

Note: The MDM and Profiling perspectives are available only if you have installed the relevant MDM and Data Quality
features from the Feature Manager. For more information, see Managing features in Talend Studio on page 7.

When you are working in a local project, the perspectives you have access to in your Studio depend on the license you are
using.
• If you are using a Talend MDM Platform license, you have access to the MDM, Profiling and Integration perspectives.
• If you are using a Talend Data Management Platform license, you have access to the Profiling and Integration
perspectives.
• If you are using a Talend Data Integration, Talend Big Data or Talend ESB license, you have access to the Integration
perspective. However, you can design:
• standard Jobs with the Talend Data Integration license
• standard Jobs and Spark Jobs with the Talend Big Data license
• standard Jobs and Routes with the Talend ESB license
When you connect to a remote project, the perspectives you have access to depend on the type of the remote project:
• When you connect to a Master Data Management project, you have access to the MDM, Profiling and Integration
perspectives.
• When you connect to a Data Quality project, you have access to the Profiling and Integration perspectives.
• When you connect to a Data Integration project, you have access to the Integration perspective.
For more information about how to open a remote project, see Opening a remote project on page 23. For more
information about project types, see Talend Administration Center User Guide.


Managing features in Talend Studio


When installing Talend Studio, either using the installer or manually, a minimal version with some basic Data Integration
features is installed. After installation, to use the features that are not shipped with Talend Studio by default, you need to
install them through the Feature Manager.

Talend Studio features


This section provides a list of core features that are installed by default and a list of optional features that need to be
installed using the Feature Manager.

Core features installed by default

Feature Description

Data Integration Job editor The Data Integration Job editor is the workspace where you can design your Jobs.

Data Integration components A component is a functional element that performs a single data integration operation in a Job. Only some basic
Data Integration components are installed by default.

Contexts This feature allows you to manage Jobs differently for various execution types, for example, testing and
production environments.

Routines A routine is a Java class with many functions. It is generally used to factorize code (a minimal illustrative routine is shown after this table).

SQL templates Talend Studio provides a range of SQL templates to simplify the most common data query and update, schema
creation and modification, and data access control tasks. It also comprises a SQL editor which allows you to
customize or design your own SQL templates to meet less common requirements.

Metadata for database, file, generic schema, etc. The metadata wizard allows you to store reusable information on databases, files, and/or systems in the
Repository tree view. The information can be reused later to set the connection parameters of the relevant input
or output components and the data schema in a centralized manner.

Git support This feature allows you to work on remote projects stored in Git repositories.

Remote execution This feature allows you to deploy and execute your Jobs on a remote JobServer when you work on either a local
project or on a remote one on the condition that you are connected with Talend Administration Center.

Talend Administration Center connection This feature allows you to set up a connection to Talend Administration Center.
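
A routine is, in essence, a plain Java class stored under the Code > Routines node of the Repository tree view, whose public
static methods can be called from component expressions (for example in tMap) in your Jobs. The following minimal sketch is
for illustration only; the class and method names are hypothetical and are not shipped with Talend Studio.

    // Hypothetical user routine: a Java class whose static methods can be reused across Jobs.
    package routines;

    public class StringFormatting {

        /**
         * Returns the input trimmed and converted to upper case.
         * Example call in a component expression: StringFormatting.trimAndUpper(row1.customerName)
         */
        public static String trimAndUpper(String value) {
            if (value == null) {
                return null;
            }
            return value.trim().toUpperCase();
        }
    }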

Optional features installed using Feature Manager

Category Feature Description

Shared features Audit Talend Project Audit transforms project data flows into valuable business
information. It introduces an auditing approach for evaluating various
aspects of Jobs implemented in your Talend Studio.

Shared features Data Lineage This feature provides advanced capabilities for analyzing any given item,
such as a Job, in the Repository tree view.
• Impact Analysis: discovers descendant items up to the target
component.
• Data Lineage: discovers the ancestor items starting with the source
component.

Shared features Jobscripts In addition to the graphical Job design interface, a Job script is another
way to create a data integration process with Talend Studio.

Shared features Job Templates This feature allows you to use the different templates to create ready-to-run Jobs.

Shared features Metadata Bridge Talend Metadata Bridge accelerates the implementation, maintenance
and continuous improvement of integration scenarios by allowing the
synchronization, the sharing and the conversion of metadata across the
different components.

Shared features Metadata Import from CSV This feature allows you to import metadata from a CSV file on an external
application.

Shared features Publish to artifact repository This feature allows you to publish your Job, Route or Service into an
artifact repository.

Shared features Resources This feature allows you to create resources and use them in your Jobs for
file handling. This way, when exporting your Jobs, for example, you can
pack the resource files as Job dependencies and deploy your Jobs without
having to copy the files to the target system.

Shared features Talend Activity Monitoring Console Talend Activity Monitoring Console is an add-on tool integrated in Talend
Studio for monitoring Talend Jobs and projects.

Shared features Test Cases This feature allows you to create test cases to test your Jobs and Services
during Continuous Integration development to make sure they will
function as expected when they are actually executed to handle large
datasets.

Shared features Validation Rules A validation rule is a basic or integrity rule that you can apply to
metadata items to check the validity of your data. It can be a basic check
for correct values or a referential integrity check, both applicable to
database tables or individual columns, file metadata or any relevant
metadata item.

Data Integration > Components Amazon DocumentDB This feature installs Amazon DocumentDB components, including
tAmazonDocumentDBConnection, tAmazonDocumentDBInput,
tAmazonDocumentDBOutput, and tAmazonDocumentDBClose.

Data Integration > Components Azure Cosmos DB SQL API This feature installs Azure Cosmos DB SQL API components, including
tCosmosDBSQLAPIInput and tCosmosDBSQLAPIOutput.

Data Integration > Components Azure Data Lake Storage Gen2 This feature installs Azure ADLS Gen2 components, including
tAzureADLSGen2Input and tAzureADLSGen2Output.

Data Integration > Components Azure Storage This feature installs Azure Storage components, including tAzureStorageQueueCreate, tAzureStorageQueueDelete, tAzureStorageQueueInput, tAzureStorageQueueInputLoop, tAzureStorageQueueList, tAzureStorageQueueOutput, tAzureStorageQueuePurge, tAzureStorageConnection, tAzureStorageContainerCreate, tAzureStorageContainerDelete, tAzureStorageContainerList, tAzureStorageDelete, tAzureStorageGet, tAzureStorageList, tAzureStoragePut, tAzureStorageInputTable, and tAzureStorageOutputTable.

Data Integration > Components BRMS / Rules This feature installs BRMS/rules components, including tBRMS and
tRules.

Data Integration > Components Couchbase This feature installs Couchbase components, including tCouchbaseDCInput, tCouchbaseDCOutput, tCouchbaseInput, tCouchbaseOutput.

Data Integration > Components CyberArk This feature installs CyberArk components, including tCyberarkInput.

Data Integration > Components ESBConsumer This feature installs the tESBConsumer component.

Data Integration > Components Google Drive This feature installs Google Drive components, including tGoogleDriveConnection, tGoogleDriveCopy, tGoogleDriveCreate, tGoogleDriveDelete, tGoogleDriveGet, tGoogleDriveList, tGoogleDrivePut.

Data Integration > Components Jira This feature installs Jira components, including tJIRAInput and
tJIRAOutput.

Data Integration > Components Marketo This feature installs Marketo components, including tMarketoBulkExec, tMarketoCampaign, tMarketoConnection, tMarketoInput, tMarketoListOperation, tMarketoOutput.

Data Integration > Components Neo4j This feature installs Neo4j components, including tNeo4JClose, tNeo4JConnection, tNeo4JInput, tNeo4JOutput, tNeo4JRow, tNeo4JBatchOutput, tNeo4JBatchOutputRelationship, and tNeo4JBatchSchema.

Data Integration > Components NetSuite This feature installs NetSuite components, including tNetsuiteConnection,
tNetsuiteInput, tNetsuiteOutput.

Data Integration > Components RabbitMQ This feature installs RabbitMQ components, including tRabbitMQInput,
tRabbitMQOutput, tRabbitMQClose, tRabbitMQConnection.

Data Integration > Components RESTClient This feature installs the tRESTClient component.

Data Integration > Components Salesforce This feature installs Salesforce components, including tSalesforceBulkExec, tSalesforceConnection, tSalesforceEinsteinBulkExec, tSalesforceEinsteinOutputBulkExec, tSalesforceGetDeleted, tSalesforceGetServerTimestamp, tSalesforceGetUpdated, tSalesforceInput, tSalesforceOutput, tSalesforceOutputBulk, tSalesforceOutputBulkExec.

Data Integration > Components Snowflake This feature installs Snowflake components, including tSnowflakeBulkExec, tSnowflakeClose, tSnowflakeCommit, tSnowflakeConnection, tSnowflakeInput, tSnowflakeOutput, tSnowflakeOutputBulk, tSnowflakeOutputBulkExec, tSnowflakeRollback, tSnowflakeRow.

Data Integration > Components Splunk This feature installs Splunk components, including tSplunkEventCollector.

Data Integration > Components Talend Data Preparation The Talend Data Preparation components apply preparations, create
datasets in Talend Data Preparation or create flows with data from
Talend Data Preparation datasets.

Data Integration > Components Talend Data Stewardship The Talend Data Stewardship components load data into Talend Data
Stewardship campaigns and retrieve or delete data in the form of tasks in
Talend Data Stewardship campaigns.

Data Integration > Components Workday This feature installs Workday components, including tWorkdayInput.

Data Integration > Components Zendesk This feature installs Zendesk components, including tZendeskInput and
tZendeskOutput.

Data Integration > Metadata Advanced WSDL This feature helps you define an Advanced WebService schema and store
it in the Repository tree view.

Data Integration > Metadata CDC This feature helps you set up a CDC environment on a dedicated database
connection, which can quickly identify and capture data that has been
added to, updated in, or removed from database tables and make this
change data available for future use by applications or individuals. It is
available for Oracle, MySQL, DB2, PostgreSQL, Sybase, MS SQL Server,
Informix, Ingres, Teradata, and AS/400.

Data Integration > Metadata EDIFACT The EDIFACT metadata wizard helps you create a schema to be used
for the tExtractEDIField component to read and extract data from UN/
EDIFACT message files.

Data Integration > Metadata SAP The SAP metadata wizard helps you create a connection to an SAP BW
system and an SAP HANA database and store this connection in the
Repository tree view.

Data Integration > Metadata Talend MDM Talend MDM metadata wizard helps you centralize the details of one or
more MDM connections in the Repository tree view.

Big Data > Distributions Amazon EMR 5.29.0 This feature enables you to run your Spark Jobs on the Amazon EMR
5.29.0 distribution.

Big Data > Distributions Amazon EMR 6.2.0 This feature enables you to run your Spark Jobs on the Amazon EMR 6.2.0
distribution.

Big Data > Distributions Azure Synapse This feature enables you to run your Spark Jobs on Azure Synapse
Analytics with Apache Spark pools as a distribution.

Big Data > Distributions Cloudera CDH Dynamic Distribution This feature enables you to run your Spark Jobs on Cloudera CDH using either Static (CDH 6.1, CDH 6.2 and CDH 6.3) or Dynamic distributions.

Big Data > Distributions Cloudera Data Platform Dynamic Distribution This feature enables you to run your Spark Jobs on Cloudera Data Platform using either Static (CDP 7.1) or Dynamic distributions.

Big Data > Distributions Databricks 5.5 This feature enables you to run your Spark Jobs on the Databricks 5.5
distribution.

Big Data > Distributions Databricks 6.4 This feature enables you to run your Spark Jobs on the Databricks 6.4
distribution.

Big Data > Distributions Databricks 7.3 LTS This feature enables you to run your Spark Jobs on the Databricks 7.3 LTS
distribution.

Big Data > Distributions Hortonworks HDP Dynamic Distribution This feature enables you to run your Spark Jobs on Hortonworks HDP using either Static or Dynamic distributions.

Big Data > Distributions Microsoft Azure HDInsight 4.0 This feature enables you to run your Spark Jobs on the Microsoft Azure
HDInsight 4.0 distribution.

Big Data > Universal Distribution (Recommended) Universal Distribution (Spark 2.4.x) This feature enables you to run your Spark Jobs on Universal distribution with Spark 2.4.x.

Big Data > Universal Distribution (Recommended) Universal Distribution (Spark 3.0.x) This feature enables you to run your Spark Jobs on Universal distribution with Spark 3.0.x.

Big Data > Universal Distribution (Recommended) Universal Distribution (Spark 3.1.x) This feature enables you to run your Spark Jobs on Universal distribution with Spark 3.1.x on Kubernetes.

Installing features using the Feature Manager


This section shows how to install features using the Feature Manager.
Pay attention to the following before installing features to Talend Studio:
• An installed feature must be active if you want to use it in your project. You can activate or deactivate any installed
feature in any project and on any branch. For more information, see Activating/Deactivating installed features.
• For a local project, after a feature is installed, by default, it is active only in the current project.
• For a remote project, after a feature is installed, by default, it is active only in the current project on the current branch.
When switching between branches, if the active features differ between the current branch and the target branch, Talend
Studio will be restarted.
• For a newly created project, all features already installed are active by default.
• For a migrated or imported project, Talend Studio installs and activates the required features automatically.
• The feature packages are downloaded from the official Talend site by default. You can configure the URL of the repository
for Talend Studio feature packages. For more information, see Configuring update repositories on page 615.

Procedure
1. Launch your Talend Studio and log into the project in which you want to install and activate the features.
2. Click the Feature Manager button on the top bar or select Help > Feature Manager from the menu.


The Feature Manager wizard opens and displays a hierarchical view of all features that are available and not yet
installed based on the license you are using and the type of the project you are working with. The features are
organized by categories.

Note:
If Talend Studio detects an update, the Feature Manager will open and display the update installation wizard first.
You need to install the update and restart Talend Studio before installing features.
If you have installed the 8.0 R2022-05 Studio monthly update or a later one provided by Talend, and if the Talend
Studio update settings are managed by your administrator from Talend Cloud Management Console, a message
is displayed in the update installation wizard to inform you that the update has been approved in Talend Cloud
Management Console by your administrator.

You can click Go to project settings to open the Activate/Deactivate features view in the Project Settings dialog box,
where you can activate or deactivate any feature already installed in your current project.
You can also click Go to libraries to open the Third-party Libraries wizard, where you can install all third-party libraries
in one go.


3. Select the check boxes corresponding to the features you want to install.
You can click the name of the corresponding feature to display its details.
4. Click Next.
A summary of all the features you have selected and their dependencies is listed and can be reviewed.
The required disk space for installing the selected features and their dependencies is displayed under the list. Make sure
there is enough space available on the disk to complete this operation.

5. Click Next.
The Review license agreements view is displayed.
6. Review all the license text and select I agree to all listed terms and conditions.
7. Click Install.
When the installation is complete:
• Click Yes, restart to restart your Talend Studio to make the installed features available in the current project.
• Click Not now if you prefer not to restart right away. The installed features will be available next time you log into
the project.

Activating/Deactivating installed features


This section shows how to activate or deactivate installed features in your project. An installed feature must be active if you
want to use it in your project.

Procedure
1. Launch your Talend Studio and log into the project in which you want to activate or deactivate installed features.
2. Click File > Edit Project properties from the menu bar, and on the tree view in the Project Settings dialog box displayed,
click General > Activate/Deactivate features to open the corresponding view.
All features already installed are listed in a hierarchy tree view, organized by categories, the same as they are in the
Feature Manager.


3. Select or clear the check boxes corresponding to the features you want to activate or deactivate in the current project.
4. If needed, click Add features to open the Feature Manager to install more features.
5. Click Apply to save your changes.
When prompted:
• Click Yes, restart to restart your Talend Studio so your changes take effect in the current project.
• Click Not now if you prefer not to restart right away. Your changes will take effect next time you log into the
project.


Working with projects


Creating a project
A project is the highest physical structure for storing all different types of items. Once you launch your Talend Studio, and
before you start a data integration Job, a Route, or any other task, you first need to create or import a project.

Creating a project at initial Studio launch

Procedure
1. Launch Talend Studio and connect to a local repository.
2. On the login window, select the Create a new project option and enter a project name in the field.

Note: Bear in mind:


• A project name is case-insensitive
• A project name must start with an English letter and can contain only letters, numbers, the hyphen (-), and the
underscore (_)
• The hyphen (-) character is treated as the underscore (_) (an illustrative check of these rules follows this procedure)

3. Click Finish to create the project and open it in the Studio.
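
The naming rules mentioned in the note above can be illustrated with a small, self-contained check. This sketch is for
illustration only; the class and example names are hypothetical and are not part of Talend Studio.

    // Hypothetical helper illustrating the project naming rules; not part of Talend Studio.
    public class ProjectNameCheck {

        // Must start with an English letter; may then contain letters, digits, '-' and '_'.
        private static final String NAME_PATTERN = "[A-Za-z][A-Za-z0-9_-]*";

        public static boolean isValidName(String name) {
            return name != null && name.matches(NAME_PATTERN);
        }

        public static void main(String[] args) {
            System.out.println(isValidName("My_Project1")); // true
            System.out.println(isValidName("1stProject"));  // false: must start with a letter
            System.out.println(isValidName("my project"));  // false: spaces are not allowed
        }
    }

Because names are case-insensitive and the hyphen is treated as the underscore, MY-PROJECT and my_project would designate
the same project.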

Creating a new project after initial Studio launch

About this task


To create a new local project after the initial startup of the Studio, do the following:

Procedure
1. On the login window, select the Create a new project option and enter a project name in the field.


2. Click Create to create the project. The newly created project is displayed on the list of existing projects.

3. Select the project on the list and click Finish to open the project in the Studio.
Later, if you want to switch between projects, on the Studio menu bar, select File > Switch Project or
Workspace.

Creating a sandbox project


About this task
A sandbox project is a working project created from Talend Studio by a new user not registered in Talend Administration
Center to test data, environments, etc. When you as a new user create a sandbox project, you create both your project in a
remote repository and your user account on Talend Administration Center. This way, the project can be easily shared with
other users and migrated to a production environment.

Warning: If your account already exists in Talend Administration Center, you will not be able to create a sandbox project.

To create a sandbox project:


Procedure
1. Launch Talend Studio using a remote connection.
2. On the login screen, select Create a Sandbox Project and click Select. A Create Sandbox project dialog box opens.

3. In the URL field, type in the URL of Talend Administration Center.


To get Talend Administration Center's URL, contact your system administrator.
4. Click Check to validate Talend Administration Center's URL.
5. In the Login and Password fields, type in the email address and password that will be used to connect to your remote
project with Talend Studio and to connect to Talend Administration Center, if you want to change your password, for
example.
Be aware that the email entered is never used for any purpose other than logging in.

Warning: If your account already exists in Talend Administration Center, you will not be able to create a sandbox
project.

6. In the First name and Last name fields, type in your first and last name.
7. Click OK to validate.
A popup window opens to indicate that your sandbox project and its corresponding connection have been successfully
created. They are named Sandbox_username_project and username_Connection respectively.
8. Click OK to close the popup.
You might receive an email notifying you of your account creation on Talend Administration Center, if the administrator
has activated this functionality.
The Connection, Email and Password fields are automatically filled in with the connection information you provided and
the Project list is automatically filled in with your newly created sandbox project.


Results
To open the newly created sandbox project in Talend Studio, select your Sandbox project connection from the connection
list, select the project list, and click Finish.

Importing a demo project


Talend provides you with different demo projects you can import into your Talend Studio. Available demos depend on
the license you have and may include ready-to-use Jobs that help you understand the functionalities of different Talend
components.
You can import the demo project either from the login window of your studio as a separate project, or from the Integration
perspective into your current project.

Importing a demo project to be a separate project

Procedure
1. Launch your Talend Studio and from the login window select Import a demo project and then click Select.
2. In the open dialog box, select the demo project you want to import and click Finish.


Note: The demo projects available in the dialog box may vary depending on the license you are using.

3. In the dialog box that opens, type in a name for the demo project you want to import and click Finish.
A bar is displayed to show the progress of the operation.
4. On the login window, select from the project list the demo project you imported and then click Finish to open the demo
project in the studio.
All the samples of the demo project are imported into the studio under different folders in the repository tree view
including the input files and metadata connection necessary to run the demo samples.

Importing a demo project in the current project

Procedure
1. Launch your studio and in the Integration perspective, click the icon on the toolbar.
2. In the open dialog box, select the demo project to import and click Finish.
A bar is displayed to show the progress of the operation and then a confirmation message opens.
3. Click OK.

Installing example files in the Data Integration demo project

After importing the Data Integration demo project and opening it for the first time, you are required to install the example
files that are used in the example Jobs.

Procedure
1. Expand the Contexts node in the Repository tree view, and open the context group globalContext.
2. If needed, change the value of the root variable to define the directory where the example files will be generated,
C:\TalendDemoFiles\ being the default value.
3. Change the value of the workspace variable according to the path of the local workspace of your actual Studio
installation.
4. Click Finish, and then click Yes to propagate the modifications.
5. In the Update Detection dialog box, click OK to update the selected items.


6. Open the beforeRunJobs Job, and press F6 to execute the Job.

Results
All the example files are generated in the directory you specified with the root variable.

Importing local projects


In Talend Studio, you can import one or more projects you already created with previous releases of the Studio.

About this task

Note:
From Talend 7.0 onward, a digital signature is added to each project item when it is saved in Talend Studio. When
importing a project or project items, Talend Studio validates the signatures and rejects items with an invalid signature.
This is a security measure to prevent accidental or malicious modification of project items.
However, you can import a project or project items exported from an earlier version of Talend Studio before the
expiration date of a 90-day grace period from the first installation of Talend Studio or before the date set in the migration
token, whichever comes later. Upon successful import, all imported items are signed.
For more information on setting a migration token, see Talend Data Fabric Installation Guide.

Importing a single project

Procedure
1. From the Studio login window, select Import an existing project then click Select to open the Import wizard.


2. Enter a name for your new project in the Project Name field.

Warning: Make sure the project name you entered is unique, and bear in mind:
• A project name is case-insensitive
• A project name must start with an English letter and can contain only letters, numbers, the hyphen (-), and the
underscore (_)
• The hyphen (-) character is treated as the underscore (_)

3. Click Select root directory or Select archive file depending on the source you want to import from.
4. Click Browse... to select the workspace directory/archive file of the specific project folder. By default, the selected
workspace is that of the current release. Browse up to reach the previous release's workspace directory or the archive file
containing the projects to import.
5. Click Finish to validate the operation and return to the login window.

Results
Upon successful project import, the names of the imported projects are displayed on the Project list of the login window.


You can now select the imported project you want to open in Talend Studio and click Finish to launch the Studio.

Note: A generation initialization window might come up when launching the application. Wait until the initialization is
complete.

Importing multiple projects

Procedure
1. From the Studio login window, select Import an existing project then click Select to open the Import wizard.
2. Click Import several projects.
3. Click Select root directory or Select archive file depending on the source you want to import from.
4. Click Browse... to select the workspace directory/archive file of the specific project folder.
By default, the selected workspace is that of the current release. Browse up to reach the previous release's workspace
directory or the archive file containing the projects to import.


5. Select the Copy projects into workspace check box to make a copy of the imported project instead of moving it.
This option is available only when you import several projects from a root directory.

Note: If you want to remove the original project folders from the Talend Studio workspace directory you import
from, clear this check box. However, we strongly recommend keeping it selected for backup purposes.

6. Select the Hide projects that already exist in the workspace check box to hide existing projects from the Projects list.
This option is available only when you import several projects.
7. From the Projects list, select the projects to import and click Finish to validate the operation.

Note: Make sure that the name of the imported project is not already used for a remote project. Otherwise, an
error message will appear when you try to import the project unless you store the local and remote projects in two
different workspace directories.

Results
Upon successful project import, the names of the imported projects are displayed on the Project list of the login window.


You can now select the imported project you want to open in Talend Studio and click Finish to launch the Studio.

Note: A generation initialization window might come up when launching the application. Wait until the initialization is
complete.

Opening a remote project


Before you begin
To open a remote project, you must first create a connection to the repository on which the project is stored and make sure
you have access rights to the project.
For more information on setting up a remote connection in Talend Studio, see Talend Data Fabric Installation Guide.

About this task


To open a remote project in Talend Studio:

Procedure
1. On the Connection area of the Studio login window, select the connection to the repository in which the project is
stored from the Connection list.


Note:
• If you are connected with Talend Administration Center and if an update is automatically detected by your
Studio, an Update button appears at the bottom of the login window. Click Update to download and install the
update. After the installation completes, click the Finish button to restart your Studio so that the newly installed
update takes effect.
• All Talend Studio clients connected to the same project must install the same patch. Otherwise, you cannot
connect to the project successfully.

2. Click the Refresh button to update the list of existing projects, which are the projects allocated to you in Talend
Administration Center.

Note: If an administrator edits your access rights on a project while you are already connected to this project in the
Studio, you have to relaunch the Studio to take these rights into account.

3. From the project list, select the project you want to open.
4. From the Branch list, select the main branch, another branch, or a tag, as desired.

Warning: A tag is a read-only copy of a Git managed project. If you choose to open a tag, you can make changes to
your project items but you will be unable to permanently save your changes to an item unless you copy the item to a
branch. For how to copy an item to a branch, see Copying items from a branch or a tag on page 228.

5. Click Finish to launch the selected project in the Studio.

Note: When you work on a Git managed project, depending on the security policy set for Git in Talend
Administration Center, you may see a dialog box asking you to provide your Git credentials when the Studio tries to
connect to Git.
• If the Git credentials are managed by Talend Administration Center and the Git login and password are not
specified in user settings or at user creation, you will be prompted to enter your Git login credentials each time
the Studio tries to connect to the Git.
• If the Git credentials are managed by Talend Studio, you will be prompted to enter your Git login credentials
when the Studio tries to communicate with Git and you have the option to store your Git login credentials in the
Studio so that you will not be prompted again.
For more information about security policy configuration for Git, see the section Setting up the Security Policy for Git
of the Talend Administration Center User Guide.

A progress bar appears, and the Talend Studio main window opens. A generation engine initialization dialog box
displays. Wait until the initialization is complete.


Upon opening a remote project, Talend Studio periodically checks its connection with Talend Administration Center.
When Talend Studio detects a loss of connection, it automatically tries to reconnect to Talend Administration Center. You
can view the connection progress on the Progress tab by double-clicking Check Administrator connection at the lower
right corner of the Talend Studio main window. If you click the button at this phase, the project will enter read-only
mode.
Once Talend Studio detects that you have been logged out by an administrator in Talend Administration Center, a
confirmation dialog box appears asking you whether to reconnect to Talend Administration Center.

6. Click Yes to reconnect to Talend Administration Center.


Talend Studio will perform an authorization check when trying a reconnection. A warning will be displayed and the
project will enter the read-only mode if:
• you no longer have access to the project you have opened, or
• you no longer have access to any reference project of the project you have opened, or
• the number of reference projects of the project you have opened has changed.
If your access right to the project you have opened has changed from read-write to read-only, or if you click No in the
confirmation dialog box, the project directly goes into the read-only mode.
When the project is in the read-only mode, you can still edit the item or items currently open in the design workspace,
and changes you make will be committed to the Git the next time you log in to Talend Administration Center with read-
write access to the project.

Exporting a project
Talend Studio allows you to export projects created or imported in the current instance of Talend Studio.

Procedure
1. On the toolbar of the Studio main window, click to open the Export Talend projects in archive file dialog box.


2. Select the check boxes of the projects you want to export. You can select only parts of the project through the Filter
Types... link, if need be (for advanced users).
3. In the To archive file field, type in the name of or browse to the archive file where you want to export the selected
projects.
4. In the Option area, select the compression format and the structure type you prefer.
5. Click Finish to validate the changes.

Results
The archived file that holds the exported projects is created in the defined place.

Working collaboratively on project items


When working collaboratively on a project, many users could access an item in the shared repository or at the project level
simultaneously. In such a case, the user who opens the item first will have the "read and write" rights. The item will then be
locked and read-only for all other users.
If you click the Refresh button in the upper right corner of the Repository tree view, the items that have been locked by
other users will have a red lock docked on them. You will not be able to make changes to these items.
By default, upon each action you make in your Talend Studio, the lock status of all items is automatically refreshed. If
you find communication with Talend Administration Center slow or if the project contains a large number of locked items,
you can disable the automatic retrieval of lock status in the Talend Studio preferences settings to improve performance. For
more information, see Performance preferences (Talend > Performance) on page 609.
Items stored in the Repository tree view that are subject to the lock/unlock system include:


• Jobs,
• Routines,
• Metadata of various types (DB connection, File...),
• Other items such as documentation, CDC, etc.
Items at project level are also subject to the lock/unlock system. These items include all Project Settings.
Talend Studio provides several lock modes that allow the "read and write" rights to be granted to one of the simultaneous
users of a repository item.

Note: When you work on a Git managed project, depending on the security policy set for Git in Talend Administration
Center, you may see a dialog box asking you to provide your Git credentials when the Studio tries to connect to Git.
• If the Git credentials are managed by Talend Administration Center and the Git login and password are not specified
in user settings or at user creation, you will be prompted to enter your Git login credentials each time the Studio
tries to connect to the Git.
• If the Git credentials are managed by Talend Studio, you will be prompted to enter your Git login credentials when
the Studio tries to communicate with Git and you have the option to store your Git login credentials in the Studio so
that you will not be prompted again.
For more information about security policy configuration for Git, see the section Setting up the Security Policy for Git of the
Talend Administration Center User Guide.

Lock principle
The Lock status is a particular state for all items of your Talend projects. Locks are used to prevent editing conflicts, as
different users can possibly work on the same item.

Locking/unlocking an item (default)

In the default mode, when you first open an item in the Repository tree view, you get the editing privilege and a green lock
is docked at the edited item.

Until you release the lock by closing the item you are editing, other users will not be able to make any changes to it. The
item will show with a red lock in their Repository tree views.
All other users will have a read-only access for locked items until they are unlocked.


Note:
A locked item can only be unlocked by the lock owner.

To intentionally lock/unlock an item, simply right-click it in the Repository tree view and select Lock/Unlock.

Accessing locked items (default)

On the Repository tree view, the red lock appended to a repository item shows that the item is already being edited by another
user.
But you can still open and view the locked item in the design workspace in a read-only mode. Right-click the item in the
Repository tree view, and then click Open to view the item content.
Alternatively, you can get read-write access to locked items by opening the project in offline mode. For more information,
see Accessing items of a remote project in offline mode on page 29.

Lock types
The lock modes available in Talend Studio are the following: the automatic lock mode, the semi-automatic lock mode and
the manual lock mode. An Administrator can define the lock mode in Talend Administration Center. For more information,
see Talend Administration Center User Guide.
The sections below describe the above lock modes.

Automatic lock mode

By default, the first user who creates or opens an item owns the lock on that particular item and can edit it until it is
unlocked, generally when closing and/or saving the changes made. When the lock is removed from an item, the changed item
is committed to Git.

Note: This is the default mode for Git.

You can also log information about the changes you make to any item, provided that the relevant option is selected in
Talend Administration Center. See Talend Administration Center User Guide for further details and read Logging information
on edited items on page 31.

Semi-automatic lock mode

When the Ask user option is selected in the Talend Administration Center web application, you get prompted to put a lock on
any item you open in the Studio.
If you want to edit the item you are opening, click OK to put a lock on it. The item becomes read-only for other users, as in
the default mode.
When closing or saving the item, you get prompted again to unlock it. If you have finished making changes, click OK to
remove the lock and allow other users to lock it if needed.
If you do not need to open the item in editing mode (locked), click No when prompted to open it in read-only mode.

Manual lock mode

When the manual lock mode option is selected in Talend Administration Center, no item is locked unless you lock it
intentionally.
To intentionally lock an item (for editing purposes, for example), simply right-click it and select the Lock option while the item
is in the closed state.
In the same way, a locked item can only be unlocked through the same procedure by the lock owner (or through the Talend
Administration Center web application by the administrator).
By default, items can only be opened in read-only mode in this manual lock mode.


Accessing items of a remote project in offline mode


Talend Studio allows you to open a remote project in offline (local) mode, so that you can edit any items in parallel with
other users and commit your changes to Git when you log on to the remote project again, or save your changes locally if the
edited items are locked by other users or are in conflict.

Before you begin


You have already logged on to the remote project successfully via a remote connection so that the project information
already exists in the workspace directory of your Talend Studio.

Procedure
1. Launch your Talend Studio, or if you have already opened the project using a remote connection, restart your Studio by
selecting File > Switch Project or Workspace from the menu.
2. Create a local connection by following the steps described in the Getting Started Guide, without modifying the
workspace directory that contains the information of the remote project in the Workspace field.
3. On the login screen, select the local connection you just created from the Connection list, and select the remote project
from the Project field, and then click Finish.

Results
Now you can continue working locally on the project branch that you previously worked on.
When you work in offline mode on a Git project, you are working on the local branch associated with the branch you last
worked on. Your changes are automatically committed to your local Git repository, and the top bar of the Repository tree
view indicates the number of local commits.


You can revert the current local branch, switch between local branches and delete local branches you are not currently
working on.
When you reopen the project using a remote connection and select any branch on which you made changes while you
worked in offline mode, you will be presented with the corresponding local branch and you need to push your commits manually to
the remote Git repository.
For more information about working with project branches, see Working with project branches and tags on page 32.

Handling changes not committed to the Git

Warning: For a Git managed project, information in this section is applicable only when you are working on a remote
branch. For more information on working with project branches, see Working with project branches and tags on page
32.

When the commit mode is set to Unlocked Items in Talend Administration Center, the changes you make to an item are
committed to the Git only after the item is unlocked.

Note: In the Repository tree view of Talend Studio, an item with uncommitted changes is preceded by a > symbol.

In this case, if you're the only owner of a changed item, you can get your changes committed to the Git by:
• closing the item if the lock mode is set to Automatic in Talend Administration Center.
• closing the item and unlocking it when prompted if the lock mode is set to Ask user in Talend Administration Center.
• closing the item and manually unlocking it before quitting the Studio if the lock mode is set to Manual in Talend
Administration Center.
In certain situations, a dialog box opens when uncommitted items are found, providing you options to handle those items.
For details, see Handling uncommitted items when prompted on page 30.
For more information about the commit mode and lock mode options, see Talend Administration Center User Guide.

Handling uncommitted items when prompted

When you work on a Git project in local mode and perform a pull or push operation or switch to the associated remote
branch, a dialog box pops up if:
• you have made changes to your project items while the Commit mode is set to Unlocked Items and the Lock mode is set
to Manual in Talend Administration Center.
• any files have been manually added to the project folder of your Talend Studio, regardless of the Commit mode and
Lock mode settings in Talend Administration Center.
• any items are found to have changes not committed to your local Git repository for any reason.


You can:
• expand the log to view the uncommitted files.
• click Commit to commit the files to your local Git repository and continue your current operation.
• click Reset to abort your changes and continue your current operation.
• click Cancel to cancel your current operation without committing or removing the uncommitted files.

Logging information on edited items

About this task


With the Custom log option selected in Talend Administration Center, users are required to log meta-information about
all the changes made to any item of the relevant project stored on Git.
When you log on to a project for the first time after the project is created or modified in Talend Administration Center,
and each time you are about to commit changes made to any item belonging to that project, the Commit changes window
appears to prompt you to log the changes you have made.
This window consists of:
• a text box where you can enter your comment on the commit,
• an Additional logs tab, which contains the log information that will be appended to your comment,
• a Change list tab, which lists the actual studio files that will be committed.
To log information on edited items, complete the following in the Commit changes window:

Note:
The read-only additional log at the bottom of the window lists a summary of all changes made to the current project, in
case you have not committed them to Git at each individual change.
For a Git project, appending additional logs is optional. This allows you to provide your own commit logs in circumstances
where your Git server enforces strict syntax rules, for example when you are working with a Git server hook and the
automatically generated logs do not comply with those rules and might cause your commit to fail.

Procedure
1. Type your comment in the text box, conforming to your Git commit log pattern convention.
2. Click Finish when you have completed the form, or click Cancel if you want to postpone the change log to a later stage.


Results

Note: The log prompt will append all changes that you have not committed to the Git yet. So you can choose to log
information after several actions, rather than on every action.

For more information on managing projects stored on Git and setting the commit log pattern, see Talend Administration
Center User Guide.

Deleting shared items


You can delete any item in the shared repository unless it is currently being edited (locked) by another user. You can
still delete items that other users have opened in read-only mode.

Working with project branches and tags


A project is the highest physical structure for storing all different types of items. Talend Studio supports a version control
system that enables you to have different copies of a project in different Git branches or tags. The items in one project
branch or tag are independent of those in another.
This section addresses topics related to project branches and tags, including:
• Creating a local branch from the Studio on page 32
• Pushing changes on a local branch to the remote end on page 34
• Updating a local branch on page 35
• Reverting a local branch to the previous update state on page 35
• Deleting a local branch on page 36
• Creating a tag for a project on page 36
• Switching between branches or tags on page 37
• Viewing the Git commit history on page 37
• Resolving conflicts between branches on page 38
• Merging remote branches on page 43

Note: When you work on a Git managed project, depending on the security policy set for Git in Talend Administration
Center, you may see a dialog box asking you to provide your Git credentials when the Studio tries to connect to Git.
• If the Git credentials are managed by Talend Administration Center and the Git login and password are not specified
in user settings or at user creation, you will be prompted to enter your Git login credentials each time the Studio
tries to connect to the Git.
• If the Git credentials are managed by Talend Studio, you will be prompted to enter your Git login credentials when
the Studio tries to communicate with Git and you have the option to store your Git login credentials in the Studio so
that you will not be prompted again.
For more information about security policy configuration for Git, see the section Setting up the Security Policy for Git of the
Talend Administration Center User Guide.

Creating a local branch from the Studio


When working on a Git managed project, you can create local branches from within your Talend Studio and keep your
changes local until you push them to the Git server.
Your Talend Studio provides two options for you to create a local branch:
• Creating a new branch based on a selected source
• Checking out a remote branch as a local one

Note: In the context of Git branching, main or master can be regarded as a special branch.

After the branch is created, you can work on it and manage it using tools provided by your Talend Studio.
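For readers who want to relate these Studio actions to plain Git operations, the sketch below shows what checking out a remote branch as a local tracking branch looks like with the JGit library. This is an illustration only, not the Studio's actual implementation; the repository path and branch names are hypothetical.

import org.eclipse.jgit.api.CreateBranchCommand;
import org.eclipse.jgit.api.Git;

import java.io.File;

public class CreateLocalBranch {
    public static void main(String[] args) throws Exception {
        // Open the local clone of the project; the path is hypothetical.
        try (Git git = Git.open(new File("/path/to/workspace/projectA"))) {
            // Create a local branch "feature_x" starting from the remote branch
            // "origin/main", set it to track the remote branch, and switch to it.
            git.checkout()
               .setCreateBranch(true)
               .setName("feature_x")
               .setStartPoint("origin/main")
               .setUpstreamMode(CreateBranchCommand.SetupUpstreamMode.TRACK)
               .call();
        }
    }
}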


Creating a new branch based on a selected source

Procedure
1. From the Talend Studio, click the top bar of the Repository tree view and select New Branch from the drop-down menu.

2. In the New Branch dialog box, select the source, which can be a remote or local branch your local branch will be based
on, enter a name for your new branch, and click OK.

3. When asked whether to switch to the newly created branch directly, click OK to switch or click Cancel to stay on the
current branch.

Checking out a remote branch as a local one

Procedure
1. Click the top bar of the Repository tree view in Talend Studio.
2. From the drop-down menu, select a remote branch which you want to check out as a local one, and then select check
out as local branch from the sub-menu.
There are at most ten remote branches displayed in the drop-down menu. If the remote branch you want to check out is
not in the drop-down menu, click > Click for more remote branches ... and in the Remote branches dialog box displayed,
select the target branch and click Checkout as local.


3. In the Checkout as local branch dialog box, enter a name for your new branch, and click OK to create the branch and
switch to it.

Pushing changes on a local branch to the remote end

About this task


When working on a local branch of a Git managed project, your changes are automatically committed to your local repository
when they are saved. However, they are not automatically pushed to the Git server - you need to push them manually using
the Git Push tool provided within your Talend Studio.

Warning: If you created your local branch on a project that has referenced projects and if you have the project references
management right, the project reference relationships will be automatically created for your branch on the remote
repository when you push your branch for the first time. If you don't have the project references management right, you
will need to ask your administrator to set up the project references manually in the Talend Administration Center.
For more information about referenced projects, see Working with referenced projects on page 190.

Procedure
1. Save your changes so that they are committed to your local repository.
2. Optionally, update your local Git repository to prevent possible errors caused by your local Git repository being out of
sync with the server.
For more information, see Updating a local branch on page 35.
When you profile databases related to analyses and reports, this step is mandatory. As each table has its own ID, when
multiple users are working on the same table locally, you may encounter conflicts when you merge branches.


3. Click the top bar of the Repository tree view and select Push from the drop-down menu.
4. If any editor windows are open, you will see a warning message. Click OK to close the editor windows and proceed with
the push action.
• If the push operation is not complete within 1.5 seconds, a dialog box pops up to indicate the push progress. You
can:
• Click Run in Background to close this dialog box to keep the progress information shown only at the lower
right corner.
• Click Cancel to cancel the push operation and close the dialog box.
• This dialog box closes automatically when the push operation is complete or any conflict occurs. For more
information about conflict handling, see Resolving conflicts between branches on page 38.
5. If a Push Rejected by Server dialog box pops up indicating a push failure, either:
• Click Yes to let the Studio update your local Git repository and then push your changes again automatically.
• Click No if you want to stop the push action, update your local Git repository and then push your changes again
manually.
6. When the push operation completes, a dialog box opens informing you that your changes have been pushed to the Git
server successfully. Click OK to close the dialog box.

Results
Your changes have now been pushed to the Git server. If this is the first push from your local branch, a remote branch with
the same name is automatically created as the associated branch to hold the commits you push from your local branch.
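To make this more concrete, the following JGit sketch shows a local commit followed by a push whose refspec creates a same-named branch on the remote server on the first push. It is a minimal illustration under hypothetical names (branch feature_x, placeholder credentials), not the Studio's internal code.

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.transport.RefSpec;
import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;

import java.io.File;

public class PushLocalBranch {
    public static void main(String[] args) throws Exception {
        try (Git git = Git.open(new File("/path/to/workspace/projectA"))) {
            // Stage and commit local changes (the Studio commits when you save).
            git.add().addFilepattern(".").call();
            git.commit().setMessage("Update Job ExampleJob").call();

            // Push the local branch; on the first push this creates a remote
            // branch with the same name on the Git server.
            git.push()
               .setRemote("origin")
               .setRefSpecs(new RefSpec("refs/heads/feature_x:refs/heads/feature_x"))
               .setCredentialsProvider(
                   new UsernamePasswordCredentialsProvider("user", "password"))
               .call();
        }
    }
}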

Updating a local branch

About this task


When working on a local branch of a Git managed project, you can update your branch with information from another branch
using the Git Pull and Merge tool provided by your Talend Studio.
To pull and merge information to your local branch, do the following:

Procedure
1. Save your changes so that they are automatically committed to your local repository.
2. Click the top bar of the Repository tree view and select Pull And Merge Branch from the drop-down menu.
3. If any editor window is open, you will see a warning message. Click OK to close the editor and proceed with the pull
action.
4. In the Select a branch to merge or update dialog box, select the source branch to pull from, which can be a remote or
local branch, and then click OK to complete the pull and merge operation.

Results
Your local branch is now up to date. Depending on the source branch you selected, a dialog box opens to show the pull or
merge result:
• If the source branch is the default branch, which is the remote counterpart of the local branch, the dialog box shows the
pull result.
• If the source branch is another one, the dialog box shows the merge result.
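The two cases above correspond to a Git pull and a Git merge respectively. The following JGit sketch illustrates both, assuming hypothetical branch names; it is an illustration only, not the Studio's actual implementation.

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.api.MergeResult;
import org.eclipse.jgit.api.PullResult;
import org.eclipse.jgit.lib.Ref;

import java.io.File;

public class UpdateLocalBranch {
    public static void main(String[] args) throws Exception {
        try (Git git = Git.open(new File("/path/to/workspace/projectA"))) {
            // Case 1: the source is the remote counterpart of the local branch,
            // so the operation is a pull (fetch followed by merge).
            PullResult pull = git.pull().setRemote("origin").call();
            System.out.println("Pull successful: " + pull.isSuccessful());

            // Case 2: the source is another branch, so the operation is a merge.
            Ref other = git.getRepository().findRef("refs/remotes/origin/feature_y");
            MergeResult merge = git.merge().include(other).call();
            System.out.println("Merge status: " + merge.getMergeStatus());
        }
    }
}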

Reverting a local branch to the previous update state

About this task


When working on a local branch of a Git managed project, once a remote branch associated with your local branch is created
on the Git server upon your first push, the top bar of the Repository tree view will indicate the number of new commits not
yet pushed to the associated remote branch. You can choose to push such commits to the Git server or abort such commits
by reverting your local branch to the previous update state.
To revert your local branch to the previous update state, do the following:


Procedure
1. Click the top bar of the Repository tree view, select More... from the drop-down menu and then select Reset from the
sub-menu.
2. If any editor window is open, you will see a warning message. Click OK to close the editor and proceed.

Results
Your local branch is now reverted to the previous update state.

Deleting a local branch

About this task


When working on a Git managed project, you can delete local branches you created from your Talend Studio.

Procedure
1. If you are currently on the local branch you want to delete, switch to another branch first.
For more information, see Switching between branches or tags on page 37.
2. Click the top bar of the Repository tree view, select the local branch you want to delete from the drop-down menu, and
then select Delete Branch from the sub-menu.
3. In the Delete dialog box, click OK to confirm your action.

Results
The selected local branch is now deleted.

Creating a tag for a project


For a Git managed project, a tag is a read-only copy that records the current state of the project at the point in time the tag
is created.

Warning: When working in a tag, you can make changes to your project items, but you will be unable to permanently
save your changes to an item unless you copy the item to a branch or to main. For how to copy an item to a branch, see
Copying items from a branch or a tag on page 228.

You can create a tag for a project either in Talend Studio or in Talend Administration Center.

Note:
• Close all open editors in the design workspace before trying to create a tag. Otherwise, a warning message will
appear prompting you to close all open editors.
• All tags are visible for all projects on the same Git repository. Therefore, if you create some tags for one project, all
other projects on the same Git repository will have the same list.

For how to create a tag in Talend Administration Center, see Talend Administration Center User Guide.

Procedure
1. Open the remote project for which you want to create a tag.
2. Click the top bar of the Repository tree view to open a drop-down menu.
3. Select More ... from the drop-down menu and then select Add Tag from the sub-menu.
4. In the New Tag dialog box, select the source, which can be the main or a branch based on which your tag will be
created, enter a name for your new tag, and click OK.

Warning: You cannot create tags based on a local branch you have created in the Studio.

Creating a tag may take some time. When done, the tag you created will be listed on the drop-down menu when
clicking the top bar of the Repository tree view, and you can switch to it by following the steps in Switching between
branches or tags on page 37.


Switching between branches or tags


Talend Studio allows you to switch between branches of a project without needing to restart your Studio.
Once you open a project that has different branches or tags, you can switch between main and any of the existing
branches or tags, or between different branches or tags.

Note:
• Close all open editors in the design workspace before trying to switch to another branch or tag. If not, a warning
message will display prompting you to close all open editors.
• When you switch to a local branch and changes are found on the associated remote branch, those changes are
automatically synchronized to the local branch.

The switch operation may take some time. Wait until the operation is finalized. Then, the Repository tree view switches
to show the project items of the selected branch. The name of the active branch is shown on the top bar of the
Repository tree view.

Procedure
1. If you are currently on a local branch and you have changes not yet pushed to the associated remote branch, push them
first.
For more information, see Pushing changes on a local branch to the remote end on page 34.
2. Click the top bar of the Repository tree view to open a drop-down menu.
3. Select the target branch or tag from the drop-down menu, and then select switch from the sub-menu.
There are at most ten remote branches displayed in the drop-down menu. If the target branch or tag is not in the drop-
down menu, click > Click for more remote branches ... and in the Remote branches dialog box displayed, select it and
click Switch.

Viewing the Git commit history


You can view the commit history of your project, or of an item such as a Job, Joblet, Route, or Routelet, specific to the current
branch or tag, from your Talend Studio.

Procedure
1. Switch to the branch or tag.
For more information, see Switching between branches or tags on page 37.
2. To view the commit history of your project, click the top bar of the Repository tree view, select More... from the drop-
down menu, and then select Show History from the sub-menu to open the Git History view.
Alternatively, you can open the Git History view by selecting Window > Show view from the Studio menu and then in
the Show View dialog box, selecting Talend > Git History and clicking OK.
3. To view the commit history of a specific item like Job, Joblet, Route, and Routelet, select the item in the Repository tree
view and then in the Job view, click Git History to open the corresponding tab.
4. In the Git History view or tab, select any commit record to view its details.


In the upper right corner of the Git History view or tab, you can find commits by clicking the down arrow next to the
Find field, selecting a type like Id, Author, or Branch/Tag from the drop-down menu, and then entering the search
keyword in the Find field.
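The Git History view presents the same information you would get from the underlying Git log. As an illustration only, the following JGit sketch lists the last commits of the current branch with their id, date, author, and message (the fields shown in the view); the repository path is hypothetical.

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.revwalk.RevCommit;

import java.io.File;
import java.text.SimpleDateFormat;
import java.util.Date;

public class ShowCommitHistory {
    public static void main(String[] args) throws Exception {
        try (Git git = Git.open(new File("/path/to/workspace/projectA"))) {
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm");
            // List the last 10 commits on the current branch, similar to what
            // the Git History view displays for the project.
            for (RevCommit commit : git.log().setMaxCount(10).call()) {
                System.out.printf("%s  %s  %s  %s%n",
                        commit.abbreviate(8).name(),
                        fmt.format(new Date(commit.getCommitTime() * 1000L)),
                        commit.getAuthorIdent().getName(),
                        commit.getShortMessage());
            }
        }
    }
}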

Resolving conflicts between branches

About this task


In a collaborative environment where multiple users work on the same project simultaneously, you may encounter conflicts
when pushing, updating or merging branches. In this case, you will see a dialog box asking you whether to resolve the
conflicts.
To resolve the conflicts, do the following:

Procedure
1. Click OK in the dialog box to open the Git Merge perspective.

The Git Merge perspective opens, and the Conflicts Navigator panel on the left displays the project items on which
conflicts have been found.
2. In the Conflicts Navigator panel, right-click a conflicted item and select from the context menu:


• Resolve in editor: to open a compare editor in the right-hand panel of the Git Merge perspective.
Note that this option is available only for project items mentioned in Resolving conflicts in compare editors on
page 39.
• Accept mine: to accept all the changes on the working branch to fix conflicts on the item without opening a
compare editor.
• Accept theirs: to accept all changes on the other branch to fix conflicts on the item without opening a compare
editor.
• Mark as resolved: to mark all conflicts on the item as resolved, leaving discrepancies between the branches.
3. When all conflicts are fixed and marked as resolved, click Yes in the dialog box that opens, or click the icon at
the top of the Conflicts Navigator panel, to continue your previous action.

Resolving conflicts in compare editors

Depending on the nature of a conflicted project item, Talend Studio provides one of the following types of compare editors
for you to compare changes on the project item between both branches graphically:
• Job Compare editor, if the conflicted project item is a standard Job, a Joblet, or a test case.
• EMF Compare (Eclipse Modeling Framework) editor, if the conflicted item is a context group, a database connection,
or an item in project settings.
• Text Compare editor, if the conflicted item is a general text file, a routine, a Job script, or an SQL script.
To open a compare editor, right-click a conflicted project item in the Conflicts Navigator tree view and select Resolve in
editor from the contextual menu.
After fixing conflicts in an editor, mark the conflicts as resolved and close the editor to continue your previous branch
operation.
Job Compare editor
The upper part of the Job Compare editor displays a tree view that shows all the design and parameter items of the Job on
which conflicts have occurred. A dark red conflict indicator is seen on the icon of each conflicted item.
In this tree view, you can expand each node and select the conflicted items to explore the details of the conflicts.
The lower part displays a comparison view that shows the details of the different versions of the selected item. In this
comparison view:


• When a design item, such as a component or a connection, is selected in the upper tree view, the item is highlighted
graphically.
• When a parameter item, such as the schema of a component, is selected in the upper tree view, a yellow warning sign
indicates each conflicted parameter, such as a schema column, of that item.

In the Job Compare editor, you can resolve conflicts on the entire Job, all the design items, or all the parameter items in one
go, or resolve conflicts on individual items, parameters, or parameter properties separately.
When fixing conflicts on the entire Job:
• To accept the version of the working branch, either right-click Job Designs Unit in the upper tree view and select Accept mine from the contextual menu, or select Job Designs Unit in the upper tree view and click the corresponding icon in the comparison view.
• To accept the version of the other branch, either right-click Job Designs Unit in the upper tree view and select Accept theirs from the contextual menu, or select Job Designs Unit in the upper tree view and click the corresponding icon in the comparison view.
When fixing conflicts on all the items under a node:
• To accept the version of the working branch, either right-click the node in the upper tree view and select Accept mine from the contextual menu, or select the node in the upper tree view and click the corresponding icon in the comparison view.
• To accept the version of the other branch, either right-click the node in the upper tree view and select Accept theirs from the contextual menu, or select the node in the upper tree view and click the corresponding icon in the comparison view.
When fixing conflicts on an item:
• To accept the version of the working branch, either right-click the conflicted item in the upper tree view and select Accept mine from the contextual menu, or select the item in the upper tree view and then click the corresponding icon in the comparison view.
• To accept the version of the other branch, either right-click the conflicted item in the upper tree view and select Accept theirs from the contextual menu, or select the item in the upper tree view and then click the corresponding icon in the comparison view.

Note:
When you try to accept the version of a connection from the other branch:
• If both the input and output components across the connection differ between the working branch and the other
branch, you will be prompted to accept the whole Job design from the other branch.
This, however, is not mandatory - you can try to accept the components and the connection parameters individually
first.
• If the input or output component of the connection cannot be redirected to the new input or output component on
the working branch, you will be prompted to accept the whole Job design from the other branch.
This, however, is not mandatory - you can try to accept the component and the connection parameters individually
first.
• If the input or output component of the connection does not exist on the working branch, you will be prompted to
accept the component first.

When fixing conflicts on a parameter or parameter property:
• To accept the version of the working branch, click the corresponding icon for that parameter or parameter property in the comparison view.
• To accept the version of the other branch, click the corresponding icon for that parameter or parameter property in the comparison view.
You can edit parameters and parameter properties manually for the working branch in the comparison view. To do so:

Procedure
1. Select the concerned parameter item in the upper tree view to show the parameters under that item in the comparison
view.
2. To edit a parameter, click the parameter in the comparison view so that a ... button appears next to it.
To edit a property of a parameter, expand the parameter and click the property you want to edit to show a ... button
next to it.
3. Click the ... button to open a dialog box.
4. Make your changes and then click OK to close the dialog box.

Results
With the conflicts on an item fixed, the conflict indicator on the icon of the conflicted item in the upper view and the conflict
signs in the comparison view become green.

Warning:
After fixing conflicts in a conflicted editor, be sure to save your changes.

Note that if a centralized Repository item - a context group or a file or database connection defined in the Repository
for example - is called in a Job, fixing conflicts for the Job in the Job Compare editor does not automatically update the
corresponding Repository item. When you open the Job in the Integration perspective, you will be asked whether to update
your Job.
EMF Compare editor
The upper part of the EMF Compare editor gives an overview of the differences detected between the two branches. The
lower part is a comparison view that shows the different versions of the selected item between both branches.

To view and fix conflicts:
• Click the navigation arrows to move through the detected differences. The details about the selected item are shown in the comparison view.
• Click the accept button to accept the currently selected change, or the reject button to reject the currently selected change.
• Click the accept-all button to accept all non-conflicting changes at once, or the reject-all button to reject all non-conflicting changes at once.
• For a text feature, click the copy-all button at the top of the comparison view to copy all the shown changes, or the copy button to copy only the selected change, from right to left.
Accepted changes are marked with an accepted icon, and rejected changes with a rejected icon.

Warning:
After fixing conflicts in a conflicted editor, be sure to save your changes.

Note that if a centralized Repository item - a context group or a file or database connection defined in the Repository for
example - is called in a Job, fixing conflicts for the Repository item in the EMF Compare editor does not automatically update
the corresponding Job. When you open the Job in the Integration perspective, you will be asked whether to update your Job.
Text Compare editor
The Text Compare editor is a two-pane comparison view that displays the different versions of a text-based project item
between both branches.


To view and fix conflicts:
• Click the ancestor pane button to show the ancestor pane, which displays the ancestor version of the compared versions if one is detected. This button is operable only for three-way comparison.
• Click the toggle button to switch between two-way (ignoring the ancestor version) and three-way comparison.
• Click the copy-all button to copy all the shown changes, or the copy button to copy only the selected change, from right to left.
• Click the difference navigation arrows to move through the differences.
• Click the change navigation arrows to move through the changes.
• You can also edit text directly in the left pane to make changes to the version of the current branch.

Warning:
After fixing conflicts in a conflicted editor, be sure to save your changes.

Note that if a centralized Repository item - a routine, a Job script, or an SQL script defined in the Repository for example
- is called in a Job, fixing conflicts for the Repository item in the Text Compare editor does not automatically update the
corresponding Job. When you open the Job in the Integration perspective, you will be asked whether to update your Job.

Merging remote branches

About this task


When working on a Git managed project, you can merge remote branches from within your Talend Studio in local mode.
Use the following procedure to merge a remote branch A into another remote branch B.

Procedure
1. Check out the remote branch B to your Talend Studio as a local branch with the same branch name, by following the
steps described in Checking out a remote branch as a local one on page 33.
2. Update the local branch B using the remote branch A as the source to pull from, by following the steps described in
Updating a local branch on page 35.
3. Resolve any conflicts that occurred during the branch update, by following the steps described in Resolving conflicts
between branches on page 38.


4. Push your local branch B to the remote end, by following the steps described in Pushing changes on a local branch to
the remote end on page 34.

Defining project references


A project reference is a property that you can set for a project so that all or some of the project items can be referenced by
another project.
When one project references another, the items (Jobs, Metadata and so on) in the referenced project are available for reuse.
When a project is stored on Git, its items are structured in main and branches so that the reference can be established at
either level, providing more flexibility in project usage.

Note:
• You can establish references between projects only if the type of the project to be used as a reference is subordinate
to the type of the referencing project. For example, a Data Management project can be used as a reference for a
Master Data Management project and not vice versa. For more information about project types, see the Talend
Administration Center User Guide.
• You need to have read-write access to the projects only for migration purposes upon migrating to a new version or
applying a patch.
• You cannot:
• define two or more branches of a project as references to another project.
• create a cycle of references between two projects.
• have project items (Jobs, Contexts, Metadata, etc.), except Joblets, with the same name in both a project and its
reference.

The procedure below describes how to define a project reference in Talend Studio. You can also create and manage project
references in Talend CommandLine using the addReference and deleteReference commands. For more information
on these commands, run the help <commandName> command in Talend CommandLine.
For more information about working with referenced projects, see Working with referenced projects on page 190.

Before you begin


Check that you are working on a remote project.

Procedure
1. Right-click the Referenced project node in the Repository tree view and select Reference Projects Setting from the
contextual menu to open the Reference Projects Setting view.

Alternatively, you can click the corresponding button on the Studio toolbar, or select File > Edit Project Properties from the menu bar, and
then select Reference Projects on the left panel of the dialog box to open the Reference Projects Setting view on the
right.


2. In the Add new reference area, click the button to retrieve the projects you can set as referenced projects.
3. From the Project list, select the project you want to add as a referenced project.
4. From the Branch list, select a branch on which the project is established.
5. Depending on how you want Joblets to be searched when building a Job:
• Select the Use strict references to find joblets in target project check box to find a Joblet only in the project from
which the Joblet was added to the Job during the Job design.
• Clear the Use strict references to find joblets in target project check box to find a Joblet in other projects if the
Joblet is not found in the project from which it was added to the Job during the Job design. By default, this check
box is cleared.
6. Click the button to add the selected project to the Current references area.

In the Current references area, you can delete a referenced project by selecting it and clicking the button.
7. Click OK to validate the project reference settings and close the dialog box.
The Studio will log in to the main or referencing project again to apply the project reference settings.

Results
The defined referenced project is displayed in the Repository tree view.


Data Integration
Talend Data Integration functional architecture
Talend Studio functional architecture is an architectural model that identifies Talend Studio functions, interactions and
corresponding IT needs. The overall architecture has been described by isolating specific functionalities in functional blocks.
The following chart illustrates the main architectural functional blocks used to handle your data integration tasks.

Note:
The building blocks you have available for use depend on your license.

Several functional blocks are defined:


• The Clients block includes one or more Talend Studio(s) and Web browsers that could be on the same or on different
machines.
From the Studio, you can carry out data integration processes regardless of the level of data volumes and process
complexity. Talend Studio allows you to work on any project for which you have authorization.
From the Web browser, you connect to the remotely based Talend Administration Center through a secured HTTP
protocol.
• The Talend Servers block includes a web-based Administration Center (application server) connected to:
• two shared repositories: one based on a Git server and one based on an Artifact repository,
• databases: one for administration metadata, one for audit information, and one for Activity monitoring,
• Talend execution server(s).
Talend Administration Center enables the management and administration of all projects. Administration metadata
(user accounts, access rights and project authorization for example) is stored in the Administration database. Project
metadata (Jobs and Routines for example) is stored in the Git server.
For detailed information about the Administration Center, see Talend Administration Center User Guide.


• The Repositories block includes the Git server and the Artifact repository. The Git server is used to centralize all project
metadata like Jobs shared between different end-users, and accessible from the Talend Studio to develop them and
from Talend Administration Center to publish, deploy and monitor them.
The Artifact repository is used to store:
• Software Updates available for download,
• Jobs that are published from the Talend Studio and are ready to be deployed and executed.
• The Talend Execution Servers block includes one or more execution servers, deployed inside your information system.
Talend Jobs are deployed to the Job servers through the Administration Center's Job Conductor to be executed on
scheduled time, date, or event.
For detailed information about execution servers, see Talend Administration Center User Guide.
• The Databases block includes the Administration, the Audit and the Monitoring databases. The Administration database
is used to manage user accounts, access rights and project authorization, and so on. The Audit database is used to
evaluate different aspects of the Jobs implemented in projects realized in Talend Studio with the aim of providing solid
quantitative and qualitative factors for process-oriented decision support. The Monitoring databases include the Talend
Activity Monitoring Console database and the Service Activity Monitoring database.
The Talend Activity Monitoring Console allows you to monitor the execution of technical processes. It provides detailed
monitoring capabilities that can be used to consolidate collected log information, understand the underlying data flows
interaction, prevent faults that could be unexpectedly generated and support the system management decisions.
The Service Activity Monitoring allows you to monitor service calls. It provides monitoring and consolidated event
information that the end-user can use to understand the underlying requests and replies that compose the event,
monitor faults and support the system management decisions.

Designing Jobs
What is a Job design?
A Job Design is a graphical design of one or more components connected together that allows you to set up and run
dataflow management processes. A Job Design translates business needs into code, routines, and programs; in other words, it
technically implements your data flow.
The Jobs you design can address all of the different sources and targets that you need for data integration processes and any
other related process.
When you design a Job in Talend Studio, you can:
• put in place data integration actions using a library of technical components.
• change the default setting of components or create new components or family of components to match your exact
needs.
• set connections and relationships between components in order to define the sequence and the nature of actions.
• access code at any time to edit or document the components in the designed Job.
• create and add items to the repository for reuse and sharing purposes (in other projects or Jobs or with other users).

Warning: To execute the Jobs you design in Talend Studio, you must install the compatible Java environment. For more
information, see Compatible Java environments.

Getting started with a basic Job


This section provides a continuous example that will help you create, add components to, configure, and execute a simple
Job. This Job will be named A_Basic_Job and will read a text file, display its content on the Run console, and then write
the data into another text file.

Creating a Job

Talend Studio enables you to create a Job by dropping different technical components from the Palette onto the design
workspace and then connecting these components together.


Warning: If you are working on a Git-managed project, do not use any of the following reserved key words to name your
Job or Job folder:
• tests
• target
• src
If any of the above-mentioned key words is used in the name of a Job, a Job folder or any level of its parent folders,
changes to your Job or your Jobs in the folder will not get pushed to Git.

Note that if you are a subscription-based user of one of the Talend solutions with Big Data, another type of Job can
be created to generate native Spark code and be executed directly in Spark clusters. For more information, see the chapter
describing how to design a Spark Job.

About this task


To create the example Job described in this section, proceed as follows:

Procedure
1. In the Repository tree view of the Integration perspective, right-click the Job Designs node or the Standard folder
under the Job Designs node and select Create Standard Job from the contextual menu.
The New Job wizard opens to help you define the main properties of the new Job.

2. Fill in the Job properties.


The fields correspond to the following properties:

Field        Description

Name         The name of the new Job. Note that a message comes up if you enter prohibited characters.

Purpose      Job purpose or any useful information regarding the Job use.

Description  Job description containing any information that helps you describe what the Job does and how it does it.

Author       A read-only field that shows by default the current user login.

Locker       A read-only field that shows by default the login of the user who owns the lock on the current Job. This field is empty when you are creating a Job and has data only when you are editing the properties of an existing Job.

Version      A read-only field. You can manually increment the version using the M and m buttons. For more information, see Managing Job versions on page 184.

Status       A list to select from the status of the Job you are creating.

Path         A list to select from the folder in which the Job will be created.

3. Click Finish to validate the Job properties. An empty design workspace opens up, showing the name of the Job as a tab label.

Results
The Job you created is now listed under the Job Designs node in the Repository tree view.
You can open one or more of the created Jobs by simply double-clicking the Job label in the Repository tree view.

Adding components to the Job

Now that the Job is created, components have to be added to the design workspace, a tFileInputDelimited, a tLogRow, and a
tFileOutputDelimited in this example.
There are several ways to add a component onto the design workspace. You can:
• find your component on the Palette by typing the search keyword(s) in the search field of the Palette and drop it onto
the design workspace.
• add a component by directly typing your search keyword(s) on the design workspace.
• add an output component by dragging from an input component already existing on the design workspace.
• drag and drop a centralized metadata item from the Metadata node onto the design workspace, and then select the
component of interest from the Components dialog box.
This section describes the first three methods. For details about how to drop a component from the Metadata node, see
Centralizing database metadata on page 318 .
You can also search Stitch connectors on the design workspace and in the Palette. The search result will lead you to the
Stitch web page about the connector you select.


Dropping the first component from the Palette

About this task


The first component of this example will be added from the Palette. This component defines the first task executed by the
Job. In this example, as you first want to read a text file, you will use the tFileInputDelimited component.
To drop a component from the Palette, proceed as follows:

Procedure
1. Enter the search keyword(s) in the search field of the Palette and press Enter to validate your search.
The keyword(s) can be the partial or full name of the component, or a phrase describing its functionality if you don't
know its name, for example, fileinputde, fileinput, or read file row by row. The Palette filters to show only the
families where the component can be found. If you cannot find the Palette view in the Studio, see Changing the Palette
layout and settings on page 589.

Note: Components may not always have a prefix letter in the name. Therefore, as a best practice, you may want to
simply specify the main body when searching for a component by its name.

2. Select the component you want to use and click on the design workspace where you want to drop the component.

Results
Note that you can also drop a note to your Job the same way you drop components.
Each newly-added component is shown in a blue box to show that it is an individual subJob.


Adding the second component by typing on the design workspace

About this task


The second component of our Job will be added by typing its name directly on the workspace, instead of dropping it from the
Palette or from the Metadata node.
Prerequisite: Make sure you have selected the Enable Component Creation Assistant check box in the Studio preferences. For
more information, see Using centralized metadata in a Job on page 513.
To add a component directly on the workspace, proceed as follows:

Procedure
1. Click where you want to add the component on the design workspace, and type your keywords, which can be the full or
partial name of the component, or a phrase describing its functionality if you don't know its name. In our example, start
typing log.

Note: Components may not always have a prefix letter in the name. Therefore, as a best practice, you may want to
simply specify the main body when searching for a component by its name.

A list box appears below the text field displaying all the matching components in alphabetical order.

2. Double-click the desired component to add it on the workspace, tLogRow in our example.


Adding an output component by dragging from an input one

About this task


Now you will add the third component, a tFileOutputDelimited, to write the data read from the source file into another text
file. We will add the component by dragging from the tLogRow component, which serves as an input component to the new
one to be added.

Procedure
1. Click the tLogRow component, and drag and drop the arrow icon displayed onto the design workspace.
A text field and a component list appear. The component list shows all the components that can be connected with the
input component.

2. To narrow the search, type in the text field the name of the component you want to add or part of it, or a phrase
describing the component's functionality if you don't know its name, and then double-click the component of interest,
tFileOutputDelimited in this example, on the component list to add it onto the design workspace.

Note: Components may not always have a prefix letter in the name. Therefore, as a best practice, you may want to
simply specify the main body when searching for a component by its name.

The new component is automatically connected with the input component tLogRow, using a Row Main connection.


Connecting the components together

Now that the components have been added on the workspace, they have to be connected together. Components connected
together form a subJob. Jobs are composed of one or several subJobs carrying out various processes.
In this example, as the tLogRow and tFileOutputDelimited components are already connected, you only need to connect the
tFileInputDelimited to the tLogRow component.
To connect the components together, use either of the following methods:
Right-click and click again

Procedure
1. Right-click the source component, tFileInputDelimited in this example.
2. In the contextual menu that opens, select the type of connection you want to use to link the components, Row Main in
this example.
3. Click the target component to create the link, tLogRow in this example.

Note that a black crossed circle is displayed if the target component is not compatible with the link.

According to the nature and the role of the components you want to link together, several types of link are available.
Only the authorized connections are listed in the contextual menu.
Drag and drop

Procedure
Click the input component, tFileInputDelimited in this example, and drag and drop the arrow icon displayed onto the
destination component, tLogRow in this example.
A Row > Main connection is automatically created between the two components.


While this method requires fewer operation steps, it works only with these types of Row connections: Main, Lookup,
Output, Filter, and Reject, depending on the nature and role of the components you are connecting.

Configuring the components

Now that the components are linked, their properties should be defined.
For more advanced details regarding the components properties, see Defining component properties on page 74.
Configuring the tFileInputDelimited component

Procedure
1. Double-click the tFileInputDelimited component to open its Basic settings view.

2. Click the [...] button next to the File Name/Stream field.


3. Browse your system or enter the path to the input file, customers.txt in this example.
4. In the Header field, enter 1.
5. Click the [...] button next to Edit schema.
6. In the Schema Editor that opens, click the [+] button three times to add three columns.
7. Name the three columns id, CustomerName and CustomerAddress respectively and click OK to close the editor.


8. In the pop-up that opens, click OK to accept the propagation of the changes.
This allows you to copy the schema you created to the next component, tLogRow in this example.

Configuring the tLogRow component

Procedure
1. Double-click the tLogRow component to open its Basic settings view.
2. In the Mode area, select Table (print values in cells of a table).
By doing so, the contents of the customers.txt file will be printed in a table and will therefore be more readable.

Configuring the tFileOutputDelimited component

Procedure
1. Double-click the tFileOutputDelimited component to open its Basic settings view.


2. Click the [...] button next to the File Name field.


3. Browse your system or enter the path to the output file, customers.csv in this example.
4. Select the Include Header check box.
5. If needed, click the Sync columns button to retrieve the schema from the input component.

Executing the Job and checking the result

About this task


Now that components are configured, the Job can be executed.
To do so, proceed as follows:

Procedure
1. Press Ctrl+S to save the Job.
2. Go to Run tab, and click on Run to execute the Job.

Results
The file is read row by row and the extracted fields are displayed on the Run console and written to the specified output file.
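To relate the Job to the processing it performs, the following plain Java sketch reproduces the same logic by hand: read customers.txt, skip the single header row, print each row to the console, and write the rows with a header to customers.csv. It assumes a semicolon field separator and the three-column schema defined above; it is an illustration of the data flow, not the code the Studio generates.

import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class BasicJobEquivalent {
    public static void main(String[] args) throws Exception {
        // Read the delimited input file, skipping the single header row
        // (Header = 1 in tFileInputDelimited).
        List<String> lines = Files.readAllLines(Paths.get("customers.txt"), StandardCharsets.UTF_8);
        try (PrintWriter out = new PrintWriter("customers.csv", "UTF-8")) {
            // Write the header row (Include Header in tFileOutputDelimited).
            out.println("id;CustomerName;CustomerAddress");
            for (String line : lines.subList(1, lines.size())) {
                String[] fields = line.split(";");
                // tLogRow in Table mode: print each row to the console.
                System.out.printf("| %s | %s | %s |%n", fields[0], fields[1], fields[2]);
                // tFileOutputDelimited: write the row to the output file.
                out.println(String.join(";", fields));
            }
        }
    }
}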


Creating a Job from a template


Talend Studio enables you to use the following different templates to create ready-to-run Jobs:
• TableToFile: creates a Job that outputs data from a database table to a file. For more information, see Outputting data
from a file to a database table and vice versa on page 57.
• TableToTable: migrates data from one database table to another, for example. For more information, see Outputting
data from one database table to another on page 61.
• FileToTable: writes data to a database table. For more information, see Outputting data from a file to a database table
and vice versa on page 57.
• FileToJoblet: retrieves data from files and writes this data into a Joblet in a specific format.
This feature is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more
information, see Installing features using the Feature Manager.

Outputting data from a file to a database table and vice versa

About this task


You can use different templates to create a Job that writes data from a file to a database table or from a database table to a
file. To do so, proceed as follows:

Procedure
1. In the Repository tree view of the Integration perspective, right-click Job Designs and select Create job from templates
in the drop-down list. A Job creation wizard opens to help you define the new Job's main properties.

2. Select the Simple Template option and click Next to open a new view on the wizard.


3. Enter the relevant information according to the following:

Field Description

Name Enter a name for your new Job. A message comes up if you enter prohibited characters.

Purpose Enter the Job purpose or any useful information regarding the Job use.

Description Enter a description, if need be, for the created Job.

Author The Author field is read-only as it shows by default the current user login.

Locker The Locker field is read-only as it shows by default the current lock.

Version The Version is read-only. You can manually increment the version using the M and m buttons. For more
information, see Managing Job versions on page 184.

Status You can define the status of a Job in your preferences. By default none is defined. To define them, go to Window >
Preferences > Talend > Status.

Path Select the folder in which the Job will be created.

4. Once you filled in the Job information, click Next to validate and open a new view on the wizard.


5. Select the template you want to use to create your Job and click Next.


6. In the Type Selection area, select from the drop-down list the input file to use, tFileInputDelimited for example.
7. In the Main properties of the component area, click the [...] button and browse to the file you want to use the properties
of. The file should be centralized in the Repository tree view. The fields that follow in the Detail settings area are filled
automatically with the properties of the selected file. Alternatively, you can set manually the file path and all properties
fields in the Detail setting area, if needed.
Then, click Next to validate and open a new view on the wizard.

8. In the Metadata area, click the three-dot button to open the Repository Content dialog box and select the schema.
Alternatively, you can use the toolbar to import it or add columns manually. Then, click Next to validate and open a new
view on the wizard.


9. In the Type Selection area, select the output database type from the drop-down list.
10. In the Main properties of the component area, click the three-dot button and browse to the connection you want to use
the properties of. The Database connection should be centralized in the Repository tree view. The fields that follow in
the Detail settings area are filled automatically with the properties of the selected connection. Alternatively, you can set
manually the database details and all properties fields in the Detail setting area, if needed.
Then, click Finish to close the wizard.
The ready-to-run Job is created and listed under the Job Designs node in the Repository tree view.

Results
Once the Job is created, you can modify the properties of each of the components in the Job according to your needs.

Outputting data from one database table to another

About this task


You can use different templates to create a Job that writes data from a database table to another database table or from a
database table to a file.
To output data from one database table to another database table, do the following:


Procedure
1. In the Repository tree view of the Integration perspective, right-click Job Designs and select Create job from templates
in the drop-down list. A Job creation wizard opens to help you define the new Job's main properties.

2. Select the From Table List option and click Next to open a new view on the wizard.

3. Select the template you want to use to create your Job and click Next, TableToTable in this example.


4. In the Main properties of the component area, click the [...] button and browse to the connection you want to use the
properties of. The database connection should be centralized in the Repository tree view. The fields that follow in the
Detail settings area are filled automatically with the properties of the selected database table. Alternatively, you can
manually set the database parameters in the Detail setting area, if needed.
Then, click Next to validate and open a new view on the wizard.


5. In the Select Schema to create area, select the check box of the table you want to use and click Next to validate and
open a new view on the wizard.


6. In the Type Selection area, select the output database type from the drop-down list.
7. In the Main properties of the component area, click the three-dot button and browse to the connection you want to use
the properties of. The Database connection should be centralized in the Repository tree view. The fields that follow in
the Detail settings area are filled automatically with the properties of the selected connection. Alternatively, you can
manually set the output database details and all properties fields in the Detail setting area, if needed.
Then, click Next to validate and open a new view on the wizard.


8. In the Check Availability area, select the check boxes of the available options according to your needs. In this example,
we want to save the input schemas in the Repository tree view and we want to insert a tMap component between the
input and output components of the created Job.
9. In the Jobname field, enter a name for your Job, and click the check button to verify that the name chosen for your Job is
available. A dialog box opens and informs you whether the Job name is available. Click OK to close the dialog box.
10. Click Finish to validate and close the wizard. The ready-to-run Job is created and listed under the Job Designs node in
the Repository tree view.

Results
Once the Job is created, you can modify the properties of each of the components in the Job according to your needs.

Outputting data from a file to a Joblet in a specific format

About this task


This template allows you to create a Job that retrieves data from a file and writes this data into a Joblet in a specific format.


Note:
The target Joblet you want to write data to must already exist, and the metadata to be read must have been created in the centralized repository before you use the template.

To output data from a file to a Joblet, do the following:

Procedure
1. In the Repository tree view of the Integration perspective, right-click Job Designs and select Create job from templates in the drop-down list. A Job creation wizard displays to help you define the main properties of the new Job.

2. Select the Migrate data from file to joblet option and click Next to open a new view on the wizard.


3. Select the FileToJoblet template to create your Job and click Next.


4. In the Select Schema to create area, select the metadata you want to use as parameters to retrieve and write the data
into the target Joblet. This example uses a .csv file. Then, click Next to proceed.

5. In the Type Selection area, select the target Joblet you want to write the retrieved data in, and click Next to validate and
open a new view on the wizard.


6. In the Jobname field, type in the text you want to add to complete the Job name. By default, the Job name is Job_{CURRENT_TABLE}; type in example, for instance, to complete this name as Job_example_{CURRENT_TABLE}. Then click the check button to see whether the resulting Job name already exists. If it does, you need to type in another Job name in the Jobname field. If it does not, a Success dialog box pops up to prompt you to continue. Click OK.

Warning:
Do not replace or delete {CURRENT_TABLE} when you type in texts to complete the Job name.

7. Select the Create subJobs in a single job check box if you have selected several metadata files to retrieve and write data
in the target Joblet and you want to handle these files using subJobs in a single Job.
Keep this check box cleared if you want to handle these files in several separate Jobs.

Results
Once the Job is created, you can modify the properties of each of the components in the Job according to your needs.

Creating a Job using a Job creation API


A Job script is another way to create a data integration process with Talend Studio. You can either:
• write a script with the properties of each element of your process.
• create a Job and display its script through the Jobscript tab to edit it if necessary.
You can create your Job script in any text editor and name your file with the .jobscript extension, or you can create it with
the Integration perspective of Talend Studio. When you create it with the Job script API editor of Talend Studio, auto-completion (Ctrl+Space) eases the writing of the script. Moreover, in the Studio, the code is color-coded for easy identification, and you can create templates in Talend Studio's preferences.
To access the Job script API editor in the Studio, expand the Code node in the Repository tree view of the Integration perspective.
For more information on using Talend Job scripts, see Talend Job scripts reference guide at https://help.talend.com/.

Working with components


The sections below give detailed information about various subjects related to handling components in Jobs, including:
• Adding a component between two connected components on page 70
• Defining component properties on page 74
• Finding Jobs containing a specific component on page 85
• Setting default values in the schema of a component in a Job on page 87

Adding a component between two connected components

When designing a Job, you can insert a component between two components linked by a Row connection, provided that the
new component can serve as a middle component between the two.
The examples below show different options for you to insert a tMap between a tFileInputDelimited and a tLogRow linked by a
Row > Main connection.
Dropping the component from the Palette onto the connection

Procedure
1. From the Palette, locate and select tMap.
2. Drag the component and drop it onto the Row connection.


If you are prompted to give a name to the output connection from the newly added component, which is true in the
case of a tMap, type in a name and click OK to close the dialog box.

Note:
You may be asked to retrieve the schema of the target component. In that case, click OK to accept or click No to
deny.

The component is inserted in the middle of the connection, which is now divided into two connections.

Adding the component by typing on the connection

Procedure
1. Click on the connection that links the two existing components to select it.

2. Type the name of the new component you want to add, tMap in this example, and double click the component on the
suggested list to add it onto the connection.


3. If you are prompted to give a name to the output connection from the newly added component, which is true in the
case of a tMap, type in a name and click OK to close the dialog box.

Note:
You may be asked to retrieve the schema of the target component. In that case, click OK to accept or click No to
deny.

The component is inserted in the middle of the connection, which is now divided into two connections.
Adding the component to the design workspace and moving the existing connection

Procedure
1. Add the new component, tMap in this example, onto the design workspace by either dropping it from the Palette or
clicking in the design workspace and typing the component name.


2. Select the connection and move your mouse pointer towards the end of the connection until the mouse pointer
becomes a + symbol.

3. Drag the connection from the tLogRow component and drop it onto the tMap component.

4. Connect the tMap component to the tLogRow using a Row > Main connection.


Defining component properties

The properties information for each component in a Job allows you to set the actual technical implementation of the active Job.
Each component is defined by basic and advanced properties shown respectively on the Basic Settings tab and the Advanced Settings tab of the Component view of the selected component in the design workspace. The Component view also gathers other information related to the component in use, including the View and Documentation tabs.
Basic Settings tab

About this task


The Basic Settings tab is part of the Component view, which is located on the lower part of the designing editor of the
Integration perspective of Talend Studio.

Each component has specific basic settings according to its function requirements within the Job.

Note:
Some components require code to be input or functions to be set. Make sure you use Java code in properties.

For File and Database components in a Job, you can centralize properties in metadata files located in the Metadata directory
of the Repository tree view. This means that on the Basic Settings tab you can set properties on the spot, using the Built-
In Property Type or use the properties you stored in the Metadata Manager using the Repository Property Type. The latter
option helps you save time.
Select Repository as Property Type and choose the metadata file holding the relevant information. Related topic:
Centralizing database metadata on page 318.
Alternatively, you can drop the Metadata item from the Repository tree view directly to the component already dropped on
the design workspace, for its properties to be filled in automatically.


If you selected the Built-in mode and manually set the properties of a component, you can also save those properties as
metadata in the Repository. To do so:

Procedure
1. Click the floppy disk icon. The metadata creation wizard corresponding to the component opens.
2. Follow the steps in the wizard. For more information about the creation of metadata items, see Centralizing database
metadata on page 318.
3. The metadata displays under the Metadata node of the Repository.

Results
For all components that handle a data flow (most components), you can define a Talend schema in order to describe and
possibly select the data to be processed. Like the Properties data, this schema is either Built-in or stored remotely in the
Repository in a metadata file that you created. A detailed description of the Schema setting is provided in the next sections.
Setting a built-in schema in a Job
A schema created as Built-in is meant for a single use in a Job, hence cannot be reused in another Job.

Procedure
1. Select Built-in in the Property Type list of the Basic settings view.
2. Click the Edit Schema button to create your built-in schema by adding columns and describing their content, according
to the input file definition.

Warning: Avoid using any Java reserved keyword as a schema column name.

Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
In all output properties, you also have to define the schema of the output. To retrieve the schema defined in the input
schema, click the Sync columns tab in the Basic settings view.


Warning: When creating a database table, you are recommended to specify the Length field for all columns of type
String, Integer or Long and specify the Precision field for all columns of type Double, Float or BigDecimal in the
schema of the component used. Otherwise, unexpected errors may occur.

You can use the following two buttons at the bottom of the Schema editor dialog box of database components to edit Db Column names:
• One button converts the Db Column name of each selected column to upper case or lower case.
• The other button adds or removes quote characters in the Db Column name of each selected column, by selecting Add quote or Remove quote and entering the quote character in the field in a pop-up dialog box.

Note: The Guess Query function for database input components does not work when quote characters are
added in Db Column names.

Setting a repository schema in a Job


If you often use certain database connections or specific files when creating your Jobs, you can avoid defining the same
properties over and over again by creating metadata files and storing them in the Metadata node in the Repository tree view
of the Integration perspective.

Procedure
1. To recall a metadata file into your current Job:
• Select Repository in the Schema list and then select the relevant metadata file.
• Or, drop the metadata item from the Repository tree view directly to the component already dropped on the design
workspace.
2. Click Edit Schema to check that the data is appropriate.


You can edit a repository schema used in a Job from the Basic settings view. However, note that the schema then
becomes Built-in in the current Job.

Note:
You cannot change the schema stored in the repository from this window. To edit the schema stored remotely, right-
click it under the Metadata node and select the corresponding edit option (Edit connection or Edit file) from the
contextual menu.

Using a repository schema partially in a Job


When using a repository schema, if you do not want to use all the predefined columns, you can select particular columns
without changing the schema into a built-in one.
The following describes how to use a repository schema partially for a database input component. The procedure may vary
slightly according to the component you are using.

Procedure
1. Click the [...] button next to Edit schema on the Basic settings tab. The Edit parameter using repository dialog box
appears. By default, the option View schema is selected.


2. Click OK. The Schema dialog box pops up, which displays all columns in the schema. The Used Column check box before
each column name indicates whether the column is used.
3. Select the columns you want to use.

4. Click OK. A message box appears, which prompts you to do a guess query.

Note:
The guess query operation is needed only for the database metadata.


5. Click OK to close the message box. The Propagate dialog box appears. Click Yes to propagate the changes and close the
dialog box.

6. On the Basic settings tab, click Guess Query. The selected column names are displayed in the Query area as expected.

Setting a field dynamically (Ctrl+Space bar)

About this task


On any field of your component Properties view, you can use the Ctrl+Space bar to access the global and context variable
list and set the relevant field value dynamically.

Procedure
1. Place the cursor on any field of the Component view.
2. Press Ctrl+Space bar to access the proposal list.
3. Select the relevant parameters you need from the list. An information panel appended to the variable list provides details about the selected parameter.


This can be any parameter, including error messages, number of lines processed, and so on. The list varies according to the selected component or the context you are working in.
Related topic: Using contexts and variables on page 105.
Advanced settings tab
Some components, especially File and Database components in Jobs, provide numerous advanced use possibilities.

The content of the Advanced settings tab changes according to the selected component.
Generally, this tab contains the parameters that are not required for a basic or usual use of the component but may be required for uses outside the standard scope.
Measuring data flows
You can also find in the Advanced settings view the option tStatCatcher Statistics that allows you, if selected, to display logs
and statistics about the current Job without using dedicated components. For more information regarding the stats & log
features, see Automating the use of statistics & logs on page 137.
Dynamic settings tab of components in a Job

About this task


The Basic settings and Advanced settings tabs of all components display various check boxes and drop-down lists for
component parameters. Usually, available values for these types of parameters can only be edited when designing your Job.
The Dynamic settings tab, on the Component view, allows you to customize these parameters as code or variables.
This feature allows you, for example, to define these parameters as variables and thus let them become context-dependent,
whereas they are not meant to be by default.


Another benefit of this feature is that you can now change the context setting at execution time. This makes full sense when
you intend to export your Job in order to deploy it onto a Job execution server for example.

To customize these types of parameters, as context variables for example, follow these steps:

Procedure
1. Go to the Basic settings or Advanced settings view of the relevant component, which contains the parameter you want to define as a variable.
2. Click the Dynamic settings tab.
3. Click the plus button to display a new parameter line in the table.
4. Click the Name field of the new parameter line to show the list of available parameters, for example Print operations.
5. Then click in the corresponding Code column cell and set the code to be used, for example context.verbose if you have created the corresponding context variable, called verbose.

Note: As code, you can input a context variable or a piece of Java code.
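For example, assuming you have created a Boolean context variable named verbose and a String context variable named env (both names are purely illustrative), the Code cell could contain either a simple variable reference or a short piece of Java code:

context.verbose
"Prod".equals(context.env)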

Results
The corresponding lists or check boxes thus become unavailable and are highlighted in yellow in the Basic settings or
Advanced settings tab.

Note: If you want to set a parameter as context variable, make sure you create the corresponding variable in the Contexts
view. For more information regarding the context variable definition, see Defining context variables in the Contexts view
on page 106.

For use cases showing how to define a dynamic parameter, see Defining Context Groups.


Dynamic schema
Talend Studio allows you to add a dynamic column to the schema of certain components in a Job. The dynamic column may
constitute the only column in the schema, or it may be added after known columns, as the last column in the schema.
The dynamic column retrieves the columns which are undefined in the schema. This means that source columns which
are unknown when the Job is designed, become known at runtime and are added to the schema. This can make Job design
much easier as it allows for simple one-to-one mapping of many columns. There are many uses for dynamic columns. For instance, in data migration tasks, developers can copy columns of data to another location without having to map each
column individually.
Note that any static object set in the schema editor, such as data pattern or default value, is not taken into account for a
dynamic column.

Warning: The Dynamic Schema functionality is to help you configure a schema in a non-static way, so you won't have to
redesign your Job for future schema alteration while ensuring it will work all the time. However, it is not a guarantee that
your schema will stick 100% accurately to the schema of the actual data to handle.

While the dynamic schema feature significantly eases Job designs, it does not work in all components. For a list
of components that support this feature, go to <install_dir>/plugins/, where <install_dir> is the
Studio installation directory, and extract the JAR file org.talend.core.tis_<version>.jar to get the text file
supportDynamic.txt in the resources folder.

Warning: In the Database Input components, the SELECT query must include the * wildcard, to retrieve all of the
columns from the table selected.
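For instance, assuming a hypothetical table named customers, the query typed in the component's Query field could simply be:

"SELECT * FROM customers"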

For further information about defining dynamic schemas, see Defining dynamic schema columns on page 82.
For further information regarding the mapping of dynamic columns, see Mapping dynamic columns on page 83.
For an example of using the dynamic schema feature in a Job, see Dynamic Schema.
Defining dynamic schema columns
Dynamic schema columns are easy to define. To define dynamic columns for the database Input and Output components, or for tFileInputDelimited and tFileOutputDelimited, do the following:

Procedure
1. In the component's Basic settings tab, set the Property Type as Built-In.

Warning: The dynamic schema is only supported in Built-In mode.

2. Click Edit schema to define the schema.


The Schema dialog box opens.


3. In the last row added to the schema, enter a name for the dynamic column in the Column field.

Warning: In the Database Input components, the SELECT query must include the * wildcard, to retrieve all of the
columns from the table selected.

4. Click the Type field and then the arrow which appears to select Dynamic from the list.
Mapping dynamic columns
It is easy to map dynamic columns in the tMap component: in the Map Editor, simply dropping the dynamic column from the input schema to the output schema does not change any of the column values.


However, certain limitations must be respected:


• The dynamic column must be mapped on a one-to-one basis and cannot undergo any transformations.
• The dynamic column cannot be used in filter expressions or in variables.
• The dynamic column cannot be renamed in output tables and cannot be used as a join condition.

Note: Dynamic schemas can be mapped to several outputs and can also be mapped from lookup inputs.

View tab
The View tab of the Component view allows you to change the default display format of components on the design
workspace.

• Label format: Free text label showing on the design workspace. Variables can be set to retrieve and display values from other fields. The field tooltip usually shows the corresponding variable where the field value is stored.
• Hint format: Hidden tooltip, showing only when you mouse over the component.
• Connection format: Indicates the type of connection accepted by the component.

You can graphically highlight both Label and Hint text with HTML formatting tags:
• Bold: <b> YourLabelOrHint </b>
• Italic: <i> YourLabelOrHint </i>
• Carriage return: YourLabelOrHint <br> ContdOnNextLine
• Color: <Font color= '#RGBcolor'> YourLabelOrHint </Font>
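For example, a Label format value that combines these tags with plain text (the label wording here is purely illustrative) could be:

<b> Load customers </b> <br> <i> daily batch </i>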
To change your preferences of this View panel, click Window > Preferences > Talend > Designer.


Documentation tab
Feel free to add any useful comment or chunk of text or documentation to your component.

In the Documentation tab, you can add your text in the Comment field. Then, select the Show Information check box and an
information icon displays next to the corresponding component in the design workspace.
You can show the Documentation in your hint tooltip using the associated variable _COMMENT_, so that when you place
your mouse on this icon, the text written in the Comment field displays in a tooltip box.
For advanced use of Documentations, you can use the Documentation view in order to store and reuse any type of
documentation.

Finding Jobs containing a specific component

About this task

Note:
You should open at least one Job in the Studio to display the Palette to the right of the design workspace and thus start
the search.

From the Palette, you can search for all the Jobs that use the selected component. To do so:

Procedure
1. In the Palette, right-click the component you want to look for and select Find Component in Jobs.


A progress indicator displays to show the percentage of the search operation that has been completed, then the Find a Job dialog box opens, listing all the Jobs that use the selected component.


2. From the list of Jobs, click the desired Job and then click OK to open it on the design workspace.

Setting default values in the schema of a component in a Job

About this task


You can set default values in the schema of certain components to replace null values retrieved from the data source.

Note:
At present, only tFileInputDelimited, tFileInputExcel, and tFixedFlowInput support default values in the schema.

In the following example, the company and city fields of some records of the source CSV file are left blank, as shown
below. The input component reads data from the source file and completes the missing information using the default values
set in the schema, Talend and Paris respectively.

id;firstName;lastName;company;city;phone
1;Michael;Jackson;IBM;Roma;2323
2;Elisa;Black;Microsoft;London;4499
3;Michael;Dujardin;;;8872
4;Marie;Dolvina;;;6655
5;Jean;Perfide;;;3344
6;Emilie;Taldor;Oracle;Madrid;2266
7;Anne-Laure;Paldufier;Apple;;4422

To set default values:

Procedure
1. Double-click the input component tFileInputDelimited to show its Basic settings view.


In this example, the metadata for the input component is stored in the Repository. For information about metadata
creation in the Repository, see Centralizing database metadata on page 318.
2. Click the [...] button next to Edit schema, and select the Change to built-in property option from the pop-up dialog box
to open the schema editor.
3. Enter Talend between quotation marks in the Default field for the company column, enter Paris between quotation
marks in the Default field for the city column, and click OK to close the schema editor.

4. Configure the output component tLogRow to display the execution result the way you want, and then run the Job.


In the output data flow, the missing information is completed according to the set default values.
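For instance, with tLogRow in its basic display mode, the console output would look roughly as follows; rows 3, 4 and 5 now carry both default values, and row 7 the default city:

1|Michael|Jackson|IBM|Roma|2323
2|Elisa|Black|Microsoft|London|4499
3|Michael|Dujardin|Talend|Paris|8872
4|Marie|Dolvina|Talend|Paris|6655
5|Jean|Perfide|Talend|Paris|3344
6|Emilie|Taldor|Oracle|Madrid|2266
7|Anne-Laure|Paldufier|Apple|Paris|4422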

Using the tPrejob and tPostjob components

The tPrejob and tPostjob components are designed to make the execution of tasks before and after a given Job easier to
manage. These components differ from other components in that they do not actually process data and they do not have
any component properties to be configured. A key feature of these components is that they are always guaranteed to be
executed, even if the main data Job fails. Therefore, they are very useful for setup and teardown actions for a given Job.

Note:
As tPrejob and tPostjob are not meant to take part in any data processing, they cannot be part of a multi-thread
execution. They are meant to help you make your Job design clearer.

To use these tPrejob and tPostjob components, simply drop them onto the design workspace as you would do with any other
components, and then connect tPrejob to a component or subJob that is meant to perform a pre-job task, and tPostjob to a
component or subJob that is meant to perform a post-job task, using Trigger connections. An orange square on the pre- and
post-job parts indicates that they are different types of subJobs.


Tasks that require the use of a tPrejob component include:


• Loading context information required for the subJob execution.
• Opening a database connection.
• Making sure that a file exists.
Tasks that require the use of a tPostjob component include:
• Cleaning up temporary files created during the processing of the main data Job.
• Closing a database connection or a connection to an external service.
• Any task required to be executed, even if the preceding Job or subJobs failed.
For a use case that uses the tPrejob and tPostjob components, see Orchestration (Integration)

Downloading/uploading Talend Community components

Talend Studio enables you to access a list of all community components in Talend Exchange that are compatible with your
current version of Talend Studio. You can then download and install these components to use them later in the Job designs
you carry out in the Studio. From Talend Studio, you can also upload components you have created to Talend Exchange to
share with other community users.

Warning: Make sure that the -Dtalend.disable.internet parameter is not present in the Studio .ini file or is set
to false.

A click on the Exchange link on the toolbar of Talend Studio opens the Exchange tab view on the design workspace, where
you can find lists of:
• components available in Talend Exchange for you to download and install,
• components you downloaded and installed in previous versions of Talend Studio but not yet installed in your current
Studio,
• components you have created and uploaded to Talend Exchange to share with other Talend Community users.
Note that the approach explained in this section is to be used for the above-mentioned components only.


Note:
• Before you can download community components or upload your own components to the community, you need
to sign in to Talend Exchange from your Studio first. If you did not sign in to Talend Exchange when launching
the Studio, you still have a chance to sign in from the Talend Exchange preferences settings page. For more information, see Exchange preferences (Talend > Exchange) on page 606.
• The community components available for download are not validated by Talend. This explains why you may
encounter component loading errors sometimes when trying to install certain community components, why an
installed community component may have a different name in the Palette than in the Exchange tab view, and why
you may not be able to find a component in the Palette after it is seemingly installed successfully.

Installing community components from Talend Exchange

About this task


To install community components from Talend Exchange to the Palette of your current Talend Studio:

Procedure
1. Click the Exchange link on the toolbar of Talend Studio to open the Exchange tab view on the design workspace.

2. In the Available Extensions view, if needed, enter a full component name or part of it in the text field and click the refresh button to quickly find the component you are interested in.
3. Click the view/download link for the component of interest to display the component download page.


4. View the information about the component, including component description and review comments from community
users, or write your own review comments and/or rate the component if you want. For more information on reviewing
and rating a community component, see Reviewing and rating a community component on page 93.
If needed, click the left arrow button to return to the component list page.
5. Click the Install button in the right part of the component download page to start the download and installation
process.
A progress indicator appears to show the completion percentage of the download and installation process. Upon
successful installation of the component, the Downloaded Extensions view opens and displays the status of the
component, which is Installed.


Reinstalling or updating community components

About this task


From the Exchange tab view, you can reinstall components you already downloaded and installed in your previous version of
Talend Studio, or install updated versions of components in your current Studio.

Note:
By default, while you are connected to Talend Exchange, a dialog box appears to notify you whenever an update to an
installed community component is available. If you often check for community component updates and you do not want
that dialog box to appear again, you can turn it off in Talend Exchange preferences settings. For more information, see
Exchange preferences (Talend > Exchange) on page 606.

To reinstall a community component you already downloaded or update an installed one, do the following:

Procedure
1. From the Exchange tab view, click Downloaded Extensions to display the list of components you have already
downloaded from Talend Exchange.
In the Downloaded Extensions view, the components you have installed in your previous version of Talend Studio but
not in your current Studio have an Install link in the Install/Update column, and those with updates available in Talend
Exchange have an Update link.
2. Click the Install or Update link for the component of interest to start the installation process.
A progress indicator appears to show the completion percentage of the installation process. Upon successful
installation, the Downloaded Extensions view displays the status of the component, which is Installed.
Reviewing and rating a community component

About this task


To review and rate a community component:

Procedure
1. From the Available Extensions view, click the view/download link for the component you want to review or rate to
open the community component download page.
2. On the component download page, click the write a review link to open the Review the component dialog box.


3. Fill in the required information, including a title and a review comment, click one of the five stars to rate the
component, and click Submit Review to submit your review to the Talend Exchange server.
Upon validation by the Talend Exchange moderator, your review is published on Talend Exchange and displayed in
the User Review area of the component download page.
Uploading a component you created to Talend Exchange

About this task


You can create your own components for use in your Jobs in Talend Studio and upload them to Talend Exchange to share
with other Talend Community users. For information on how to create your own components and deploy them in Talend
Studio, see How to define the user component folder (Talend > Components) on page 604.
To upload a component you created to Talend Exchange, complete the following:

Procedure
1. From the Exchange tab view, click My Extensions to open the My Extensions view.

2. Click the Add New Extension link in the upper right part of the view to open the component upload page.


3. Complete the required information, including the component title, initial version, Studio compatibility information,
and component description, fill in or browse to the path to the source package in the File field, and click the Upload
Extension button.
Upon successful upload, the component is listed in the My Extensions view, where you can update, modify and delete
any component you have uploaded to Talend Exchange.

Managing components you uploaded to Talend Exchange


From the Exchange tab view, you can manage components you have uploaded to Talend Exchange, including updating component versions, modifying component information, and deleting components from Talend Exchange.
To update the version of a component, complete the following:
1. From the My Extensions view, click the icon in the Operation column for the component you want to update to open the component update page.


2. Fill in the initial version and Studio compatibility information, fill in or browse to the path to the source package in the
File field, and click the Update Extension button.
Upon successful upload of the updated component, the component is replaced with the new version on Talend
Exchange and the My Extensions view displays the component's new version and update date.
To modify the information of a component uploaded to Talend Exchange, complete the following:
1. From the My Extensions view, click the icon in the Operation column for the component you want to modify information for, to open the component information editing page.

2. Complete the Studio compatibility information and component description, and click the Modify Extension button to
update the component information on Talend Exchange.

To delete a component you have uploaded to Talend Exchange, click the corresponding icon for the component in the My Extensions
view. The component is then removed from Talend Exchange and is no longer displayed on the component list in the My
Extensions view.


Using connections in a Job


In Talend Studio, a Job or a subJob is composed of a group of components logically linked to one another via connections.
You need to use the connections to define how the components in use are coordinated. This section will describe the types
of connections and their related settings.

Connection types

There are various types of connections which define either the data to be processed, the data output, or the Job logical
sequence.
Right-click a component on the design workspace to display a contextual menu that lists all available connections for the
selected component.
The sections below describe all available connection types.
Row connection
A Row connection handles the actual data. The Row connections can be Main, Lookup, Reject, Output, Uniques/Duplicates, or
Combine according to the nature of the flow processed.
Main
This type of row connection is the most commonly used connection. It passes on data flows from one component to the
other, iterating on each row and reading input data according to the component properties setting (schema).
Data transferred through main rows are characterized by a schema definition which describes the data structure in the input
file.

Note:
You cannot connect two Input components together using a Row > Main connection. Only one incoming Row connection
is possible per component. You will not be able to link the same target component twice using a main Row connection.
The second Row connection will be called Lookup.

To connect two components using a Main connection, right-click the input component and select Row > Main on the
connection list.
Alternatively, you can click the component to highlight it, then right-click it or click the arrow icon that appears beside it
and drag the cursor towards the destination component. This will automatically create a Row > Main type of connection.
For information on using multiple Row connections, see Multiple Input/Output on page 99.
Lookup
This row connection connects a sub-flow component to a main flow component (which should be allowed to receive more
than one incoming flow). This connection is used only in the case of multiple input flows.


A Lookup row can be changed into a main row at any time (and conversely, a main row can be changed to a lookup row). To do
so, right-click the row to be changed, and on the pop-up menu, click Set this connection as Main.
Related topic: Multiple Input/Output on page 99.
Filter
This row connection connects specifically a tFilterRow component to an output component. This row connection gathers the
data matching the filtering criteria. This particular component also offers a Reject connection to fetch the non-matching data
flow.
Rejects
This row connection connects a processing component to an output component. This row connection gathers the data that
does NOT match the filter or is not valid for the expected output. This connection allows you to track the data that could
not be processed for any reason (wrong type, undefined null value, etc.). On some components, this connection is enabled
when the Die on error option is deactivated.
ErrorReject
This row connection connects a tMap component to an output component. This connection is enabled when you clear the
Die on error check box in the tMap editor and it gathers data that could not be processed (wrong type, undefined null value,
unparseable dates, etc.).
Related topic: Handling errors on page 251.
Output
This row connection connects a tMap component to one or several output components. As a Job can have multiple outputs, you are prompted to give a name for each output row created.

Note:
The system also remembers deleted output connection names (and properties if they were defined). This way, you do not
have to fill in again property data in case you want to reuse them.

Related topic: Multiple Input/Output on page 99.


Uniques/Duplicates
These row connections connect a tUniqRow component to output components.
The Uniques connection gathers the rows that are found first in the incoming flow. This flow of unique data is directed to
the relevant output component or else to another processing subJob.


The Duplicates connection gathers the possible duplicates of the first encountered rows. This reject flow is directed to the
relevant output component, for analysis for example.
Multiple Input/Output
Some components help handle data through multiple inputs and/or multiple outputs. These are often processing-type
components such as the tMap.
If the processing requires a join or some transformation in one flow, use the tMap component, which is dedicated to this
use.
For further information regarding data mapping, see Map editor interfaces on page 231.
Combine
This type of row connection connects one CombinedSQL component to another.
When right-clicking the CombinedSQL component to be connected to the next one, select Row > Combine.
Iterate connection
The Iterate connection can be used to loop on files contained in a directory, on rows contained in a file or on DB entries.
A component can be the target of only one Iterate connection. The Iterate connection is mainly to be connected to the start
component of a flow (in a subJob).
Some components such as the tFileList component are meant to be connected through an iterate connection with the next
component. For how to set an Iterate connection, see Iterate connection settings on page 102.

Note: The name of the Iterate connection is read-only unlike other types of connections.

Warning: Note that the globalMap is error-prone in parallel execution. Be cautious when using globalMap.put(
"key","value") and globalMap.get("key") to create your own global variables and then retrieve their values in
your Jobs, especially after an Iterate connection with the parallel execution option enabled.

Trigger connections for a Job


Trigger connections define the processing sequence, so no data is handled through these connections.
The connection in use will create a dependency between Jobs or subJobs which therefore will be triggered one after the
other according to the trigger nature.

Trigger connections fall into two categories:


• subJob triggers: On Subjob Ok, On Subjob Error and Run if,
• component triggers: On Component Ok, On Component Error and Run if.


OnSubjobOK: This connection is used to trigger the next subJob on the condition that the main subJob completed without
error. This connection is to be used only from the start component of the Job.
These connections are used to orchestrate the subJobs forming the Job or to easily troubleshoot and handle unexpected
errors.
OnSubjobError: This connection is used to trigger the next subJob in case the first (main) subJob does not complete correctly.
This "on error" subJob helps flag the bottleneck or handle the error if possible.
OnComponentOK and OnComponentError are component triggers. They can be used with any source component on the
subJob.
OnComponentOK will only trigger the target component once the execution of the source component is complete without
error. Its main use could be to trigger a notification subJob for example.
OnComponentError will trigger the subJob or component as soon as an error is encountered in the primary Job.
The main difference between OnSubjobOK and OnComponentOK lies in the execution order of the linked subJob.
• With OnSubjobOK, the linked subJob starts only when the previous subJob completely finishes.
• With OnComponentOK, the linked subJob starts when the previous component finishes.
The execution order of the subJobs linked by OnComponentOK is within the execution cycle of the previous subJob.
Run if triggers a subJob or component in case the condition defined is met. For further information about Run if, see Run if
connection settings on page 102.
For how to set a trigger condition, see Trigger connection settings on page 102.

Note: It is possible to add checkpoints to certain trigger connections in order to be able to recover the execution of a Job
from the last checkpoint previous to the error. For more information, see Setting checkpoints on trigger connections on
page 123.

Link connection
The Link connection can only be used with ELT components. These connections transfer table schema information to the
ELT mapper component in order to be used in specific DB query statements.
The Link connection therefore does not handle actual data but only the metadata regarding the table to be operated on.
When right-clicking the ELT component to be connected, select Link > New Output.

Warning: Be aware that the name you provide to the connection must reflect the actual table name.

In fact, the connection name will be used in the SQL statement generated through the ELT Mapper, therefore the same name
should never be used twice.
For more information, see the documentation for Talend ELT components.


Defining connection settings

You can display the properties of a connection by selecting it and clicking the Component view tab, or by right-clicking the
connection and selecting Settings from the contextual menu. This section summarizes connection property settings.
Row connection settings

About this task


The Basic settings vertical tab of the Component view of the connection displays the schema of the data flow handled by
the connection. You can change the schema by clicking the Edit schema button. For more information, see Setting a built-in
schema in a Job on page 75.

The Advanced settings vertical tab lets you monitor the data flow over the connection in a Job without using a separate
tFlowMeter component. The measured information will be interpreted and displayed in a monitoring tool such as Talend
Activity Monitoring Console (available with Talend subscription-based products). For information about Talend Activity
Monitoring Console, see Talend Activity Monitoring Console User Guide.

To monitor the data over the connection, perform the following settings in the Advanced settings vertical tab:

Procedure
1. Select the Monitor this connection check box.
2. From the Mode list, select Absolute to log the actual number of rows passed over the connection, or Relative to log the
ratio (%) of the number of rows passed over this connection against a reference connection. If you select Relative, you
need to select a reference connection from the Connections List list.
3. Click the plus button to add a line in the Thresholds table and define a range of the number of rows to be logged.


Iterate connection settings


When you configure an Iterate connection, you are actually enabling parallel iterations. For further information, see
Launching parallel iterations to read data on page 208.
Trigger connection settings
OnSubjobOK/OnSubjobError connection settings

About this task


When working in a remote project, you can define checkpoints on OnSubjobOK and OnSubjobError trigger connections, so
that the execution of your Job can be recovered, in case of Job execution failure, from the last checkpoint previous to the
error through the Error Recovery Management page in Talend Administration Center.

To define a checkpoint on a subJob trigger connection, perform the following settings in the Error recovery vertical tab of
the connection's Component view:

Procedure
1. Select the Recovery Checkpoint check box.
2. Enter a name for the checkpoint in the Label field.
3. Fill in any text that can explain the failure in the Failure instructions text field.

Results
For more information, see Setting checkpoints on trigger connections on page 123.
Related topic: Recovering Job execution in Talend Administration Center User Guide.
Run if connection settings

About this task


In the Basic settings view of a Run if connection, you can set the condition for the subJob in Java.
You can use variables in your condition. Pressing Ctrl+Space allows you to access all global and context variables. For more
information, see Using variables in a Job on page 117.

Warning:
When adding a comment after the condition, be sure to enclose it between /* and */ even if it is a single-line comment.

In the following example, a message is triggered if the input file contains 0 rows of data.


Procedure
1. Create a Job and drop three components to the design workspace: a tFileInputDelimited, a tLogRow, and a tMsgBox.
2. Connect the components as follows:
• Right-click the tFileInputDelimited component, select Row > Main from the contextual menu, and click the
tLogRow component.
• Right-click the tFileInputDelimited component, select Trigger > Run if from the contextual menu, and click the
tMsgBox component.
3. Configure the tFileInputDelimited component so that it reads a file that contains no data rows.
4. Select the Run if connection between the tFileInputDelimited component and the tMsgBox component, and click the
Component view. In the Condition field on the Basic settings tab, press Ctrl+Space to access the variable list, and
select the NB_LINE variable of the tFileInputDelimited component. Edit the condition as follows:

((Integer)globalMap.get("tFileInputDelimited_1_NB_LINE"))==0

5. Go to the Component view of the tMsgBox component, and enter a message, "No data is read from the file" for example,
in the Message field.
6. Save and run the Job. You should see the message you defined in the tMsgBox component.

Adding a conditional breakpoint

About this task


The Breakpoint vertical tab lets you define breakpoints to monitor data processing over this connection.


To define a breakpoint, perform the following settings in the Breakpoint vertical tab:

Procedure
1. Select the Activate conditional breakpoint check box.
2. If you want to combine simple filtering and advanced mode, select your logical operator in the Logical operator used to
combine conditions list.
3. Click the [+] button to add as many filtering conditions as you want in the Conditions table. These conditions will be
performed one after another for each row. Each condition includes the input column to operate the selected function
on, the function to operate, an operator to combine the input column and the value to be filtered, and the value.
4. If the standard functions are not sufficient to carry out your operation, select the Use advanced mode check box and fill
in a regular expression in the text field.

Results
Upon defining your breakpoint, run your Job in Traces Debug mode. For more information about breakpoint usage in Traces
Debug mode, see Breakpoint monitoring on page 201.

Partitioning a data flow

The Parallelization vertical tab allows you to configure parameters for partitioning a data flow into multiple threads, so as to
handle those threads in parallel for better performance. The options that appear in this tab vary depending on the sequence
of the row connection in the flow. In addition, different icons will appear in the row connection according to the options you
selected.
Note that the Parallelization tab is available only on the condition that you have subscribed to one of the Platform solutions
or Big Data solutions.


For further information about how to partition a data flow for parallelized executions, see Enabling parallelization of data
flows on page 211.

Using contexts and variables


Variables represent values which change throughout the execution of a program.
A global variable is a system variable which can be accessed by any module or function. It retains its value after the function
or program using it has completed execution.

Warning: Note that the globalMap is error-prone in parallel execution. Be cautious when using globalMap.put(
"key","value") and globalMap.get("key") to create your own global variables and then retrieve their values in
your Jobs, especially after an Iterate connection with the parallel execution option enabled.
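As a minimal sketch, in a tJava component you could store a value in the global map and retrieve it later in the same Job as follows (the key name processedRows is an assumption for illustration only):

// store a value in the global map
globalMap.put("processedRows", 100);
// retrieve it later, casting it back to its original type
Integer rows = (Integer) globalMap.get("processedRows");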

A context variable is a variable which is defined by the user for a particular context. Depending on the circumstances the Job
is being used in, you might want to manage it differently for various execution types, known as contexts (Prod and Test in
the example given below). For instance, there might be various testing stages you want to perform and validate before a Job
is ready to go live for production use.

Note:
General advice for using context variables:
• Use small pieces of data in context variables.
• Avoid putting code in context variables.
• Use appropriate variable types rather than the default type String, for example, the type Password for a
password to enhance security, the type File or Directory for a file path or directory to avoid possible unwanted
escaping.
• Avoid using double quotes around a string value in general. However, double quotes must be used if
• the string contains an escape character and you want the unescaping to happen.
• the string has a leading or a trailing double quote that is part of the variable value.

A context is characterized by parameters. These parameters are mostly context-sensitive variables which will be added
to the list of variables for reuse in the component-specific properties on the Component view through the Ctrl+Space
keystrokes.


Talend Studio offers you the possibility to create multiple context data sets. Furthermore you can either create context data
sets on a one-shot basis from the context tab of a Job, or you can centralize the context data sets in the Contexts node of the
Repository tree view in order to reuse them in different Jobs.
For a Job, you can define the values of your context variables when creating them, or load your context parameters
dynamically, either explicitly using the tContextLoad component or implicitly using the Implicit Context Load feature, when
your Jobs are executed.

Note: When loading context parameters dynamically, the parameter values will be loaded from the specified file or
database and the parameter values statically defined in Talend Studio, or in Talend Cloud Management Console in the
case of Talend Cloud context parameters, will not take effect.

This section describes how to create contexts and variables and define context parameter values.
For an example of loading context parameters dynamically using the tContextLoad component, see Context.
For an example of loading context parameters dynamically using the Implicit Context Load feature, see Data Integration Job
Examples.

Defining context variables for a Job

You can define context variables for a particular Job in two ways:
• Using the Contexts view of the Job.
• Using the F5 key from the Component view of a component.
Defining context variables in the Contexts view
The Contexts view is positioned among the configuration tabs below the design workspace.
The Contexts tab view shows all of the variables that have been defined in the current Job and context variables imported
into the current Job.

From this view, you can manage your built-in variables:


• Create and manage built-in contexts.
• Create, edit and delete built-in variables.
• Reorganize the context variables.
• Add built-in context variables to the Repository.
• Import variables from a Repository context source for use in the current Job.
• Edit Repository-stored context variables and update the changes to the Repository.
• Remove imported Repository variables from the current Job.
The following example will demonstrate how to define two contexts named Prod and Test and a set of variables - host,
port, database, username, password, and table_name - under the two contexts for a Job.
Defining contexts

Procedure
1. Open the Job in the design workspace and go to the Contexts view.


If the Contexts view is not displayed, select Window > Show view > Talend Contexts to open the Contexts view in the
Integration perspective.
2. Click the [+] button at the top right corner.
The Configure Contexts dialog box opens and a context named Default is created by default.
3. Select the default context and click Edit to rename it, Prod in this example. Click OK.
4. In the open dialog box, click New... and enter Test in the New Context dialog box, click OK.
5. Select the check box preceding the context you want to set as the default context.
You can also set the default context by selecting the context name from the Default context environment list in the
Contexts tab view.
If needed, move a context up or down by selecting it and clicking the Up or Down button.

Example
In this example, set Test as the default context and move it up.

6. Click OK to validate your context definition and close the dialog box.
The newly created contexts are shown in the context variables table of the Contexts view.
7. Repeat the above steps to create as many new contexts as needed.
If you do not want to define the values of each new context from scratch, you can create the first context and define
all its values, as when you create a new one all the parameters of the context selected as default are copied to the new
context. You can then modify the values of the new context as needed.
Defining variables

Procedure
1. Click the [+] button at the bottom of the Contexts view to add lines in the table.

2. Click in the Name field and enter the name of the variable you are creating.


Name the first variable host for this example.


3. From the Type list, select the type of the variable.

Warning: If you change the type of a variable that already has a value, the value will be cleared and you need to set
it again.

4. If needed, click in the Comment field and enter a comment to describe the variable.
5. Click in the Value field and enter the variable value under each context.
For different variable types, the Value field appears slightly different when you click in it and functions differently:

• String (default type): editable text field. Default value: null.
• Boolean: drop-down list box with two options: true and false.
• Character, Double, Integer, Long, Short, Object, BigDecimal: editable text field.
• Date: editable text field, with a button to open the Select Date & Time dialog box.
• File: editable text field, with a button to open the Open dialog box for file selection.
• Directory: editable text field, with a button to open the Browse for Folder dialog box for folder selection.
• List of Value: editable text field, with a button to open the Configure Values dialog box for list creation and configuration. Default value: (Empty).
• Password: editable text field; text entered appears encrypted.
• Resource: editable text field, with a button to open the Select a Resource dialog box for you to select a resource defined under Resources in the Repository. For more information, see Using resources in Jobs on page 124.

6. If needed, select the check box next to the variable of interest and enter the prompt message in the corresponding
Prompt field.
This allows you to see a prompt for the variable value and to edit it at execution time. You can show or hide the Prompt
column of the table by clicking the black right/left pointing triangle next to the relevant context name.
7. Repeat the above steps to define all the variables for the different contexts.
• port, type String,
• database, type String,
• username, type String,
• password, type Password,
• table_name, type String.

Results
All the variables created and their values under different contexts are displayed in the table and are ready for use in your
Job. You can further edit the variables in this view if needed.
You can also add a built-in context variable to the Repository to make it reusable across different Jobs. For more information,
see Adding a built-in context variable to the Repository on page 114.
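For instance (hypothetical values), under the Prod context the variable host could be set to prod-db.example.com and
database to sales, while under the Test context host could be localhost and database sales_test. The other variables are
defined once and simply take environment-specific values in each context in the same way.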
Related topics:
• Defining variables from the Component view on page 109
• Centralizing context variables in the Repository on page 109
• Using variables in a Job on page 117
• Running a Job in a selected context on page 118


Defining variables from the Component view

About this task


The quickest way to create a single context variable is to use the F5 key from the Component view. The following example
demonstrates how to create a context variable while configuring a file path for a component in a Job.

Procedure
1. On the relevant Component view, place your cursor in the field you want to parameterize.
2. Press F5 to display the New Context Parameter dialog box:

3. Give a Name to this new variable, fill in the Comment field if needed, and choose the Type.

Note: The variable name should follow the naming rules and should not contain any forbidden characters, such as the
space character.

4. Enter a Prompt to be displayed to confirm the use of this variable in the current Job execution (generally used for test
purposes only), and select the Prompt for value check box to display the prompt message and an editable value field at
execution time.
5. If you have already filled in a value in the corresponding properties field, this value is displayed in the Default value field.
Otherwise, type in the default value you want to use for one context.
6. Click Finish to validate.
7. Go to the Contexts view tab. Notice that the context variables tab lists the newly created variables.

Results
The newly created variables are listed in the Contexts view. The variable created this way is automatically stored in all
existing contexts, but you can subsequently change the value independently in each context. For more information on how
to create or edit a context, see Defining contexts on page 106.

Centralizing context variables in the Repository

Context variables centrally stored in the Repository can be reused across various Jobs.
You can store context variables in the Repository in different ways:


• Creating a context group using the Create / Edit a context group wizard. See Creating a context group and defining
context variables in it on page 110 for details.
• Adding a built-in context variable to an existing or new context group in the Repository. See Adding a built-in context
variable to the Repository on page 114 for details.
• Saving a context from metadata. See Creating a context from a Metadata on page 115 for more information.
Creating a context group and defining context variables in it
The following example will demonstrate how to use the Create / Edit a context group wizard to create a context group
named TalendDB that contains two contexts named Prod and Test and define a set of variables - host, port,
database, username, password, and table_name - under the two contexts in the Repository, for reuse in database
handling Jobs.
Once you have created and adapted as many context sets as you want, click Finish to validate. The context group then displays
under the Contexts node in the Repository tree view. You can further edit the context group, contexts, and context variables
in the wizard by right-clicking the context group under the Contexts node and selecting Edit context group from the contextual menu.
Creating the context group and contexts

Procedure
1. Right-click the Contexts node in the Repository tree view and select Create context group from the contextual menu.
A 2-step wizard appears to help you define the various contexts and context parameters.
2. In Step 1 of 2, type in a name for the context group to be created, TalendDB in this example, and add any general
information such as a description if required. The information you provide in the Description field will appear as a
tooltip when you move your mouse over the context group in the Repository.

3. Click Next to go to Step 2 of 2, which allows you to define the various contexts and variables that you need.
A context named Default has been created and set as the default one by the system.


4. Click the [+] button at the upper right corner of the wizard to define contexts. The Configure Contexts dialog box pops
up.

5. Select the context Default, click the Edit... button and enter Prod in the Rename Context dialog box that opens to
rename the context Default to Prod.
Then click OK to close the dialog box.
6. Click the New... button and enter Test in the New Context dialog box. Then click OK to close the dialog box.
7. Select the check box preceding the context you want to set as the default context. You can also set the default context
by selecting the context name from the Default context environment list on the wizard.
If needed, move a context up or down by selecting it and clicking the Up or Down button.
In this example, set Test as the default context and move it up.


8. Click OK to validate your context definition and close the Configure Contexts dialog box.
The newly created contexts are shown in the context variables table of the wizard.

Defining context variables

Procedure
1. Click the [+] button at the bottom of the wizard to add a parameter line in the table.
2. Click in the Name field and enter the name of the variable you are creating, host in this example.
3. From the Type list, select the type of the variable corresponding to the component field where it will be used, String for
the variable host in this example.


Warning: If you change the type of a variable that already has a value, the value will be cleared and you will need to set
it again.

4. If needed, click in the Comment field and enter a comment to describe the variable.
5. Click in the Value field and enter the variable value under each context.
For different variable types, the Value field appears slightly different when you click in it and behaves differently:

Type                               Value field                                                            Default value

String (default type)              Editable text field                                                    null

Boolean                            Drop-down list box with two options: true and false

Character, Double, Integer,        Editable text field
Long, Short, Object, BigDecimal

Date                               Editable text field, with a button to open the Select Date & Time
                                   dialog box.

File                               Editable text field, with a button to open the Open dialog box for
                                   file selection.

Directory                          Editable text field, with a button to open the Browse for Folder
                                   dialog box for folder selection.

List of Value                      Editable text field, with a button to open the Configure Values        (Empty)
                                   dialog box for list creation and configuration.

Password                           Editable text field; text entered appears encrypted.

Resource                           Editable text field, with a button to open the Select a Resource
                                   dialog box for you to select a resource defined under Resources in
                                   the Repository. For more information, see Using resources in Jobs
                                   on page 124.

6. If needed, select the check box next to the variable of interest and enter the prompt message in the corresponding
Prompt field. This allows you to see a prompt for the variable value and to edit it at execution time.
You can show or hide the Prompt column of the table by clicking the black right/left pointing triangle next to the relevant
context name.
7. Repeat the steps above to define all the variables in this example.
• port, type String,
• database, type String,
• username, type String,
• password, type Password,
• table_name, type String


All the variables created and their values under different contexts are displayed in the table and are ready for use in
your Job.
You can further edit the variables if needed. Through preference configuration, you can enable or disable propagation
of variable changes to your Jobs. For more information, see Performance preferences (Talend > Performance) on page
609.
Adding a built-in context variable to the Repository

About this task


You can save a built-in context variable defined in a Job to a new context group or to an existing context group, provided that
the context variable does not already exist in the group.

Procedure
1. In the Context tab view of a Job, right-click the context variable you want to add to the Repository and select Add to
repository context from the contextual menu to open the Repository Content dialog box.

2. In the dialog box, do either of the following:


• to add your context variable to a new context group, select Create new context group and enter a name for the
new context group in the Group Name field, and then click OK.
• to add your context variable to an existing context group, select the context group and click OK.

Warning: When adding a built-in context variable to an existing context group, make sure that the variable does not
already exist in the context group.

In this example, add the context variable password defined in a Job to a new context group named DB_login.


The context variable is added to the Repository context group of your choice, along with the defined built-in contexts,
and it appears as a Repository-stored context variable in the Contexts tab view.

Creating a context from a Metadata


When creating or editing a metadata connection (through a File or DB metadata wizard), you can save the
connection parameters as context variables in a newly created context group under the Contexts node of the Repository. To
do so, complete your connection details and click the Export as context button in the second step of the wizard.
For more information about this feature, see Exporting metadata as context and reusing context parameters to set up a
connection on page 500.

Applying Repository context variables to a Job

Once a context group is created and stored in the Repository, there are two ways of applying it to a Job:
• Drop a context group. This way, the group is applied as a whole. See Dropping a context group onto a Job on page
115 for details.

• Use the context button at the bottom of the Contexts view. This way, the variables of a context group can be applied
separately. See Applying context variables to a Job using the context button on page 116 for details.
Dropping a context group onto a Job

About this task


To drop a context group onto a Job, proceed as follows:

Procedure
1. Double-click the Job to which a context group is to be added.
2. Once the Job is opened, drop the context group of your choice either onto the design workspace or onto the Contexts
view beneath the workspace.


The Contexts view shows all the contexts and variables of the group. You can:
• edit the contexts by clicking the [+] button at the upper right corner of the Contexts view.
• delete the whole group or any variable by selecting the group name or the variable and clicking the X button.
• save any imported context variable as a built-in variable by right-clicking it and selecting Add to built-in from the
contextual menu.
• double-click any context variable to open the context group in the Create / Edit a context group wizard and update
changes to the Repository.
Applying context variables to a Job using the context button

About this task


To use the context button to apply context variables to a Job, proceed as follows:

Procedure
1. Double-click the Job to which a context variable is to be added.
2. Once the Job is opened in the workspace, click the Contexts view beneath the workspace to open it.
3. At the bottom of the Contexts view, click the context button to open the wizard in which you select the context variables
to be applied.


4. In the wizard, select the context variables you need to apply or clear those you do not need.

Note: The context variables that have been applied are automatically selected and cannot be cleared.

5. Click OK to apply the selected context variables to the Job.


The Contexts view shows the context group and the selected context variables. You can edit the contexts by clicking
the [+] button at the upper right corner of the Contexts view, and delete the whole group or any variable by selecting the
group name or the variable and clicking the X button. However, you cannot edit Repository-stored variables in this view.

Using variables in a Job

About this task


You can use an existing global variable, a context variable defined in your Job, or a Repository-stored context variable
applied to your Job in any component properties field.

Warning: In database output components, when parallel execution is enabled, it is not possible to use global variables to
retrieve the return values in a subJob.

Procedure
1. In the relevant Component view, place your mouse in the field you want to parameterize and press Ctrl+Space to display
a full list of all the global variables and those context variables defined in or applied to your Job.
The list grows along with new user-defined variables (context variables).


2. Double-click the variable of your choice to insert it in the field.
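For instance, a context variable named host defined earlier in this chapter is referenced in component fields and in the
generated Java code as context.host. A database connection URL field could therefore contain an expression such as the
following (values and component hypothetical):

    // Java expression typed directly in a component field, for example a JDBC URL field
    "jdbc:mysql://" + context.host + ":" + context.port + "/" + context.database

At execution time, the values of the context selected for the run are substituted into this expression.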

Running a Job in a selected context

You can select the context you want the Job design to be executed in.

Procedure
1. Click the Run tab.
2. In the Context area, select the relevant context among the various ones you created.

If you did not create any context, only the Default context shows on the list.
All the context variables you created for the selected context display, along with their respective values, in a table
underneath.
To make a change to a variable value permanent, change it in the Contexts view if the variable is of the built-in type, or in
the context group in the Repository if it is Repository-stored.

Setting up code dependencies on a Job


If you want to enable your Job to call any function in a routine or custom routine jar, you need to set up code dependencies,
namely routine or custom routine jar dependencies, on the Job.
For more information about routines and custom routine jars, see What are routines on page 515 and Creating custom
routine jars on page 516.

Procedure
1. In the Repository tree view, right-click your Job and select Setup Codes Dependencies from the contextual menu. If this
option is not available, select Setup routine dependencies instead.
A dialog box with two tabs Custom Routine Jars and Global routines is displayed. You can set up custom routine jar and
global routine dependencies on the Job on the corresponding tab. By default, all system routines are automatically set
as dependencies for Jobs.


2. Click the [+] button on a tab if you need to set up the corresponding dependencies on the Job.
A dialog box is displayed, which lists all the corresponding routines or custom routines jars.
3. Select one or more routines or custom routine jars containing the functions that your Job calls.
4. Click OK to save your changes and close the dialog box.
If a routine or a custom routine jar does not contain any function that your Job calls, you can remove it by selecting it on
the tab and then clicking the [x] button. This will help avoid redundancy in the exported dependencies.

Note: You can right-click a routine or a custom routine jar to use the Impact Analysis feature. This feature indicates
which Jobs use the routine or the custom routine jar and would therefore be impacted by any modification. For
further information about Impact Analysis, see Analyzing repository items on page 175.
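As an illustration, once a routine is set up as a dependency, its public static methods can be called from any Java expression
in the Job, for example in a tMap expression or a tJava component. The routine and method names below are hypothetical:

    // Call to a user routine named MyStringUtils from a tMap expression,
    // assuming the routine defines: public static String normalize(String s)
    MyStringUtils.normalize(row1.customerName)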

Handling Jobs: advanced subjects


The sections below give detailed information about various advanced configuration situations of a data integration Job,
including handling multiple input and output flows, using SQL queries, using external components in the Job, and scheduling
a task to run your Job.

Creating queries using the SQLBuilder

SQLBuilder helps you create your SQL queries and monitor the changes between DB tables and metadata tables. This editor
is available in all DBInput and DBSQLRow components (specific or generic).
You can create a query using the SQLBuilder whether your database table schema is stored in the Repository tree view or
built-in directly in the Job.
Fill in the DB connection details and select the appropriate repository entry if you defined it.
Remove the default query statement in the Query field of the Basic settings view of the Component panel. Then click the [...]
button to open the SQL Builder editor.


The SQL Builder editor is made of the following panels:


• Current Schema,
• Database structure,
• Query editor made of editor and designer tabs,
• Query execution view,
• Schema view.
The Database structure shows the tables for which a schema was defined either in the repository database entry or in your
built-in connection.
The schema view, in the bottom right corner of the editor, shows the column description.
Comparing database structures
On the Database Structure panel, you can see all tables stored in the DB connection metadata entry in the Repository tree
view, or in case of built-in schema, the tables of the database itself.

Note:
The connection to the database, in the case of a built-in schema or when refreshing a repository schema, might take
quite some time.

Click the refresh icon to display the differences between the DB metadata tables and the actual DB tables.


The Diff icons point out that the table contains differences or gaps. Expand the table node to show the exact column
containing the differences.
The red highlight shows that the content of the column contains differences or that the column is missing from the actual
database table.
The blue highlight shows that the column is missing from the table stored in Repository > Metadata.
Creating a query

About this task


The SQL Builder editor is a multiple-tab editor that allows you to write or graphically design as many queries as you want.
To create a new query, complete the following:

Procedure
1. Right-click the table or the table column and select Generate Select Statement from the pop-up list.
2. Click the empty tab showing by default and type in your SQL query or press Ctrl+Space to access the autocompletion
list. The tooltip bubble shows the whole path to the table or table section you want to search in.


Alternatively, the graphical query Designer allows you to handle tables easily and have real-time generation of the
corresponding query in the Edit tab.
3. Click the Designer tab to switch from the manual Edit mode to the graphical mode.

Note:
You may get a message while switching from one view to the other as some SQL statements cannot be interpreted
graphically.

4. If you selected a table, all columns are selected by default. Clear the check boxes next to the relevant columns to exclude
them from the selection.
5. Add more tables with a simple right-click. In the Designer view, right-click and select Add tables from the pop-up list, then
select the relevant table to be added.
If joins between these tables already exist, these joins are automatically set up graphically in the editor.
You can also create a join between tables very easily. Right-click the first table column to be linked and select Equal
from the pop-up list to join it with the relevant field of the second table.


The SQL statement corresponding to your graphical operations is also displayed in the viewer part of the editor. Click
the Edit tab to switch back to the manual Edit mode.

Note:
In the Designer view, you cannot add filter criteria graphically. You need to add these in the Edit view.

6. Once your query is complete, execute it by clicking the run icon on the toolbar.
The toolbar of the query editor gives you quick access to common commands such as execute, open, save and clear.
The results of the active query are displayed in the Results view in the lower left corner.
7. If needed, you can select the context mode check box to keep the original query statement and customize it properly
in the Query area of the component. For example, if a context parameter is used in the query statement, you cannot
execute it by clicking the run icon on the toolbar.


8. Click OK. The query statement will be loaded automatically in the Query area of the component.
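As an illustration, the query loaded into the Query area of the component is an ordinary Java string, so, when the context
mode is used, it can embed context variables. The table name below comes from the context example earlier in this chapter;
the column names are hypothetical:

    "SELECT id, name, created_date FROM " + context.table_name + " WHERE status = 'ACTIVE'"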
Storing a query in the repository
To be able to retrieve and reuse queries, we recommend that you store them in the repository.
In the SQL Builder editor, click the dedicated icon on the toolbar to bind the query with the DB connection and schema, in
case these are also stored in the repository.
The query can then be accessed from the Database structure view, on the left-hand side of the editor.

Setting checkpoints on trigger connections

About this task


You can set "checkpoints" on one or more trigger connections of the types OnSubjobOK and OnSubjobError that you use to
connect components together in your Job design. Doing so allows you, in case of failure during execution, to recover the
execution of your Job from the last checkpoint before the error.
Therefore, checkpoints within a Job design can be defined as reference points that can precede or follow a failure point during
Job execution.

Note: The Error recovery settings can be edited only in a remote project. For information about opening a remote project,
see Opening a remote project on page 23.

To define a checkpoint on a trigger connection in a Job, do the following:

Procedure
1. In the design workspace and after designing your Job, click the trigger connection you want to set as a checkpoint.
The Basic settings view of the selected trigger connection appears.
2. Click the Error recovery tab in the lower left corner to display the Error recovery view.


3. Select the Recovery Checkpoint check box to define the selected trigger connection as a checkpoint in the Job data
flow. The checkpoint icon is appended to the selected trigger connection.


4. In the Label field, enter a name for the defined checkpoint. This name will display in the Label column in the Recovery
checkpoints view in Talend Administration Center. For more information, see Talend Administration Center User Guide.
5. In the Failure Instructions field, enter free text to explain the problem and what you think the failure reason could
be. These instructions will display in the Failure Instructions column in the Recovery checkpoints view in Talend
Administration Center. For more information, see Talend Administration Center User Guide.
6. Save your Job before closing or running it in order for the defined properties to be taken into account.

Results
Later, and in case of failure during the execution of the designed Job, you can recover this Job execution from the latest
checkpoint previous to the failure through the Error Recovery Management page in Talend Administration Center.
For more information, see the recovering Job execution chapter in Talend Administration Center User Guide.

Using resources in Jobs

You can create resources in Talend Studio and use them in your Jobs for file handling. This way, when exporting your Jobs,
for example, you can pack the resource files as Job dependencies and deploy your Jobs without having to copy the files to
the target system.
This feature is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more
information, see Installing features using the Feature Manager.
Creating a resource

Procedure
1. In the Repository tree view of the Integration perspective, right-click Resources and select Create resource from the
context menu to open the New Resource wizard.


Field Description

Source File The path to a local or remote file to create the resource from.

Name The name of the resource. A message comes up if you enter prohibited characters.

Purpose The purpose of the resource or any useful information regarding the resource use.

Description Resource description.

Author A read-only field that shows by default the current user login.

Locker A read-only field that shows by default the login of the user who owns the lock on the current resource. This field
is empty when you are creating a resource and has data only when you are editing the properties of an existing
resource.

Version A read-only field. You can manually increment the version using the M and m buttons.

Status A list to select from the status of the resource you are creating.

Path A list to select from the folder in which the resource will be created.

2. In the New Resource wizard:


• To create your resource from an existing file, specify the path to the file in the Resource file field or click Browse to
browse to it.
The resource is automatically named after the specified file. You can rename it if needed.
• To create your resource from scratch, simply give your resource a name.
3. Click Finish to create the resource and close the wizard.
• If the resource is created from an existing file, a new file named after the resource with the same content is created
in the <PROJECT_NAME>\resources folder in the workspace directory and the content is displayed in the
default editor or in a text editor on the design workspace depending on the content type. You can edit the content
if needed.


• If the resource is created from scratch, an empty .txt file named after the resource is created in the
<PROJECT_NAME>\resources folder in the workspace directory and opened on the design workspace for you to
provide the content.

Results
The created resource is added under the Resources node of the Repository tree. You can access different management
options, detect dependencies, and perform impact analysis for your resource, by right-clicking the resource name.
Using a resource in a Job
You can use a resource in a Job by adding a context variable in the Contexts view of the Job.

Note: Using resources through repository context variables is not supported yet.

Procedure
1. Create a resource by following the instructions in Creating a resource on page 124.

Example
In this example, create a resource named AirportInfo to read airport information from a local text file.
2. Open the Job in which you want to use the resource.

Example
In this example, the Job has only two components -- tFileInputDelimited and tLogRow, connected by a Row > Main
connection.
3. In the Contexts view of the Job, add a context variable, named resourceFile in this example, of type Resource.

Example

4. Click in the Value field of the context variable, click the button that appears, and select the resource in the Select a
Resource dialog box.
5. In the Component view of the tFileInputDelimited component, define the component schema according to the structure
of the resource content.
6. In the File name/Stream field, specify the resource context variable.

Example
In this example, enter context.resourceFile.

Results
When executed, the Job will read and display the content of the input file defined in the resource. You can export the Job
with the input file as a Job dependency, and deploy it to a different system without copying the file to the target machine.


Using the Use Output Stream feature

The Use Output Stream feature allows you to process the data in byte arrays using a java.io.OutputStream class, which
writes data as a binary stream without data buffering. When processing data with a linear format, for example when all
data is of String type, this feature helps you improve the overall output performance.

The Use Output Stream feature can be found in the Basic settings view of a number of components such as
tFileOutputDelimited.
To use this feature, select the Use Output Stream check box in the Basic settings view of a component that has this feature. In
the Output Stream field that is thus enabled, define your output stream using a command.

Note:
Prior to using the output stream feature, you have to open a stream.
For a detailed example of the illustration of this prerequisite and the usage of the Use Output Stream feature, see Data
Integration Job Examples.
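As a minimal sketch (component placement, file path and variable key are hypothetical), the stream can be opened in a tJava
component that runs before the output component, stored in globalMap, and then referenced in the Output Stream field:

    // tJava: open the stream once and keep it for the output component
    java.io.OutputStream out = new java.io.FileOutputStream("/tmp/out.csv");
    globalMap.put("out_stream", out);

    // Output Stream field of the output component (for example tFileOutputDelimited):
    (java.io.OutputStream) globalMap.get("out_stream")

Remember to close the stream when the writing is finished, for example in a tJava component placed after the output subJob.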

Handling Jobs: miscellaneous subjects


The sections below give detailed information about various subjects related to the management of a data integration Job,
including:
• Using folders on page 127
• Sharing a database connection on page 128
• Adding notes to a Job design on page 131
• Viewing in-process data on page 132
• Displaying the code or the outline of your Job on page 134
• Managing the subJob display on page 135
• Defining options on the Job view on page 137

Using folders

About this task


You can organize your Jobs into folders.
To create a folder, proceed as follows:

Procedure
1. In the Repository tree view of the Integration perspective, right-click Job Designs and select Create folder from the
contextual menu.
The New folder dialog box displays.


2. In the Label field, enter a name for the folder and then click Finish to confirm your changes and close the dialog box.
The created folder is listed under the Job Designs node in the Repository tree view.

Results

Note:
If you have already created Jobs that you want to move into this new folder, simply drop them into the folder.

Sharing a database connection

About this task


If you have various Jobs using the same database connection, you can factorize the connection by using the Use or register a
shared DB Connection option so that the connection can be shared between parent and child Jobs.
This option has been added to all database connection components in order to reduce the number of connections to open
and close.

Warning: The Use or register a shared DB Connection option of all database connection components is incompatible with
the Use dynamic job and Use an independent process to run subJob options of the tRunJob component. Using a shared
database connection together with a tRunJob component with either of these two options enabled will cause your Job to
fail.

The procedure below assumes that you have two related Jobs (a parent Job and a child Job) that both need to connect to your
remote MySQL database.
For a complete use case, see MySQL.
To use a shared database connection in the two Jobs, do the following:

Procedure
1. Add a tMysqlConnection (assuming that you work with a MySQL database) to both the parent and the child Job, if they
are not using a database connection component.
2. Connect each tMysqlConnection to the relevant component in your Jobs using a Trigger > On Subjob Ok link.


3. In the Basic settings view of the tMysqlConnection component that will run first, fill in the database connection details
if the database connection is not centrally stored in the Repository.
4. Select the Use or register a shared DB Connection check box, and give a name to the connection in the Shared DB
Connection Name field.

You are now able to re-use the connection in your child Job.
5. In the Basic settings view of the other tMysqlConnection component, which is in the other Job, simply select Use or
register a shared DB Connection check box, and fill the Shared DB Connection Name field with the same name as in the
parent Job.

Note:
Among the different Jobs sharing the same database connection, you need to define the database connection details
only in the first Job that needs to open the database connection.

Handling error icons on components or Jobs

When the properties of a component are not properly defined and contain one or several errors that can prevent the Job code
from compiling properly, error icons automatically show next to the component icon on the design workspace and next to the
Job name in the Repository tree view.
Warnings and error icons on components
When a component is not properly defined or if the link to the next component does not exist yet, a red checked circle or a
warning sign is docked at the component icon.
Mouse over the component to display the tooltip messages or warnings along with the label. This context-sensitive help
informs you about any missing data or component status.


Error icons on Jobs


When the component settings contain one or several errors that can prevent the Job code from compiling properly, an icon
automatically shows next to the Job name in the Repository tree view.

The error icon displays as well on the tab next to the Job name when you open the Job on the design workspace.
The compilation or code generation only takes place when you carry out one of the following operations:
• opening a Job,
• clicking on the Code Viewer tab,
• executing a Job (clicking on Run Job),
• saving the Job.
Hence, the red error icon only shows at that point.
When you execute the Job, a warning dialog box opens to list the source and description of any error in the current Job.


Click Cancel to stop your Job execution or click Continue to continue it.

Adding notes to a Job design

In the Palette, click the Misc family and then drop the Note element onto the design workspace to add a text comment to a
particular component or to the whole Job.

You can change the note format. To do so, select the note you want to format and click the Basic settings tab of the
Component view.

Select the Opacity check box to display the background color. By default, this box is selected when you drop a note on the
design workspace. If you clear this box, the background becomes transparent.
You can select options from the Fonts and Colors list to change the font style, size, color, and so on as well as the
background and border color of your note.
You can select the Adjust horizontal and Adjust vertical boxes to define the vertical and horizontal alignment of the text of
your note.
The content of the Text field is the text displayed on your note.


Viewing in-process data

At any stage of your Job execution, you might want to check the actual data being processed, either before you run the Job
(on the input component) or after the Job has been executed to quickly view the output (on the output component).
Result Data Viewer

About this task


The Data Viewer feature is available on all components that handle data flows (as input or output) and allows you to view
the data the way it has been set. You will thus be able to spot any setting errors.

Procedure
1. On your Job Design, select the input component.
2. Right-click and select Data Viewer from the pop-up menu.
The Data Viewer dialog box displays the content of the component selected.

You can set the display parameters and filter the content, as described in the table below:

Parameter Description

Rows/page Enter the maximum number of rows to be displayed per page.

Limits Enter the maximum number of rows to be displayed in the viewer.

Null Select the Null check box above a given column to filter any null values from the column.

Condition Enter a condition on which to filter the content displayed.

3. Click Set parameters and continue to go to the Select context dialog box.


From the drop-down context list, you can select the context variables you want to verify.
Raw Data Viewer
Some components (of the File family) also provide an extra tab in the viewer. This tab shows the raw data content as it is in
the actual file.


This file content viewer shows the data as it is in the file, regardless of your settings. This can be convenient for spotting files
that are not well formed.

Displaying the code or the outline of your Job

This panel is located below the Repository tree view. It displays detailed information about the open Job in the design
workspace.
The Information panel is composed of two tabs, Outline and Code Viewer, which provide information regarding the displayed
diagram.
Outline
The Outline tab offers a quick view of the open Job on the design workspace and also a tree view of all used elements in
the Job. As the design workspace, like any other window area, can be resized to suit your needs, the Outline view provides a
convenient way for you to check out where on your design workspace you are located.


This graphical representation of the diagram highlights in a blue rectangle the diagram part showing in the design
workspace.
Click the blue-highlighted view and hold down the mouse button. Then, move the rectangle over the Job. The view in the
design workspace moves accordingly.
The Outline view can also display a folder tree view of the components in use in the current diagram. Expand the node of
a component to show the list of variables available for this component.

To switch between the graphical outline view and the tree view, click the corresponding icon on the toolbar at the top right of
the panel.

Selecting a tree view node and then clicking the dedicated icon on the toolbar of the Outline view directs you to the related
item on the design workspace.
Code viewer
The Code viewer tab provides lines of code generated for the selected component, behind the active Job design view, as well
as the run menu, including the Start, Body and End elements.
Using a graphical colored code view, the tab shows the code of the component selected in the design workspace. This is a
partial view of the primary Code tab docked at the bottom of the design workspace, which shows the code generated for the
whole Job.

Managing the subJob display

A subJob is graphically defined by a blue square gathering all connected components that belong to this subJob. Each
individual component can be considered a subJob as long as it is not yet connected to any other component.


This blue highlight helps you easily distinguish one subJob from another.

Note: A Job can be made of one single subJob. An orange square shows the prejob and postjob parts which are different
types of subJobs. For more information about prejob and postjob, see Using the tPrejob and tPostjob components on page
89.

Formatting subJobs

About this task


You can modify the subJob color and its title color. To do so, select your subJob and click the Component view.

In the Basic settings view, select the Show subJob title check box if you want to add a title to your subJob, then fill in a title.
To modify the title color and the subJob color:

Procedure
1. In the Basic settings view, click the Title color/subJob color button to display the Colors dialog box.
2. Set your colors as desired. By default, the title color is blue and the subJob color is transparent blue.
Collapsing the subJobs
If your Job is made of numerous subJobs, you can collapse them to improve the readability of the whole Job. The minus (-)
and plus ([+]) signs on the top right-hand corner of the subJob allow you to collapse and restore the complete subJob.

Click the minus sign (-) to collapse the subJob. When reduced, only the first component of the subJob is displayed.
Click the plus sign ([+]) to restore your subJob.
Removing the subJob background color
If you do not want your subJobs to be highlighted, you can remove the background color on all or specific subJobs.
To remove the background color of all your subJobs, click the Toggle subJobs icon on the toolbar of Talend Studio.


To remove the background color of a specific subJob, right-click the subJob and select the Hide subJob option on the pop-up
menu.

Defining options on the Job view

In the Job view, located in the bottom part of the design workspace, you can define the Job's optional functions. This view is
made of two tabs: the Stats & Logs tab and the Extra tab.
The Stats & Logs tab allows you to automate the use of Stats & Logs features and the Context loading feature. For more
information, see Automating the use of statistics & logs on page 137.
The Extra tab lists various options you can set to automate some features such as the context parameters use, in the Implicit
Context Loading area. For more information, see Using the features in the Extra tab on page 138.
Automating the use of statistics & logs

About this task


If you make heavy use of logs, statistics and other measurements of your data flows, you may face the issue of having too
many log-related components cluttering your Job designs. You can automate the use of the tFlowMeterCatcher, tStatCatcher
and tLogCatcher component functionalities without adding these components to your Job, via the Stats & Logs tab.
The Stats & Logs panel is located on the Job tab underneath the design workspace and prevents your Job designs from being
overloaded with components.

Note:
This setting supersedes the log-related components with a general log configuration.

To set the Stats & Logs properties:

Procedure
1. Click the Job tab.
2. Select the Stats & Logs panel to display the configuration view.


3. Set the relevant details depending on the output you prefer (console, file or database).
4. Select the relevant Catch check box according to your needs.

Results

Note:

You can save the settings into your Project Settings by clicking the corresponding button. This way, you can
access such settings via File > Edit project settings > Job settings > Stats & Logs or via the corresponding button on the
toolbar.

When you use Stats & Logs functions in your Job, you can apply them to all its subJobs.

To do so, click the Apply to subJobs button in the Stats & Logs panel of the Job view and the selected stats & logs functions
of the main Job will be selected for all of its subJobs.
Using the features in the Extra tab
The Extra tab offers some optional function parameters.
• Select the Multithread execution check box to allow two Job executions to start at the same time.
• Set the Implicit tContextLoad option parameters to avoid using the tContextLoad component in your Job and to automate
the use of context parameters.
Choose between File and Database as the source of your context parameters and set the file or database access manually
(a sample context file is shown after this list). Set notifications (error/warning/info) for unexpected behaviors linked to
context parameter settings.


For an example of loading context parameters dynamically using the Implicit Context Load feature, see Data Integration
Job Examples.
• When you fill in Implicit tContextLoad manually, you can store these parameters in your project by clicking the Save to
project settings button, and thus reuse these parameters for other components in different Jobs.
• Select the Use Project Settings check box to retrieve the context parameters you have already defined in the Project
Settings view.
The Implicit tContextLoad option becomes available and all fields are filled in automatically.
For more information about context parameters, see Context settings on page 585.
• Click Reload from project settings to update the context parameters list with the latest context parameters from the
project settings.
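For example (hypothetical values), if File is selected as the source and the field separator is set to "=", the file referenced in
the Implicit tContextLoad settings could simply list one variable per line:

    host=localhost
    port=3306
    database=sales_test
    username=talend

At startup, the Job reads this file and overrides the matching context variables with the values found there.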

Error handling in Talend Studio

There are various ways to capture, handle and manage errors at the Job level in Talend Studio.
• Using the dedicated components provided by Talend
• Using connections between two components in a Job
• Using a customized, appropriate Job design
Error handling with components
This section explains how to handle errors using components.
Talend offers the following components to design error handling:
• tAssert and tAssertCatcher
• tChronometerStart and tChronometerStop
• tDie, tWarn and tLogCatcher
• tFlowMeter and tFlowMeterCatcher
• tLogRow
For more information, see the related documentation of these components.
Using tAssert and tAssertCatcher for error handling
This section explains how to design error handling with tAssert and tAssertCatcher.
tAssert works alongside tAssertCatcher to evaluate the status of a Job execution. It generates a boolean evaluation, either OK
or FAIL, for the Job execution status.
These two components, when used together, help catch certain types of errors and handle or route them in the right
direction, as per the project requirements.

Use case
A Job expects a file with ten lines of data. Each line holds master record data about its data center. The batch processing
should start only if this file arrives with ten lines of data.


Sample Job

Design
In this Job, tFileRowCount reads the record count.
In tAssert, there is a condition to validate if the record count is equal to ten. tAssert performs the condition check and
declares either OK or FAIL as output.
tAssertCatcher catches the output given by tAssert and, in this example, displays the output to the console.
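The assertive condition set in tAssert is an ordinary boolean Java expression. A minimal sketch, assuming the row-count
component is named tFileRowCount_1, could be:

    // Evaluates to OK when the file contains exactly ten lines, FAIL otherwise
    ((Integer) globalMap.get("tFileRowCount_1_COUNT")) == 10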
This Job can be used for numerous tasks, such as:
• Triggering the next set of Jobs in an execution plan
• Sending an email to the source team stating that the input records are either good or not good, depending on the result
Using tChronometerStart and tChronometerStop for error handling
This section explains how to measure the time a Job or subJob takes using tChronometerStart and tChronometerStop.

Use case
An exclusive chocolate company's world marketing head wants to see a specific report every morning. The devops team
decides to look at the time the Job takes to complete, to ensure that they keep an eye on the Job and never miss the service-
level agreement (SLA).


Sample Job

Design
In this example, before the file is read, the start time is captured by tChronometerStart. Towards the end, tChronometerStop
captures the end time of the Job.
Apart from the use case given, these components can be used to:
• Perform periodical audits to see the performance of each Job
• Determine dependency wait in case there are too many source/input file streams
Using tDie, tWarn and tLogCatcher for error handling
This section explains how to design error handling with tDie, tWarn and tLogCatcher.
tDie allows you to have a code and message associated with the raised exception. It also has options to exit the JVM and kill
the Job on exception. This component is very useful in case you want to take different actions in your Job based on the kind
of error raised.
tWarn allows you to have a code and message associated with the raised exception. It is generally used to signal the
completion of a Job or exceptions which do not block the execution of the Job. You can choose the message and the code
you want to send to the catcher component. For example, you can use tWarn to signal the end of a specific pipeline flow in
your Job.
tLogCatcher catches all exceptions and warnings raised by tWarn and tDie. You can also use tWarn or tDie at the end of the
Job, to make tLogCatcher update the Job log when a Job finishes successfully or to carry out other actions.


Sample Job

Design
In this example, the Job reads from a file, makes some changes and writes to an output file. tWarn is used to signal when
the Job completes successfully, and its message is written to a file. tDie is used to capture any errors when writing the file,
and its message is written to a file.
When the Job runs, there are three possible results.
• The Job works as expected and the file is written successfully. In this case, tWarn displays the message and the Job exits
green.
• The Job works but there is an issue while creating the file. In this case, the Job does not end but displays an error and
exits gracefully.
• The Job meets a fatal error.
tLogCatcher catches all the errors and redirects them as per the conditions given in tMap. Here, all the warnings raised by
tWarn are sent to a warning file, all the error messages raised by tDie are sent to an error file and all other unexpected errors
are sent to another error file.
Depending on the requirement and design, these components can be used to control the Job and also effectively manage the
data flow. For example, you can use tWarn to call on a subJob if the file is created successfully.
Using tFlowMeter and tFlowMeterCatcher for error handling
This section explains how to design error handling using tFlowMeter and tFlowMeterCatcher.
Based on a defined schema, tFlowMeterCatcher catches the processing volumetric from tFlowMeter and passes them on to
the output component.


Sample Job

Design
In this example, the Job reads from a file, makes some changes and stores the output data in a user-defined variable using
the tJava component.
tFlowMeter is used to capture the inner join and reject records.
tFlowMeterCatcher is used to catch the records processed at both links. It is then used to check that all the records read are
processed and that no data is lost.
Using tWarn, the result is captured as a message.
This Job ensures that there is no data leakage and that all the records that are read from the source file are either processed
successfully or rejected.
Using tLogRow for error handling
This section explains how to design error handling using tLogRow.
tLogRow displays data or results in the Run console; it is used to monitor processed data.


Sample Job

Design
The sample Job shows a design in which tLogRow is used to display the processed records to the console.
Error handling with connections
This section explains how to use connections to design error handling.
Trigger connections are connections that are triggered based on certain conditions. In this example, the conditions are
that a component or subJob either completed successfully or has errors. The connection in use creates a dependency
between Jobs or subJobs, which are triggered one after the other according to the nature of the trigger. It is important to
note that no data is handled through these connections.

SubJob triggers
OnSubjobOk is used to trigger the next subJob on the condition that the main subJob completed without error. This
connection is to be used only from the start component of the Job.
These connections are used to orchestrate the subJobs forming the Job or to easily troubleshoot and handle unexpected
errors.
OnSubjobError is used to trigger the next subJob in case the first subJob does not complete correctly. This helps flag the
bottleneck or handle the error if possible.

Component triggers
OnComponentOk and OnComponentError are component triggers. They can be used with any source component in the
subJob.
OnComponentOk only triggers the target component once the execution of the source component is completed without
error. Its main use could be to trigger a notification subJob, for example.
OnComponentError triggers the subJob or component as soon as an error is encountered in the primary Job.

Run if triggers
Run if triggers a subJob or component in case the condition defined is met.
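The Run if condition itself is a boolean Java expression evaluated at runtime. For example (component name hypothetical), a
target subJob could be triggered only when the source component actually read some rows:

    ((Integer) globalMap.get("tFileInputDelimited_1_NB_LINE")) > 0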


Trigger connection example

This example shows the functionality of the triggers.


• When OnSubjobOk is used, the next component gets triggered only when the subJob is completed.
• When OnComponentOk is used, the next step is triggered as soon as the component execution is complete.
Error handling at design
This section explains how to integrate error handling in the Job design.
Apart from the built-in components and link triggers provided by Talend, you can handle errors very effectively by
integrating the logic in the Job design itself using processing components such as tMap.

In this example, tMap is used to determine the data flow. All the error records that need reprocessing are notified to
business users via email. The rejects are written to a file, probably for reprocessing, and the delta records are written to the
table.
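As an illustration, this kind of routing is typically driven by boolean filter expressions set on the tMap output tables; the flow
and column names below are hypothetical:

    // Filter expression on a "rejects" output table of tMap
    row1.email == null || !row1.email.contains("@")

Rows matching the expression are sent to the reject flow, while the other outputs receive the remaining records.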

Designing a Joblet
What is a Joblet
A Joblet is a specific component that replaces Job component groups. It factorizes recurrent processing or complex
transformation steps to ease the reading of a complex Job. Joblets can be reused in different Jobs or several times in the
same Job.
At runtime, the Joblet code is integrated into the Job code itself. No separate code is generated, the same Java class being
used.


This way, using a Joblet has no drawbacks on the performance side. The execution time is unchanged whether your Job
includes a Joblet or the whole subJob directly.
Moreover, if you intend to log and monitor the whole Job statistics and execution errors or warnings, the Joblets included in
your Job will be monitored without requiring further log components (such as tLogCatcher, tStatCatcher or tFlowMeterCatcher).
A Joblet is easily identified as it is enclosed in a dotted square on a green background.

This specific component can be used like any other usual component within a Job. For more information on how to design a
Job, see What is a Job design? on page 47.
Unlike with the tRunJob component, the Joblet code is automatically included in the Job code at runtime, thus using fewer
resources. As it uses the same context variables as the Job itself, the Joblet is easier to maintain.
To use a group of components as a standalone Job, you can use the tRunJob component. Unlike the Joblet, the tRunJob has
its own context variables.

Creating a Joblet from scratch

Procedure
1. In the Repository tree view of Talend Studio, right-click the Job Designs node or the Joblets node under it, and select
Create Joblet from the contextual menu.

2. In the New Joblet dialog box, fill in at least the Name field to designate the Joblet. You can also add information to ease
the Joblet management, such as: Description, Version, Author and Status.

Field Description

Name Enter a name for your new Joblet. A message comes up if you enter prohibited characters.

Purpose Enter the Joblet purpose or any useful information regarding the Job in use.

Description Enter a description if needed.

Author This field is read-only as it shows by default the current user login.

Locker This field is read-only as it shows by default the current user login.

Version The version is read-only. You can manually increment the version using the M and m buttons.


Field Description

Status Select a status from the list. By default, the status you can select is development, testing, or production. To define
more statuses, see Status settings on page 586.

Path This field is read-only. It refers to the item access path in the repository. This field is empty when the item is
created in the root folder.

Icon Select the icon you want to use for your Joblet. It will show next to the Joblet name in the Repository tree view
and in the Palette as well.

3. In the Icon area, click the [...] button to open a window where you can browse to an icon of your choice and add it to
your Joblet, if needed.
4. Select the icon and click Open. The window closes and the selected icon displays in the Icon area in the New Joblet
dialog box.

Note:
The icon must have the dimensions 32 x 32 pixels. You will get an image-size related error if you try to use icons with
other dimensions.

5. If necessary, click revert to go back and use the default icon.


6. Click Finish to validate your settings and close the dialog box.
The design workspace opens, showing the Joblet name as the tab label. By default, the newly created Joblet includes an
input and an output Joblet component.
The INPUT component is only to be used if there is a flow coming from the main Job that should be used in the Joblet,
and the OUTPUT component is only to be used if there is a flow going out of the Joblet that needs to be used in the main
Job. You can remove either or both of them as needed.


7. Include the transformation components you need and connect them to the Joblet input and the output components.
In the example below, the input component is removed, and a tMap component is used for the transformation step.
If you need lookup connections, include them here as well.

8. Define the input component schema of the Joblet.


As for any component requiring a schema definition, you can define your schema as Built-in, import it from an XML file
or retrieve it from the Repository tree view.

Results
The output schema is automatically retrieved from the preceding component (likely the transformation component) but you
can also change it if you like. The next step is to use the Joblet you have just created in your usual Job in order to replace the
transformation steps.

Note: You can also include Joblets within a Joblet.

Creating a Joblet from a Job

About this task


You can create a Joblet directly from an existing Job. To do so:

Procedure
1. Open the Job in Talend Studio.


2. Right-click the component(s) you want to transform to a Joblet and select Refactor to Joblet from the contextual menu
to open the New Joblet dialog box. In this example, we want to transform tMap to a Joblet.
3. In the New Joblet dialog box, fill in at least the Name field to designate the Joblet. You can also add information to ease
the Joblet management, such as: Description, Version, Author and Status.

Field Description

Name Enter a name for your new Joblet. A message comes up if you enter prohibited characters.

Purpose Enter the Joblet purpose or any useful information regarding the Job in use.

Description Enter a description if needed.

Author This field is read-only as it shows by default the current user login.

Locker This field is read-only as it shows by default the current user login.

Version The version field is read-only. You can increment the version using the M and m buttons.

Status Select a status from the list. By default, the status you can select is development, testing, or production. To define
more statuses, see Status settings on page 586.

Path This field is read-only. It refers to the item access path in the repository. This field is empty when the item is
created in the root folder.

Icon Select the icon you want to use for your Joblet. It will show next to the Joblet name in the Repository tree view
and in the Palette as well.

4. In the Icon area, click the [...] button to open a window where you can browse to an icon of your choice and add it to
your Joblet, if needed.
5. Select the icon and click Open. The window closes and the selected icon displays in the Icon area in the New Joblet
dialog box.

Note:
The icon must have the dimensions 32 x 32 pixels. You will get an image-size related error if you try to use an icon with other dimensions.


6. If necessary, click Revert to go back to the default icon.


7. Click Finish to validate your settings and close the dialog box.
The design workspace opens, showing the Joblet name as the tab label. The input and output Joblet components are automatically included in the Joblet during its creation, along with the transformation component selected for creating the Joblet, tMap in this example.

The tMap component is then automatically replaced by the Joblet component in the Job.


8. Save your Job and press F6 to execute it.

Results
You can also include other transformation steps after your Joblet, if necessary. For more information about modifying a Joblet, see Editing a Joblet on page 154.

Using a Joblet in a Job

About this task


Once you have created a Joblet, it appears in your usual Palette of components, and you can include it in place of the transformation steps within your Job.

Procedure
1. Like any other component, click the relevant Joblet name in the Palette and drop it onto the design workspace to include it in your Job.
2. Connect the Joblet with the input and/or output components of the Job.


3. Define all other components properties and context variables, if required, before running the Job like any other Job.
4. Include other transformation steps following the Joblet if required.
5. Save and run your Job as usual.

Using a Joblet at the beginning of a Job

About this task


It is possible to use your Joblet at the beginning of a Job. To do so:

Procedure
1. Open the Joblet you want to use in the beginning of a Job.
2. Click the Joblet tab in the lower part of the Studio to display the relevant view and then click Extra.
3. Select the Startable check box.

Launching a Joblet
It is possible to use a Joblet as a step in a procedure. You can start a Joblet, or a subJob after the execution of a Joblet,
using the Trigger Input and Trigger Output components from the Palette. In this example, the created Joblet is called
Transformation.

Using triggers in a Joblet

Procedure
1. Drag the Trigger Input component from the Palette and drop it above your Joblet.
2. Right-click Trigger Input, select a link of the type Trigger > OnSubjobOk and then click the input component of the
Joblet so that your Joblet starts after the execution of the first subJob.


3. Drag the Trigger Output component from the Palette and drop it below the Joblet.
4. Right-click the input component of the Joblet and select a link of the type Trigger > OnSubjobOk and then click the
Trigger Output component so that your third subJob starts after the execution of your Joblet.


Launching the Joblet in a Job

Procedure
1. Create a new Job.
2. From the Repository tree view, click the created Joblet (Transformation) and drop it in the Job.
3. Drop a tFileOutputDelimited component next to the Joblet component, drop a tWarn component above the Joblet
component, and drop a tMsgBox component below the Joblet component.
4. Right-click the Joblet component and then select the Row > Joblet OUTPUT_1 link and click tFileOutputDelimited.
5. Double-click tFileOutputDelimited to display its basic settings and then define the path to the folder and file to be
created in the File Name field.
6. Right-click the tWarn component and select the link of the type Trigger > On Subjob Ok (TRIGGER_INPUT_1) and then
click the Joblet component.
7. Double-click the component that represents the Joblet to display its basic settings view.
In the Joblet TRIGGER_INPUT_1 field, the link type defined in the Joblet is read-only.

Tip: If you use several Trigger Input components in the Joblet and corresponding launching components in the Job, verify that the right component is attached to the right launching link in the Attached node field of the Basic settings view.

8. From the Version list, select the Joblet version you want to use in your Job. In this example, we use the latest version of
the Joblet.



9. Right-click the Joblet component, select the link Trigger > On Subjob Ok (TRIGGER_OUTPUT_1), and then click the
tMsgBox component.


10. Run your Job.

Results
The tWarn component sends a warning message and launches the next subJob holding the Joblet you created: Transformation. Once the second subJob is successfully executed, it launches a third subJob holding the tMsgBox component, indicating at the same time that the transformation has been carried out.

Editing a Joblet
You can edit a Joblet just like any other Job in the Integration perspective of Talend Studio.
You can make changes to a Joblet and get your changes reflected in the actual Job execution output. These changes can be
made directly in the Job or in a separate tab view.
Note that you cannot modify the links of the Joblet directly in the Job.


Editing the Joblet in the Job

Procedure
1. Click the [+] sign to expand the Joblet.

2. Double-click any component to open its Basic settings view and modify its properties.
3. Press Ctrl+S to save your changes.

Editing the Joblet in a new tab view

Procedure
1. Double-click the Joblet you want to edit. You can also right-click it and select Open Joblet Component from the
contextual menu.
The Joblet is automatically locked.

Note: If you do not want the Joblet to open when double-clicking on it, see How to change specific component
settings (Talend > Components) on page 605.

2. Make your changes on your Joblet.


3. If you modify any trigger link connected to the Trigger Input or to the Trigger Output component, be sure to update the
Job using this Joblet accordingly.
4. Press Ctrl+S to save your changes.

Organizing Joblets
In the same way as for Job Designs, you can create folders via the right-click menu to group together families of Joblets.

Procedure
1. In Talend Studio Repository tree view, click Job Designs > Joblets to expand the Joblets node.
2. Right-click on the Joblets node, and choose Create folder.
3. Give a name to this folder and click OK.
4. If you have already created Joblets that you want to move into this new folder, simply drag and drop them into the folder.

Applying a context variable to a Joblet


Context variables stored in the repository can be applied to a Joblet in the same way as to any other Job.
In addition, a Job containing a Joblet allows the Joblet to use context variables different from those the Job itself uses. To do so, simply drop the group of contexts you want to use onto the Joblet in the workspace of the Job.
For more information on how to apply a context variable to a Job from the repository, see Applying Repository context
variables to a Job on page 115.

Setting up code dependencies on a Joblet


If you want to enable your Joblet to call any function in a routine or custom routine jar, you need to set up code
dependencies, namely routine or custom routine jar dependencies, on the Joblet.
You can set up code dependencies on a Joblet in the same way as setting up code dependencies on a Job. For more
information, see Setting up code dependencies on a Job on page 118.
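
For illustration, the sketch below shows what such a routine might look like. The routine name (StringFormatting) and its method are hypothetical; it simply illustrates that a Talend user routine is a plain Java class under the routines package whose public static methods can be called from component expressions inside the Joblet once the dependency is set up.

package routines;

/*
 * Hypothetical user routine used to illustrate a Joblet code dependency.
 * Once this routine is declared as a dependency of the Joblet, components
 * inside the Joblet can call its public static methods in an expression,
 * for example StringFormatting.toUpperTrim(row1.name) in a tMap output.
 */
public class StringFormatting {

    // Trims the input and converts it to upper case.
    // Returns null when the input is null so that null values pass through unchanged.
    public static String toUpperTrim(String value) {
        if (value == null) {
            return null;
        }
        return value.trim().toUpperCase();
    }
}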


Managing Jobs
Activating/Deactivating a component or a subJob
You can activate or deactivate a subJob directly connected to the selected component. You can also activate or deactivate a
single component as well as all the subJobs linked to a Start component. The Start component is the trigger of the Job. It has
a green background.
When a component or a subJob is deactivated, you are not able to create or modify links from or to it. Moreover, at runtime,
no code is generated for the deactivated component or subJob.

Activate or deactivate a component

About this task


To activate or deactivate a component, proceed as follows:

Procedure
1. Right-click the component you want to activate or deactivate, the tFixedFlowInput component for example.
2. Select the option corresponding to the action you want to perform:
• Activate tFixedFlowInput_1 if you want to activate it.
• Deactivate tFixedFlowInput_1 if you want to deactivate it.

Activate or deactivate a subJob

About this task


To activate or deactivate a subJob, proceed as follows:

Procedure
1. Right-click any component composing the subJob.
2. Select the option corresponding to the action you want to perform:
• Activate current subJob if you want to activate it.
• Deactivate current subJob if you want to deactivate it.


Activate or deactivate all linked subJobs

About this task


To activate or deactivate all linked subJobs, proceed as follows:

Procedure
1. Right-click the Start component.
2. Select the option corresponding to the action you want to perform:
• Activate all linked subJobs if you want to activate them.
• Deactivate all linked subJobs if you want to deactivate them.

Importing/exporting items and building Jobs


Talend Studio enables you to import/export your Jobs or items in your Jobs from/to various projects or various versions of the Studio. It also enables you to build Jobs, so that Jobs created in the Studio can be deployed and executed on any server.

Importing items

You can import items from previous versions of Talend Studio or from a different project of your current version.
You can import several types of items:
• Job Designs
• Routines
• Documentation
• Metadata

Note:
From Talend 7.0 onward, a digital signature is added to each project item when it is saved in Talend Studio. When
importing a project or project items, Talend Studio validates the signatures and rejects items with an invalid signature.
This is a security measure to prevent accidental or malicious modification of project items.
However, you can import a project or project items exported from an earlier version of Talend Studio before the
expiration date of a 90-day grace period from the first installation of Talend Studio or before the date set in the migration
token, whichever comes later. Upon successful import, all imported items are signed.
For more information on setting a migration token, see Talend Data Fabric Installation Guide.

Talend Studio allows any authorized user to import any project item from a local repository into the remote repository and
share them with other users.
To import items, right-click any entry such as Job Designs in the Repository tree view and select Import Items from the

contextual menu or directly click the icon on the toolbar to open the Import items dialog box and then select an import
option.


The Find features needed for the import check box is selected by default to detect all features needed for importing a full
project. This operation may be time consuming and you can clear the check box to ignore the operation.
With the Find features needed for the import check box selected, if any required features for importing a full project are not
installed, a dialog box will pop up and display those missing features. You can click Install and restart in the dialog box to
install the features and restart your Studio or click Cancel to import only the items based on the features already activated in
your project.
To import items stored in a local directory, do the following:
1. Click the Select root directory option in the Import items dialog box.
2. Click Browse to browse down to the relevant project folder within the workspace directory. It should correspond to the
project name you picked up.


3. If you only want to import specific items, such as some Job Designs, you can select the relevant folder, such as Process, where all the Job Designs for the project are stored.
But if your project gathers various types of items (Job Designs, Metadata, Routines...), we recommend selecting the project folder so as to import all items in one go, and click OK.
4. If needed, select the overwrite existing items check box to overwrite existing items with those having the same names
to be imported. This will refresh the Items List.

Note: You cannot overwrite the existing items, if:


• the item is identical but the path is different, or
• the name is identical but the item is different.

5. From the Items List which displays all valid items that can be imported, select the items that you want to import by
selecting the corresponding check boxes.
6. Click Finish to validate the import.
To import items from an archive file (including source files and scripts), do the following:
1. Click the Select archive file option in the Import items dialog box.
2. Browse to the desired archive file and click Open.
3. If needed, select the overwrite existing items check box to overwrite existing items with those having the same names
to be imported. This will refresh the Items List.

Note: You cannot overwrite the existing items, if:


• the item is identical but the path is different, or
• the name is identical but the item is different.

4. From the Items List which displays all valid items that can be imported, select the items that you want to import by
selecting the corresponding check boxes.
5. Click Finish to validate the import.
To import items from Talend Exchange, do the following:


1. Click the Select archive file option in the Import items dialog box. Then, click Browse Talend Exchange to open the Select an item from Talend Exchange dialog box.
2. Select the desired category from the Category list, and select the desired version from the TOS_VERSION_FILTER list.
A progress bar appears to indicate that the extensions are being downloaded. At last, the extensions for the selected
category and version will be shown in the dialog box.

3. Select the extension that you want to import from the list.
Click Finish to close the dialog box.
4. If needed, select the overwrite existing items check box to overwrite existing items with those having the same names
to be imported. This will refresh the Items List.

Note: You cannot overwrite the existing items, if:


• the item is identical but the path is different, or
• the name is identical but the item is different.

5. From the Items List which displays all valid items that can be imported, select the items that you want to import by
selecting the corresponding check boxes.
6. Click Finish to validate the import.

Note: If there are several versions of the same items, they are all imported into the Project you are running, unless you
already have identical items.

If you import items from previous versions of Talend Studio, those items will be migrated to the current version when being
imported.


When the migration completes, a CSV report file <timestamp>_<project-name>_Migration_Report.csv that lists all migrated items is generated under the directory <Talend-Studio>\workspace\report\migrationReport_<timestamp>, where <timestamp> designates when the report is generated and <project-name> designates the name of your project, and you will see a dialog box with a link to the report file.
• Click Run analysis to run the project analysis tool to analyze your migrated project. For more information, see Analyzing
projects.
• Click Not now to close the dialog box.
The table below describes the information presented in the report file.

Column Description

Task name the name of the migration task

Task description the description of the migration task

Item type the type of the migrated item

Path to migrated item the path to the migrated item

Migration details the details of the migration

You can now use and share your Jobs and all related items in your collaborative work. For more information about how to
collaborate on a project, see Working collaboratively on project items on page 26.

Building Jobs

The Build Job feature allows you to deploy and execute a Job on any server, independent of Talend Studio.

About this task


By executing build scripts generated from the templates defined in Project Settings, the Build Job feature adds all of the files required to execute the Job to an archive, including the .bat and .sh launcher files along with any context-parameter files or other related files.

Note: Your Talend Studio provides a set of default build script templates. You can customize those templates to meet
your actual needs. For more information, see Customizing shell command templates on page 566 and Customizing
Maven build script templates on page 567.

By default, when a Job is built, all the required JARs are included in the .bat or .sh command. For a complex Job that involves many JARs, the number of characters in the batch command may exceed the command-length limit on certain operating systems. To avoid the batch command failing because of this limit, before building your Job, go to Window > Preferences > Talend > Import/Export and select the Add classpath jar in exported jobs check box to wrap the JARs in a classpath.jar file added to the built Job.

Warning: The above-mentioned option is incompatible with JobServer. If your built Job will be deployed and executed in
Talend Administration Center, make sure to clear the check box before building your Job.

Procedure
1. In the Repository tree view, right-click the Job you want to build, and select Build Job to open the Build Job dialog box.

Note: You can show/hide a tree view of all created Jobs in Talend Studio directly from the Build Job dialog box by clicking the corresponding show and hide buttons. The Jobs you previously selected in the Studio tree view display with selected check boxes. This makes it possible to modify the selection of items to be exported directly from the dialog box, without having to close it and go back to the Repository tree view in Talend Studio.


2. In the To archive file field, browse to the directory where you want to save your built Job.
3. From the Select the Job version area, select the version number of the Job you want to build if you have created more
than one version of the Job.
4. Select the Build Type from the list:
• Standalone Job
• OSGI Bundle For ESB
If the data service Job includes the tRESTClient or tESBConsumer component, and none of the Service Registry, Service
Locator or Service Activity Monitor is enabled in the component, the data service Job can be built as OSGI Bundle For
ESB or Standalone Job. With the Service Registry, Service Locator or Service Activity Monitor enabled, the data service
Job including the tRESTClient or tESBConsumer component can only be built as OSGI Bundle For ESB.
5. Select the Extract the zip file check box if you want the archive file to be automatically extracted in the target
directory.
6. In the Options area, select the check boxes corresponding to the file type(s) you want to add to the archive file. The
check boxes corresponding to the file types necessary for the execution of the Job are selected by default. You can clear
these check boxes depending on what you want to build.

Option Description

Binaries This option is selected by default to build your Job as an executable Job.

Shell launcher Select this check box to export the .bat and/or .sh files necessary to launch the built Job.
• All: exports both the .bat and .sh files.
• Unix: exports the .sh file.
• Windows: exports the .bat file.

Context scripts Select this check box to export ALL context parameters files and not just those you select in the
corresponding list.

Note: To export only one context, select the context that fits your needs from the Context scripts
list, including the .bat or .sh files holding the appropriate context parameters. Then you can,
if you wish, edit the .bat and .sh files to manually modify the context type.


Apply to children Select this check box if you want to apply the context selected from the list to all child Jobs.

Custom log4j level Select this check box to activate the Log4j output level list and select an output level for the built
Job.
For more information on Log4j settings, see Activating and configuring Log4j on page 583.

Items Select this check box to export the sources used by the Job during its execution including the
.item and .properties files, Java and Talend sources.

Note: If you select the Items or Source files check box, you can reuse the built Job in a Talend
Studio installed on another machine. These source files are only used in Talend Studio.

Execute tests Select this check box to execute the test case(s) of the Job, if any, when building the Job, and include the test report files in the surefire-reports folder of the build archive.
This check box is available only when the Binaries option is selected.
For more information on how to create test cases, see Testing Jobs using test cases on page 220.

Java sources Select this check box to export the .java file holding Java classes generated by the Job when
designing it.
This check box is available only when the Binaries option is selected.

7. Click the Override parameters' values button, if necessary.


In the window which opens you can update, add or remove context parameters and values of the Job context you
selected in the list.
8. Click Finish to validate your changes, complete the build operation and close the dialog box.

Results
A zipped file for the Jobs is created in the defined place.

Note: If the Job to be built calls a user routine that contains one or more extra Java classes in parallel with the public
class named the same as the user routine, the extra class or classes will not be included in the exported file. To export
such classes, you need to include them within the class with the routine name as inner classes. For more information
about user routines, see Managing user routines on page 516. For more information about classes and inner classes, see
relevant Java manuals.
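
As an illustration of the note above, the sketch below (all names are hypothetical) shows a user routine whose helper class is declared as an inner class, so that it is kept together with the routine when the Job is built; a separate top-level class placed next to the routine would not be included in the exported file.

package routines;

/*
 * Hypothetical user routine illustrating the note above: the helper class is
 * declared as an inner class of the routine so that it is exported together
 * with the routine when the Job is built.
 */
public class AddressCleansing {

    // Inner helper class: exported with the routine because it is nested inside it.
    public static class ParsedAddress {
        public final String street;
        public final String city;

        public ParsedAddress(String street, String city) {
            this.street = street;
            this.city = city;
        }
    }

    // Splits a "street, city" string into its parts. Can be called from a
    // component expression, for example AddressCleansing.parse(row1.address).
    public static ParsedAddress parse(String address) {
        if (address == null) {
            return new ParsedAddress(null, null);
        }
        String[] parts = address.split(",", 2);
        String street = parts[0].trim();
        String city = parts.length > 1 ? parts[1].trim() : null;
        return new ParsedAddress(street, city);
    }
}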

Building a Job as a standalone Job


In the case of a Plain Old Java Object export, if you want to reuse the Job in a Talend Studio installed on another machine, make sure you select the Items check box. These source files (.item and .properties) are only needed within Talend Studio.
If you want to include an Ant or Maven script for each built Job, select the Add build script check box, and then select the Ant or Maven option button.
Select a context from the list when offered. Once you click the Override parameters' values button below the Context scripts check box, the window that opens lists all of the parameters of the selected context. In this window, you can configure the selected context as needed.
All context parameter files are exported in addition to the one selected in the list.

Note:
After being exported, the context selection information is stored in the .bat or .sh file and the context settings are
stored in the context .properties file.

Building a Job as an OSGI Bundle For ESB

About this task

Note: You can build a Job as an OSGI Bundle for ESB only if the Job contains at least one ESB component.


Procedure
1. Click the Browse... button to specify the folder in which to build your Job.
2. In the Job Version area, select the version number of the Job you want to build if you have created more than one
version of the Job.
3. In the Build type area, select Talend Runtime (OSGI) to build your Job as an OSGI Bundle.
4. Click Finish to build it.
For a quick test of your Job, you can start a locally installed Talend Runtime, and copy the build artifact to the
<TalendRuntimePath>/container/deploy folder.
For more information on how to install a local Talend Runtime, see the installation guide.
For use cases of building a Job as an OSGI Bundle For ESB, see Data Service and Routing Examples on Talend Help Center
(https://help.talend.com).

Exporting items

About this task


You can export multiple items from the repository to a directory or an archive file. This way you can export metadata information such as a DB connection or Documentation along with your Job, for example. To do so:

Procedure
1. In the Repository tree view, select the items you want to export.
2. To select several items at a time, press the Ctrl key and select the relevant items.


Warning: If you want to export a database table metadata entry, make sure you select the whole DB connection and not only the relevant table, as selecting only the table will prevent the export process from completing correctly.

3. Right-click while maintaining the Ctrl key down and select Export items on the pop-up menu:


You can select additional items in the tree for export if required.
4. Click Browse to browse to where you want to store the exported items. Alternatively, define the archive file where to
compress the files for all selected items.

Note: If you have several versions of the same item, they will all be exported.

5. Select the Export Dependencies check box if you want to set and export dependencies along with the items you are exporting. By default, this check box is selected.
All of the user routines are selected by default. For further information about routines, see What are routines on page
515.
6. Click Finish to close the dialog box and export the items.

Changing context parameters in Jobs

About this task


As explained in Building Jobs on page 161, you can edit the context parameters:

Procedure
• If you want to change the context selection, simply edit the .bat/.sh file and change the following setting: --context=Prod to the relevant context.
• If you want to change individual parameters in the context selection, edit the .bat/.sh file and add the following settings according to your needs:


Operation Setting

To change value1 for parameter key1 --context_param key1=value1

To change value1 and value2 for respective parameters key1 and key2 --context_param key1=value1 --context_param key2=value2

To change a value containing space characters, such as in a file path --context_param key1="path to file"

Publishing to an artifact repository


Once created, you can publish your Job into an artifact repository. This artifact repository allows you to centralize, manage,
and register all items created and to be deployed on your Talend Runtime.
This feature is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more
information, see Installing features using the Feature Manager.

Before you begin


The connection to your artifact repository has been configured in the Studio preferences. For more information, see
Configuring repositories for publishing artifacts on page 169.

Procedure
1. In the Repository tree view, select the item you want to publish into the artifact repository.
2. Right-click it and select Publish in the menu.
The Publish wizard opens.


The settings displayed in the Artifact Information area are read only.
• If a custom group ID has been specified for the item in the Job view, the Group ID field is automatically filled in
with the custom ID.
Otherwise,
• If the item is a root node in the repository, the Group ID field is automatically filled in with the default group
ID set in the artifact repository preference settings.
• If the item is not a root node in the repository, by default the Group ID field is automatically filled in with the
folder structure. For example: getting_started.movies.
If needed, customize the group ID for the item in the Deployment tab of the Job view.
• The Artifact ID field is filled in with the name of the item to publish.
• The Version field is filled in with:
• the deployment version set in the Job view of the item or in the Project Settings dialog box, or
• the deployment version derived from the version of the item to publish if no deployment version has been set for the item.
• Depending on whether the snapshot option is enabled for the item in the Project Settings dialog box or the Job
view:
• If the Publish as Snapshot check box is selected, a snapshot version of the item will be published to the
Snapshot repository.
• Otherwise, a release version of the item will be published to the Release repository.
For more information, see Customizing deployment of a Job on page 171 and Managing deployment versions of Jobs
on page 570.
3. The <Item> Version field (where <Item> is Job in this case) is automatically filled with the highest version of the item if more than one version is available. Change it by selecting the version you want from the list.


4. From the Export Type list, select:


• Standalone Job to publish the Job as a standalone Job.
• OSGI Bundle For ESB to publish the Job as an OSGI Bundle.

Note: This option is available only when your Job contains tRESTClient, tRESTRequest or tESBConsumer components.

For more information on export types, see Importing/exporting items and building Jobs on page 157.
5. Click Next and select the export options based on your needs.
6. Click Finish to publish your item to the artifact repository.
A confirmation wizard appears when you choose to publish a Release Version Artifact. Click OK to confirm it.

Now your item is available as an artifact in the repository and can later be retrieved for deployment and execution on Talend Runtime. The features of the item, including the dependencies, are also published.

Configuring repositories for publishing artifacts

To be able to publish your Jobs into an artifact repository, you need to set your artifact repository connection preferences in
Talend Studio.

Before you begin


You have installed and launched Talend Artifact Repository. For more information, see Installing and configuring Talend
Artifact Repository.

Procedure
1. Click Window > Preferences from the menu bar to open the Preferences dialog box.


2. In the tree view, expand the Talend > Artifact Repository nodes and select Repository Settings.
• If your Talend Studio is connected with the Talend Administration Center, all the artifact repository settings are
automatically retrieved from the Talend Administration Center. You can choose to use the retrieved settings to
publish your Jobs, or configure your own artifact repositories.

• If your Studio is working on a local connection, all the fields are pre-filled with the locally-stored default settings.
You can modify the artifact repository settings according to your needs.


3. When connected with the Talend Administration Center, by default, the Studio checks the latest artifact repository
settings each time it interacts with the artifact server. To disable this, if the artifact repository settings are not subject
to frequent changes or if you have a poor internet connection, for example, clear the Always check latest settings check
box.
4. When connected with the Talend Administration Center, if you want to configure your own artifact repositories, select
the Use customized settings option.
5. Modify the artifact repository settings according to your needs.
Parameter Description

Type Select NEXUS 3, NEXUS, or Artifactory. NEXUS 3 is delivered with Talend Administration Center as the default type of artifact repository.

Url Type in the location URL of your repository.

Username Type in the username to connect to your repository.

Password Type in the password to connect to your repository.

Default Release Repo Type in the name of the repository into which to publish
the Release version of your artifact items by default.

Default Snapshot Repo Type in the name of the repository into which to publish
the Snapshot version of your artifact items by default.

Default Group ID Type in the name of the group in which to publish your
artifact items by default.

6. Click Apply to apply your changes or click Apply and Close to apply your changes and close the wizard.

Results
Now, you will be able to publish your Jobs into your Artifact repository.

Customizing deployment of a Job

About this task


Through the Job view, you can customize the deployment information, including the group ID and the deployment version, of
your Job before publishing it to the artifact repository.

Procedure
1. Open the Job you want to customize the deployment information of.
2. Click the Job tab and select the Deployment vertical tab.
3. Customize the deployment information for the Job as needed.
• To customize the group ID of the Job, select the Use Custom GroupId check box and enter the name of the group in
which you want to publish the Job.
When published, if the Job is a root node, its default group ID is the one set in the artifact repository preference
settings; if the Job is not a root node, the folder structure is used as the default group ID.
• To customize the deployment version of the Job, select the Use Custom Version check box and enter a new version.
By default, the deployment version is the version of the Job, or the deployment version set for the Job in project
settings.
• To publish a Snapshot version of the Job, select the Publish as Snapshot check box. To publish a Release version,
clear this check box.


Managing repository items


Talend Studio enables you to edit the items centralized in the repository and to update the Jobs that use these items
accordingly. It enables you as well to identify all the Jobs that use a certain repository item and analyze the data flow in
each of the listed Jobs.

Handling updates in repository items

You can update the metadata, context or Joblet parameters that are centralized in the Repository tree view at any time, in order to update the database connection or the context group details, for example.
When you modify any of the parameters of an entry in the Repository tree view, all Jobs using this repository entry will be
impacted by the modification. This is why the system will prompt you to propagate these modifications to all the Jobs that
use the repository entry.
The Update Detection dialog box is displayed to let you update the impacted Jobs when:
• you modify a centralized repository entry that is used in any Jobs and click Yes in the Modification dialog box that is displayed automatically.
• you select Detect Dependencies from the right-click menu of a modified repository entry that is used in any Jobs, or click the icon on the toolbar after modifying a centralized repository entry that is used in any Jobs.
For more information on updating impacted Jobs, see Updating impacted Jobs automatically on page 173 and Updating
impacted Jobs manually on page 174.
Talend Studio also provides advanced analyzing capabilities, namely impact analysis and data lineage, on repository items.
For more information, see Analyzing repository items on page 175.
The following sections explain how to modify the parameters of a repository entry and how to propagate the modifications
to all or some of the Jobs that use the entry in question.
Modifying a repository item

About this task


To update the parameters of a repository item, complete the following:

Procedure
1. Expand the Metadata, Contexts, or Joblet Designs node in the Repository tree view and browse to the relevant entry that you need to update.
2. Right-click this entry and select the corresponding edit option in the contextual menu.
A respective wizard displays where you can edit each of the definition steps for the entry parameters.
When updating the entry parameters, you need to propagate the changes throughout numerous Jobs or all your Jobs
that use this entry.


A prompt message pops up automatically at the end of your update/modification process when you click the Finish
button in the wizard.

3. Click Yes to close the message and implement the changes throughout all Jobs impacted by these changes. For more information about this first way of propagating all your changes, see Updating impacted Jobs automatically on page 173.
Click No if you want to close the message without propagating the changes. This allows you to propagate your changes to the impacted Jobs manually, on a one-by-one basis. For more information on this second way of propagating changes, see Updating impacted Jobs manually on page 174.
Updating impacted Jobs automatically

About this task


After you update the parameters of any item already centralized in the Repository tree view and used in different Jobs, a
message will prompt you to propagate the modifications you did to all Jobs that use these parameters.
To update impacted Jobs, complete the following:

Procedure
1. In the Modification dialog box, click Yes to let the system scan your Repository tree view for the Jobs that get impacted
by the changes you just made.
This aims to automatically propagate the update throughout all your Jobs (open or not) in one click.
The Update Detection dialog box displays to list all Jobs impacted by the parameters that are modified.


2. Select the check boxes corresponding to the Jobs you want to update and clear those corresponding to the Jobs you do
not want to update.

You can update them any time later through the Detect Dependencies menu or the icon on the toolbar. For more
information, see Updating impacted Jobs manually on page 174.
3. Click OK to close the dialog box and update all selected Jobs.
Updating impacted Jobs manually

About this task


Before propagating changes in the parameters of an item centralized in the tree view throughout the Jobs using this entry,
you might want to view all Jobs that are impacted by the changes. To do that, complete the following:

Procedure
1. In the Repository tree view, expand the node holding the entry whose usage in Jobs you want to check.
2. Right-click the entry and select Detect Dependencies.
A progress bar indicates the process of checking for all Jobs that use the modified metadata or context parameter. Then
a dialog box displays to list all Jobs that use the modified item.

Note: You can also display this dialog box by clicking the icon on the toolbar.

3. Select the check boxes corresponding to the Jobs you want to update with the modified metadata or context parameter
and clear those corresponding to the Jobs you do not want to update.
4. Click OK to validate and close the dialog box.

Results

Note: The Jobs that you choose not to update will be switched back to Built-in, as the link to the Repository cannot be maintained. They will thus keep their settings as they were before the change.


Analyzing repository items

Talend Studio provides you with advanced capabilities for analyzing any given item, such as a Job, in the Repository tree
view. This implies two forms of navigation: moving forward to discover descendant items up to the target component
(Impact Analysis) and moving backward to discover the ancestor items starting with the source component (Data Lineage).
The results of the analysis will determine where data comes from, how it is transformed, and where it is going or vice versa.
This feature is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more
information, see Installing features using the Feature Manager.

Warning: All items on which you want to execute impact analysis or data lineage must be centralized in the Repository
tree view under any of the following nodes: Joblet Designs, Contexts, SQL Templates, Referenced project or Metadata.

Impact analysis
Impact analysis helps to identify all the Jobs that use any of the items centralized in the Repository tree view and that will
be impacted by a change in the parameters of a repository item.
Impact analysis also analyzes the data flow in each of the listed Jobs to show all the components and stages the data flow
passes through and the transformation done on data from the source component up to the target component.

Warning: All items on which you want to execute impact analysis or data lineage must be centralized in the Repository
tree view under any of the following nodes: Joblet Designs, Contexts, SQL Templates, Referenced project or Metadata.

The example below shows an impact analysis done on a database connection item stored under the Metadata node in the
Repository tree view.
To analyze data flow in each of the listed Jobs from the source component up to the target component, complete the
following:

Procedure
1. In the Repository tree view, expand Metadata and browse to the metadata entry you want to analyze, employees
under the DB connection mysql in this example.
2. Right-click the entry you want to analyze and select Impact Analysis.
A progress bar indicates the process of checking for all Jobs that use the modified metadata parameter. The Impact
Analysis view appears in the Studio to list all Jobs that use the selected metadata entry. The names of the selected
database connection and table schema are displayed in the corresponding fields.

Note: You can also open this view if you select Window > Show View > Talend > Impact Analysis.

3. Right-click any of the listed Jobs and select:

Select... To...

Open Job open the corresponding Job in the Studio workspace.

Expand/Collapse expand/collapse all the items included in the selected Job.

Thus, you have an outline of the Jobs that use the selected metadata entry.


4. From the Column list, select the column name for which you want to analyze the data flow from the data source
(input component), through various components and stages, to the data destination (output component), Name in this
example.

Note: The Last version check box is selected by default. This option allows you to select the last version of your Job
instead of displaying all versions of your Job in the analysis results.

5. Click Analysis....
A bar displays to indicate the progress of the analysis operation and the analysis results display in the view.

Results

Note: Alternatively, you can directly right-click a particular column in the Repository tree view and select Impact Analysis
from the contextual menu to display the analysis results regarding that column in the Impact Analysis view.

The impact analysis results trace the components and transformations the data in the source column Name passes through
before being written in the output column Name.


Data lineage
Data lineage shows the data flow from the data destination (output component), through various components and stages,
to the data source (input component). The data lineage results trace the life cycle of the data flow between different
components, including the operations that are performed upon the data.

Warning: All items on which you want to execute impact analysis or data lineage must be centralized in the Repository
tree view under any of the following nodes: Joblet Designs, Contexts, SQL Templates, Referenced project or Metadata.

The example below shows the data lineage made on a database connection item stored under the Metadata node in the
Repository tree view.
To launch a data lineage on a metadata item, complete the following:

Procedure
1. In the Repository tree view, expand Metadata > Db Connection and then expand the database connection you want to
analyze, mysql in this example.
2. Right-click the centralized table schema of which you want to analyze the life cycle of the data flow, employees in
this example.
The Impact Analysis view displays the Jobs that use the selected table schema. The names of the selected database
connection and table schema are displayed in the corresponding fields.

3. From the Column list, select the column name for which you want to analyze the data flow from the data destination
(output component), through various components and stages, to the data source (input component). The column to be
analyzed in this example is called Name.
You can skip this step by right-clicking the column Name in the Repository tree view and selecting Impact Analysis from
the contextual menu.
4. Click Data Lineage.
A bar appears to indicate the progress of the analysis operation and the analysis results are displayed in the view.
5. Right-click a listed Job and select Open Job from the contextual menu.
The Job opens in the design workspace.


The data lineage results trace backward the components and transformations the data in the output column Name
passes through before being written in this column.
Exporting the results of impact analysis/data lineage to HTML
Talend Studio allows you to produce detailed documentation in HTML of the results of the impact analysis or data lineage
done on the selected repository element. This documentation offers information related to the Jobs that use this repository
element including: project and author detail, project description and a preview of the graphical results of the analysis done
on the impacted Jobs.
To generate an HTML document of an impact analysis or data lineage with customization, complete the following:

Procedure
1. After you analyze a given repository item as outlined in Impact analysis on page 175 or Data lineage on page 177
and in the Impact Analysis view, click the Export to HTML button.


The Generate Documentation dialog box opens.

2. Enter the path to where you want to store the generated documentation archive or browse to the desired location and
then give a name for this HTML archive.
3. Select the Custom CSS template to export check box to activate the CSS File field if you need to use your own CSS file to customize the exported HTML files. The destination folder for the HTML output will contain the HTML file, a CSS file, an XML file and a pictures folder.
4. Click Finish to validate the operation and close the dialog box.
An archive file that contains all required files along with the HTML output file is created in the specified path.
5. Double-click the HTML file in the generated archive to open it in your favorite browser.
The figure below illustrates an example of a generated HTML file.

Note: You can also set CSS customization as a preference for exporting HTML. To do this, see Documentation
preferences (Talend > Documentation) on page 606.


Results
The archive file gathers all generated documents including the HTML that gives a description of the project that holds the
analyzed Jobs in addition to a preview of the analysis graphical results.
Exporting the results of impact analysis/data lineage to XML
Talend Studio also allows you to export the results of the impact analysis or data lineage done on the selected repository
element to an XML document. This tree-structured documentation can be processed by automated analytical applications for
Job analysis and reporting purposes.
To generate an XML document of the results of the impact analysis or data lineage on the selected repository item, complete the following:

Procedure
1. After you analyze a given repository item as outlined in Impact analysis on page 175 or Data lineage on page 177
and in the Impact Analysis view, click the Export to XML button.


The Generate XML dialog box appears.

2. Enter the path to where you want to store the generated XML document or browse to the desired location and then give
a name for this XML file.
3. Select the Overwrite existing files without warning check box to suppress the warning message if the specified filename
already exists.
4. Click Finish to validate the operation and close the dialog box.
An XML file that contains the impact analysis or data lineage information is created in the specified path.
The figure below illustrates an example of a generated XML file, opened in a text editor.


Searching a Job in the repository

About this task


If you want to open a specific Job in the Repository tree view of the current Integration perspective of Talend Studio and you cannot find it for one reason or another, you can simply use the Find a Job icon on the quick access toolbar.
To find a Job in the Repository tree view, complete the following:

Procedure
1. On the Talend Studio toolbar, click the Find a Job icon to open the Find a Job dialog box, which automatically lists all the Jobs you created in the current Studio.


2. Enter the Job name or part of the Job name in the upper field.
When you start typing in the field, the Job list is updated automatically to display only the Jobs whose names match the letters you typed.


3. Select the desired Job from the list and click Link Repository to automatically browse to the selected Job in the
Repository tree view.
4. If needed, click Cancel to close the dialog box and then right-click the selected Job in the Repository tree view to
perform any of the available operations in the contextual menu.
Otherwise, click OK to close the dialog box and open the selected Job on the design workspace.

Managing Job versions


When you create a Job in Talend Studio, by default its version is 0.1, where 0 stands for the major version and 1 for the minor
version.
The following sections describe how to manage the version of a Job.
You can also manage the version of several Jobs and/or other project items at the same time from the Project Settings dialog
box. For more information, see Upgrading the version of project items on page 578 and Removing old versions of project
items on page 579.

Updating the version of an inactive Job

Procedure
1. Close your Job if it is open on the design workspace. Otherwise, its properties will be read-only and thus you cannot
modify them.
2. In the Repository tree view, right-click your Job and:
• select Edit properties from the drop-down list to open the Edit properties dialog box, and click the M button next
to the Version field to increment the major version or the m button to increment the minor version.


• select Open another version from the drop-down list, then in the dialog box select the Create new version and
open check box and click the M button to increment the major version or the m button to increment the minor
version.
3. Click Finish to validate the modification.

Results
You have created a new version for the Job.

Note: By default, when you open a Job, you open its last version. Any previous version of the Job is read-only and thus
cannot be modified.

Updating the version of an active Job

You can also save your currently active Job and increment its version at the same time.

Procedure
1. Click File > Save As....
2. In the Save As dialog box, set a new version and click Finish.

Note: If you give your Job a new name, this option does not overwrite your current Job, but it saves your Job as a new
one with the same version of the current Job or with a new version if you specify one.

Working on different versions of a Job

You can access a list of the different versions of a Job and perform certain operations.

Procedure
1. In the Repository tree view, select the Job you want to consult the versions of.
2. On the configuration tabs panel, click the Job tab and then click Version to display the version list of the selected Job.
3. Right-click the Job version you want to work on.
4. Select an option:

Select To...

Edit Job open the last version of the Job.

Note: This option is available only when you select the last version of the Job.

Read job consult the Job in read-only mode.

Open Job Hierarchy consult the hierarchy of the Job.

View documentation generate and open the documentation of the Job.

Edit properties edit Job properties.


Note: The Job should not be open on the design workspace, otherwise it will be in read-only
mode.

Note: This option is available only when you select the last version of the Job.

Run job execute the Job.

Generate Doc As HTML generate detailed documentation about the Job.

Removing a version of a Job

If you are sure that a version of a Job is no longer useful, you can remove it by deleting its resource files.


Warning:
• A Job removed this way will not go to the Recycle bin and therefore cannot be restored.
• Mis-deletion of a resource file may cause damage to the integrity of the corresponding Job and thus cause it to stop
functioning.

Note that you can also remove old versions of each item in the Repository tree view through the General > Version
Management > Cleanup view in the Project Settings dialog box. For more information, see Removing old versions of project
items on page 579.

About this task


To remove a version of a Job by deleting its resource files:

Procedure
1. If you want to remove the latest version of a Job and if it is currently open, close it.
2. Select Window > Show view... from the menu, then in the Show View dialog box, select General > Navigator and click OK to open the Navigator view in the configuration tabs area.
Skip this step if the Navigator view is already displayed.
3. In the Navigator view, expand to the node named after your project.
This node is in all capitals, MY_PROJECT for example.
4. Go to the process folder to show the resource files of your Job.
If your Job is in a sub folder, go to that sub folder to show the corresponding resource files.
5. Select the three resource files corresponding to your Job name and the version you want to delete, right-click the
selection and click Delete on the context menu, and then click OK in the Delete Resources dialog box.
For example, to delete the 0.1 version of a Job named my_job, delete these files:
• my_job_0.1.item
• my_job_0.1.properties
• my_job_0.1.screenshot
Note that there is no .screenshot file if you are working on a Git managed project and your Job was created when
the Do not create or update screenshots for Jobs, Joblets, Routes and Routelets check box on the General view in the
Project Settings dialog box is selected. For more information, see Configuring screenshot generation on page 573.

Documenting a Job
Talend Studio enables you to generate documentation that gives general information about your projects, Jobs or Joblets. You can automate the generation of such documentation and edit any of the generated documents.

Generating HTML documentation

Talend Studio allows you to generate detailed documentation in HTML of the Job(s) you select in the Repository tree view of
your Studio in the Integration perspective. This auto-documentation offers the following:
• The properties of the project where the selected Jobs have been created,
• The properties and settings of the selected Jobs along with preview pictures of each of the Jobs,
• The list of all the components used in each of the selected Jobs and component parameters.
To generate an HTML document for a Job, complete the following:

Procedure
1. In the Repository tree view, right-click a Job entry, or select several items to produce documentation for each of them.
2. Select Generate Doc as HTML on the contextual menu.


3. Browse to the location where the generated documentation archive should be stored.
4. In the same field, type in a name for the archive gathering all generated documents.
5. Select the Use CSS file as a template to export check box to activate the CSS File field if you need to use a CSS file.
6. In the CSS File field, browse to, or enter the path to the CSS file to be used.
7. Click Finish to validate the generation operation.

Results
The archive file is generated in the defined path. It contains all required files along with the HTML output file. You can open the HTML file in your favorite browser.

Autogenerating documentation

In Talend Studio, you can set permanent parameters for the auto-documentation to be generated every time a Job is created,
saved or updated. To set up the automatic generation of a Job/Joblet documentation, proceed as follows:
1. Click Window > Preferences > Talend > Documentation.
2. Select the check box Automatic update of corresponding documentation of Job/joblet.
3. Click OK if no further customizing of the generated documentation is required.
Now every time a Job is created, saved or updated, the related documentation is generated.
The generated documents are shown directly in the Documentation folder of the Repository tree view.


To open automatically generated Job/Joblet documentation, proceed as follows:


1. Click the Documentation node in the Repository tree view.
2. Then look into the Generated folder where all Jobs and Joblets auto-documentation is stored.
3. Double-click the relevant Job or Joblet label to open the corresponding HTML file as a new tab in the design workspace.

This documentation gathers all information related to the Job or Joblet. You can then export the documentation in an archive file as HTML and PDF:
1. In the Repository tree view, right-click the relevant documentation you want to export.


2. Select Export documentation to open the export wizard.


3. Browse to the destination location for the archive to create.
4. Click Finish to export.
The archive file contains all the files needed for the HTML documentation to be viewed in any web browser.
In addition, you can customize the autogenerated documentation using your own logo and company name with different CSS (Cascading Style Sheets) styles. The destination folder for HTML will contain the HTML file, a CSS file, an XML file and a pictures folder. To do so:
1. Go to Window > Preferences > Talend > Documentation.

2. In the User Doc Logo field, browse to the image file of your company logo in order to use it on all auto-generated
documentation.
3. In the Company Name field, type in your company name.
4. Select the Use CSS file as a template when export to HTML check box to activate the CSS File field if you need to use a
CSS file.
5. In the CSS File field, browse to, or enter the path to the CSS file to be used.
6. Click Apply and then OK.

Updating the documentation on the spot

You can choose to manually update your documentation on the spot.


Procedure
• To update a single document, right-click the relevant Job or Joblet generated documentation entry and select Update
documentation.
• The autogenerated documentation is saved every time you close your Job or Joblet, but you can:
• Update all generated documents in one go: Right-click the Generated folder and select Update all projects
documentation.
• Update only all Jobs' generated documents in one go: Right-click the Jobs folder and select Update all jobs'
documentation.
• Update all Joblets' generated documents in one go: Right-click the Joblets folder and select Update all joblet
documentation.

Working with referenced projects


Project references are a property you can set for a project in Talend Studio so that the project can be referenced, and its items reused, by another project.

Accessing items in the referenced projects

After the connection between the main and referenced projects has been established, you can see and share all items from the referenced project upon opening the main project in Talend Studio.
For more information, see Defining project references on page 44.
You can show them either as part of the Job Design folder or as a separate folder, Referenced Project, that has its own
hierarchical tree.


Before you begin


You need read-write access to the projects only for migration purposes, that is, when migrating to a new version or applying a patch.

Procedure
1. Launch your Studio using a remote connection and the URL of Talend Administration Center.
2. Open the main project.
All items in the reference project show by default in the repository tree view as a separate reference project preceded
by the @ sign and are read-only.
3. Click the icon on the toolbar to merge both projects in the tree view.
Names of the items in the referenced projects appear dimmed (unavailable) and are followed by the name of the referenced project they are part of, to distinguish them from items in the main project.

Use the icon to switch between the two display modes.

Using items from referenced projects

When one project references another, all items in the referenced project are available for reuse from the main project.
For example, you might want to create a library of reusable subroutines in a project. Or you might want to create similar
project settings to reuse in other projects.

Before you begin


• You have read-write authorization for the main and referenced projects. For further information, see the Talend Administration Center User Guide.
• You have opened the main project in the Studio.

Procedure
1. From the main project, open the Job in which you want to use an item from the referenced project.
Below is an example of a Project2 tree view in Talend Studio.


As you can see in the above figure, all resources in Project1 and Project3 are accessible directly from Project2. The
resources in the two referenced projects are in read-only mode: they are available for reuse but cannot be modified.
2. Use any element from the referenced project in the open Job.
For example, double-click a component in the Job and call a user routine from the referenced project, or drop context variables created in the referenced project into the open Job in the main project.
3. Run your Job with the new routine or context variables.
Items and resources in the referenced projects are always up to date, since the refresh option in Talend Studio refreshes every item in the Studio. However, the refresh operation can take a long time if you have many references and many items in each of your projects. It is therefore preferable not to use very large referenced projects, especially if you use the database repository.

Comparing Jobs
Talend Studio provides a Compare Job option that enables you to compare Jobs on the same or different branches in order to
list the differences in the items used in the two Jobs. Using this option, you can:
• compare different versions of the same Job,
• compare the same Job in different releases of the Studio, in order to see if any modifications were done on the Job in
the previous/current release, for example,
• compare Jobs that have been designed using the same template, but different parameters, to check the differences
among these Jobs.
Differences between the compared Jobs are displayed in the Compare Result view. The result details are grouped under three categories: Jobsettings, Components and Connectors.


The table below gives the description of the comparison results under each of the above categories.

Category Description

Jobsettings lists all differences related to the settings of the compared Job.

Components lists the differences in the components and component parameters used in the two Jobs. A minus sign
appended on top of a component listed in the Compare Result view indicates that this component is
missing in the design of one of the two compared Jobs. A plus sign appended on top of a component
listed in the view indicates that this component is added in one of the two compared Jobs. All differences
in the component parameters will be listed in tables that display under the corresponding component.

Connectors lists differences in all the links used to connect components in the two Jobs.

The procedure to compare two Jobs or two different versions of the same Job is the same.
To compare two different versions of the same Job, complete the following:

Procedure
1. In the Repository tree view, right-click the Job version you want to compare with another version of the same Job and
then select Compare Job from the contextual menu.

The Compare Result view displays in the Studio workspace. The selected Job name and version show, by default, in the
corresponding fields.

2. If the other version of the Job with which you want to compare the current version is on another branch, select the
branch from the Another Branch list.
3. Click the three-dot button next to the Another job field to open the Select a Job/Joblet dialog box.


4. In the Name Filter field, type in the name of the Job or Joblet you want to use for this comparison. The dialog box returns the Job or Joblet you are searching for.
5. Select the returned Job or Joblet from the list in the dialog box and click OK.
6. From the Current version and Another version lists select the Job versions you want to compare.
7. Click the button to launch the compare operation.
The two indicated versions of the Job display in the design workspace.

The differences between the two versions are listed in the Compare Result view.


Results
In this example, differences between the two Job versions are related to components and links (connectors). The figure
below shows the differences in the components used in the two versions.


For example, there is one difference in the output schemas used in the tMap and tFileOutputXML components: the length
of the Revenue column is 15 in the second version of the Job while the length is 11 in the first version of the same Job.
The minus sign appended on top of tMysqlOutput indicates that this component is missing in the design of one of the two
compared Jobs. The plus sign appended on top of tOracleOutput indicates that this component is added in one of the two
compared Jobs.

Note: If you click any of the components listed in the Compare Result view, the component will be automatically
selected, and thus identified, in the open Job in the design workspace.

The figure below shows the differences in the links used to link the components in the two versions of the same Job.


In this example, there is one difference related to the reject link used in the two versions: the target of this link in the first
version is a tMysqlOutput component, while it is a tOracleOutput component in the second version.

Note: You can export the Job compare results to an HTML file by clicking Export to html. Then browse to the directory you want to save the file in and enter a file name. You have the option of using a default CSS template or a customized one. The destination folder will contain the HTML file, a CSS file, an XML file and a pictures folder. For a related topic, see Exporting the results of impact analysis/data lineage to HTML on page 178.

Running Jobs
You can execute a Job in several ways. This mainly depends on the purpose of your Job execution and on your user level.
This section describes:
• Running a Job in normal mode on page 197
• Running a Job in Java Debug mode on page 198
• Running a Job in Traces Debug mode on page 199
• Setting advanced execution settings on page 201
• Running a Job remotely on page 205
• Customizing log4j output level at runtime on page 203
• Showing JVM resource usage during Job execution on page 204
• Running a Job remotely with SSL enabled on page 205
• Recovering Job execution in case of failure on page 207

Running a Job in normal mode

Note:
Make sure you saved your Job before running it in order for all properties to be taken into account.

To run your Job in normal mode, do the following:


1. Click the Run view to access it.
2. Click the Basic Run tab to access the normal execution mode.
3. In the Context area to the right of the view, select from the list the proper context in which the Job is to be executed. You can also check the variable values.
If you have not defined any particular execution context, the context parameter table is empty and the context is the default
one. Related topic: Using contexts and variables on page 105.
4. Click Run to start the execution.
5. On the same view, the console displays the progress of the execution. The log includes any error message as well as start and end messages.
6. To define the lines of the execution progress to be displayed in the console, select the Line limit check box and type in a value in the field.


7. Select the Wrap check box to wrap the text to fit the console width. This check box is selected by default. When it is cleared, a horizontal scrollbar appears, allowing you to view the end of the lines.

Before running a Job again, you might want to remove the execution statistics and traces from the design workspace. To do so, click the Clear button.
If, for any reason, you want to stop the Job in progress, simply click the Kill button. You will need to click the Run button again to restart the Job.
Talend Studio offers various informative features displayed during execution, such as statistics and traces, facilitating the Job
monitoring and debugging work. For more information, see the following sections.

Running a Job in Java Debug mode

About this task


To follow step by step the execution of a Job to identify possible bugs, you can run it in Debug mode.
Before running your Job in Debug mode, you can add breakpoints to the major steps of your Job flow. This will allow you to
get the Job to automatically stop at each breakpoint. This way, components and their respective variables can be verified
individually and debugged if required.
To add breakpoints to a component, right-click it on the design workspace, and select Add breakpoint on the contextual
menu.
A pause icon displays next to the component where the break is added.

To access the Debug mode:

Procedure
1. Click the Run view to access it.
2. Click the Debug Run tab to access the debug execution modes.
To switch to debug mode, click the Java Debug button on the Debug Run tab of the Run panel. Talend Studio's main
window gets reorganized for debugging.

Results
You can then run the Job step by step and, if you have added breakpoints, check each breakpoint component for the
expected behavior and variable values.


Running a Job in Traces Debug mode

The traces feature allows you to monitor data processing when running a Job in the Integration perspective of Talend
Studio.
It provides a row by row view of the component behavior and displays the dynamic result next to the Row link on the design
workspace.

This feature allows you to monitor all the components of a Job, without switching to the debug mode, hence without
requiring advanced Java knowledge.
The Traces function displays the content of processed rows in a table.

Note:
An exception is made for external components, which cannot offer this feature if their design does not include it.

You can activate or deactivate Traces or decide which processed columns to display in the traces table that appears on the design workspace when launching the current Job. You can choose to monitor the whole data processing, or to monitor the data processing row by row or at a certain breakpoint. For more information about the row by row execution of the
Traces mode, see Row by row monitoring on page 200. For more information about the breakpoint usage with the Traces
mode, see Breakpoint monitoring on page 201.
To activate the Traces mode in a Job:

1. Click the Run view.


2. Click the Debug Run tab to access the debug and traces execution modes.
3. Click the down arrow of the Java Debug button and select the Traces Debug option. An icon displays under every flow of
your Job to indicate that process monitoring is activated.
4. Click the Traces Debug button to execute the Job in Traces mode.
To deactivate the Traces on one of the flows in your Job:

1. Right-click the Traces icon under the relevant flow.


2. Select Disable Traces from the list. A red minus sign replaces the green plus sign on the icon to indicate that the Traces
mode has been deactivated for this flow.
To choose which columns of the processed data to display in the traces table, do the following:
1. Right-click the Traces icon for the relevant flow, then select Setup Traces from the list. The Setup Traces dialog box
appears.


2. In the dialog box, clear the check boxes corresponding to the columns you do not want to display in the Traces table.
3. Click OK to close the dialog box.

Monitoring data processing starts when you execute the Job and stops at the end of the execution.
To remove the displayed monitoring information, click the Clear button in the Debug Run tab.
Row by row monitoring
Talend Studio enables you to monitor your data process row by row.

Procedure
• To manually monitor the data processing of your Job row by row, simply click the Next Row button and the processed
rows will display below its corresponding link on the design workspace.
• To go back to previous rows, click the Previous Row button, within a limit of five rows back.
• If, for any reason, you want to stop the Job in progress, simply click the Kill button; if you want to execute the Job to the
end, click the Basic Run button.


• To remove the displayed monitoring information from the design workspace, click the Clear button in the Debug Run
tab.
Breakpoint monitoring
If you want to monitor your data processing at certain defined breakpoints, you can execute your Job in Traces Debug mode
and the Job will automatically be executed until the next breakpoint.
Before monitoring your data processing at certain breakpoints, you need to add breakpoints to the relevant Job flow(s).
This will allow you to automatically stop the Job at each defined breakpoint. This way, components and their respective
variables can be verified individually and debugged if required.
To add a breakpoint to a link:
1. Right-click it on the design workspace, and select Show Breakpoint Setup on the popup menu.
2. On the Breakpoint view, select the Activate conditional breakpoint check box and set the Conditions in the table.
A pause icon displays below the link on which the break is added when you access the Traces mode.

Once the breakpoints are defined, switch to the Traces mode. To do so:
1. Click the Run view, and the Debug Run tab.
2. Click the down arrow of the Java Debug button and select the Traces Debug option.
3. Click the Traces Debug button to execute the Job in Traces mode. The data will be processed until the first defined breakpoint.
4. Click the Next Breakpoint button to continue the data process until the next breakpoint.
If, for any reason, you want to stop the Job in progress, simply click the Kill button; if you want to execute the Job to the
end, click the Basic Run button.
To remove the displayed monitoring information from the design workspace, click the Clear button in the Debug Run
tab.

Setting advanced execution settings

In the Advanced settings tab of the Run view, several advanced execution settings are available to make the execution of the
Jobs handier.
Displaying Statistics
The Statistics feature displays each component performance rate, under the flow links on the design workspace.


It shows the number of rows processed and the processing time in row per second, allowing you to spot straight away any
bottleneck in the data processing flow.
For trigger links like OnComponentOK, OnComponentError, OnSubjobOK, OnSubjobError and If, the Statistics option displays
the state of this trigger during the execution time of your Job: Ok or Error and True or False.

Note: An exception is made for external components, which cannot offer this feature if their design does not include it.

Procedure
• In the Run view, click the Advanced settings tab and select the Statistics check box to activate the Stats feature and
clear the box to disable it.
The calculation only starts when the Job execution is launched, and stops at the end of it.

Note: The statistics thread slows down Job execution as the Job must send these stats data to the design workspace
in order to be displayed.

• Click the Clear button from the Basic or Debug Run views to remove the calculated stats displayed.
• Select the Clear before Run check box to reset the Stats feature before each execution.
Displaying the execution time and other options

Procedure
• To display the total execution time, select in the Advanced settings tab of the Run view the Exec time check box before
running the Job.
This way you can test your Job before going to production.
• To clear the design workspace before each Job execution, select the check box Clear before Run.
• To save your Job before the execution starts, select the relevant option check box.
Displaying special characters in the console

About this task


Talend Studio can display special characters in the console. To enable the display of Chinese, Japanese or Korean characters,
for example, proceed as follows before executing the Job:


Procedure
1. Select the Advanced settings tab.
2. In the JVM settings area of the tab view, select the Use specific JVM arguments check box to activate the Argument
table.
3. Next to the Argument table, click the New... button to pop up the Set the VM argument dialog box.
4. In the dialog box, type in -Dfile.encoding=UTF-8.
5. Click OK to close the dialog box.
Specifying the limits of VM memory for a Job
In Talend Studio, you can define the parameters of your current JVM before executing your Job according to your needs.
The default parameters -Xms256M and -Xmx1024M correspond respectively to the minimum and maximum memory sizes reserved for your Job executions. Edit these parameters to meet your specific needs.
To specify these parameters globally in Talend Studio, see Debug and Job execution preferences (Talend > Run/Debug) on
page 611.

Procedure
1. In the Run view, in the Advanced settings tab, select the Use specific JVM arguments check box.
2. Click the New button and then, in the Set the VM Argument dialog box that opens, enter the argument to use.
For example, to successfully execute a Job that handles an Excel file containing a million records, you may want to
specify -Xmx8192M to increase the maximum VM memory size to 8 GB.
3. Click OK to add the argument.

Customizing log4j output level at runtime

About this task


When activated in components, the Apache logging utility log4j outputs component-related logging information at runtime.
By default, all logging messages of or higher than the level defined in the log4j configuration will be output to the defined
target.
You can change the logging output level for an execution of your Job. To do so, take the following steps:


Procedure
1. In the Run view, click the Advanced settings tab.
2. Select the log4jLevel check box, and select the desired output level from the drop-down list.
This check box is displayed only when log4j is activated in components.
For more information on the logging output levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.
3. Run your Job.

Results
All the logging messages of and higher than the level you set are output to the defined target.
For information on how to activate log4j in components and how to configure the logging behaviors globally, see Activating
and configuring Log4j on page 583.
For more information regarding the components with which you can use the log4j feature, see List of components that support
the Log4j feature on Talend Help Center (https://help.talend.com).
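As a reminder of how the output level filters messages, the following minimal log4j 1.2 sketch (plain Java, not Talend-generated code; the logger name and messages are made up for illustration, and the log4j 1.2 library is assumed to be on the classpath) shows that only messages at or above the selected level are printed:

import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class Log4jLevelDemo {
    public static void main(String[] args) {
        BasicConfigurator.configure();               // simple console appender
        Logger.getRootLogger().setLevel(Level.WARN); // comparable to selecting WARN as the output level
        Logger logger = Logger.getLogger("demo");    // hypothetical logger name

        logger.debug("not printed: DEBUG is below WARN");
        logger.info("not printed: INFO is below WARN");
        logger.warn("printed: WARN is at the selected level");
        logger.error("printed: ERROR is above WARN");
    }
}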

Showing JVM resource usage during Job execution

About this task


The Memory Run vertical tab of the Run view of your Talend Studio allows you to monitor real-time JVM resource usage
during Job execution, including memory consumption and host CPU usage, so that you can take appropriate actions when the
resource usage is too high and results in low performance of your Talend Studio, such as increasing the memory allocated to
the JVM, stopping unnecessary Jobs, and so on.
To monitor JVM resource usage at Job execution, do the following:

Procedure
1. Open your Job.
In the Run view, click the Memory Run tab.
2. Click Run to run the Job.
You can click Run on the Memory Run tab to monitor the JVM resource usage by your Job at any time even after you
launch your Job from the Basic Run tab.
The Studio console displays curve graphs showing the JVM heap usage and CPU usage respectively during the Job
execution. Warning messages are shown in red on the Job execution information area when the relevant thresholds are
reached.

3. To view the information about resources used at a certain point of time during the Job execution, move the mouse onto
that point of time on the relevant graph. Depending on the graph on which you move your mouse pointer, you can see
the information about allocated heap size, the 90% heap threshold, and the 70% heap threshold, or the CPU usage, at
the point of time.
4. To run the Garbage Collector at a particular interval, select the With Garbage Collector pace set to check box and select
an interval in seconds. The Garbage Collector automatically runs at the specified interval.


To run the Garbage Collector once immediately, click the Trigger GC button.
5. To export the log information into a text file, click the Export button and select a file to save the log.
6. To stop the Job, click the Kill button.

Running a Job remotely

Talend Studio allows you to deploy and execute your Jobs on a remote JobServer when you work on either a local project or
on a remote one on the condition that you are connected with the Talend Administration Center.
If you are working on a remote project and your Studio is disconnected from the Talend Administration Center, the JobServer settings cannot be retrieved and you cannot configure them manually in the Studio. This makes it impossible to run the Job remotely.

Before you begin


Make sure:
• the JobServer settings are defined correctly in the Studio.
For a local project, set the remote JobServer details in the Preferences > Talend > Run/Debug > Remote window of
Talend Studio.
For a remote project, check that your Talend Studio is connected with Talend Administration Center. When the Studio is
connected with Talend Administration Center, the JobServer settings are automatically retrieved and are read only.
If the Studio is disconnected from the Talend Administration Center, the JobServer settings cannot be retrieved and you cannot configure them manually.
• the agent is running on the remote JobServer.
For more information on how to set JobServer details in your Talend Studio, see Configuring remote execution (Talend >
Run/Debug) on page 613.

Procedure
1. Go to the Run view of Talend Studio design workspace.
2. Click the Target Exec tab in the Run view.
3. Select the relevant remote JobServer on the list.
When working on a remote project while you are connected with Talend Administration Center, it is recommended that you select a virtual server so that Talend Administration Center determines which physical server your Job will be executed on, to achieve load balancing.
4. Click the Run button of the Basic Run tab, as usual, to connect to the server and deploy then execute in one go the
current Job.

Note: If you get a connection error, check that the agent is running, the ports are available and the server IP address
is correct.

You can also execute your Job on the specified JobServer by clicking the Run button of the Memory Run tab if you
want to monitor JVM resource usage during the Job execution. For more information on how to enable resource usage
monitoring, see Configuring remote execution (Talend > Run/Debug) on page 613.

Running a Job remotely with SSL enabled

You can run a Job on a remote JobServer with SSL enabled. SSL allows you to encrypt data prior to transmission.

Before you begin


Make sure:
• SSL is enabled in the JobServer configuration file conf/TalendJobServer.properties.
• The JobServer is up and running.


About this task


When working on a local project, follow the procedure below to configure the remote server with SSL support on the Studio
side.

Note: If you are working on a remote project and you are connected with the Talend Administration Center, the JobServer
settings are retrieved from the Talend Administration Center and are read only.

Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend and the Run/Debug nodes in succession and then click Remote.
3. In the Remote Jobs Servers area, click the [+] button to add a new line in the table.
4. Fill in all fields as configured for the Job execution server: Name, Host name (or IP address), Standard port, Username,
Password, and File transfer Port.
The Username and Password fields are not required if you have not configured users into the configuration file of the
JobServer.
5. Select true from the Enable SSL list to enable SSL.

6. Click Apply and then Apply and Close to validate the changes.


7. Go to the Run view of Talend Studio design workspace, and click the Target Exec tab.
8. From the list, select the remote server you have just created.

9. Click the Run button of the Basic Run tab, as usual, to connect to the server and deploy then execute in one go the
current Job with SSL enabled.


You can also execute your Job on the specified JobServer in the Memory Run tab if you want to monitor JVM resource
usage during the Job execution. For more information on how to enable resource usage monitoring, see Configuring
remote execution (Talend > Run/Debug) on page 613.

Note: If you get a connection error, check that the agent is running, the ports are available and the server IP address
is correct. Check also that SSL is configured at Studio and JobServer sides.

Recovering Job execution in case of failure

Talend Studio along with Talend Administration Center offer the concept of "recovery checkpoints" as a Job execution restore facility. Checkpoints are taken in anticipation of the potential need to restart a Job execution beyond its starting point.
General concept
Job execution processes can be time-consuming, as are backup and restore operations. If checkpointing is possible, this will
minimize the amount of time and effort wasted when the process of Job execution is interrupted by failure.
With Talend Studio, you can set checkpoints in your Job design at specified intervals (On Subjob Ok and On Subjob Error
connections) in terms of bulks of the data flow.
With Talend Administration Center, and in case of failure during Job execution, the execution process can be restarted from
the latest checkpoint previous to the failure rather than from the beginning.
A two-step procedure

About this task


The only prerequisite for this facility offered in Talend Studio is to have trigger connections of the types On Subjob OK and On Subjob Error in your Job design.
To be able to recover Job execution in case of failure, you need to:

Procedure
1. Define checkpoints manually on one or more of the trigger connections you use in the Job you design in Talend Studio.
For more information on how to initiate recovery checkpoints, see Setting checkpoints on trigger connections on page
123.
2. In case of failure during the execution of the designed Job, recover Job execution from the latest checkpoint previous to
the failure through the Error recovery Management page in Talend Administration Center.
For more information, see Talend Administration Center User Guide.

Using parallelization to optimize Job performance


Parallelization in terms of Talend Jobs means accomplishing technical processes through parallel executions. When properly designed, a parallelization-enabled technical process can be completed within a shorter time frame.
Talend Studio allows you to implement different types of parallelization depending on the circumstances. These circumstances include:
1. Parallel executions of multiple subJobs. For further information, see Executing multiple subJobs in parallel on page
208.
2. Parallel iterations for reading data. For further information, see Launching parallel iterations to read data on page
208.
3. Orchestrating executions of subJobs. For further information, see Orchestrating parallel executions of subJobs on page
209.
4. Speeding-up data writing into a database. For further information, see Writing data in parallel on page 210.
5. Speeding-up processing of a data flow. For further information, see Enabling parallelization of data flows on page
211.
Parallelization is an advanced feature and requires basic knowledge about a Talend Job such as how to design and execute
a Job or a subJob, how to use components and how to use the different types of connections that link components or Jobs. If
you feel that you need to acquire this kind of knowledge, see What is a Job design? on page 47.


Executing multiple subJobs in parallel

The Multi thread execution feature allows you to run multiple subJobs that are active in the workspace in parallel.
As explained in the previous sections, a Job opened in the workspace can contain several subJobs and you are able to
arrange their execution order using the trigger links such as OnSubjobOK. However, when the subJobs do not have any
dependencies between them, you might want to launch them at the same time. For example, the following image presents
four subJobs within a Job and with no dependencies in between.

The tRunJob components are used in this example to call the subJobs they represent.
Then, with the Job opened in the workspace, you simply need to proceed as follows to run the subJobs in parallel:

Procedure
1. Click the Job tab, then the Extra tab to display it.

2. Select the Multi thread execution check box to enable the parallel execution.
This feature is optimal when the number of threads (in general, each subJob counts as one thread) does not exceed the number of processors of the machine you use for parallel executions. Otherwise, some of the subJobs have to wait until a processor is freed up.
3. If needed, fill the Parallelize Buffer Unit Size field with the number of rows you want to buffer for each of the threads
handled in parallel before the data is processed and the buffer is cleaned.
This setting is meaningful only if the Enable parallel execution check box is selected and the child Jobs or subJobs
contain database output components.
For a use case of using the Multi-thread Execution feature to run Jobs in parallel, see Data Integration Job Examples.

Launching parallel iterations to read data

A parallelization-enabled Iterate connection allows the component that receives threads from the connection to read those
threads in parallel.


Warning: Note that the globalMap is error-prone in parallel execution. Be cautious when using globalMap.put(
"key","value") and globalMap.get("key") to create your own global variables and then retrieve their values in
your Jobs, especially after an Iterate connection with the parallel execution option enabled.
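The following minimal sketch (plain Java, not Talend-generated code; globalMap is represented here simply as a shared java.util.Map and the key name is hypothetical) illustrates why a shared key is error-prone when iterations run in parallel, and how making the key unique per thread avoids the collision:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GlobalMapPitfall {
    // Stand-in for the globalMap shared by all parallel iterations.
    static final Map<String, Object> globalMap = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        Runnable iteration = () -> {
            // Risky: every parallel iteration writes to the same key...
            globalMap.put("currentFile", Thread.currentThread().getName() + ".csv");
            // ...so another iteration may overwrite the value before this one reads it back.
            System.out.println(Thread.currentThread().getName()
                    + " read " + globalMap.get("currentFile"));

            // Safer: make the key unique per thread (or per iteration).
            String key = "currentFile_" + Thread.currentThread().getId();
            globalMap.put(key, Thread.currentThread().getName() + ".csv");
            System.out.println(Thread.currentThread().getName()
                    + " read " + globalMap.get(key));
        };
        Thread t1 = new Thread(iteration, "iteration-1");
        Thread t2 = new Thread(iteration, "iteration-2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}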

About this task


You need to proceed as follows to set the parallel iterations:

Procedure
1. Simply select the Iterate link of your subJob to display the related Basic settings view of the Components tab.
2. Select the Enable parallel execution check box and set the number of executions to be carried out in parallel.

When executing your Job, the parallel iterations will be distributed across the available processors.

3. Select the Statistics check box of the Run view to show the real time parallel executions on the design workspace.

Orchestrating parallel executions of subJobs

The Studio uses the tParallelize component to orchestrate the parallel executions of subJobs that are active within a Job.
When a Job contains several subJobs, you might want to execute some of the subJobs in parallel and then synchronize the
executions of the other subJobs at the end of the parallel executions.
To do this, you can simply use tParallelize to orchestrate all of the subJobs to be executed.


In the example presented in the image, tParallelize launches at first the following subJobs: workflow_sales,
workflow_rd and workflow_finance; after the executions are completed, it launches workflow_hr.
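tParallelize handles this orchestration for you inside the Job. Purely to illustrate the underlying idea, the plain-Java sketch below (not Talend-generated code; the task names reuse the subJob labels from the example above) launches three independent tasks in parallel, waits for all of them to finish, and then runs the final task:

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelThenSynchronize {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        // Launch the three independent "subJobs" in parallel.
        pool.invokeAll(List.of(
                task("workflow_sales"),
                task("workflow_rd"),
                task("workflow_finance"))); // blocks until all three complete
        pool.shutdown();
        // Only then run the last "subJob".
        task("workflow_hr").call();
    }

    private static Callable<Void> task(String name) {
        return () -> {
            System.out.println("running " + name);
            return null;
        };
    }
}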

Writing data in parallel

Parallel data writing refers to the concept of speeding-up the execution of a Job by dividing the data flow into multiple
fragments that can be written simultaneously.

About this task


Note that when parallel execution is enabled, it is not possible to use global variables to retrieve return values in a subJob.
The Advanced settings for all database output components include the option Enable Parallel Execution which, if selected, allows you to perform high-speed data processing, that is, to process multiple data flows simultaneously.

When you select the Enable parallel execution check box, the Number of parallel executions field is displayed, where you can enter the number by which the processed data is divided to achieve N levels of parallel processing.


The current processed data being executed across N fragments might execute N times faster than it would if processed as a
single fragment.
You can also set the data flow parallelization parameters from the design workspace of the Integration perspective. To do
that:

Procedure
1. Right-click a DB output component on the design workspace and select Parallelize from the drop-down list to display a
dialog box.
2. Select the Enable parallel execution check box and enter the number of parallel executions in the corresponding field.
Alternatively, press Ctrl + Space and select the appropriate context variable from the list.

3. Click OK to validate data flow parallelization parameters.


The number of parallel executions displays next to the DB output component in the design workspace.

Enabling parallelization of data flows

In Talend Studio, parallelization of data flows means partitioning an input data flow of a subJob into parallel processes and executing them simultaneously, so as to gain better performance. These processes are always executed on the same machine.
Note that this type of parallelization is available only on the condition that you have subscribed to one of the Platform
solutions or Big Data solutions.
You can use dedicated components or the Set parallelization option in the contextual menu within a Job to implement this
type of parallel execution.
The dedicated components are tPartitioner, tCollector, tRecollector and tDepartitioner.
The following sections explain how to use the Set parallelization option and the related Parallelization vertical tab associated with a Row connection.
You can enable or disable parallelization with a single click, and the Studio then automates the implementation across a given Job.


The implementation of the parallelization requires four key steps, as follows:
1. Partitioning: In this step, the Studio splits the input records into a given number of threads.
2. Collecting: In this step, the Studio collects the split threads and sends them to a given component for processing.
3. Departitioning: In this step, the Studio groups the outputs of the parallel executions of the split threads.
4. Recollecting: In this step, the Studio captures the grouped execution results and outputs them to a given component.
Once the automatic implementation is done, you can alter the default configuration by clicking the corresponding
connection between components.
The Parallelization tab
The Parallelization tab is available as one of the settings tab you can use to configure a Row connection.


You define the parallelization properties on your row connections according to the following table.

Field/Option Description

Partition row Select this option when you need to partition the input records into a specific number of threads.

Note:
It is not available to the last row connection of the flow.

Departition row Select this option when you need to regroup the outputs of the processed parallel threads.

Note:
It is not available to the first row connection of the flow.

Repartition row Select this option when you need to partition the input threads into a specific number of threads and regroup
the outputs of the processed parallel threads.

Note:
It is not available to the first or the last row connection of the flow.

None Default option. Select this option when you do not want to take any action on the input records.

Merge sort partitions Select this check box to implement the Mergesort algorithm to ensure the consistency of data.
This check box appears when you select the Departition row or Repartition row option.

Number of Child Threads Type in the number of threads into which you want to split the input records.
This field appears when you select the Partition row or Departition row option.

Buffer Size Type in the number of rows to cache for each of the threads generated.
This field does not appear if you select the None option.


Use a key hash for partitions Select this check box to use the hash mode for dispatching the input records, which will ensure the records
meeting the same criteria are dispatched to the same threads. Otherwise, the dispatch mode is Round-robin.
This check box appears if you select the Partition row or Repartition row option.
In the Key Columns table that appears after you select the check box, set the columns on which you want to use
the hash mode.
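To make the difference between the two dispatch modes concrete, here is a small standalone sketch (plain Java, not Talend's implementation; the ZipCode values and the thread count are made up): round-robin spreads records evenly regardless of their content, while a key hash always sends records with the same key value to the same thread.

import java.util.ArrayList;
import java.util.List;

public class DispatchModes {
    public static void main(String[] args) {
        List<String> zipCodes = List.of("75001", "69002", "75001", "31000", "75001", "69002");
        int threadCount = 3;

        List<List<String>> roundRobin = newBuckets(threadCount);
        List<List<String>> keyHash = newBuckets(threadCount);

        for (int i = 0; i < zipCodes.size(); i++) {
            String record = zipCodes.get(i);
            // Round-robin: record i goes to thread (i modulo threadCount), one by one, in a circular fashion.
            roundRobin.get(i % threadCount).add(record);
            // Key hash: records with the same key value always land in the same thread.
            keyHash.get(Math.floorMod(record.hashCode(), threadCount)).add(record);
        }
        System.out.println("Round-robin buckets: " + roundRobin);
        System.out.println("Key-hash buckets:    " + keyHash);
    }

    private static List<List<String>> newBuckets(int n) {
        List<List<String>> buckets = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            buckets.add(new ArrayList<>());
        }
        return buckets;
    }
}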

Scenario: sorting the customer data of large size in parallel


The Job in this scenario sorts 20 million customer records by running parallelized executions.

Creating a Job to sort customer data

Procedure
1. In the Integration perspective of your Studio, create an empty Job from the Job Designs node in the Repository tree
view.
For further information about how to create a Job, see What is a Job design? on page 47.
2. Drop the following components onto the workspace: tFileInputDelimited, tSortRow and tFileOutputDelimited.
The tFileInputDelimited component (labeled test file in this example) reads the 20 million customer records from
a .txt file generated by tRowGenerator.
3. Connect the components using the Row > Main link.
Enabling parallelization

Procedure
Right-click the start component of the Job, tFileInputDelimited in the scenario, and from the contextual menu, select Set
parallelization.
Then the parallelization is automatically implemented.
Splitting the input data flow
At the end of this link, the Studio automatically collects the split threads to accomplish the collecting step.
Configuring the input flow

Procedure
1. Double-click tFileInputDelimited to open its Component view.


2. In the File name/Stream field, browse to, or enter the path to the file storing the customer records to be read.
3. Click the button to open the schema editor where you need to create the schema to reflect the structure of the customer data.

4. Click the button five times to add five rows and rename them as follows: FirstName, LastName, City, Address
and ZipCode.
In this scenario, we leave the data types with their default value, String. In real-world practice, you can change them depending on the types of the data to be processed.
5. Click OK to validate these changes and accept the propagation prompted by the pop-up dialog box.
6. If need be, complete the other fields of the Component view with values corresponding to your data to be processed. In
this scenario, we leave them as is.
Configuring the partitioning step

Procedure
1. Click the link representing the partitioning step to open its Component view and click the Parallelization tab.


The Partition row option has been automatically selected in the Type area. If you select None, you are actually disabling
parallelization for the data flow to be handled over this link. Note that depending on the link you are configuring, a
Repartition row option may become available in the Type area to repartition a data flow already departitioned.
In this Parallelization view, you need to define the following properties:
• Number of Child Threads: the number of threads you want to split the input records up into. We recommend that
this number be N-1 where N is the total number of CPUs or cores on the machine processing the data.
• Buffer Size: the number of rows to cache for each of the threads generated.
• Use a key hash for partitions: this allows you to use the hash mode to dispatch the input records into threads.
Once selecting it, the Key Columns table appears, in which you set the column(s) you want to apply the hash mode
on. In the hash mode, the records meeting the same criteria are dispatched into the same threads.
If you leave this check box clear, the dispatch mode is Round-robin, meaning records are dispatched one-by-one to
each thread, in a circular fashion, until the last record is dispatched. Be aware that this mode cannot guarantee that
records meeting the same criteria go into the same threads.
2. In the Number of Child Threads field, enter the number of the threads you want to partition the data flow into. In this
example, enter 3 because we are using 4 processors to run this Job.
3. If required, change the value in the Buffer Size field to adapt the memory capacity. In this example, we leave the default
one.
Sorting the input records
At the end of this link, the Studio automatically accomplishes the recollecting step to group and output the execution results to the next component.
Configuring tSortRow

Procedure
1. Double-click tSortRow to open its Component view.


2. Under the Criteria table, click the button three times to add three rows to the table.
3. In the Schema column column, select, for each row, the schema column to be used as the sorting criterion. In this
example, select ZipCode, City and Address, sequentially.
4. In the Sort num or alpha? column, select alpha for all the three rows.
5. In the Order asc or desc column, select asc for all the three rows.
6. If the schema does not appear, click the Sync columns button to retrieve the schema from the preceding component.
7. Click Advanced settings to open its view.

8. Select Sort on disk. Then the Temp data directory path field and the Create temp data directory if not exist check box
appear.
9. In Temp data directory path, enter the path to, or browse to, the folder you want to use to store the temporary data processed by tSortRow. This approach enables tSortRow to sort considerably more data.
As the threads will overwrite each other if they are written in the same directory, you need to create the folder for each
thread to be processed using its thread ID.
To use the variable representing the thread IDs, you need to click Code to open its view and in that view, find this
variable by searching for thread_id. In this example, this variable is tCollector_1_THREAD_ID.


Then you need to enter the path using this variable. The path reads like:
"E:/Studio/workspace/temp"+((Integer)globalMap.get("tCollector_1_THREAD_ID"))
A standalone sketch of how this expression resolves at runtime follows this procedure.
10. Ensure that the Create temp data directory if not exists check box is selected.
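As announced in step 9, here is a standalone sketch of how that expression resolves at runtime (plain Java, not the generated Job code; it assumes, as the variable name suggests, that thread IDs are simple integers such as 0, 1 and 2):

import java.util.HashMap;
import java.util.Map;

public class ThreadTempDir {
    public static void main(String[] args) {
        // Stand-in for the globalMap available in the generated Job code.
        Map<String, Object> globalMap = new HashMap<>();
        for (int threadId = 0; threadId < 3; threadId++) {
            globalMap.put("tCollector_1_THREAD_ID", threadId);
            String tempDir = "E:/Studio/workspace/temp"
                    + ((Integer) globalMap.get("tCollector_1_THREAD_ID"));
            // Each thread gets its own folder: .../temp0, .../temp1, .../temp2
            System.out.println(tempDir);
        }
    }
}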
Configuring the departitioning step

Procedure
1. Click the link representing the departitioning step to open its Component view and click the Parallelization tab.

The Departition row option has been automatically selected in the Type area. If you select None, you are actually
disabling parallelization for the data flow to be handled over this link. Note that depending on the link you are
configuring, a Repartition row option may become available in the Type area to repartition a data flow already
departitioned.
In this Parallelization view, you need to define the following properties:
• Buffer Size: the number of rows to be processed before the memory is freed.
• Merge sort partitions: this allows you to implement the Mergesort algorithm to ensure the consistency of data.
2. If required, change the values in the Buffer Size field to adapt the memory capacity. In this example, we leave the
default value.


Outputting the sorted data

Procedure
1. Double-click the tFileOutputDelimited component to open its Component view.

2. In the File Name field, browse to the file, or enter the directory and the name of the file, that you want to write the
sorted data in. At runtime, this file will be created if it does not exist.
3. Press F6 to run this Job.

Results
Once done, you can check the file holding the sorted data and the temporary folders created by tSortRow for sorting data on disk. These folders are emptied once the sorting is done.

Executing the Job and checking the result

About this task


Now that components are configured, the Job can be executed.
To do so, proceed as follows:

Procedure
1. Press Ctrl+S to save the Job.
2. Go to Run tab, and click on Run to execute the Job.


Results
The file is read row by row and the extracted fields are displayed on the Run console and written to the specified output file.

Testing Jobs using test cases


Talend Studio comes with a test framework that allows you to create test cases to test your Jobs during Continuous
Integration development to make sure they will function as expected when they are actually executed to handle large
datasets.
This feature is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more
information, see Installing features using the Feature Manager.
By creating a test case, you can verify whether the following will work as expected:
• a data flow
• one or more components
• multiple subJobs
Note that when you create a test case for one or more components that do not constitute a data flow or for multiple subJobs,
the Create a Test Skeleton check box will not be operable and you need to complete the test case by adding components of
your choice manually.
This section describes how to create, set up, and execute a test case based on the tMap Job example elaborated in Data
Integration Job Examples.
Before creating a test case for a Job, make sure all the components of your Job have been configured.

Creating a test case

To create a test case for a Job, do the following:

Procedure
1. Open the Job for which you want to create a test case.
2. Right-click the functional part of the Job you want to test, which is the tMap component in this example, and select
Create Test Case from the contextual menu.


3. In the Create Test Case dialog box, enter a name for the test case in the Name field, and the optional information, if
needed, such as purpose and description in the corresponding fields.


4. Select the Create a Test Skeleton check box so that the components required for the test case to work are automatically
added, and click Finish.

Note:
If you clear this check box, you will need to complete the test case by adding components of your choice manually.

The test case is then created and opened in the design workspace, with all the required components automatically
added. In the Repository tree view, the newly created test case appears under your Job.


By default, a Test Skeleton includes:


• one or more tFileInputDelimited components, depending on the number of input flows in the Job, to load the input
file(s),
• one or more tCreateTemporaryFile components, depending on the number of output flows in the Job, to create one
or more temporary files to hold the output data,
• one or more tFileOutputDelimited components, depending on the number of output flows in the Job, to write data
from the output flow(s) to the previously created temporary file(s),
• one or more tFileCompare components, depending on the number of output flows in the Job, to compare the
temporary output file(s) with the reference file(s). The test is considered successful if each compared pair of files is identical.
• one or more tAssert components, depending on the number of output flows in the Job, to provide an alert message
if the compared pair of files are different, indicating a failure of the test.
In addition, depending on the number of input and output flows, a variable number of context variables are
automatically created to specify the input and reference files.
In this example, when executed, the test case will:
• read the source data for the testing from two input files, one main input and one lookup input,
• process the data in the tMap component, which is the part under test,
• write the processing result to a local temporary file,
• compare the temporary output file with a reference file, which contains the expected result of data processing.
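Conceptually, the final comparison step boils down to checking whether the temporary output file and the reference file have identical content, which is what tFileCompare verifies and tAssert reports on. The following plain-Java sketch (hypothetical file paths, not the code the test skeleton generates) shows that idea:

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class CompareWithReference {
    public static void main(String[] args) throws Exception {
        Path actual = Path.of("out/customers_output.csv");        // temporary output file (hypothetical path)
        Path reference = Path.of("data/customers_reference.csv"); // expected result (hypothetical path)

        boolean identical = Arrays.equals(
                Files.readAllBytes(actual),
                Files.readAllBytes(reference));

        if (identical) {
            System.out.println("Test passed: the output matches the reference file.");
        } else {
            System.out.println("Test failed: the output differs from the reference file.");
        }
    }
}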

Setting up a test case

After creating a test case, you need to complete a few settings for your test case to work. These settings include adding test
instances, if required, and specifying the input and reference files.
Before you can run the test case or its instances, you need to specify the input and reference files in the Contexts view and/
or define embedded data sets in the Test Cases view.
Adding test instances

About this task


Upon creation, a test case has one test instance named Default. You can add as many instances as you need to run the same test case with different sets of data files. From the Test Cases view, you can run an instance individually or run all the instances of the test case at the same time. To add a test instance, do the following:


Procedure
1. From the Repository tree view, select the test case or the Job for which you created the test case and go to the Test
Cases view.

Note: If you have created more than one test case for a Job, when you select the Job from the Repository tree view, all its test cases are displayed in the Test Cases view.

2. On the left panel of the Test Cases view, right-click the test case you want to set up, and select Add Instance from the contextual menu.

3. Type in a name for the instance or accept the proposed name.


The newly created test instance appears under the test case name node.

You can remove the instance, add test data to all existing instances, or run the instance by right-clicking the instance and selecting the relevant item from the contextual menu. You can also remove a test data item by right-clicking it and selecting Remove TestData from the contextual menu.

Note:
If you remove a test data item from an instance, this item is also removed from all the other instances.

4. Specify a new context for the newly created test instance. For more information, see the procedure below.
Defining context variables for the test data

Procedure
1. Go to the Contexts view of the test case.
By default, the required variables have been created under the context named Default. You can define as many contexts as you need to perform your tests for different environments or using different test instances. For more information on how to define contexts and variables, see Using contexts and variables on page 105.
2. Click in the Value field of the variable for the file you want to specify, click the [...] button, browse to your file in the
Open dialog box, and double-click it to specify the file path for the variable.


3. In the Test Cases view, click each test instance on the left panel and select the related context from the context list
box on the right panel.

4. Expand each test instance to show the test data, click each test data item on the left panel and check the context
variable mapped to the data set. If needed, select the desired variable from the Context Value list box on the right
panel.

Defining embedded data sets

About this task

Note:
Embedded data sets defined in the Test Cases view are used only for test case execution from Test Cases view and
override the files specified in the Contexts view.

Procedure
1. Go to the Test Cases view of the test case.
2. Select the data file to be defined from the left panel, click the File Browse button from the right panel, browse to your
file in the Open dialog box, and double-click it to load the file to the repository.
Once a data file is loaded, the warning sign on the data set icon disappears, the text field at the lower part of the right
panel displays the content of the loaded file, and the test case will use the data from the repository rather than from the
local file system.


Executing test cases

You can run your test cases in different ways: from the Run view, from the Test Cases view, and from the Repository tree
view.

Note:
While you can run a test case from the Run view for debugging purposes, the standard way is to run test cases from
the Test Cases view. The Test Cases view allows you to add instances for the same test case, execute all of them
simultaneously, and view the test case execution history.

From the Test Cases view, you can also run a test instance individually. To do so, right-click the test instance on the left
panel and select Run Instance from the contextual menu.
Running a test case from the Run view

Procedure
1. Open the test case and go to the Run tab view.
2. If you have defined different contexts for your test case, select the desired context for the test from the Context list.
3. Click Run on the Basic Run vertical tab to run the test case, or debug it on the Debug Run vertical tab view.
The Run console shows whether the compared files are identical.

Running a test case from the Test Cases view

Procedure
1. Open the test case and go to the Test Cases view.


2. Right-click the test case name on the left panel and select Run TestCase from the contextual menu.

All the instances of the test case are executed at the same time. The right panel displays the test case execution results,
including execution history information.
To view the execution results of a test instance including the execution history, or the details of a particular execution,
click the corresponding [+] button.

Running a test case or all test cases of a Job from the Repository tree view

Procedure
1. To run a particular test case, right-click the test case in the Repository tree view, and select Run TestCase from the
contextual menu.
To run all the test cases of a Job, right-click the Job and select Run All TestCases from the contextual menu.
2. When the execution is complete, go to the Test Cases view to check the execution result.
All the instances of the test case(s) are executed at the same time. The left panel displays the selected test case or
all the test cases of the selected Job, and the right panel displays the test case execution results, including execution
history information.

Managing test cases

You can create as many test cases as you need for a Job, and manage your test cases in a manner similar to normal Jobs.
From the Repository tree view, you can:
• Select the Job to display all its test cases in the Test Cases view.


• Expand the Job and select the test case of interest to show it in the Test Cases view.
• Expand the Job and double-click the test case of interest to open it in the design workspace.
• Expand the Job and right-click the test case of interest to open, run, open a read-only copy of, rename, or delete it.

For a Job that has test cases:


• When importing the Job, you can selectively import one or more of its test cases together with the Job. However, you
cannot export a test case without exporting the Job it was created for.
• When building a Job that has test cases, you can select whether to execute the test cases created for it upon the Job
build process.
• When working collaboratively, you can lock and unlock a test case independently of the Job for which it was created.
For more information on importing, exporting and building a Job, see Importing/exporting items and building Jobs on page
157. For more information on working collaboratively on project items, see Working collaboratively on project items on
page 26.

Managing items on different branches and tags


Talend Studio supports the Git version control system, which enables you to have different copies of your items in different
branches or tags. The items on one branch or tag exist independently of those on another.

Copying items from a branch or a tag

Talend Studio allows you to copy one or more items from a remote branch or a tag to the branch you are currently working
on.

Note:
• You cannot copy items when working in the Git offline mode.
• You can copy items from a tag to a branch but not vice versa, because a tag is a read-only copy of a project.

About this task


To copy an item or multiple items from a remote branch or a tag to the current branch:

Procedure
1. In the Repository tree view, right-click an item such as a Job, or a node such as Job Designs, or a folder if you want to
copy items of the corresponding type under it, and select Copy From Branch from the contextual menu.
The Copy From Branch dialog box opens.
2. Select a branch or a tag from which you want to copy items and click Next.


3. Select the items you want to copy by selecting the corresponding check boxes.


The items on the remote branch or the tag that have the same names as the items on the current branch can be listed in
the tree view only if the Overwrite existing items check box is selected.
4. Select the Import dependencies check box if you want to copy dependencies along with the items you have selected.
The selection status of the Import dependencies check box is automatically saved for the next copy action.
5. Select the Overwrite existing items check box if you want to overwrite existing items with those having the same
names to be copied.
6. Click Finish.
The items on the remote branch or the tag you have selected are copied to the corresponding node or folder on the
branch you are working on.

Reverting a project item on a tag

About this task


A tag is a read-only copy of a Git managed project that records the current state of the project at the point in time the tag is
created. Although a tag is not meant to be a working copy of your project, you may still want to work on it for some reason
and save your changes to a Job by copying it to a branch.
Once a project item is changed, a > symbol appears in front of it in the Repository tree view.
At any point while you are working on a tag, you can discard all the changes you made to a particular project item since the
tag creation by reverting the item to its initial state, without affecting the changes to other project items.
To revert an item to its initial state, do the following:


Procedure
1. In the Repository tree view, right-click the item and select Revert from the contextual menu.

Warning: If you revert a project item created on the tag, the whole item will be deleted.

2. In the confirmation dialog box, click Yes to confirm your operation.

Mapping data flows


Map editor interfaces
The most common way to handle multiple input and output flows including transformations and data re-routing is to use
dedicated mapping components.
Mapping components are advanced components which require more detailed explanation than other Talend components.
The Map Editor is an "all-in-one" tool allowing you to define all parameters needed to map, transform and route your data
flows via a convenient graphical interface.
You can minimize and restore the Map Editor and all tables in the Map Editor using the window icons.

This figure presents the interface of tMap. Those of the other mapping components differ slightly in appearance. For
example, in addition to the Schema editor and the Expression editor tabs on the lower part of this interface, tXMLMap has a
third tab called Tree schema editor. For further information about tXMLMap, see tXMLMap operation on page 266.
The Map Editor is made of several panels:
• The Input panel is the top left panel on the editor. It offers a graphical representation of all (main and lookup) incoming
data flows. The data are gathered in various columns of input tables. Note that the table name reflects the main or
lookup row from the Job design on the design workspace.
• The Variable panel is the central panel in the Map Editor. It allows you to centralize redundant information by mapping it
to variables and to carry out transformations.


• The Search panel is above the Variable panel. It allows you to search the editor for columns or expressions that
contain the text you enter in the Find field.
• The Output panel is the top right panel on the editor. It allows mapping data and fields from Input tables and Variables
to the appropriate Output rows.
• Both bottom panels are the Input and Output schema descriptions. The Schema editor tab offers a schema view of all
columns of the input and output tables selected in their respective panels.
• Expression editor is the editing tool for all expression keys of Input/Output data, variable expressions or filtering
conditions.
The name of input/output tables in the Map Editor reflects the name of the incoming and outgoing flows (row connections).
The following sections present the different mapping components separately, each of which maps flows of a specific
nature.

tMap operation
tMap allows the following types of operations:
• data multiplexing and demultiplexing,
• data transformation on any type of fields,
• fields concatenation and interchange,
• field filtering using constraints,
• data rejecting.
As all these operations of transformation and/or routing are carried out by tMap, this component cannot be a start or end
component in the Job design.

tMap uses incoming connections to pre-fill input schemas with data in the Map Editor. Therefore, you cannot create new
input schemas directly in the Map Editor. Instead, you need to implement as many Row connections incoming to tMap
component as required, in order to create as many input schemas as needed.
The same way, create as many output row connections as required. However, you can fill in the output with content directly
in the Map Editor through a convenient graphical editor.
Note that there can be only one Main incoming row. All other incoming rows are of Lookup type. Related topic: Row
connection on page 97.
Lookup rows are incoming connections from secondary (or reference) flows of data. These reference data might depend
directly or indirectly on the primary flow. This dependency relationship is translated with a graphical mapping and the
creation of an expression key.
The Map Editor requires the connections to be implemented in your Job in order to be able to define the input and output
flows in the Map Editor. You also need to create the actual mapping in your Job in order to display the Map Editor in the
Preview area of the Basic settings view of the tMap component.


To open the Map Editor in a new window, double-click the tMap icon in the design workspace or click the three-dot button
next to the Map Editor in the Basic settings view of the tMap component.
The following sections give the information necessary to use the tMap component in any of your Job designs.

Setting the input flow in the Map Editor

The order of the Input tables is essential. The top table reflects the Main flow connection, and for this reason, is given
priority for reading and processing through the tMap component.
For this priority reason, you are not allowed to move up or down the Main flow table. This ensures that no Join can be lost.


Although you can use the up and down arrows to interchange Lookup tables order, be aware that the Joins between two
lookup tables may then be lost.
Related topic: Using Explicit Join on page 235.
Filling in Input tables with a schema
To fill in the input tables, you need to define either the schemas of the input components connected to the tMap component
on your design workspace, or the input schemas within the Map Editor.
For more information about setting a component schema, see Defining component properties on page 74.


For more information about setting an input schema in the Map Editor, see Setting schemas in the Map Editor on page 253.
Main and Lookup table content
The order of the Input tables is essential.
The Main Row connection determines the Main flow table content. This input flow is reflected in the first table of the Map
Editor's Input panel.
The Lookup connections' content fills in all other (secondary or subordinate) tables, which display below the Main flow
table. If you have not defined the schema of an input component yet, the input table displays as empty in the Input area.
The key is also retrieved from the schema defined in the Input component. This Key corresponds to the key defined in the
input schema where relevant. It has to be distinguished from the hash key that is internally used in the Map Editor, which
displays in a different color.
Variables
You can use global or context variables or reuse the variables defined in the Variables area. Press Ctrl+Space bar to access the
list of variables. This list gathers together global, context and mapping variables.
The list of variables changes according to the context and grows as new variables are created. Only valid mappable variables
in the context show on the list.

Docked at the Variable list, a metadata tip box displays to provide information about the selected column.
Related topic: Mapping variables on page 239
Using Explicit Join
Joins let you select data from a table depending on the data from another table. In the Map Editor context, the data of a
Main table and of a Lookup table can be bound together on expression keys. In this case, the order of the tables matters.
Simply drop column names from one table to a subordinate one, to create a Join relationship between the two tables. This
way, you can retrieve and process data from multiple inputs.
The join displays graphically as a purple link and creates automatically a key that will be used as a hash key to speed up the
match search.
You can create direct joins between the main table and lookup tables. But you can also create indirect joins from the main
table to a lookup table, via another lookup table. This requires a direct join between one of the Lookup tables and the Main
one.

Note: You cannot create a Join from a subordinate table towards a superior table in the Input area.
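For illustration, the hash key created by a Join lets lookup rows be indexed by their expression key so that each Main row can be matched without scanning the whole lookup flow. The following plain-Java sketch uses hypothetical data and is not the code generated by the Studio:

import java.util.HashMap;
import java.util.Map;

public class HashJoinSketch {
    public static void main(String[] args) {
        // Lookup flow indexed by its expression key (hypothetical id -> state mapping).
        Map<Integer, String> lookupByKey = new HashMap<>();
        lookupByKey.put(1, "Alaska");
        lookupByKey.put(2, "Maine");

        // Main flow rows carrying the joined field.
        int[] mainKeys = {1, 2, 3};
        for (int key : mainKeys) {
            String match = lookupByKey.get(key); // fast hash lookup instead of a scan
            System.out.println(key + " -> " + match); // key 3 has no match (null)
        }
    }
}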


The Expression key field which is filled in with the dragged and dropped data is editable in the input schema, whereas the
column name can only be changed from the Schema editor panel.
You can either insert the dragged data into a new entry or replace the existing entries or else concatenate all selected data
into one cell.

For further information about possible types of drag and drops, see Mapping the Output setting on page 247.

Note: If you have a large number of input tables, you can use the minimize/maximize icon to reduce or restore the table
size in the Input area. The Join binding two tables remains visible even though the table is minimized.

Creating a Join automatically assigns a hash key onto the joined field name. The key symbol displays in violet on the input
table itself and is removed when the Join between the two tables is removed.
Related topics:
• Setting schemas in the Map Editor on page 253
• Using Inner Join on page 237
Along with the explicit Join, you can select whether to filter down to a unique match or to allow several matches to be
taken into account. In the latter case, you can choose to consider only the first match, the last match, or all of them.


Defining the match model for an explicit Join

Before you begin


To define the match model for an explicit Join:

Procedure
1. Click the tMap settings button at the top of the table to which the Join links to display the table properties.
2. Click in the Value field corresponding to Match Model and then click the three-dot button that appears to open the
Options dialog box.
3. In the Options dialog box, double-click the wanted match model, or select it and click OK to validate the setting and
close the dialog box.

Unique Match
This is the default selection when you implement an explicit Join. This means that only the last match from the Lookup flow
will be taken into account and passed on to the output.
The other matches will then be ignored.
First Match
This selection implies that several matches can be expected in the lookup. The First Match selection means that in the
lookup only the first encountered match will be taken into account and passed onto the main output flow.
The other matches will then be ignored.
All Matches
This selection implies that several matches can be expected in the lookup flow. In this case, all matches are taken into
account and passed on to the main output flow.
Using Inner Join
The Inner join is a particular type of Join that distinguishes itself by the way the rejection is performed.
This option prevents null values from being passed on to the main output flow. It also allows you to pass on the rejected
data to a specific table called the Inner Join Reject table.
If the searched data cannot be retrieved through the explicit Join or the filter Join, in other words, if the Inner Join cannot be
established for any reason, then the requested data is rejected to the Output table defined as the Inner Join Reject table, if
any.
Simply drop column names from one table to a subordinate one, to create a Join relationship between the two tables. The
Join is displayed graphically as a purple link and creates automatically a key that will be used as a hash key to speed up the
match search.

About this task


To define the type of an explicit Join:


Procedure
1. Click the tMap settings button at the top of the table to which the Join links to display the table properties.
2. Click in the Value field corresponding to Join Model and then click the three-dot button that appears to open the
Options dialog box.
3. In the Options dialog box, double-click the wanted Join type, or select it and click OK to validate the setting and close
the dialog box.

Note: An Inner Join table should always be coupled to an Inner Join Reject table. For how to define an output table
as an Inner Join Reject table, see Lookup Inner Join rejection on page 251.

You can also use the filter button to decrease the number of rows to be searched and improve the performance (in Java).
Related topics:
• Lookup Inner Join rejection on page 251.
• Filtering an input flow on page 238.
Using the All Rows option
By default, without a Join set up, the All rows match model option is selected in each input table of the input area of the
Map Editor. This All rows option means that all the rows are loaded from the Lookup flow and searched against the Main flow.
The output corresponds to the Cartesian product of both tables (or more tables if need be).

Note: If you create an explicit or an inner Join between two tables, the All rows option is no longer available. You then
have to select Unique match, First match or All matches. For more information, see Using Explicit Join on page 235 and
Using Inner Join on page 237.
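For illustration, with All rows selected and no Join defined, every Main row is combined with every Lookup row. The following plain-Java sketch shows the resulting Cartesian product on hypothetical data; it is not the generated Job code:

public class AllRowsSketch {
    public static void main(String[] args) {
        String[] main = {"A", "B"};
        String[] lookup = {"1", "2", "3"};
        // With no Join, each main row is combined with every lookup row: 2 x 3 = 6 output rows.
        for (String m : main) {
            for (String l : lookup) {
                System.out.println(m + "," + l);
            }
        }
    }
}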

Filtering an input flow


Click the Filter button next to the tMap settings button to add a Filter field.


In the Filter field, type in the condition to be applied. This allows you to reduce the number of rows parsed against the main
flow, enhancing the performance on long and heterogeneous flows.
You can use the Auto-completion tool via the Ctrl+Space keystrokes in order to reuse schema columns in the condition
statement.
Removing input entries from table
To remove input entries, click the red cross sign on the Schema Editor of the selected table. Press Ctrl or Shift and click
fields to select multiple entries to be removed.

Note: If you remove Input entries from the Map Editor schema, this removal also occurs in your component schema
definition.

Mapping variables

The Var table (variable table) regroups all mapping variables which are used numerous times in various places.
You can also use the Expression field of the Var table to carry out any transformation you want, using Java code.
Variables help you save processing time and avoid retyping the same data many times.

There are various possibilities to create variables:


• Freely type in your variables in Java. Enter strings between quotes or concatenate functions using the relevant
operator.
• Add new lines using the plus sign and remove lines using the red cross sign. Press Ctrl+Space to retrieve existing
global and context variables.
• Drop one or more Input entries to the Var table.

Select an entry in the Input area or press the Shift key to select multiple entries of one Input table.
Press Ctrl to select either non-appended entries in the same input table or entries from various tables. When selecting
entries in the second table, notice that the first selection displays in grey. Hold the Ctrl key down to drag all entries together.
A tooltip shows you how many entries are in the selection.


Then various types of drag-and-drops are possible depending on the action you want to carry out.

To... You need to...

Insert all selected entries as separate variables: Simply drag & drop to the Var table. Arrows show you where the new Var
entry can be inserted. Each Input is inserted in a separate cell.

Concatenate all selected input entries together with an existing Var entry: Drag & drop onto the Var entry, which gets
highlighted. All entries get concatenated into one cell. Add the required operators using Java operation signs. In Java, the +
operator concatenates strings.

Overwrite a Var entry with selected concatenated Input entries: Drag & drop onto the relevant Var entry, which gets
highlighted, then press Ctrl and release. All selected entries are concatenated and overwrite the highlighted Var.

Concatenate selected input entries with highlighted Var entries and create new Var lines if needed: Drag & drop onto an
existing Var, then press Shift when browsing over the chosen Var entries. First entries get concatenated with the highlighted
Var entries, and if necessary new lines get created to hold remaining entries.

Accessing global or context variables


Press Ctrl+Space to access the global and context variable list.
Appended to the variable list, a metadata list provides information about the selected column.
Removing variables
To remove a selected Var entry, click the red cross sign. This removes the whole line as well as the link.
Press Ctrl or Shift and click fields for multiple selection then click the red cross sign.

Working with expressions

All expressions (Input, Var or Output) and constraint statements can be viewed and edited directly in the expression fields, in
the expression editor, and in the Expression Builder.
Accessing the expression editor

About this task


The expression editor provides visual comfort to write any function or transformation in a handy dedicated view.
You can write the expressions necessary for the data transformation directly in the Expression editor view located in the
lower half of the expression editor.
To open the Expression editor view, complete the following:

Procedure
1. Double-click the tMap component in your Job design to open the Map Editor.
2. In the lower half of the editor, click the Expression editor tab to open the corresponding view.

Note: To edit an expression, select it in the Input panel and then click the Expression editor tab and modify the
expression as required.


3. Enter the Java code according to your needs. The corresponding expression in the output panel is synchronized.

Results

Note: Refer to the Java documentation for more information regarding functions and operations.

Writing code using the Expression Builder


Some Jobs require pieces of code to be written in order to provide components with parameters. In the Component view of
some components, an Expression Builder interface can help you write such pieces of code (in Java), known as expressions.
Using the Expression Builder of tMap, you can edit the expression for an input column, an output column, or a variable, or
change the expressions for multiple output columns at the same time.
Editing individual expressions

About this task


The following example shows how to use the Expression Builder to edit two individual expressions.

In this example, two input flows are connected to the tMap component.
• From the DB input, comes a list of names made of a first name and a last name separated by a space char.
• From the File input, comes a list of US states, in lower case.
In the tMap, use the Expression Builder to, first, replace the blank character separating the first and last names with an
underscore character, and second, change the states from lower case to upper case.

Procedure
1. In the tMap, set the relevant inner join to set the reference mapping.
For more information regarding tMap, see tMap operation on page 232 and Map editor interfaces on page 231.


2. From the main (row1) input, drop the Names column to the output area, and the State column from the lookup (row2)
input to the same output area.
3. Click in the first Expression field (row1.Name), and then click the ... button that appears next to the expression.

The Expression Builder dialog box opens up.

4. In the Category area, select the relevant action you want to perform. In this example, select StringHandling and
select the EREPLACE function.
5. In the Expression area, paste row1.Name in place of the text expression, in order to get:
StringHandling.EREPLACE(row1.Name," ","_"). This expression will replace the separating space character with an
underscore character in the given string (a plain-Java approximation of this expression appears after this procedure).
Note that the CHANGE and EREPLACE functions in the StringHandling category are used to substitute all substrings that
match the given regular expression in the given old string with the given replacement and returns a new string. Their
three parameters are:
• oldStr: the old string
• newStr: the regular expression to match
• replacement: the string to be substituted for every match
6. Now check that the output is correct by typing a dummy value, e.g. Chuck Norris, in the relevant Value field of the
Test area and clicking Test!. The correct change should be carried out, for example, Chuck_Norris.
7. Click OK to validate the changes, and then proceed with the same operation for the second column (State).
8. In the tMap output, select the row2.State Expression and click the [...] button to open the Expression builder again.


This time, the StringHandling function to be used is UPCASE. The complete expression reads:
StringHandling.UPCASE(row2.State).
9. Once again, check that the expression syntax is correct using a dummy Value in the Test area, for example indiana.
The Test! result should display INDIANA for this example. Then, click OK to validate the changes.
Both expressions are now displayed in the tMap Expression field.

Results
These changes will be carried out along the flow processing. The output of this example is as shown below.
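For reference, the following plain-Java sketch approximates what the two StringHandling expressions above do; it assumes a regex-based replacement, uses the same sample values, and is not the actual routine implementation:

public class NameStateSketch {
    // Approximation of StringHandling.EREPLACE(oldStr, regex, replacement):
    // replaces every substring matching the regular expression.
    static String ereplace(String oldStr, String regex, String replacement) {
        return oldStr.replaceAll(regex, replacement);
    }

    public static void main(String[] args) {
        // Replace the space separating first and last names with an underscore.
        System.out.println(ereplace("Chuck Norris", " ", "_")); // Chuck_Norris
        // Approximation of StringHandling.UPCASE: convert a state name to upper case.
        System.out.println("indiana".toUpperCase());            // INDIANA
    }
}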

Setting expressions for multiple output columns simultaneously

About this task


tMap allows you to define the transformation behavior for multiple output columns at the same time.


Using a simple transformation Job, the following example shows how to define expressions on multiple columns in a batch
manner in tMap.

Here is the content of the input CSV file used in this example:

id;firstname;lastname;city;state
1; Andrew;Adams;Madison; Rhode Island
2;Andrew; Garfield; Saint Paul;Colorado
3; Woodrow; Eisenhower ; Juneau; New Hampshire
4;Woodrow; Jackson;Denver;Maine
5; Lyndon;Buchanan; Pierre; Kentucky
6; Bill;Tyler; Helena; New York
7;George;Adams;Oklahoma City ;Alaska
8;Ulysses; Garfield;Santa Fe;Massachusetts
9; Thomas;Coolidge ;Charleston; Mississippi
10;John;Polk; Carson City; Louisiana

In this example, all the output columns of type String will be trimmed to remove leading and trailing whitespace, and the
last names and state names will be transformed to upper case.

Procedure
1. In the Map Editor, complete the input-output mappings.

2. Select the columns of type String in the output table, namely firstname, lastname, city, and state in this
example, and right-click the selection so that the Apply Routine button shows up.


3. Click the Apply Routine button to open the Expression Builder dialog box.

4. Select StringHandling in the Categories area, and then double-click the TRIM function in the Functions area to get
StringHandling.TRIM(${0}) in the Expression field.


5. Click OK to close the Expression Builder dialog box.


6. Select the lastname and state columns in the output table of the Map Editor, right-click the selection, and then click
the Apply Routine button to open the Expression Builder dialog box.
7. Select StringHandling in the Categories area, and then double-click the UPPERCASE function in the Functions area
to get StringHandling.UPPERCASE(${0}) in the Expression field.

8. Click OK to close the Expression Builder dialog box.

Results
Now the expressions on those output columns look like below:


The functions will be carried out along the flow processing. The output of this example is as shown below.
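As an outside-the-Studio illustration of the two routines applied above (trim all String columns, then upper-case the lastname and state columns), here is a plain-Java sketch run on one sample row; it is not the generated Job code:

public class TrimUpcaseSketch {
    public static void main(String[] args) {
        // One sample row from the input file: id;firstname;lastname;city;state.
        String[] row = {"2", "Andrew", " Garfield", " Saint Paul", "Colorado"};

        // StringHandling.TRIM(${0}) applied to every String column.
        for (int i = 1; i < row.length; i++) {
            row[i] = row[i].trim();
        }
        // StringHandling.UPPERCASE(${0}) applied to lastname and state only.
        row[2] = row[2].toUpperCase();
        row[4] = row[4].toUpperCase();

        System.out.println(String.join(";", row)); // 2;Andrew;GARFIELD;Saint Paul;COLORADO
    }
}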

Mapping the Output setting

Tip:
There is no order among the output flows of tMap. To make the output flows execute one by one, you can output
them to temporary files or memory, and then read and insert them into files or databases using different subjobs linked
by Trigger > OnSubjobOK connections.

On the design workspace, the creation of a Row connection from the tMap component to the output components adds
Output schema tables in the Map Editor.
You can also add an Output schema in your Map Editor, using the plus sign from the tool bar of the Output area.
You also have the possibility to create a join between your output tables. The join on the tables enables you to process
several flows separately and unite them in a single output.

Note: The join table retrieves the schema of the source table.

When you click the [+] button to add an output schema or to make a join between your output tables, a dialog box opens.
You have then two options.


Select... To...

New output Add an independent table.

Create join table from Create a join between output tables. In order to do so, select in the drop down list the
table from which you want to create the join. In the Named field, type in the name of
the table to be created.

Unlike the Input area, the order of output schema tables does not make such a difference, as there is no subordination
relationship between outputs (of Join type).
Once all connections, hence output schema tables, are created, you can select and organize the output data via drag &
drops.
You can drop one or several entries from the Input area straight to the relevant output table.
Press Ctrl or Shift, and click entries to carry out multiple selection.
Or you can drag expressions from the Var area and drop them to fill in the output schemas with the appropriate reusable
data.
Note that if you make any change to the Input column in the Schema Editor, a dialog prompts you to decide whether to
propagate the changes throughout all Input/Variable/Output table entries concerned.

Action Result

Drag & Drop onto existing expressions. Concatenates the selected expression with the existing expressions.

Drag & Drop to insertion line. Inserts one or several new entries at start or end of table or between two existing
lines.

Drag & Drop + Ctrl. Replaces highlighted expression with selected expression.

Drag & Drop + Shift. Adds the selected fields to all highlighted expressions. Inserts new lines if needed.

Drag & Drop + Ctrl + Shift. Replaces all highlighted expressions with selected fields. Inserts new lines if needed.

You can add filters and rejections to customize your outputs.


Setting automatic input-output mappings
The Auto map! button at the top of the output area allows you to create mappings between input and output columns in
one go. By default, an output column can be automatically mapped with an input column only if they have exactly the same
column name. However, you can set fuzzy match in the Property Settings dialog box of the map editor to enable automatic
mappings between input and output columns that do not have exactly the same column names.

Procedure
1. In the map editor, click the Property Settings button to open the Property Settings dialog box.


2. Drag the slide control under Auto Map from Exact Match to your preferred match mode according to the degree of
similarity between your input and output column names.
Each match mode corresponds to a combination of Levenshtein and Jaccard settings. You can also set a combination of
Levenshtein and Jaccard settings by directly pulling the Levenshtein and Jaccard slide controls.
3. Click OK to validate the settings and close the Property Settings dialog box.
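For background, the Levenshtein setting relies on the classic edit distance between column names: the smaller the distance, the more likely two names are to be auto-mapped. The sketch below shows such a distance computation in plain Java; the actual thresholds and the Jaccard combination used by the Studio are internal and may differ:

public class AutoMapSketch {
    // Classic Levenshtein edit distance between two column names.
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1), d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        // A small distance suggests that two differently spelled column names should be auto-mapped.
        System.out.println(levenshtein("customer_id", "customerId")); // 2
    }
}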
Creating complex expressions
If you have complex expressions to create, or advanced changes to be carried out on the output flow, then the Expression
Builder interface can help in this task.

Procedure
1. Click the Expression field of your input or output table to display the [...] button.
2. Then click this three-dot button to open the Expression Builder.
For more information regarding the Expression Builder, see Writing code using the Expression Builder on page 241.
Filters
Filters allow you to make a selection among the input fields, and send only the selected fields to various outputs.

Click the button at the top of the table to add a filter line.

You can freely enter your filter statements using Java operators and functions.
Drop expressions from the Input area or from the Var area to the Filter row entry of the relevant Output table.


An orange link is then created. Add the required Java operator to finalize your filter formula.
You can create various filters on different lines. The AND operator is the logical conjunction of all stated filters.
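As an example, a filter statement is an ordinary Java boolean expression over input columns. The following sketch uses a hypothetical row with age and state columns to show the kind of condition you might type and how two filter lines combine with a logical AND; it is not the generated Job code:

public class FilterSketch {
    // Hypothetical record mirroring an input row with age and state columns.
    static class Row {
        int age;
        String state;
        Row(int age, String state) { this.age = age; this.state = state; }
    }

    // Equivalent of two filter lines combined with a logical AND.
    static boolean keep(Row row1) {
        return row1.age >= 18
                && row1.state != null && row1.state.equals("NY");
    }

    public static void main(String[] args) {
        System.out.println(keep(new Row(25, "NY"))); // true
        System.out.println(keep(new Row(16, "NY"))); // false
    }
}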
Output rejection

About this task


Reject options define the nature of an output table.
A Reject table groups data which do not satisfy one or more filters defined in the standard output tables. Note that standard
output tables means all non-Reject tables.
This way, data rejected from other output tables are gathered in one or more dedicated tables, allowing you to spot any
error or unpredicted case.
The Reject principle concatenates the filters of all non-Reject tables and defines them as an ELSE statement.
To define an output table as the Else part of the regular tables:

Procedure
1. Click the tMap settings button at the top of the output table to display the table properties.
2. Click in the Value field corresponding to Catch output reject and then click the [...] button that appears to display the
Options dialog box.
3. In the Options dialog box, double-click true, or select it and click OK to validate the setting and close the dialog box.

Results
You can define several Reject tables to offer multiple refined outputs. To differentiate various Reject outputs, add filter
lines by clicking the plus arrow button.
Once a table is defined as Reject, the verification process is first enforced on regular tables before taking into
consideration possible constraints of the Reject tables.
Note that data is not exclusively routed to one output. Even if a data record satisfies one constraint, and hence is routed to
the corresponding output, it still gets checked against the other constraints and can be routed to other outputs.
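Conceptually, the reject output receives a row only when none of the regular output filters accepted it. The following plain-Java sketch illustrates that ELSE-style routing with hypothetical filters; it is not the generated Job code:

import java.util.ArrayList;
import java.util.List;

public class RejectRoutingSketch {
    public static void main(String[] args) {
        int value = 7; // hypothetical single-column row

        boolean out1 = value > 10;      // filter of regular output table out1
        boolean out2 = value % 2 == 0;  // filter of regular output table out2

        List<String> routedTo = new ArrayList<>();
        if (out1) routedTo.add("out1");
        if (out2) routedTo.add("out2");
        // Reject table: the ELSE of all regular filters combined.
        if (!out1 && !out2) routedTo.add("reject");

        System.out.println(routedTo); // [reject]
    }
}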


Lookup Inner Join rejection

About this task


The Inner Join is a Lookup Join. The Inner Join Reject table is a particular type of Rejection output. It gathers rejected data
from the main row table when an Inner Join could not be established.
To define an Output flow as container for rejected Inner Join data, create a new output component on your Job that you
connect to the Map Editor. Then in the Map Editor, follow the steps below:

Procedure
1. Click the tMap settings button at the top of the output table to display the table properties.
2. Click in the Value field corresponding to Catch lookup inner join reject and then click the [...] button that appears to
display the Options dialog box.
3. In the Options dialog box, double-click true, or select it and click OK to validate the setting and close the dialog box.

Removing Output entries


To remove Output entries, click the cross sign on the Schema Editor of the selected table.
Handling errors

About this task


The Die on error option prevents errors from being processed. To do so, it stops the Job execution as soon as an error is
encountered. The tMap component provides this option to prevent erroneous data from being processed. The Die on error
option is activated by default in tMap.
Deactivating the Die on error option allows you to skip the rows on error and complete the process for error-free rows, and
also to retrieve the rows on error and manage them if needed.
To deactivate the Die on error option:

Procedure
1. Double-click the tMap component on the design workspace to open the Map Editor.
2. Click the Property Settings button at the top of the input area to display the Property Settings dialog box.
3. In Property Settings dialog box, clear the Die on error check box and click OK.


Results
A new table called ErrorReject appears in the output area of the Map Editor. This output table automatically comprises two
columns: errorMessage and errorStackTrace, retrieving the message and stack trace of the error encountered during the Job
execution. Errors can be unparseable dates, null pointer exceptions, conversion issues, etc.
You can also drag and drop columns from the input tables to this error reject output table. Those erroneous data can be
retrieved with the corresponding error messages and thus be corrected afterward.

Once the error reject table is set, its corresponding flow can be sent to an output component.


To do so, on the design workspace, right-click the tMap component, select Row > ErrorReject in the menu, and click the
corresponding output component, here tLogRow.
When you execute the Job, errors are retrieved by the ErrorReject flow.

The result contains the error message, its stack trace, and the two columns, id and date, dragged and dropped to the
ErrorReject table, separated by a pipe "|".
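The errorMessage and errorStackTrace values carry what Java exposes on a caught exception. The sketch below shows, outside the Studio, how such values can be produced from an unparseable value; it is only an approximation of the behavior, not the generated code:

import java.io.PrintWriter;
import java.io.StringWriter;

public class ErrorRejectSketch {
    public static void main(String[] args) {
        String errorMessage = null;
        String errorStackTrace = null;
        try {
            Integer.parseInt("not-a-number"); // e.g. an unparseable value in the input row
        } catch (Exception e) {
            errorMessage = e.getMessage();
            StringWriter sw = new StringWriter();
            e.printStackTrace(new PrintWriter(sw));
            errorStackTrace = sw.toString();
        }
        // The ErrorReject flow would carry these values alongside the mapped columns.
        System.out.println(errorMessage);
    }
}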

Setting schemas in the Map Editor

In the Map Editor, you can define the type of a table schema as Built-In so that you can modify the data structure in the
Schema editor panel, or Repository and retrieve the data structure from the Repository. By default, the schema type is set to
Built-In for all tables.
Retrieving the schema structure from the Repository

About this task


To retrieve the schema structure of the selected table from the Repository:


Procedure
1. Click the tMap Settings button at the top of the table to display the table properties.
2. Click in the Value field of Schema Type, and then click the three-dot button that appears to open the Options dialog b
ox.

3. In the Options dialog box, double-click Repository, or select it and click OK, to close the dialog box and display the
Schema Id property beneath Schema Type.

Note: If you close the Map Editor now without specifying a Repository schema item, the schema type changes back
to Built-In.

4. Click in the Value field of Schema Id, and then click the [...] button that appears to display the Repository Content dialog
box.
5. In the Repository Content dialog box, select your schema as you define a centrally stored schema for any component,
and then click OK.
The Value field of Schema Id is filled with the schema you just selected, and everything in the Schema editor panel for
this table becomes read-only.


Warning: Changing the schema type of the subordinate table across a Join from Built-In to Repository causes the
Join to get lost.

Note: Changes to the schema of a table made in the Map Editor are automatically synchronized to the schema of the
corresponding component connected with the tMap component.

Searching schema columns

About this task


The schema column filter of tMap allows you to quickly search an input or output schema column or multiple columns
among hundreds of them in one go.
The following example shows how to find columns containing the string "customer" in the output table of the Map Editor.

Procedure
1. Open the Map Editor, and click the button at the top of the table to open the filter area.


2. In the filter area, type in your search string, customer in this example.
As you start to type, the table displays the columns that match the characters.

Using the Schema Editor


The Schema Editor details all fields of the selected table. With the schema type of the table set to Built-In, you can modify
the schema of the table.

Use the tool bar below the schema table, to add, move or remove columns from the schema.


You can also load a schema from the repository or export it into a file.

Metadata Description

Column Column name as defined on the Map Editor schemas and on the Input or Output component schemas.

Key The Key shows if the expression key data should be used to retrieve data through the Join link. If unchecked,
the Join relation is disabled.

Type Type of data: String, Integer, Date, etc.

Note: This column should always be defined in a Java version.

Length -1 shows that no length value has been defined in the schema.

Precision Defines the number of digits to the right of the decimal point.

Nullable Clear this check box if the field value should not be null.

Default Shows any default value that may be defined for this field.

Comment Free text field. Enter any useful comment.

Note: Input metadata and output metadata are independent from each other. You can, for instance, change the label of a
column on the output side without the column label of the input schema being changed.

However, any change made to the metadata is immediately reflected in the corresponding schema in the relevant tMap
(Input or Output) area, and also in the schema defined for the component itself on the design workspace.
A red colored background shows that an invalid character has been entered. Most special characters are prohibited so that
the Job can interpret and use the text entered in the code. Authorized characters include lower-case letters, upper-case
letters, and figures (except as the start character).

Enabling automatic data type conversion

Before you begin


When processing data flows using a tMap, if the input and output columns across a mapping are of different data types,
compiling errors may occur at the Job execution time. The Enable Auto-Conversion of types option in the tMap helps avoid
such errors.
To enable this feature in tMap in a Job:

Procedure
1. Click the Property Settings button at the top of the Map Editor to open the Property Settings dialog box.
2. Select the Enable Auto-Conversion of types check box and then click OK.


What to do next
You can activate the automatic conversion option at the project level so that any tMap component added afterwards in the
project will have this feature enabled.

Defining rules to override the default conversion behavior

If needed, you can also define conversion rules to override the default conversion behavior of tMap.

Procedure
1. On the toolbar of the Studio main window, click the Project Settings button, or click File > Edit Project Properties from
the menu bar, to open
the Project Settings dialog box.
2. In the tree view of the dialog box, expand General and select Auto-Conversion of types to open the relevant view.


3. Select the Enable Auto-Conversion of types check box to activate the automatic type conversion feature for all tMap
components added afterwards in the project.
4. If needed, click the [+] button to add a line, select the source and target data types, and define a Java function for data
type conversion to create a conversion rule to override the default conversion behavior of tMap for data that matches
the rule.
You can press Ctrl+Space in the Conversion Function field to access a list of available Java functions.
The rule shown in this example will match mappings with the input data type of String and output data type of Integer.
You can create as many conversion rules as you want.
5. Click Apply to apply your changes and then OK to close the dialog box.
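For example, a hypothetical conversion function for a String-to-Integer rule could look like the sketch below; the function name and its null handling are assumptions for illustration, not a Studio-provided routine:

public class ConversionRuleSketch {
    // Hypothetical conversion function for a String -> Integer rule;
    // trims whitespace and treats an empty value as null instead of failing.
    public static Integer toInteger(String value) {
        if (value == null || value.trim().isEmpty()) {
            return null;
        }
        return Integer.valueOf(value.trim());
    }

    public static void main(String[] args) {
        System.out.println(toInteger(" 42 ")); // 42
        System.out.println(toInteger(""));     // null
    }
}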

Solving memory limitation issues in tMap use

When handling large data sources, for example numerous columns, a large number of lines, or many column types, your
system might encounter memory shortage issues that prevent your Job from completing properly, in particular when using a
tMap component for your transformation.
A feature has been added (in Java only for the time being) to the tMap component in order to reduce the memory used for
lookup loading. Rather than storing the temporary data in the system memory and thus possibly reaching the memory
limitation, the Store temp data option allows you to store the temporary data in a directory on your disk instead.
This feature comes as an option to be selected in the Lookup table of the input data in the Map Editor.
To enable the Store temp data option:
1. Double-click the tMap component in your Job to launch the Map Editor.
2. In input area, click the Lookup table describing the temporary data you want to be loaded onto the disk rather than in
the memory.
3. Click the tMap settings button to display the table properties.
4. Click in the Value field corresponding to Store temp data, and then click the [...] button to display the Options dialog
box.
5. In the Options dialog box, double-click true, or select it and click OK, to enable the option and close the dialog box.


For this option to be fully activated, you also need to specify the directory on the disk, where the data will be stored, and the
buffer size, namely the number of rows of data each temporary file will contain. You can set the temporary storage directory
and the buffer size either in the Map Editor or in the tMap component property settings.
To set the temporary storage directory and the buffer size in the Map Editor:
1. Click the Property Settings button at the top of the input area to display the Property Settings dialog box.
2. In Property Settings dialog box, fill the Temp data directory path field with the full path to the directory where the
temporary data should be stored.
3. In the Max buffer size (nr of rows) field, specify the maximum number of rows each temporary file can contain. The
default value is 2,000,000.
4. Click OK to validate the settings and close the Property Settings dialog box.

To set the temporary storage directory in the tMap component property settings without opening the Map Editor:
1. Click the tMap component to select it on the design workspace, and then select the Component tab to show the Basic
settings view.
2. In the Store on disk area, fill the Temp data directory path field with the full path to the directory where the temporary
data should be stored.
Alternatively, you can use a context variable through the Ctrl+Space bar if you have set the variable in a Context group
in the repository. For more information about contexts, see Using contexts and variables on page 105.


At the end of the subJob, the temporary files are cleared.


This way, you limit the use of allocated memory for each reference data flow by writing it onto temporary files stored on the disk.

Note: As writing the main flow onto the disk requires the data to be sorted, note that the order of the output rows cannot
be guaranteed.

On the Advanced settings view, you can also set a buffer size if needed. Simply fill out the field Max buffer size (nb of rows)
in order for the data stored on the disk to be split into as many files as needed.

Handling Lookups

When implementing a join (including Inner Join and Left Outer Join) in a tMap between different data sources, there is
always only one main flow and one or more lookup flows connected to the tMap. All the records of the lookup flow need to
be loaded before processing each record of the main flow. Three types of lookup loading models are provided suiting various
types of business requirement and the performance needs: Load once, Reload at each row, and Reload at each row (cache).
• Load once: it loads all the records from the lookup flow once (and only once), either in memory or, if the Store temp
data option is set to true, in a local file, before processing the records of the main flow. This is the default
setting and the preferred option if you have a large set of records in the main flow to be processed using a join to the
lookup flow.
• Reload at each row: it loads all the records of the lookup flow for each record of the main flow. Generally, this option
increases the Job execution time due to the repeated loading of the lookup flow for each main flow record. However,
this option is preferred in the following situations:
• The lookup data flow is constantly updated and you want to load the latest lookup data for each record of the main
flow to get the latest data after the join execution;
• There are very few data from the main flow while a large amount of data from a database table in the lookup flow.
In this case, it might cause an OutOfMemory exception if you use the Load once option. You can use dynamic
variable settings such as where clause to update the lookup flow on the fly as it gets loaded, before the main flow
join is processed. For an example, refer to Reloading data at each row on page 262.
Note that Reload at each row in a streaming Job is supported by the Lookup Input components only such as
tMongoDBLookupInput.
• Reload at each row (cache): it functions like the Reload at each row model: all the records of the lookup flow are loaded
for each record of the main flow. However, this model cannot be used with the Store temp data on disk option. The lookup
data is cached in memory, and when a new loading occurs, only the records that do not already exist in the cache
are loaded, in order to avoid loading the same records twice. This option optimizes the processing time and helps
improve the processing performance of the tMap component. Note that you cannot use Reload at each row (cache) and
Store temp data at the same time.
Note that when your lookup is a database table, the best practice is to open the connection to the database in the beginning
of your Job design in order to optimize performance.
Setting the loading mode of a lookup flow

About this task


To set the loading mode of a lookup flow:


Procedure
1. Click the tMap settings button at the top right of the lookup table to display the table properties.
2. Click in the Value field corresponding to Lookup Model, and then click the [...] button to display the Options dialog box.

3. In the Options dialog box, double-click the wanted loading mode, or select it and then click OK, to validate the setting
and close the dialog box.

Results
For use cases using these options, see the related documentation of the tMap component.
Reloading data at each row

About this task


The Reload at each row option is used to load all the records of a lookup flow for each record of the main flow.
When the main flow has far fewer rows than the lookup flow (for example, with a ratio of 1000 or more) and the lookup
input is a database component, the advantage of this approach is that it helps deal with the fact that the amount of lookup
data increases over time, since you can run queries against the data from the main flow in the database component to select
only the lookup data that is relevant for each record in the main flow, such as in the following example which uses lookup
data from a MySQL database.

The schemas of the main flow, the lookup flow and the output flow read as follows:


You can select from the MySQL database only the data that matches the values of the id column of the main flow. To do
this, proceed as follows:

Procedure
1. Double-click tMysqlInput to open its Component view.

2. In the Query field, enter the query to select the data that matches the id column of the main flow. In this example, this
query reads: "Select * from person where id="+(Integer)globalMap.get("id") (see the sketch after this procedure
for how this string is assembled).

Results
Refer to the related documentation of the components used in this example for more information.
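In plain Java terms, this kind of dynamic query is just string concatenation against a value stored in globalMap. The sketch below rebuilds the query outside the Studio with a hypothetical id value; it is not the generated Job code:

import java.util.HashMap;
import java.util.Map;

public class ReloadQuerySketch {
    public static void main(String[] args) {
        // globalMap holds values shared across components during Job execution.
        Map<String, Object> globalMap = new HashMap<>();
        globalMap.put("id", 42); // hypothetical: the current main-flow record's id

        // The query string from the component, rebuilt for each main-flow row.
        String query = "Select * from person where id=" + (Integer) globalMap.get("id");
        System.out.println(query); // Select * from person where id=42
    }
}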

Loading multiple lookup flows in parallel

About this task


By default, when multiple lookup flows are handled in the tMap component, these lookup flows are loaded and processed
one after another, according to the sequence of the lookup connections. When a large amount of data is processed, the Job
execution speed is slowed down. To maximize the Job execution performance, the tMap component allows parallel loading
of multiple lookup flows.


To enable parallel loading of multiple lookup flows:

Procedure
1. Double-click the tMap component to launch the Map Editor.
2. Click the Property Settings button at the top of the input area to open the Property Settings dialog box.
3. Select the Lookup in parallel check box and click OK to validate the setting and close the dialog box.
4. Click OK to close the Map Editor.

Results
With this option enabled, all the lookup flows will be loaded and processed in the tMap component simultaneously, and then
the main input flow will be processed.

Previewing data

Like the Traces Debug mode of the Run view that allows you to monitor data processing during Job execution, the tMap
component offers you that same functionality in its editor. This allows you to monitor the data mapping and processing
before the execution while configuring the tMap component.
To preview data mapping and processing during tMap configuration:
1. Activate the Traces Debug mode in the Run view. For more information regarding the Traces Debug execution mode
and how to activate it, see Running a Job in Traces Debug mode on page 199.
2. Double-click tMap on the design workspace to open its editor.


A new Preview column displays in the main input table and in the output tables showing a preview of the data
processed, and a new tool bar displays on the top left hand corner of the Map Editor.

To monitor data processing row by row or at a certain breakpoint, simply:

Click the Previous Row button to display the data preview of the previous row, within a limit of five rows back.

Click the Next Row button to display the data preview of the next row.

Click the Next Breakpoint button to display the data preview of the next breakpoint.

Note: To monitor your data processing at a breakpoint you first need to define one on the relevant link. To do so, right-
click the relevant link on the design workspace, select Show Breakpoint Setup on the popup menu and select the Activate
conditional breakpoint check box and set the Conditions in the table. A pause icon displays below the link when you access
the Traces Debug mode.


Click the Kill button to stop the data processing.

To deactivate the data preview in the Map Editor:


1. Click OK to close the Map Editor.
2. Go back to the Run view and click the Basic Run tab.
3. Double-click tMap on the design workspace to open the Map Editor.

tXMLMap operation

Note: Before starting this section, we recommend reading the previous tMap sections to gain basic knowledge of a
Talend mapping component.

tXMLMap is fine-tuned to leverage the Document data type for processing XML data, a type of transformation that often
mixes hierarchical data (XML) and flat data together. This Document type carries a complete user-specific XML flow. Using
tXMLMap, you can add as many input or output flows as required into a visual map editor and perform the following
operations on these flows:
• data multiplexing and demultiplexing,
• data transformation on any type of fields, particularly on the Document type,
• data matching via different models, for example, the Unique match mode (related topic: Using Explicit Join on page
235),
• automated XML tree construction on both the input and output sides,
• inner join and left outer join (related topic: Using Inner Join on page 237),
• lookup between data sources, whether flat or XML data, using models such as Load once (related topic: Handling
Lookups on page 261),
• field concatenation and interchange,
• field filtering using constraints,
• data rejecting.
Like tMap, a map editor is required to configure these operations. To open this map editor, you can double-click the
tXMLMap icon in the design workspace, or alternatively, click the three-dot button next to the Map Editor in the Basic
settings view of the tXMLMap component.
tXMLMap and tMap share common approaches to accomplish most of these operations. Therefore, the following sections
explain only the operations specific to tXMLMap for processing hierarchical XML data.
The operations focusing on hierarchical data are:
• using the Document type to create the XML tree;
• managing the output XML data;
• editing the XML tree schema.
The following sections present more relevant details.

Note: Unlike tMap, tXMLMap does not provide the Store temp data option for storing temporary data in a directory on
your disk. For further information about this option of tMap, see Solving memory limitation issues in tMap use
on page 259.

Using the document type to create the XML tree

The Document data type is designed to make defining an XML structure as easy as possible. When you need the
XML tree structure to map the input or output flow or both, use this type. Then you can import the XML tree structure from
various XML sources and edit the tree directly in the mapping editor, thus saving manual effort.
Setting up the Document type

About this task


The Document data type is one of the data types provided by Talend. This Document type is set up when you edit the
schema for the corresponding data in the Schema editor. For further information about the schema editor, see Using the
Schema Editor on page 256.


The following figure presents an example in which the input flow, Customer, is set up as the Document type. To replicate
it, in the Map editor, you can simply click the [+] button to add one row on the input side of the Schema editor, rename it and
select Document from the drop-down list of the given data types.

In most cases, tXMLMap retrieves the schema of its preceding or succeeding components, for example, from a
tFileInputXML component or, in the ESB use case, from a tESBProviderRequest component. This avoids much manual effort
to set up the Document type for the XML flow to be processed. However, to further modify the XML structure as the
content of a Document row, you still need to use the Map editor.

Note: Be aware that a Document flow carries a user-defined XML tree and is no more than one single field of a schema,
which, like any other schema, may contain fields of different data types. For further information about
how to set a schema, see Basic Settings tab on page 74.

Once the Document type is set up for a row of data, in the corresponding data flow table in the map editor, a basic XML
tree structure is created automatically to reflect the details of this structure. This basic structure represents the minimum
elements required by a valid XML tree when using tXMLMap:
• The root element: it is the minimum element required for an XML tree to be processed and, when need be, the
foundation on which to develop a more sophisticated XML tree.
• The loop element: it determines the element over which the iteration takes place to read the hierarchical data of an
XML tree. By default, the root element is set as loop element.

This figure gives an example with the input flow, Customer. Based on this generated XML root, tagged as root by default,
you can develop the XML tree structure of interest.
To do this, you need to:

Procedure
1. Import the custom XML tree structure from one of the following types of sources:
• XML or XSD files (related topic: Importing the XML tree structure from XML and XSD files on page 269)


Note: When you import an XSD file, you will create the XML structure this XSD file describes.

• file XML connections created and stored in the Repository of your Studio (related topic: Importing the XML tree
structure from the Repository on page 270).

Note: If need be, you can develop the XML tree of interest manually using the options provided on the contextual
menu.

2. Reset the loop element for the XML tree you are creating, if need be. You can set as many loops as you need to. At this
step, you may have to consider the following situations:
• If you have to create several XML trees, you need to define the loop element for each of them.
• If you import the XML tree from the Repository, the loop element will have been set according to the settings of the
source structure, but you can still reset the loop element.
For further details, see Setting or resetting a loop element for an imported XML structure on page 271
3. Optional: If needed, you can continue to modify the imported XML tree using the options provided in the contextual
menu. The following table presents the operations you can perform through the available options.

Options Operations

Create Sub-element and Create Attribute Add elements or attributes to develop an XML tree. Related topic: Adding a
sub-element or an attribute to an XML tree structure on page 272

Set a namespace Add and manage given namespaces on the imported XML tree. Related topic:
Managing a namespace on page 273

Delete Delete an element or an attribute. Related topic: Deleting an element or an


attribute from the XML tree structure on page 272

Rename Rename an element or an attribute.

As loop element Set or reset an element as loop element. Multiple loop elements and optional
loop element are supported.

As optional loop This option is only available for a loop element you have already defined.
When the corresponding element exists in the source file, an optional loop
element works the same way as a normal loop element; otherwise, it
automatically resets its parent element as the loop element or, in the absence
of a parent element in the source file, it takes the element of the next higher
level, up to the root element. In real-world practice, with such differences
between the XML tree and the source file structure, we recommend adapting
the XML tree to the source file for better performance.

As group element On the XML tree of the output side, set an element as group element. Related
topic: Grouping the output data on page 275

As aggregate element On the XML tree of the output side, set an element as aggregate element.
Related topic: Aggregating the output data on page 276

Add Choice Set the Choice element. Then all of its child elements developed underneath
will be contained in this declaration. This Choice element originates from one
of the XSD concepts. It enables tXMLMap to perform the function of the XSD
Choice element to read or write a Document flow.
When tXMLMap processes a choice element, the elements contained in its
declaration will not be outputted unless their mapping expressions are
appropriately defined.

Note:
The tXMLMap component declares automatically any Choice element set
in the XSD file it imports.


Options Operations

Set as Substitution Set the Substitution element to specify the element substitutable for a given
head element defined in the corresponding XSD. The Substitution element
enables tXMLMap to perform the function of the XSD Substitution element
to read or write a Document flow
When tXMLMap processes a substitution element, the elements contained in
its declaration will not be outputted unless their mapping expressions are
appropriately defined.

Note:
The tXMLMap component declares automatically any Substitution
element set in the XSD file it imports.

The following sections present more details about the process of creating the XML tree.
Importing the XML tree structure from XML and XSD files
Importing the XML tree structure from an XML file

Procedure
1. In the input flow table of interest, right-click the column name to open the contextual menu. In this example, it is
Customer.

2. From this menu, select Import From File.


3. In the pop-up dialog box, browse to the XML file you need to use to provide the XML tree structure of interest and
double-click the file.
Importing the XML tree structure from an XSD file

Procedure
1. In the input flow table of interest, right-click the column name to open the contextual menu. In this example, it is
Customer.

2. From this menu, select Import From File.


3. In the pop-up dialog box, browse to the XSD file you need to use to provide the XML tree structure of interest and
double-click the file.
4. In the dialog box that appears, select an element from the Root list as the root of your XML tree, and click OK. Then the
XML tree described by the XSD file imported is established.


Note: The root of the imported XML tree is adaptable:


• When importing either an input or an output XML tree structure from an XSD file, you can choose an element as
the root of your XML tree.
• Once an XML structure is imported, the root tag is renamed automatically with the name of the XML source. To
change this root name manually, you need to use the tree schema editor. For further information about this editor,
see Editing the XML tree schema on page 282.

What to do next
Then, you need to define the loop element in this XML tree structure. For further information about how to define a loop
element, see Setting or resetting a loop element for an imported XML structure on page 271.
Importing the XML tree structure from the Repository

About this task


To do this, proceed as follows:

Procedure
1. In any input flow table, right-click the column name to open the contextual menu. In this example, it is Customer.

2. From this menu, select Import From Repository.


3. In the pop-up repository content list, select the XML connection or the MDM connection of interest to import the
corresponding XML tree structure.


This figure presents an example of this Repository-stored XML connection.

Note:
To import an XML tree structure from the Repository, the corresponding XML connection should have been created.
For further information about how to create a file XML connection in the Repository, see Centralizing XML file
metadata on page 368.

4. Click OK to validate this selection.

Results
The XML tree structure is created and a loop is defined automatically as this loop was already defined during the creation of
the current Repository-stored XML connection.
Setting or resetting a loop element for an imported XML structure

About this task


You need to set at least one loop element for each XML tree if it does not have any. If it does, you may have to reset the
existing loop element when need be.
Whether you need to set or reset a loop element, proceed as follows:

Procedure
1. In the created XML tree structure, right-click the element you need to define as loop. For example, you need to define
the Customer element as loop in the following figure.

2. From the pop-up contextual menu, select As loop element to define the selected element as loop.
Once done, this selected element is marked with the text: loop.

Results

Note:
If you close the Map Editor without having set the required loop element for a given XML tree, its root element will be set
automatically as loop element.


Adding a sub-element or an attribute to an XML tree structure

About this task


In the XML tree structure view, you are able to manually add a sub-element or an attribute to the root or to any of the
existing elements when need be.
To do either of these operations, proceed as follows:

Procedure
1. In the XML tree you need to edit, right-click the element to which you need to add a sub-element or an attribute
underneath and select Create Sub-Element or Create Attribute according to your purpose.

2. In the pop-up Create New Element wizard, type in the name you need to use for the added sub-element or attribute.

3. Click OK to validate this creation. The new sub-element or attribute displays in the XML tree structure you are editing.
Deleting an element or an attribute from the XML tree structure

About this task


From an established XML tree, you may need to delete an element or an attribute. To do this, proceed as follows:

Procedure
1. In the XML tree you need to edit, right-click the element or the attribute you need to delete.


2. In the pop-up contextual menu, select Delete.


Then the selected element or attribute is deleted, including all of the sub-elements or the attributes attached to it
underneath.
Managing a namespace
When necessary, you can set and edit a namespace for each element in a created XML tree of the input or
the output data flow.
Defining a namespace

About this task


To do this, proceed as follows:

Procedure
1. In the XML tree of the input or the output data flow you need to edit, right-click the element for which you need to
declare a namespace. For example, in a Customer XML tree of the output flow, you need to set a namespace for the
root.

2. In the pop-up contextual menu, select Set a namespace. Then the Namespace dialog wizard displays.
3. In this wizard, type in the URI you need to use.


4. If you need to set a prefix for this namespace you are editing, select the Prefix check box in this wizard and type in the
prefix you need. In this example, we select it and type in xhtml.

5. Click OK to validate this declaration.


Modifying the default value of a namespace

About this task


To do this, proceed as follows:

Procedure
1. In the XML tree that the namespace you need to edit belongs to, right-click this namespace to open the contextual
menu.

2. In this menu, select Change Namespace to open the corresponding wizard.


3. Type in the new default value you need in this wizard.
4. Click OK to validate this modification.


Deleting a namespace

About this task


To do this, proceed as follows:

Procedure
1. In the XML tree that the namespace you need to edit belongs to, right-click this namespace to open the contextual
menu.

2. In this menu, click Delete to validate this deletion.


Grouping the output data
The tXMLMap component uses a group element to group the output data according to a given grouping condition. This
allows you to wrap elements matching the same condition with this group element.
To set a group element, two restrictions must be respected:
1. the root node cannot be set as group element;
2. the group element must be the parent of the loop element.

Note:
The option of setting group element is not visible until you have set the loop element; this option is also invisible if an
element is not allowed to be set as group element.

Once the group element is set, all of its sub-elements except the loop one are used as conditions to group the output data.
You have to carefully design the XML tree view for the optimized usage of a given group element. For further information
about how to use a group element, see tXMLMap.

Note: tXMLMap provides group element and aggregate element to classify data in the XML tree structure. When
handling a row of XML data flow, the behavioral difference between them is:
• The group element processes the data always within one single flow.
• The aggregate element splits this flow into separate and complete XML flows.

Setting a group element

About this task


To set a group element, proceed as follows:

Procedure
1. In the XML tree view on the output side of the Map editor, right-click the element you need to set as group element.
2. From the opened contextual menu, select As group element.
Then this element of selection becomes the group element. The following figure presents an example of an XML tree
with the group element.


Revoking a defined group element

About this task


To revoke a defined group element, proceed as follows:

Procedure
1. In the XML tree view on the output side of the Map editor, right-click the element you have defined as group element.
2. From the opened contextual menu, select Remove group element.
Then the defined group element is revoked.
Aggregating the output data

About this task


With tXMLMap, you can define as many aggregate elements as required in the output XML tree to class the XML data
accordingly. Then this component outputs these classes, each as one complete XML flow.

Procedure
1. To define an element as aggregate element, simply right-click this element of interest in the XML tree view on the
output side of the Map editor and from the contextual menu, select As aggregate element.
Then this element becomes the aggregate element. Texts in red are added to it, reading aggregate. The following figure
presents an example.

2. To revoke the definition of the aggregate element, simply right-click the defined aggregate element and from the
contextual menu, select Remove aggregate element.


Results

Note:
To define an element as aggregate element, ensure that this element has no child element and that the All in one feature is
disabled. The As aggregate element option is not available in the contextual menu until both conditions are met.

For an example about how to use the aggregate element with tXMLMap, see tXMLMap.

Note: tXMLMap provides group element and aggregate element to classify data in the XML tree structure. When
handling one row of data (one complete XML flow), the behavioral difference between them is:
• The group element processes the data always within one single flow.
• The aggregate element splits this flow into separate and complete XML flows.

Defining the output mode

Defining the output mode of document-type data means deciding whether to put all of the XML elements into one
single XML flow and, when empty elements exist, whether to output them. Doing this does not change the structure of
the XML tree you have created.
Outputting elements into one document

About this task


Unless you are using the aggregate element which always classifies the output elements and splits an output XML flow, you
are able to determine whether an XML flow is output as one single flow or as separate flows, using the All in one feature in
the tXMLMap editor.
To do this, on the output side of the Map editor, proceed as follows:

Procedure
1. Click the pincer icon to open the map setting panel. The following figure presents an example.

2. Click the All in one field and from the drop-down list, select true or false to decide whether the output XML flow should
be one single flow.
• If you select true, the XML data is output all in one single flow. In this example, the single flow reads as follows:


The structure of this flow reads:


• If you select false, the XML data is output in separate flows, each loop being one flow, neither grouped nor
aggregated. In this example, these flows read as follows:


Each flow contains one complete XML structure. To take the first flow as example, its structure reads:

Note: The All in one feature is disabled if you are using the aggregate element.

Managing empty elements in the Map editor

About this task


It may be necessary to create and output empty elements when transforming data into an XML flow, for example, when
tXMLMap works along with tWriteXMLField, which creates empty elements, or when there is no input column associated
with a certain XML node in the output XML data flow.
By contrast, in some scenarios you do not need to output the empty elements even though you have to keep them in the
output XML tree.
tXMLMap allows you to set a boolean that controls the creation of empty elements. To do this, on the output side of the Map editor,
perform the following operations:

Procedure
1. Click the pincer icon to open the map setting panel.


2. In the panel, click the Create empty element field and from the drop-down list, select true or false to decide whether to
output the empty element.
• If you select true, the empty element is created in the output XML flow and output, for example,
<customer><LabelState/></customer>.
• If you select false, the empty element is not output.
Defining the sequence of multiple input loops

About this task


If a loop element, or the flat data flow, receives mappings from more than one loop element of the input flow, you need
to define the sequence of the input loops. The first loop element of this sequence will be the primary loop, so the
transformation process related to this sequence will first loop over this element, such that the output data will be sorted
with regard to its element values.

For example, in this figure, the types element is the primary loop and the outputted data will be sorted by the values of
this element.


In this case, in which one output loop element receives several input loop elements, a [...] button appears next to this
receiving loop element or, for the flat data, at the head of the table representing the flat data flow. To define the loop
sequence, do the following:

Procedure
1. Click this [...] button to open the sequence arrangement window as presented by the figure used earlier in this section.
2. Use the up or down arrow button to arrange this sequence.

Editing the XML tree schema

In addition to the Schema editor and the Expression editor views that tMap is also equipped with, a Tree schema editor view
is provided in the map editor of tXMLMap for you to edit the XML tree schema of an input or output data flow.
To access this schema editor, click the Tree schema editor tab on the lower part of the map editor.


The left half of this view is used to edit the tree schema of the input flow and the right half to edit the tree schema of the
output flow.
The following table presents further information about this schema editor.

Metadata Description

XPath Use it to display the absolute paths pointing to each element or attribute in an XML tree and edit the name of the
corresponding element or attribute.

Key Select the corresponding check box if the expression key data should be used to retrieve data through the Join
link. If unchecked, the Join relation is disabled.

Type Type of data: String, Integer, Document, etc.

Note:
This column should always be defined in a Java version.

Nullable Select this check box if the field value could be null.

Pattern Define the pattern for the Date data type.

Note:
Input metadata and output metadata are independent from each other. You can, for instance, change the label of a
column on the output side without the column label of the input schema being changed.

However, any change made to the metadata is immediately reflected in the corresponding schema in the relevant
tXMLMap (Input or Output) area, and also in the schema defined for the component itself on the design workspace.

Change Data Capture (CDC)


CDC architectural overview
Data warehousing involves the extraction and transportation of data from one or more databases into a target system or
systems for analysis. But this involves the extraction and transportation of huge volumes of data and is very expensive in
both resources and time.
The ability to capture only the changed source data and to move it from a source to a target system(s) in real time is known
as Change Data Capture (CDC). Capturing changes reduces traffic across a network and thus helps reduce ETL time.
The CDC feature, introduced in Talend Studio, simplifies the process of identifying the change data since the last extraction.
CDC in Talend Studio quickly identifies and captures data that has been added to, updated in, or removed from database
tables and makes this change data available for future use by applications or individuals. The CDC feature is available for
Oracle, MySQL, DB2, PostgreSQL, Sybase, MS SQL Server, Informix, Ingres, Teradata, and AS/400.

Warning: The CDC feature works only with database systems running on the same server.

Three different CDC modes are available in Talend Studio:


• Trigger: this mode is the default mode used by CDC components.
• Redo/Archive log: this mode is used with Oracle v11 and previous versions and AS/400.
• XStream: this mode is used only with Oracle v12 with OCI.
For detailed information on these three modes, see the following sections.

Trigger mode

This mode is available for the following databases: MySQL, Oracle, DB2, PostgreSQL, Sybase, MS SQL Server, Informix,
Ingres, and Teradata.
The Trigger mode places a trigger that launches change data capture on every monitored source table. This, in turn, imposes
minor modifications on the database structure.


With this mode, data extraction takes place at the same time the Insert, Update, or Delete operations occur in the source
tables, and the change data is stored inside the database in change tables. The changed data, thus captured, is then made
available to the target system(s) in a controlled manner, using subscriber views.
In Trigger mode, CDC can have only one publisher but many subscribers. CDC creates subscriber tables to control
accessibility of the change table data by the target system(s). A target system is any application that wants to use the data
captured from the source system.
The below figure shows the basic architecture of a CDC environment in Trigger mode in Talend Studio.

In this example, CDC monitors the changes made to a Product table. The changes are caught and published in a change
table to which two subscribers have access: a CRM application and an Accounting application. These two systems fetch the
changes and use them to update their data.
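As a minimal sketch of this principle only, a trigger-based capture could look like the following MySQL statement. This is an illustration, not the trigger or change table that Talend Studio actually generates; the object names product, tcdc_product and product_cdc_ins are hypothetical.

-- Minimal, hypothetical sketch of trigger-based capture: an INSERT trigger on the
-- monitored table writes the changed key and a change type into a change table
-- that subscriber applications later query
CREATE TRIGGER product_cdc_ins
AFTER INSERT ON product
FOR EACH ROW
  INSERT INTO tcdc_product (id, talend_cdc_type)
  VALUES (NEW.id, 'I');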

CDC Redo/Archive log mode

The Redo/Archive log mode is only available for Oracle database Enterprise Editions v11 (and previous versions) and AS/400
databases. It is equivalent to the archive log mode for Oracle and to the journal mode for AS/400.
In an Oracle database, a Redo log is a file which logs the history of changes made to data. In an AS/400 database, these
changes are logged automatically in the database's internal logbook (journal). These changes include the insert, update and
delete operations which data may undergo.
Redo/Archive log mode is less intrusive than Trigger mode because in contrast to Trigger mode, it does not require
modifications to the database structure.
When setting up this Redo/Archive log mode for Oracle, only one subscriber can have access rights to the change table.
This subscriber must be a database user who holds the subscription rights. Also, there is a subscription table which controls
access to the subscriber change table. The subscription change table is a comprehensive, internal table which reflects the
state of the Oracle database at the moment at which the Redo/Archive log option was activated.
When setting up this mode for AS/400, a save file, called fitcdc.savf and provided in your Studio, is restored on AS/400
and used to install a program called RUNCDC. When the subscriber views the changes made (View all changes) or consumes
them for reuse (using a tAS400CDC component), the RUNCDC program reads and analyzes the logbook (journal) and the
attached receiver from the source table and updates the change table accordingly. The AS/400 CDC Redo/Archive log mode
(journal) creates subscription tables to prevent unauthorized target systems from accessing the data in the change tables. A
target system means any application which tries to use data captured in the source system.


In this example, the CDC monitors the changes made to a Product table, thanks to the data contained in the database's
logbook (journal). The CDC reads the logbook and records the changes which have been made to the data. These changes are
collected and published in a table of changes to which two subscribers have access, a CRM application and an Accounting
application. These two systems fetch the changes and use them to update their data.

XStream mode

XStream provides a framework for sharing real-time data changes with outstanding performance and usability between
Oracle databases and other systems such as non-Oracle databases and third party software applications. XStream consists of
two major features: XStream Out and XStream In.
XStream Out provides Oracle Database components and application programming interfaces that enable you to share data
changes made to an Oracle database with other systems. It also provides a transaction-based interface for streaming the
changes captured from the redo log of the Oracle database to client applications with an outbound server. An outbound
server is an optional Oracle background process that sends data changes to a client application.
XStream In provides Oracle Database components and application programming interfaces that enable you to share
data changes made to other systems with an Oracle database. It also provides a transaction-based interface for sending
information to an Oracle database from client applications with an inbound server. An inbound server is an optional Oracle
background process that receives data changes from a client application.
The XStream mode is only available for Oracle v12 with OCI in Talend Studio. For more information about the XStream
mode, see http://docs.oracle.com/cd/E11882_01/server.112/e16545/toc.htm.

CDC: a publish/subscribe principle


The CDC architecture is based on the publisher/subscriber model.
The publisher captures the change data and makes it available to the subscribers. The subscribers utilize the change data
obtained from the publisher.
The main tasks performed by the publisher are:
• identifying the source tables from which the change data needs to be captured.
• capturing the change data and storing it in specially created change tables.
• allowing subscribers controlled access to the change data.
In Trigger mode, or the AS/400 Redo/Archive log mode (journal), the subscriber is a table that only lists the applications that
have access rights to the change tables. In the Oracle Redo/Archive log mode, the subscriber is a user of the database. The
subscriber may not be interested in all the data that is published by the publisher.

Setting up a CDC environment


The CDC feature is part of Talend Studio, and you do not need to install any software other than Talend Studio to use CDC.
However, if you want to use CDC in Redo/Archive log mode for an Oracle database, you must first configure the database so that
it generates the redo records that hold all insert, update or delete changes made in datafiles. For further information, see
Prerequisites for the Oracle Redo/Archive log mode on page 291.
If you want to use CDC in Redo/Archive log mode for AS/400, you must verify that the prerequisites on your AS/400 are all
met. For further information, see The prerequisites on AS/400 on page 308.

Note: For the time being, CDC is only available in Java and is for Oracle, MySQL, DB2, PostgreSQL, Sybase, MS SQL Server,
Informix, Ingres, and Teradata in Trigger mode, for Oracle and AS/400 databases in Redo/Archive log mode, and for
Oracle in XStream mode.
To set up a CDC environment you must understand the basics involved in designing a Job in Talend Studio, and
particularly the definition of metadata items.
When setting up a CDC environment, make sure that the database connection for CDC is on the same server with the
source data to which changes are to be captured.

The CDC Foundation feature in the database metadata wizard is not shipped with your Talend Studio by default. You need to
install it using the Feature Manager. For more information, see Installing features using the Feature Manager.


Setting up CDC in Trigger mode

The following two sections provide a two-step guide to set up the CDC environment in Trigger mode in Talend Studio: the
first step explains how to configure your system for CDC and the second step explains how to extract the modified data.
Configuring CDC in Trigger mode
Below are configuration steps that need to be set up just once for a given publisher/subscriber scenario.
Step 1: Set up a publisher

About this task


To set up a publisher, you need to:

Procedure
1. Set up a database connection dedicated to CDC.
2. Set up a connection to the database where data is located.

Results
For more information about how to set up a database connection, see Centralizing database metadata on page 318.

Note: If you work with an MS SQL Server, you must set the two connections to the same database but using two different
schemas.
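For example, on MS SQL Server the two connections could point to the same database through two different schemas, such as the following hypothetical ones (GO is the client-side batch separator):

-- Hypothetical T-SQL: one schema for the source data connection and one dedicated
-- to the CDC connection, both in the same database
CREATE SCHEMA app_data;
GO
CREATE SCHEMA talend_cdc;
GO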

Step 2: Identify the source table


To identify the table from which data changes will be captured, right-click the newly created data connection to retrieve the
schema of the source table and load it on your repository file system. In this example, the source table is person.
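As a reference, such a source table could be defined as follows. This is a hypothetical definition (the name and age columns are assumptions); the important point is that the monitored table has a primary key, which the CDC system requires to identify changed rows.

-- Hypothetical definition of the person source table used in this example
CREATE TABLE person (
  id   INT NOT NULL PRIMARY KEY,
  name VARCHAR(50),
  age  INT
);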

Step 3: Create the subscriber(s) table

About this task


To set up the connection between the CDC and the data:


Procedure
1. Right-click the CDC Foundation folder under the data connection node and select Create CDC from the contextual
menu. The Create Change Data Capture dialog box opens up.

2. In the Create Change Data Capture dialog box, click the [...] button next to the Set Link Connection field to select the
database connection dedicated to CDC.

Note that for a database such as Oracle, which also supports other CDC modes, make sure to select Trigger mode as
the option for capturing data changes in this step.

3. Click Create Subscriber and the Create Subscriber and Execute SQL Script dialog box opens up.


4. Click Execute to run the SQL script displayed and then click Close to close the dialog box.
5. Click Finish in the Create Change Data Capture dialog box.
In the CDC Foundation folder, the CDC database connection and the subscriber table schema appear.
Step 4: Subscribe to the source table and activate the subscription

About this task


You must specify the table that the subscriber wants to subscribe to and then activate the subscription.

Procedure
1. Right-click the relevant schema of the source table and select add CDC. The Create Subscriber and Execute SQL Script
dialog box appears.

Warning: The source table to be monitored should have a primary key so that the CDC system can identify the rows
on which changes have been made. You cannot set up a CDC environment if your source table schema does not have
a primary key.

Note: For Oracle databases, the CDC system creates an alias for the source table(s) monitored. This helps to avoid
problems due to the length of identifiers upon creation of the change table and its associated view. For CDC systems
which are already set up, the table names are retained.

2. In the Create Subscriber and Execute SQL Script dialog box, check the event(s) you want to catch: Insert, Update or
Delete.


3. Click Execute to run the SQL script displayed and then click Close to close the dialog box.
In the CDC Foundation folder, the catch table schemas and the corresponding view schemas appear.

4. To view any data changes made to the source table, right-click the table in the Table schemas folder and select View All
Changes to open the View All Changes dialog box.


5. Click Finish to close the dialog box.


Extracting change data in Trigger mode

About this task


After setting up the CDC environment, you can design a Job in Talend Studio using the CDC component which corresponds to
the type of the database being used, in order that changes made to the data from the source system can be extracted. In this
example, the tTeradataCDC component will be used to show how to extract data changes made to the source table person.

Procedure
1. Create a new Job in Talend Studio, add a tTeradataCDC component and a tLogRow component, and link tTeradataCDC
to tLogRow using a Row > Main connection.

2. Double-click tTeradataCDC to open its Basic settings view.


3. Select Repository from the Property of the CDC connection drop-down list and click the [...] button next to the field to
retrieve the schema that corresponds to the database connection dedicated to CDC.
4. Select Repository from the Schema using CDC drop-down list and click the [...] button next to the field to retrieve the
schema that corresponds to the table from which changes will be captured.
5. Select the check box(es) for the event(s) to be monitored.
6. Double-click tLogRow and in the Mode area on its Basic settings view select Table (print values in cells of a table) for a
better display of the result.
7. Press Ctrl + S to save the Job and then F6 to execute it.

On the console, you can read the output results which correspond to what you can see in the View All Changes dialog
box.

Setting up CDC in Oracle Redo/Archive log mode

The Oracle Redo/Archive log mode used in Talend is equivalent to the archive log mode of Oracle.
The following three sections detail the prerequisites for using CDC in Redo/Archive log mode for Oracle databases and
provide a two-step example of how to set up a CDC environment using the Oracle Redo/Archive log mode in Talend Studio:
the first step explains how to configure your system for CDC and the second, how to extract the modified data.
Prerequisites for the Oracle Redo/Archive log mode
The CDC feature uses Java. Therefore, make sure Java is enabled when you install Oracle database.
The CDC feature in this mode for Oracle is available for the 10g version of the Oracle database and later versions. Before
being able to use CDC in Redo/Archive log mode in Talend Studio, the administrator of the database to be supervised should
do the following:
1. Activate the archive log mode in the Oracle database.
2. Set up CDC in the Oracle database.
3. Create and give all rights to the source user.
4. Create and give all rights to the publisher.
Activate the archive log mode in Oracle
To do so, connect to the Oracle database as an administrator and activate the archive log mode using the following statements:

connect / as sysdba;
shutdown;
startup exclusive mount;
alter database archivelog;
alter database open;

Set up CDC in Oracle


To do so, create a tablespace for the source user and the publisher respectively, then create a source user and give it all the
rights necessary to make modifications, and create a publisher and give it all the rights necessary to capture and publish
modifications.


In the example below, the $ORACLE_PATH varies depending on where Oracle is installed. The source user is called source,
and the publisher is called publisher:

create tablespace SOURCE datafile '$ORACLE_PATH/oradata/Oracle/SOURCE.dbf' size 50M;

create user source
identified by source
default tablespace SOURCE
quota unlimited on SOURCE;

grant connect, create table to source;
grant unlimited tablespace to source;
grant select_catalog_role to source;
grant execute_catalog_role to source;
grant create sequence to source;
grant create session to source;
grant dba to source;
grant execute on SYS.DBMS_CDC_PUBLISH to source;

create tablespace PUBLISHER datafile '$ORACLE_PATH/oradata/Oracle/PUBLISHER.dbf' size 50M;

create user publisher
identified by publisher
default tablespace PUBLISHER
quota unlimited on PUBLISHER;

grant connect, create table to publisher;
grant unlimited tablespace to publisher;
grant select_catalog_role to publisher;
grant execute_catalog_role to publisher;
grant create sequence to publisher;
grant create session to publisher;
grant dba to publisher;
grant execute on SYS.DBMS_CDC_PUBLISH to publisher;
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(GRANTEE=>'publisher');

The select_catalog_role role allows the publisher to consult all Oracle dictionaries.
The execute_catalog_role role allows the publisher to execute the dictionary procedures.
The SYS.DBMS_CDC_PUBLISH package allows the publisher to configure the CDC system that will capture and publish
change data in one or more source tables.
The procedure DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(GRANTEE=>'publisher') gives the user the
administration privileges necessary to carry out data replication (stream). The GRANT_ADMIN_PRIVILEGE procedure
allows the user to carry out data capture and propagation operations.
Configuring CDC in Oracle Redo/Archive log mode
The following are configuration steps that need to be performed only once for a given publisher/subscriber scenario.

Step 1: Set up a publisher in Oracle Redo/Archive log mode

About this task


To set up a publisher, you need to:

Procedure
1. Create a new Job in Talend Studio.
2. Set a DB connection dedicated to CDC by using the "publisher" user that has all necessary rights.
3. Set a DB connection to the database you want to supervise.


Results

Step 2: Identify the source table in Oracle Redo/Archive log mode


To identify the table(s) to catch, right-click the DB connection for the database you want to monitor and select Retrieve
Schema, then proceed to retrieve and load the source table schema in the repository.
In this example, the source table is client, which contains three columns id, name and age.

Step 3: Retrieve and process changes in Oracle Redo/Archive log mode

About this task


To retrieve modified data, define the connection between CDC and data:

Procedure
1. Right-click the relevant CDC Foundation folder and proceed to connect to the Oracle database to be monitored.

2. Select Create CDC to open the Create Change Data Capture dialog box.


3. Click the three-dot button next to the Set Link Connection field to select the connection that corresponds to CDC.
Then define the user for Oracle - publisher in this example. This user will create the change tables that store
modifications and will activate change captures for the source table.
4. In the Options area, select Log mode as the option for capturing changes.
5. Click Create Subscriber. The Create Subscriber and Execute SQL Script dialog box appears.

6. Click Execute and then Close to close the dialog box.


7. Click Finish in the Create Change Data Capture dialog box.


Results
In the CDC Foundation folder, the subscription table schema appears. An icon also appears to show that the change capture
for the source table is activated.
Step 4: Create the change table, subscribe to the source table and activate the subscription

About this task


You must specify the table to which the subscriber wants to subscribe and then activate its subscription.

Procedure
1. Right-click the schema that corresponds to the source table and select Add CDC. The Create Subscriber and Execute
SQL Script dialog box appears.

Note: For Oracle databases and for versions 3.2+ of Talend Studio, the CDC system creates an alias for the source
table(s) monitored. This helps to avoid problems due to the length of identifiers upon creation of the change table
and its associated view. For CDC systems which are already set up, the table names are retained.
The value of the options_string argument (for example tablespace users in the below dialog box) should be the
default tablespace that you are using.


2. Click Execute to activate the subscription to the source table and then click Close to close the dialog box.
In the CDC Foundation folder, the table that holds the modified data and the associated view schemas appear.


3. To see the changes made to data, right-click the corresponding table in the Table schemas folder and select View All
Changes to open the corresponding dialog box.

The TALEND_CDC_TYPE column of the View All Changes dialog box indicates the type of each change caught:
I indicates that the data has been inserted, UN indicates that the data has been updated, and D indicates that the
data has been deleted.
The columns of the source table and their values are also displayed. A sample query against the change table is sketched after this procedure.
4. Click Finish to close the dialog box.
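As an illustration, the captured changes could also be read with a query such as the one below. The change table name tcdc_client and its owner publisher are assumptions; the actual names depend on how your CDC environment was set up.

-- Hypothetical query on the change table created for the client source table;
-- TALEND_CDC_TYPE holds I (insert), UN (update) or D (delete)
SELECT talend_cdc_type, id, name, age
FROM publisher.tcdc_client;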
Extracting change data modified in Oracle Redo/Archive log mode

About this task


After setting up the CDC environment, you can now design a Job in Talend Studio using the CDC Oracle component to
extract the change data from the source system.
If you want to use CDC with an Oracle database, proceed as below:

Procedure
1. From the Repository tree view, drop the source table to the design workspace and select the tOracleCDC component
in the Components dialog box, drop tLogRow from the Palette to the design workspace, and link the two components
together using a Row Main connection.


2. Double-click tOracleCDC to display its Basic settings.

The Property type is set to Repository since we used the connection information related to CDC stored locally in the
Repository tree view. All connection fields are automatically filled in.
In the Schema using CDC area, Repository is selected so that the schema corresponding to the Oracle source table is
automatically retrieved.
The name of the source table that holds change data appears in the Table using CDC field. In this example, the table is
called CLIENT.

Note: The CDC Log Mode check box is selected since you selected this mode when setting up the CDC environment.

3. For the Events to catch option, select the check box corresponding to the event(s) you want to catch. In this example,
we want to catch the three events, Insert, Update and Delete.
4. Save your Job and press F6 to execute it.

Results
In the console, you can read the output results that correspond to what you can see in the View All Changes dialog box.

Setting up CDC in Oracle XStream mode

The XStream mode is only available for Oracle v12 with OCI in Talend Studio. The following sections give detailed
information about the prerequisites for using CDC in XStream mode for Oracle databases and provide an example of how to
configure the CDC environment using the XStream mode in Talend Studio.
Prerequisites for the XStream mode
Before configuring CDC using XStream mode in Talend Studio, the administrator of the Oracle database should do the
following:
1. Activate the archive log mode in Oracle XStream mode on page 299;
2. Open all PDBs for a CDB in Oracle on page 299;
3. Configure an XStream administrator on page 299.


Activate the archive log mode in Oracle XStream mode


Connect to the Oracle database as an administrative user and run the following statement to display its archiving
information:

archive log list;

If the database is not operating in the archive log mode, run the following statements to activate the archive log mode:

shutdown immediate;
startup mount;
alter database archivelog;
alter database open;

Open all PDBs for a CDB in Oracle


During XStream configuration, if the Oracle database is a container database (CDB), you need to ensure that all pluggable
databases (PDBs) in the CDB are in open read/write mode.
To view the open mode of PDBs, connect to the Oracle database as an administrative user and run the following statement.

select con_id, dbid, guid, name, open_mode from v$pdbs;

To open PDBs, connect to the Oracle database as an administrative user and run the following statement.

alter pluggable database all open;

Configure an XStream administrator

About this task


To configure an XStream administrator, connect to the Oracle database as an administrative user with the right to create
users, grant privileges, and create tablespaces, and then proceed with the following steps.

Procedure
1. Create a tablespace for the XStream administrator by running the following statement. Skip this step if you want to use
an existing tablespace.

CREATE TABLESPACE xstream_tbs DATAFILE '$ORACLE_HOME/dbs/xstream_tbs.dbf' SIZE 25M
  REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

2. Create a new user to act as the XStream administrator by running the following statements. Skip this step if you want to
use an existing user.

CREATE USER username IDENTIFIED BY password
  DEFAULT TABLESPACE xstream_tbs
  QUOTA UNLIMITED ON xstream_tbs;

Note:
• If you are creating an XStream administrator in a CDB, the XStream administrator must be a common user. The
name of a common user must begin with c## or C##, and you need to include the CONTAINER=ALL clause in
the statement.
• If you are creating an XStream administrator using the Oracle default tablespace, you need to remove the
DEFAULT TABLESPACE and QUOTA UNLIMITED ON clauses in the statement.
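For example, in a CDB the statement of step 2 could read as follows; the user name c##xstrmadmin and its password are hypothetical:

-- Hypothetical creation of the XStream administrator as a common user in a CDB
CREATE USER c##xstrmadmin IDENTIFIED BY my_password
  DEFAULT TABLESPACE xstream_tbs
  QUOTA UNLIMITED ON xstream_tbs
  CONTAINER=ALL;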


3. Grant privileges to the XStream administrator by running the following statements and procedures:

GRANT DBA TO username;
GRANT CONNECT, CREATE TABLE TO username;
GRANT RESOURCE TO username;
GRANT CREATE TABLESPACE TO username;
GRANT UNLIMITED TABLESPACE TO username;
GRANT SELECT_CATALOG_ROLE TO username;
GRANT EXECUTE_CATALOG_ROLE TO username;
GRANT CREATE SEQUENCE TO username;
GRANT CREATE SESSION TO username;
GRANT CREATE ANY VIEW TO username;
GRANT CREATE ANY TABLE TO username;
GRANT SELECT ANY TABLE TO username;
GRANT COMMENT ANY TABLE TO username;
GRANT LOCK ANY TABLE TO username;
GRANT SELECT ANY DICTIONARY TO username;
GRANT EXECUTE ON SYS.DBMS_CDC_PUBLISH to username;
GRANT CREATE ANY TRIGGER TO username;
GRANT ALTER ANY TRIGGER TO username;
GRANT DROP ANY TRIGGER TO username;

BEGIN
DBMS_XSTREAM_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'username',
privilege_type => 'CAPTURE',
grant_select_privileges => TRUE);
END;
/

BEGIN
DBMS_XSTREAM_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'username',
privilege_type => 'APPLY',
grant_select_privileges => TRUE);
END;
/

Results
Note that if you are granting privileges to a common user, you need to include the CONTAINER=ALL clause in the above
GRANT statements and procedures.
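For example, for a common user the first grant of step 3 could read as follows (the user name is hypothetical):

-- Hypothetical grant to a common user in a CDB
GRANT DBA TO c##xstrmadmin CONTAINER=ALL;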
Configuring CDC using XStream mode
This section provides detailed information about configuring the XStream Out and XStream In in Talend Studio.
Configure XStream Out in Talend Studio

About this task


To configure XStream Out in Talend Studio, do the following:

Procedure
1. In the Repository tree view, set up a database connection using the OCI connection type to an Oracle database, and
then retrieve the schema of the source table in which data changes are to be captured. In this example, the source table
is PERSON. For detailed information about how to set up a database connection and retrieve table schemas, see
Centralizing database metadata on page 318.


2. Right-click CDC Foundation under the newly created Oracle database connection and select Create CDC from the
contextual menu. The Create Change Data Capture dialog box opens up.

3. Select XStream mode and click Show sample initialization script. The Sample Initialization Script dialog box opens up.


Note that this is only a sample script for configuring XStream on an Oracle 12c server; you need to update the
username, password, and tablespace information according to your settings and run the statements and procedures in
Oracle. For detailed information, see Prerequisites for the XStream mode on page 298.
Click OK to close the Sample Initialization Script dialog box.
Click Finish to create CDC in Oracle and close the Create Change Data Capture dialog box.
4. Right-click the source table and select add CDC from the contextual menu.


5. Right-click the source table and select Generate XStreamsOut Script from the contextual menu. The XStreamsOut
generation script dialog box opens up.

6. Fill in the XStreams server name field with the outbound server name. The name must be a unique one.
Identify the source table(s) by selecting the check box(es) in the corresponding Include in script column.
Click Generate Script. The XStreamsOut Script dialog box pops up.


7. Click Execute to create the XStream outbound server in Oracle.


Note that if the script execution fails, you can connect to the Oracle database as an XStream administrator and run the
script in Oracle.
8. Connect to the Oracle database as an XStream administrator and check the status of the outbound server by running the
following statement:

select apply_name, status from dba_apply;

If you need to remove an outbound server, run the following statements:

exec DBMS_XSTREAM_ADM.DROP_OUTBOUND('xout');
exec DBMS_XSTREAM_ADM.REMOVE_XSTREAM_CONFIGURATION(container => 'ALL');

Configure XStream In in Talend Studio

About this task


To configure XStream In in Talend Studio, do the following:

Procedure
1. In the Repository tree view, set up a database connection using the OCI connection type to an Oracle database, and
then retrieve the schema of the target table to which data changes will be replicated. In this example, the target table is
PERSON_BAK. For detailed information about how to set up a database connection and retrieve table schemas, see
Centralizing database metadata on page 318.


2. Right-click CDC Foundation under the newly created Oracle database connection and select Create CDC from the
contextual menu. The Create Change Data Capture dialog box opens up.

3. Select XStream mode in the Options area and click Show sample initialization script. The Sample Initialization Script
dialog box opens up.


Note that this is only a sample script for configuring XStream on an Oracle 12c server; you need to update the
username, password, and tablespace information according to your settings and run the statements and procedures in
Oracle. For detailed information, see Prerequisites for the XStream mode on page 298.
Click OK to close the Sample Initialization Script dialog box.
Click Finish to create CDC and close the Create Change Data Capture dialog box.
4. Right-click the target table and select add CDC from the contextual menu.


5. Right-click the target table and select Generate XStreamsIn Script from the contextual menu. The XStreamsIn
generation script dialog box opens up.


6. Fill in the XStreams server name field with the inbound server name.
Fill in the Queue name field with the name of the inbound server's queue.
Click Generate script. The XStream In script will be generated and displayed.
7. Click Execute to create the XStream inbound server in Oracle.
Note that if the script execution fails, you can connect to the Oracle database as an XStream administrator and run the
script in Oracle.
8. Connect to the Oracle database as an XStream administrator and check the status of the inbound server by running the
following statement:

select apply_name, status from dba_apply;

If the inbound server is disabled, start it by running the following statement:

exec DBMS_APPLY_ADM.START_APPLY('xin');

If you need to remove an inbound server, run the following statements:

exec DBMS_XSTREAM_ADM.DROP_INBOUND('xin');
exec DBMS_XSTREAM_ADM.REMOVE_QUEUE('xin_queue');
exec DBMS_APPLY_ADM.DELETE_ALL_ERRORS(apply_name => 'xin');

Extracting and synchronizing data changes in XStream mode


After setting up the CDC environment using XStream mode, you can now design a Job in Talend Studio using the Oracle CDC
component tOracleCDC to extract data changes from the source system and tOracleCDCOutput to replicate the data changes
to the target system.

Setting up CDC in Redo/Archive log mode (journal) for AS/400

In the AS/400 Redo/Archive log mode, Talend Studio runs its AS/400-dedicated RUNCDC program to read and analyze
journals and receivers, extract change information from the source table, create a CDC table with the same name as the
source table in the CDC library, update the CDC table (the change table), and make queries on the changes. Both the long
name and the short name of a source table are used, but only the short name is used to name the CDC table.
The default mode of RUNCDC is DETACHED, and the tAS400CDC component reads only detached journal receivers. The
command CHGJRN JRN(<Source_library_name>/<Journal_name>) JRNRCV(*GEN) is used to ensure that an
older receiver is detached from the journal and a newer one is attached to it so that the last changes can be retrieved. It is
therefore recommended that you execute this command in your AS/400 system before running the tAS400CDC component.
Alternatively, you can use a custom command through the tAS400CDC component to automate this process. For more
information, see Extracting the change data modified in AS/400 journal mode on page 314.
Talend Studio does not automatically create, modify, or delete any journal; it can only run the CDC process on the basis of
the journal and receivers that you or the administrator of your AS/400 system provide, depending on the policy of your
company. For this reason, ensure that an old receiver has been processed by RUNCDC before deleting it, so as to avoid losing
the information recorded in that receiver.

Warning:
• Avoid data replication on your AS/400 system during the execution of the RUNCDC program.
• The DLTRCV(*YES) parameter should not be used in the CHGJRN command, or else all detached receivers will be
deleted after the journal is detached and the RUNCDC program cannot read the logs.

The prerequisites on AS/400


Prior to setting up CDC in Redo/Archive log mode (journal) on AS/400, you need to verify the prerequisites as follows on your
AS/400:
• OS AS/400 V5R3M0, V5R4M0 or V6R1M0 is used;
• the AS/400 user account for CDC must have *ALLOBJ privileges or at least all of the following privileges:
- CRTSAVF,
- CLRSAVF,
- DLTF,


- RSTLIB,
- DLTLIB,
- CRTLIB,
- CHGCMD,
- FTP (access to the FTP port must be ensured),
- READ access on journal receivers,
- READ access on monitored AS/400 files,
- READ/WRITE access on output library;
• the names of the files of interest should not exceed 10 characters;
• if the files of interest are already journalized, the journal must be created with the option IMAGES(*BOTH).
For further information about the setup of these listed prerequisites, see the manual of your AS/400 system.
Configuring CDC in AS/400 journal mode
The following configuration steps only need to be set up once for a given publisher/subscriber scenario.

Step 1: Set up a publisher in AS/400 journal mode

About this task


To set up a publisher, you need to:

Procedure
1. Create a new Job in Talend Studio.
2. Set a DB connection dedicated to CDC by filling in DB connection data. For example, a connection called AS400_CDC.
3. Set a DB connection to where data is located by filling in DB connection data. For example, a connection called
AS400_DATA.

Results

Step 2: Identify the source table in AS/400 journal mode


To identify the table(s) to catch, right-click the newly created data connection to retrieve the schema of the source table and
load it in the repository. In this example, this data connection is AS400_DATA.


Step 3: Create the subscriber(s) table in AS/400 journal mode

About this task


To set the connection between CDC and the data:

Procedure
1. Right-click the CDC Foundation folder of the data connection and select Create CDC to open the Create Change Data
Capture dialog box. In this example, this data connection is AS400_DATA.

2. In the Create Change Data Capture dialog box, click the [...] button next to the Set Link Connection field to select the
connection to the database that corresponds to CDC. In this example, select AS400_CDC.

3. Click Create Subscriber to create the subscribers. Then the command to be executed is displayed. The following image
presents an example of this command.


In general, this command reads as follows:

open <AS400_server_host>
user <Username> <Password>
quote rcmd "crtsavf qgpl/instfitcdc"
quote rcmd "clrsavf qgpl/instfitcdc"
bin
cd qgpl
put "<Studio_install>\plugins\org.talend.designer.cdc_<version>\resource\f
itcdc.savf" instfitcdc
quote rcmd "rstlib savlib(fitcdc) dev(*savf) savf(qgpl/instfitcdc) RSTLIB(<CDC_li
brary_name>)"
quote rcmd "CHGCMD CMD(<CDC_library_name>/RUNCDC) PGM(<CDC_library_name>/F2CD00)
CURLIB(<CDC_library_name>)"
quote rcmd "dltf qgpl/instfitcdc"
quit

It is automatically executed via FTP by the Studio to install the RUNCDC program, restore the CDC library (the CDC
database), and create the TSUBSCRIBERS table which provides information on all tables where CDC has been set up.
The CDC library by default contains the following tables:
• FITAB: contains the information about the last executions and the receivers used, in its TBDT1 field:
- position 1-10: the library of the last receiver of the previous run
- position 11-20: the name of the last receiver of the previous run
- position 21-40: the date/time of the last process
- position 41-50: the library of the last receiver of the current run
- position 51-60: the name of the last receiver of the current run
• FICLH: contains the RUNCDC command execution logs and the receivers used.
• FICLF: contains technical logs about the files used.
Both FITAB and FICLH tables provide information on the receivers already used by the RUNCDC program, and can help
clean up receivers if needed.
If you extract change data frequently, the NUM_ORD field in the FITAB table might overflow when its value reaches
9999999. If this happens, or to prevent it from happening, you can reset its value by executing the command
Change F2CD65 NUM_ORD in FITAB to 0 (zero) and then remove all records in the FICLH, FICLF, FITMP,
and FIRCV tables.
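
As a rough illustration, on DB2 for i this reset amounts to SQL similar to the following (a sketch only, assuming the CDC
library is named TALENDCDC and SQL naming; follow the procedure recommended by your AS/400 administrator):

UPDATE TALENDCDC.FITAB SET NUM_ORD = 0;
DELETE FROM TALENDCDC.FICLH;
DELETE FROM TALENDCDC.FICLF;
DELETE FROM TALENDCDC.FITMP;
DELETE FROM TALENDCDC.FIRCV;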


If there is any issue related to the FITAB table after a reboot of your AS/400 system, you might need to reset the value
of the NUM_ORD field.
4. If you need to manually execute the command, copy the command and click Skip to close this dialog box. In this
situation, the command is not executed by the Studio and you need to paste or even edit the command by yourself and
execute it in your AS/400 system.
Otherwise, click Execute to directly run the default command in the Studio. A step-by-step execution list appears.

Note that the list might show an error with number 550 describing issues such as the fact that not all objects
have been restored. This can be normal if the library reported as not restored has in fact already been restored in your
AS/400 system. Contact the administrator of your AS/400 system for clarification.
5. Once done, in the Create Change Data Capture dialog box, click Finish.
In the CDC Foundation folder, the CDC database connection appears, along with the subscription table schema.
Step 4: Finalize the subscription in AS/400 journal mode

About this task


You must specify the table to which the subscriber wants to subscribe and then activate the subscription.

Procedure
1. Right-click the schema that corresponds to the source table and select Add CDC. The Create Subscriber and Execute SQL
Script dialog box displays. The long name and the short name of the source table are both displayed in this dialog box.


Warning: The source table to be monitored should have a primary key so that the CDC system can identify the rows
on which changes have been made. You cannot set up a CDC environment if the schema of your source table does
not have a primary key.

In this example, since the long name CUSTOMERS does not exceed 10 characters, the short name reads the same as the
long name.
2. In the Subscriber Name field, enter the name you want to give the subscriber. By default, the subscriber name is APP1.
3. Click Execute and then Close.
In the CDC Foundation folder, the change table schema and the associated view appear. A new record is added to the
TSUBSCRIBERS table.

4. From your AS/400 system:


a) Create a new receiver:

CRTJRNRCV JRNRCV(<source_library_name>/<receiver_name>)

b) Create a new journal and attach the receiver created in the previous step:

CRTJRN JRN(<source_library_name>/<journal_name>) JRNRCV(<source_library_name>/<receiver_name>)


c) For the file to be monitored, start journaling changes into the journal created in the previous step:

STRJRNPF FILE(<source_library_name>/<file_to_be_monitored>) JRN(<source_library_name>/<journal_name>) IMAGES(*BOTH)

If the sequence number of the journal receiver later reaches its maximum value, you can increase the size of the
receiver by executing the command CHGJRN JRN(<Journal_name>) JRNRCV(*GEN) RCVSIZOPT(*MAXOPT3)
on your AS/400 system. For more information, see Change Journal (CHGJRN).
5. To view any changes made to the data, right-click the relevant table in the Table schemas folder and select View All
Changes to open the relevant dialog box.

6. Click Finish to close the dialog box.


Extracting the change data modified in AS/400 journal mode

About this task


Once you have defined the CDC environment, you can create a Job in Talend Studio using a tAS400CDC component to
extract the changes made to the data in the source system.

Procedure
1. Drop a tAS400CDC component and a tLogRow component from the Palette onto the design workspace, and link the two
components using a Row > Main connection.

2. Double-click the tAS400CDC component to display its Basic settings view.


3. Select Repository from the Property Type drop-down list and click on [...] to fetch the schema which corresponds to
your CDC connection. The fields which follow are automatically filled in with the information required to connect to the
CDC database.
4. Select Repository from the Schema drop-down list and click on [...] to fetch the schema which corresponds to the
AS/400 table to be monitored.
5. In the Table Name field, enter the name of the source table monitored by CDC, CUSTOMERS in this example.
6. In the Source Library field, enter the name of the source library. By default, this is the same as the name of the source
database.
7. In the Subscriber field, enter the name of the subscriber who will extract the modified data. By default, the subscriber is
named APP1.
8. In the Events to catch field, select the check box which corresponds to the event(s) to be caught.
9. From your AS/400 system, execute CHGJRN JRN(<Source_library_name>/<Journal_name>)
JRNRCV(*GEN) to detach the old receiver from the journal and create and attach a new receiver to the journal to
retrieve the last change.
The tAS400CDC component by default executes the following RUNCDC command:

<CDC_library_name>/RUNCDC FILE(<Source_library_name>/<Source_table_name>)
LIBOUT(<CDC_library_name>) MODE(*DETACHED) MBROPT(*ADD)

Alternatively, you can automate the detachment and attachment process by selecting the Customize Command check
box and adding the following custom command on the Advanced Settings view of the tAS400CDC component:

<CDC_library_name>/RUNCDC FILE(<Source_library_name>/<Source_table_name>)
LIBOUT(<CDC_library_name>) MODE(*DETACHED) MBROPT(*ADD) DTCHJRN(*YES)

Note that the default RUNCDC command set up in the tAS400CDC component reads changes from only one table. If you
want to read changes on multiple (up to 300) tables at the same time and those tables are all on the same journal, you
can use multiple tAS400CDC components in your Job: add a custom command

<CDC_library_name>/RUNCDC FILE(<Source_library_name>/<Source_table_name1>
<Source_library_name>/<Source_table_name2> ... <Source_library_name>/<Source_table_nameN>)
LIBOUT(<CDC_library_name>) MODE(*DETACHED) MBROPT(*ADD)

for the first tAS400CDC component and disable the command for the other tAS400CDC components by selecting the
Disable Command check box on the Advanced settings view. The first tAS400CDC component will execute the
RUNCDC command and extract change data from all source tables, and the other tAS400CDC components will simply
read the data prepared by the first one. For example, the following command reads changes from the two
tables EMPLOYEE1 and EMPLOYEE2 on the same journal. You can retrieve a sample Job tAS400CDC-multiple-
tables.zip from the Downloads tab in the left panel of this page.

TALENDCDC/RUNCDC FILE(TALEND/EMPLOYEE1 TALEND/EMPLOYEE2) LIBOUT(TALENDCDC) MODE(*DETACHED) MBROPT(*ADD)


Alternatively, you can disable the command for all tAS400CDC components and execute the custom command manually on
your AS/400 system.
10. Save the Job and press F6 to run it.
In the console, you can read the output results which correspond to what is displayed in the View All Changes dialog
box.

Database support for CDC


The CDC feature is available for the following databases: AS/400, DB2, Informix, Ingres, MS SQL Server, MySQL, Oracle,
PostgreSQL, Sybase, and Teradata. Note that the CDC feature works only with database systems running on the same server.
The following table provides information about the database support for CDC. As explained in the CDC architectural
overview on page 283, the CDC feature captures the updated data from a Source database and sends the captured data to a
Change table, where the Talend Studio Job reads the data from.

Source database Change table with the same database as the Change table with a different database from the
source source

AS/400 Supported Supported

MySQL Supported Supported

Teradata Supported Supported

Note: For AS/400, MySQL and Teradata, no schema is involved.

Change table with the same database as the source Change table with a different database from the source

Source database Same schema (table) Different schema (table) Same schema (table) Different schema (table)

DB2 Supported Supported Unsupported Unsupported

Informix Supported Supported Unsupported Unsupported

Ingres Supported Unsupported Unsupported Unsupported

MS SQL Server Supported Supported Supported Supported

Oracle Supported Supported Unsupported Unsupported

PostgreSQL Supported Supported Unsupported Unsupported

Sybase Supported Supported Supported Supported


Centralizing metadata for Data Integration
Objectives
The Metadata folder in the Repository tree view stores reusable information on files, databases, and/or systems that you
need to create your Jobs.
Various wizards help you store this information centrally in Talend Studio so that it can later be used to set the connection
parameters of the relevant input or output components, together with the data descriptions called "schemas".
The procedures differ slightly from one wizard to another, depending on the type of connection chosen.
Click Metadata in the Repository tree view to expand the folder tree. Each of the connection nodes will gather the various
connections and schemas you have set up.

Centralizing database metadata


If you often need to connect to database tables of any kind, then you may want to centralize the connection information
details in the Metadata folder in the Repository tree view.
This setup procedure is made of two separate but closely related major tasks:
1. Set up a database connection,
2. Retrieve the table schemas.
Talend Studio requires specific third-party Java libraries or database drivers (.jar files) to be installed in order to connect to
sources or targets.


Due to license restrictions, Talend may not be able to ship certain required libraries or drivers; in that situation, the
connection wizard to be presented in the following sections displays related information to help you identify and install the
libraries or drivers in question.

Setting up a database connection


To create a database connection from scratch, expand Metadata in the Repository tree view, right-click Db Connections and
select Create connection from the contextual menu to open the database connection setup wizard.

To centralize database connection parameters you have defined in a Job, click the icon in the Basic settings view of the
relevant database component with its Property Type set to Built-in to open the database connection setup wizard.
To modify an existing database connection, right-click the connection item from the Repository tree view, and select Edit
connection to open the connection setup wizard.
Then define the general properties and parameters of the connection in the wizard.

Defining general properties

Procedure
1. In the connection setup wizard, give your connection a name in the Name field. This name will appear as the database
connection name under the Metadata node of the Repository tree view.

2. Fill in the optional Purpose and Description fields as required. The information you fill in the Description field will
appear as a tooltip when you move your mouse pointer over the connection.
3. If needed, set the connection version and status in the Version and Status fields respectively. You can also manage
the version and status of a repository item in the Project Settings dialog box. For more information, see Upgrading the
version of project items on page 578 and Status management on page 580 respectively.
4. If needed, click the Select button next to the Path field to select a folder under the Db connections node to hold
your newly created database connection. Note that you cannot select a folder if you are editing an existing database
connection, but you can drag and drop a connection to a new folder whenever you want.


5. Click Next when completed. The second step requires you to fill in or edit database connection data.

Defining connection parameters

Procedure
1. Select the type of the database to which you want to connect and fill in the connection details. The fields you need to
fill vary with the database type you select.

Due to limitations of Java 8, ODBC is no longer supported for Access database connections, and the only supported
database driver type is JDBC.
For an MS SQL Server (JDBC) connection, when Microsoft is selected from the Db Version list, you need to download the
Microsoft JDBC driver for SQL Server from the Microsoft Download Center, unpack the downloaded zip file, choose a jar in
the unzipped folder based on your JRE version, rename the jar to mssql-jdbc.jar, and install it manually. For more
information about choosing the jar, see the System Requirements information on the Microsoft Download Center.
You can set up a connection to Oracle using the Wallet by selecting Oracle Custom from the DB Type drop-down
list, then selecting the Use SSL Encryption check box and specifying the related properties, including the path to your
TrustStore and KeyStore files and the password for each of them, and whether to disable the use of CBC (CipherBlock
Chaining).
If you need to connect to Hive, we recommend using one of the Talend solutions with Big Data.


Warning:
If you are creating an MSSQL connection, in order to be able to retrieve all table schemas in the database, be sure to:
• enter dbo in the Schema field if you are connecting to MSSQL 2000,
• remove dbo from the Schema field if you are connecting to MSSQL 2005/2008.

2. (Optional) Specify additional connection properties through the Additional parameters field in the Database Settings
area.
3. Click Check to check your connection.
If the connection fails, a message box is displayed to indicate this failure. From that
message box, click the Details button to read further information.
If a missing library or driver (.jar file) has provoked this failure, you can see that in the Details panel and then
install the specified library or driver.
4. If you are creating a Teradata connection, select Yes for the Use SQL Mode option at the bottom of the wizard to use
SQL queries to retrieve metadata. The JDBC driver is not recommended with this database because of possible poor
performance.


5. If needed, click Export as context and follow the steps in the wizard to export the database connection parameters
as context variables in a context group under the Contexts node of the Repository and fill the database connection
parameter fields with context variables.

Note: The Schema field, if any, must be filled when using context mode.

If the database connection parameter fields are filled with context variables and if there are multiple context groups
defined for the database connection parameters, when you perform an operation that requires the database connection,
a Choose context dialog box pops up and you must choose a context group before proceeding.
If you have installed the 8.0 R2022-05 Studio monthly update or a later one provided by Talend, and if you have
enabled the prompt functionality on any variables, you must also fill the correct value of each variable in the Choose
context dialog box before proceeding. For more information about how to enable the prompt functionality on context
variables, see Defining variables on page 107.


6. If needed, fill in the database properties information. That is all for the first operation on database connection setup.
Click Finish to close the connection setup wizard.
The newly created database connection will be saved under the Db Connections node in the Repository tree view,
and several folders for SQL queries and different types of schemas, such as Calculation View schemas (only for SAP
HANA), Synonym schemas (for Oracle, IBM DB2 and MSSQL), Table schemas, and View schemas will be created under
the database connection node.
Now you can drag and drop the database connection onto the design workspace as a database component to reuse the
defined database connection details in your Job.

Retrieving table schemas

Warning: If you are working on a Git managed project while the Manual lock option is selected in Talend Administration
Center, be sure to lock manually your connection in the Repository tree view before retrieving or updating table schemas
for it. Otherwise the connection is read-only and the Finish button of the wizard is not operable. For more information
about locking and unlocking a project item and on different lock types, see Working collaboratively on project items on
page 26.

To retrieve table schemas from the database connection you have just set up, right-click the connection item from the
Repository tree view, and select Retrieve schema from the contextual menu.


Note:
An error message will appear if there are no tables to retrieve from the selected database or if you do not have the
correct rights to access this database.

A new wizard opens up where you can specify the filter for searching different database objects such as table, view, synonym
(for Oracle, IBM DB2 and MSSQL) and calculation view (only for SAP HANA).


Filtering database objects

In the Select Filter Conditions area, you can filter the database objects using either of two options: Set the Name Filter or
Use the Sql Filter, to filter tables based on object names or using SQL queries respectively.
Filtering database tables based on their names

Procedure
1. In the Select Filter Conditions area, select the Use the Name Filter option.
2. In the Select Types area, select the check box(es) of the database object(s) you want to filter or display.

Note: Available options can vary according to the selected database.

3. In the Set the Name Filter area, click Edit... to open the Edit Filter Name dialog box.
4. Enter the filter you want to use in the dialog box.

Example
For example, if you want to retrieve the database objects whose names start with "A", enter the filter A%; if you
want to retrieve all database objects whose names end with "type", enter %type as your filter.
5. Click OK to close the dialog box.
6. Click Next to open a new view on the wizard that lists the filtered database objects.
Filtering database objects using an SQL query

Procedure
1. In the Select Filter Conditions area, select the Use Sql Filter option.
2. In the Set the Sql Filter field, enter the SQL query you want to use to filter database objects.
3. Click Next to open a new view that lists the filtered database objects.
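
For example, on an Oracle database, a filter query that keeps only the tables of a given owner whose names start with
CUST might look like the following (a sketch only; SCOTT is a placeholder owner and the dictionary view to query depends
on your database):

SELECT TABLE_NAME FROM ALL_TABLES WHERE OWNER = 'SCOTT' AND TABLE_NAME LIKE 'CUST%'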


Selecting tables and defining table schemas

About this task


Once you have the filtered list of the database objects, do the following to load the schemas of the desired objects onto your
Repository:

Procedure
1. Select one or more database objects on the list and click Next to open a new view on the wizard where you can see the
schemas of the selected object.

Note: If no schema is visible on the list, click the Check connection button below the list to verify the database
connection status.


2. Modify the schemas if needed.

Warning: Avoid using any Java reserved keyword as a schema column name.

Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.

Warning: If your source database table contains any default value that is a function or an expression rather than a
string, be sure to remove the single quotation marks, if any, enclosing the default value in the end schema to avoid
unexpected results when creating database tables using this schema.

Tip: If you find a certain data type of the database not yet supported by Talend, you can edit the mapping file for
that database to enable conversion between the database data type and the corresponding Talend data type. For
more information, see Type mapping on page 573.


By default, the schema displayed on the Schema panel is based on the first table selected in the list of schemas loaded
(left panel). You can change the name of the schema according to your needs, and you can also customize the schema
structure in the schema panel.
The tool bar allows you to add, remove or move columns in your schema. In addition, you can load an XML schema from
a file or export the current schema as XML.
To retrieve a schema based on one of the loaded table schemas, select the DB table schema name in the drop-down list
and click Retrieve schema. Note that the retrieved schema then overwrites any current schema and does not retain any
custom edits.
When done, click Finish to complete the database schema creation. All the retrieved schemas will be saved in the
corresponding schema folders under the relevant database connection node.
Now you can drag and drop any table schema of the database connection from the Repository tree view onto the design
workspace as a new database component or onto an existing component to reuse the metadata. For more information,
see Using centralized metadata in a Job on page 513 and Setting a repository schema in a Job on page 76.

Centralizing JDBC metadata


To centralize DB table based metadata into a JDBC connection under the Metadata node of the Repository tree view, the
procedure is made of two separate but closely related tasks:
1. Set up a JDBC connection,
2. Retrieve the table schemas.
The sections below describe how to complete the tasks in detail.
For an example of using a JDBC connection, see Data Integration Job Examples on Talend Help Center
(https://help.talend.com).

Creating a JDBC connection and importing a database driver

Procedure
1. To create a JDBC connection from scratch, expand Metadata in the Repository tree view, right-click Db Connections and
select Create connection from the contextual menu to open the database connection setup wizard.

To centralize database connection parameters you have defined in a Job into a JDBC connection, click the icon in
the Basic settings view of the relevant database component with its Property Type set to Built-In to open the database
connection setup wizard.
To modify an existing JDBC connection, right-click the connection item from the Repository tree view, and select Edit
connection to open the connection setup wizard.
2. Fill in the schema generic information, such as the connection Name and Description, and then click Next to proceed to
define the connection details.
For further information, see the section on defining general properties in Setting up a database connection on page
319.
3. Select JDBC from the DB Type list.


4. If the library to be imported is not available on your machine, either download and install it using the Modules view or
download and store it in a local directory.

Note: Depending on the connection, you may need to import several driver files. For example, when connecting to
Google BigQuery, import each jar file extracted from the zip file. For more information, see the procedure to install
and use the JDBC driver.

5. In the Drivers table, add one row to the table by clicking the [+] button.

6. Click the newly added row and click the [...] button to open the Module dialog box where you can import the external
library.
7. If you have installed the library using the Modules view, specify it in either of the following two ways:
• select the Platform option and then select the library from the list, or


• select the Artifact repository (local m2/nexus) option, enter the search keyword in the Module Name field, click
Search Local to search in the local repository <TalendStudio>\configuration\.m2 or click Search Remote
to search in the remote artifact repository, then select the library from the list below. The search keyword can be
the partial or full name of the library.
The Search Remote button is available only when user libraries are set up in Talend Administration Center or
Talend Management Console.

8. If you have stored the library file in a local directory, select the Install a new module option, and click the [...] button to
browse to the library file.
If the MVN URI of the library exists in the file <TalendStudio>\configuration\MavenUriIndex.xml, it is
automatically filled in the Default MVN URI field.
If the MVN URI of the library is within the jar file, it is automatically detected and filled in the Custom MVN URI field if
it is different from the default MVN URI. Select the Custom MVN URI check box if you want to change the custom Maven
URI or use the custom Maven URI to install the library. If neither the default MVN URI nor the custom MVN URI exists,
the Default MVN URI field is filled with mvn:org.talend.libraries/<jarname>/6.0.0-SNAPSHOT/jar.
9. Click OK to confirm your changes.
The imported library file is listed in the Drivers table.

Note: You can replace or delete the imported library, or import new libraries if needed.

Completing the JDBC connection details

Procedure
1. Fill in the connection details as follows:
• Fill in the JDBC URL used to access the database server.
• In the Driver Class field, specify the main class of the driver allowing to communicate with the database.
• Fill in your database user authentication data in User Id and Password fields.
• In the Mapping file list, select the mapping that allows the database Type to match the Java type of data on the
schema according to the type of database you are connecting to.


Note: The mapping files are XML files that you can manage via Window > Preferences > Talend > Specific Settings >
Metadata of TalendType. For more information, see Accessing mapping files and defining type mappings on page
574.

2. Click Test connection to check your connection.


3. Click Finish to close the connection setup wizard.
The newly created JDBC connection is now available in the Repository tree view and it displays several folders including
Queries (for the SQL queries you save) and Table schemas that will gather all schemas linked to this DB connection
upon schema retrieval.

Retrieving table schemas

Warning: If you are working on a Git managed project while the Manual lock option is selected in Talend Administration
Center, be sure to lock manually your connection in the Repository tree view before retrieving or updating table schemas
for it. Otherwise the connection is read-only and the Finish button of the wizard is not operable. For more information
about locking and unlocking a project item and on different lock types, see Working collaboratively on project items on
page 26.

Procedure
1. To retrieve table schemas from the database connection you have just set up, right-click the connection item from the
Repository tree view and select Retrieve schema from the contextual menu.
A new wizard opens up where you can filter and show different objects (tables, views and synonyms) in your database
connection, select tables of interest, and define table schemas.

2. Define a filter to filter databases objects according to your need. For details, see Filtering database objects on page
325.


Click Next to open a view that lists your filtered database objects. The list offers all the databases, with all their tables,
present on the database connection that meet your filter conditions.
If no database is visible on the list, click Check connection to verify the database connection.
3. Select one or more tables on the list to load them onto your repository file system. Your repository schemas will be
based on these tables.
4. Click Next. On the next window, four setting panels help you define the schemas to create. Modify the schemas if
needed.

Warning: Avoid using any Java reserved keyword as a schema column name.

Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.

Warning: If your source database table contains any default value that is a function or an expression rather than a
string, be sure to remove the single quotation marks, if any, enclosing the default value in the end schema to avoid
unexpected results when creating database tables using this schema.

Tip: If you find a certain data type of the database not yet supported by Talend, you can edit the mapping file for
that database to enable conversion between the database data type and the corresponding Talend data type. For
more information, see Type mapping on page 573.

By default, the schema displayed on the Schema panel is based on the first table selected in the list of schemas loaded
(left panel). You can change the name of the schema according to your needs, and you can also customize the schema
structure in the schema panel.
The tool bar allows you to add, remove or move columns in your schema. In addition, you can load an XML schema from
a file or export the current schema as XML.
To retrieve a schema based on one of the loaded table schemas, select the database table schema name in the drop-
down list and click Retrieve schema. Note that the retrieved schema then overwrites any current schema and does not
retain any custom edits.
When done, click Finish to complete the database schema creation. All the retrieved schemas are displayed in the Table
schemas sub-folder under the relevant database connection node.
Now you can drag and drop any table schema of the database connection from the Repository tree view onto the design
workspace as a new database component or onto an existing component to reuse the metadata. For more information,
see Using centralized metadata in a Job on page 513 and Setting a repository schema in a Job on page 76.

Centralizing SAP metadata


From Talend Studio, you can create a connection to an SAP BW system and an SAP HANA database and store this connection
in the Metadata folder in the Repository tree view. Once connected to the SAP system, you can retrieve SAP tables and table
schemas, preview data in tables, retrieve SAP Business Content Extractors, retrieve SAP RFC and BAPI functions and their
input and output schemas, retrieve the metadata of the SAP BW Advanced Data Store Object, Data Source, Data Store Object,
InfoCube or InfoObject objects, or create a file from SAP IDoc.
This feature is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more
information, see Installing features using the Feature Manager.
Prerequisites:


To use the SAP IDoc connectors and the SAP IDoc wizard, you must install specific jar and dll files provided by SAP and then
restart the Studio.
For Windows 64-bit:
1. Copy the sapjco3.dll file that comes with the SAP Java Connector into the folder C:\WINDOWS\system32\ or
replace the existing dll files with the new ones.
2. Install the jar files sapjco.jar, sapjco3.jar and sapidoc3.jar in the Java library of Talend Studio.
For Linux:
1. Copy the appropriate distribution package (sapjco-linuxintel-x.x.x.tgz or sapjco-linuxintel-x.x.x.tar.Z,
where x.x.x is the version of the SAP Java Connector) into an arbitrary directory {sapjco-install-path}.
2. Go to the installation directory by executing the following command:

cd {sapjco-install-path}

3. Extract the archive by executing tar zxvf sapjco-linux*x.x.x.tgz, where x.x.x is the version of the SAP Java Connector.
4. Add {sapjco-install-path} to the environment variable LD_LIBRARY_PATH by executing the following
command:

export LD_LIBRARY_PATH={sapjco-install-path}

5. Add {sapjco-install-path}/sapjco.jar in the Java library of Talend Studio.

Setting up an SAP connection

About this task


This section shows you how to set up a centralized SAP connection using the SAP metadata wizard.
To successfully establish the connection between Talend Studio and SAP, ensure that you have the proper authorization
rights to access the SAP systems. For more information, see SAP.

Procedure
1. In the Repository tree view, expand the Metadata node and check that the SAP Connections node is present. If the SAP
Connections node is present, go to the next step. Otherwise:
• Open the Feature Manager window by selecting Help > Feature Manager.
• Type sap in the search field, check SAP in the search result section, and click Next.
• Proceed as prompted to install the SAP module and restart Talend Studio.


2. Right-click the SAP Connections node and select Create SAP connection from the contextual menu. The SAP Connection
wizard opens up.


3. Fill in the generic properties such as Name, Purpose (optional), and Description (optional). The information you fill in
the Description field will appear as a tooltip when you move your mouse pointer over the connection.
4. If needed, set the connection version and status in the Version and Status fields respectively. You can also manage
the version and status of a repository item in the Project Settings dialog box. For more information, see Upgrading the
version of project items on page 578 and Status management on page 580 respectively.
5. Click Next to fill in the SAP system connection details.


The following table describes the fields listed in the dialog box.

Property Description

Client The SAP system client ID.

Host The name or IP address of the host on which the SAP server is
running.

User and Password The user connection ID and password.

System Number The SAP system number.

Language The language of the SAP system.

Additional Properties Complete this table with the property or properties to be
customized if you need to use custom configuration for the SAP
system being used.
For example, if you need to retrieve data from tables with more
than 512 bytes per row using this connection later, click the [+]
button below the Additional Properties table to add a property
api.use_z_talend_read_table and set its value
to true. For more information, click Help to open the dialog box
that shows the instruction.

6. Click Check to validate the SAP connection details.


7. Click Next to specify the type of the SAP connection you want to create from the ADSO connection type drop-down list.
• Select SAP JCO3 to create a connection using SAP JCO3.
• Select Hana JDBC to create a connection to the SAP HANA database.


Note: The SAP Connection - Update SAP Connection - Step 3/3 dialog box is available only when you have installed
the R2021-12 Studio Monthly update or a later one delivered by Talend. For more information, check with your
administrator.

8. Do the following according to your selection in the previous step.


• If you select HANA JDBC in the previous step and click Next, specify the SAP HANA database connection parameters
if you need to retrieve the metadata of Advanced Data Store Objects using this centralized connection later. You
can optionally click Check to validate the SAP HANA database connection details.

The following table describes the fields listed in the dialog box.


Property Description

Db Host The IP address or hostname of the database.

Db Port The listening port number of the database.

Db Schema The table schema name.

Db Username and Db Password The database user authentication data.

Db Additional Parameters The additional parameters for connecting the SAP HANA
database.

• Select SAP JCO 3 from the ADSO SAP Connection drop-down list if you want to create the connection using SAP
JCO3.

9. Click Finish to save the settings.


The newly created SAP connection metadata will be saved under the Metadata > SAP Connections node in the
Repository tree view. Now you can drag and drop the SAP connection node onto your Job design workspace as an SAP
component, with the connection details automatically filled.
If you need to further edit an SAP connection, right-click the connection node and select Edit SAP Connection from the
contextual menu to open this wizard again and make your modifications.

Retrieving SAP tables

About this task


This section shows you how to retrieve SAP tables and table schemas of interest, and preview data in tables from the
connected SAP system using the SAP metadata wizard.

Warning: If you are working on a Git managed project while the Manual lock option is selected in Talend Administration
Center, be sure to lock manually your connection in the Repository tree view before retrieving or updating table schemas
for it. Otherwise the connection is read-only and the Finish button of the wizard is not operable. For more information
about locking and unlocking a project item and on different lock types, see Working collaboratively on project items on
page 26.

Procedure
1. In the Repository tree view, right-click the newly created SAP connection and select Retrieve SAP Table from the
contextual menu. The SAP Table wizard dialog box opens up.


2. In the Name and Description fields, enter the filter condition for the table name or table description if needed. Then
click Search and all the SAP tables that meet the filter condition will be listed in the table.
3. Select one or more tables of interest by selecting the corresponding check boxes in the Name column. The tables
selected will finally be saved in the Repository and the tables unselected will be removed from the Repository if they
already exist in the Repository.
4. Click Next to proceed to the next step.


All selected tables are listed in the Table Name area. You can remove the table(s) you have already selected by clicking
Remove Table.
Click Refresh Table and the latest table schema will be displayed in the Current Table area.
Click Refresh Preview to preview data in the selected table. If an Error Message dialog box pops up and, when you
click Details, it shows DATA_BUFFER_EXCEEDED error information, you need to edit the SAP connection to add the
property api.use_z_talend_read_table and set its value to true. For more information, see Setting up an SAP
connection on page 333.


Modify the schema of the selected table in the Current Table area if needed. The Ref Table column value will be lost if
you modify the Technical Name or Talend Name column value.
5. Click Finish. The tables and their schema of interest will be saved in the SAP Tables folder under the SAP connection
node in the Repository tree view. You can now drag and drop any table node onto your Job design workspace as a
tELTSAPInput component or a tSAPTableInput component, with all metadata information automatically filled.
If you need to further edit a table, right-click the table and select Edit Table from the contextual menu to open this
wizard again and make your modifications.

Retrieving SAP Business Content Extractors


This section shows you how to retrieve SAP Business Content Extractors using the SAP metadata wizard.

Note: To retrieve any custom extractor, you need to first release it by adding its name into the SAP table ROOSATTR.
This can be done using the SAP program RODPS_OS_EXPOSE whose transaction code is SE38. For more information, see
Releasing Extractors for use by the ODP API.

Procedure
1. In the Repository tree view, right-click the newly created SAP connection and select Retrieve Business Content Extractor
from the contextual menu. The SAP BI Content Extractor wizard opens up.

2. In the Name field, enter the filter condition for the SAP business content extractor name if needed. Then click Search
and all extractors that meet the filter condition will be listed in the table.
3. Select one or more extractors of interest by selecting the corresponding check boxes in the Name column. The
extractors selected will finally be saved in the Repository and the extractors unselected will be removed from the
Repository if they already exist in the Repository.
4. Click Next to proceed to the next step.


All selected extractors are listed in the BI Content Extractor Name area. You can remove the extractor(s) you have
already selected by clicking Remove BI Content Extractor.
Click Refresh BI Content Extractor and the latest schema will be displayed in the Current BI Content Extractor area.
Click Detailed Schema to open the schema dialog box and view the schema column details of the extractor.
5. Click Finish. The extractors and their schema of interest will be saved in the SAP BI Content Extractor folder under the
SAP connection node in the Repository tree view. You can now drag and drop any extractor node onto your Job design
workspace as a tSAPODPInput component, with all metadata information automatically filled.
If you need to further edit an extractor, right-click the extractor and select Edit Business Content Extractor from the
contextual menu to open this wizard again and make your modifications.

Retrieving an SAP function

About this task


This section shows you how to retrieve an SAP function and the schema describing the input and output data of the function
using the SAP metadata wizard.

Procedure
1. In the Repository tree view, right-click the newly created SAP connection and select Retrieve Bapi from the contextual
menu. The SAP function wizard opens up.


2. In the Name Filter field, enter the filter condition for the function name if needed. To use the custom function
Z_TALEND_READ_TABLE, you need to install an SAP module provided under the directory <Talend_Studio>
\plugins\org.talend.libraries.sap_<version>\resources. For how to install the SAP module, see the
file readme.txt under the directory.
3. Click Search. All SAP functions that meet the filter condition will be listed in the Functions area.

Note: z-BAPI or customized BAPI are also supported in Talend Studio.

4. Double-click the name of the function of interest in the Functions area. The input and output parameters will be
displayed in the Parameter tab.
5. Click the Test-it view to test the recuperation of the SAP data.


6. Click the Value cell for the corresponding input parameter that needs an input value, and then click the [...] button in
the cell and enter the value in the pop-up Setup input parameter dialog box. When done, click OK to validate and save
the settings.

7. Click Run to get the values of the output parameters returned by the function in the Output Parameters(Preview) table.


8. Click Next to proceed to the next step.


9. Select the input and output schemas of interest and click Finish. The function and its schemas of interest will be saved
in the SAP Bapi folder under your SAP connection node in the Repository tree view. You can now drag and drop any
function node onto your Job design workspace as a tSAPBapi component, with all metadata information automatically
filled.


If you need to further edit the metadata of a function, right-click the function and select Edit Bapi from the contextual
menu to open this wizard again and make your modifications.
10. You can also retrieve the input and output schemas as XML metadata in either of the following ways:
• Select the Import schema as xml metadata check box and the input and output schemas of interest.

• Right-click the name of the function that you have just retrieved under the SAP Bapi folder and select Retrieve As
Xml Metadata from the contextual menu.
The selected schema will be saved under the File xml node in the Repository tree view. For the usage of the XML
metadata, see the section about retrieving data from an SAP system by calling a BAPI function using document type
parameters at SAP.


Retrieving SAP BW objects metadata

About this task


This section shows you how to retrieve the metadata of the SAP BW Advanced Data Store Object, Data Source, Data Store
Object, InfoCube or InfoObject objects using the SAP metadata wizard.
Before reading data from Data Source and InfoCube objects or writing data to direct updatable Data Store objects, you need
to install some custom function modules on your SAP system. For how to install the modules, see the file readme.txt
provided under the directory <Talend_Studio>\plugins\org.talend.libraries.sap_<version>\resources.
For more information about these SAP function modules, see SAP function modules shipped with Talend Studio on Talend
Help Center (https://help.talend.com).

Procedure
1. In the Repository tree view, right-click the newly created SAP connection and select Retrieve SAP BW metadata from
the contextual menu. The SAP BW Table Wizard dialog box opens up.


2. In the Search in drop-down list, select the type of the SAP BW objects whose table metadata you want to retrieve.
3. In the Name field, enter the filter criteria for the object name to narrow your search if needed.
In the Description field, enter the filter criteria for the object description to narrow your search if needed.
Note that for the Data Store Object, InfoCube and InfoObject types, the filter criteria for the Name field and the
Description field act together as an OR operator, that is to say, all objects that match either the filter criteria for the
Name field or the filter criteria for the Description field will be returned.
4. For the Data Source, InfoCube and InfoObject objects, you can select the data type from the Type drop-down list to filter
the search results.
5. For the Data Source objects, you can also enter the filter condition for the Data Source system name if needed.
6. Click Search and all the SAP BW objects that match the criteria will be listed in the table. Select one or more objects of
interest by selecting the corresponding check boxes in the Name column and then wait until the Creation Status column
values for all the selected objects become Success.


The tables and their schemas of the selected objects will finally be saved in the Repository and the tables of the
unselected objects will be removed from the Repository if they already exist in the Repository.
For the InfoObject type objects, only the Attribute, Hierarchy and Text information can be extracted, and the number of
the columns for each type of information is displayed in the Column Number field with the format A[X] H[Y] T[Z], where
X, Y and Z represent the number of the columns for the Attribute, Hierarchy and Text information respectively.
7. Click Next to proceed to the next step.


All tables of the selected objects are listed in the Table Name area. For the InfoObject table, the type information is
appended to the name of each table. You can remove the table(s) by clicking Remove Table.
Click Refresh Table and the latest table schema will be displayed in the Current Table area. You can modify the schema
of the selected table in the Current Table area if needed.
Click Refresh Preview to preview data in the selected table if needed. The Refresh Preview button is not available when
you search the Data Source type objects.
8. Click Finish. The tables and their schemas will be saved in the folder for the corresponding object type in the
Repository tree view. You can now drag and drop any SAP BW table node onto your Job design workspace as an SAP BW
component, with all metadata information automatically filled.

If you need to further edit an SAP BW table metadata, right-click the table node and select the corresponding item for
editing the object from the contextual menu to open this wizard again and make your modifications.

Creating a file from SAP IDOC

About this task


With this procedure, you will be able to connect to an IDoc file in your SAP server and create an XML file from it. To do so:

Procedure
1. Right-click the SAP connection you just created.


2. Click Create SAP IDoc on the right-click menu.

3. In the iDoc name field, give a name to your connection to the SAP IDoc file.
4. In the Program Id field, fill in the program identifier as it is defined in the RFC destination you want to use.
5. In the Gateway Service field, fill in the name of the service that enables the Talend system to communicate with the SAP
system. To get the service name, you can check the services file in the C:\WINDOWS\system32\drivers\etc\
folder of the workstation on which the SAP server is installed.
6. In the Output Format area, you can select XML and/or HTML check boxes according to the type of output you want to
generate from SAP IDoc.
7. Click Browse... to set the file(s) path and name.


8. Click Finish to close the dialog box and validate the creation of the IDoc file connection.

Results
The new connection displays under the SAP iDocs node of your SAP connection in the Repository tree view. You can now use
it with the tSAPIDocInput and tSAPIDocOutput components.
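The Gateway Service name mentioned in step 5 typically follows the standard SAP convention sapgwNN, where NN is the instance number of the SAP system; the exact name depends on your installation. A hypothetical entry in the services file might look like the following:

sapgw00    3300/tcp    # SAP gateway for instance 00

In that case, you would enter sapgw00 in the Gateway Service field.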

Centralizing File Delimited metadata


If you often need to read data from and/or write data to delimited files, you may want to centralize their metadata in
the Repository for easy reuse. File Delimited metadata can be used to define the properties of tFileInputDelimited,
tFileOutputDelimited, and t*OutputBulk components.

Note:
The file schema creation is very similar for all types of file connections: Delimited, Positional, Regex, XML, or Ldif.

Unlike the database connection wizard, the New Delimited File wizard gathers both file connection and schema definitions
in a four-step procedure.
To create a File Delimited connection from scratch, expand Metadata in the Repository tree view, right-click File Delimited
and select Create file delimited from the contextual menu to open the file metadata setup wizard.

To centralize a file connection and its schema you have defined in a Job, click the icon in the Basic settings view of the
relevant component with its Property Type set to Built-in to open the file metadata setup wizard.
Then define the general properties and file schema in the wizard.
Now you can drag and drop the file connection or any schema of it from the Repository tree view onto the design workspace
as a new component or onto an existing component to reuse the metadata. For further information about how to use the
centralized metadata in a Job, see Using centralized metadata in a Job on page 513 and Setting a repository schema in a
Job on page 76.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file delimited to open the
file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and select
Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema from the
contextual menu.

Defining the general properties

Procedure
1. In the file metadata setup wizard, fill in the Name field, which is mandatory, and the Purpose and Description fields if
you choose to do so. The information you provide in the Description field will appear as a tooltip when you move your
mouse pointer over the file connection.


2. If needed, set the version and status in the Version and Status fields respectively. You can also manage the version and
status of a repository item in the Project Settings dialog box. For more information, see Upgrading the version of project
items on page 578 and Status management on page 580 respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File delimited node to hold your
newly created file connection. Note that you cannot select a folder if you are editing an existing connection, but you can
drag and drop it to a new folder whenever you want.
4. Click Next when completed with the general properties.

Defining the file path and format

Procedure
1. Specify the full path of the source file in the File field, or click the Browse... button to search for the file.

Note: The Universal Naming Convention (UNC) path notation is not supported. If your source file is on a LAN host,
you can first map the network folder into a local drive.


2. Select the OS Format the file was created in. This information is used to prefill subsequent step fields. If the list doesn't
include the appropriate format, ignore it.
3. The File viewer gives an instant picture of the loaded file. Check the file consistency, the presence of a header and, more generally, the file structure.
4. Click Next to proceed to the next step.

Defining the file parsing parameters

About this task


On this view, you can refine the various settings of your file so that the file schema can be properly retrieved.


Procedure
1. Set the Encoding type, and the Field and Row separators in the File Settings area.

2. Depending on your file type (CSV or delimited), set the Escape and Enclosure characters to be used.
3. If the file preview shows a header message, exclude the header from the parsing. Set the number of header rows to be
skipped. Also, if you know that the file contains footer information, set the number of footer lines to be ignored.


4. The Limit of Rows area allows you to restrict the extent of the file being parsed. If needed, select the Limit check box and set or select the desired number of rows.
5. In the File Preview panel, view the new settings impact.
6. Select the Set heading row as column names check box to use the first parsed row as labels for the schema columns. Note that the number of header rows to be skipped is then incremented by 1.

7. Click Refresh on the preview panel for the settings to take effect and view the result on the viewer.
8. Click Next to proceed to the final step to check and customize the generated file schema.
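As an illustration, consider a small, hypothetical comma-delimited file with one header row:

id,name,city
1,"Smith, John",Paris
2,"Doe, Jane",Nantes

With the field separator set to the comma, the enclosure character set to the double quote, and the Set heading row as column names check box selected, the comma inside "Smith, John" is read as part of the value rather than as a separator, and id, name and city become the schema column names.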

Checking and customizing the file schema

About this task


The last step shows the Delimited File schema generated. You can customize the schema using the toolbar underneath the
table.


Procedure
1. If the Delimited file on which the schema is based has been changed, use the Guess button to generate the schema again. Note that if you customized the schema, the Guess feature does not retain these changes.
2. Modify the schemas if needed.

Warning: Avoid using any Java reserved keyword as a schema column name.

Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
3. Click Finish. The new schema is displayed under the relevant File Delimited connection node in the Repository tree
view.


Centralizing File Positional metadata


If you often need to read data from and/or write data to certain positional files, you may want to centralize their metadata
in the Repository for easy reuse. File Positional metadata can be used to define the properties of tFileInputPositional,
tFileOutputPositional, and tFileInputMSPositional components.
The New Positional File wizard gathers both file connection and schema definitions in a four-step procedure.
To create a File Positional connection from scratch, expand Metadata in the Repository tree view, right-click File positional
and select Create file positional from the contextual menu to open the file metadata setup wizard.

To centralize a file connection and its schema you have defined in a Job, click the icon in the Basic settings view of the
relevant component with its Property Type set to Built-in to open the file metadata setup wizard.
Then define the general properties and file schema in the wizard.
The new schema is displayed under the relevant File positional connection node in the Repository tree view. You can drop
the defined metadata from the Repository onto the design workspace as a new component or onto an existing component
to reuse the metadata. For further information about how to use the centralized metadata in a Job, see Using centralized
metadata in a Job on page 513 and Setting a repository schema in a Job on page 76.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file positional to open the
file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and select
Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema from the
contextual menu.

Defining the general properties of the File Positional connection

Procedure
1. In the file metadata setup wizard, fill in the Name field, which is mandatory, and the Purpose and Description fields if
you choose to do so. The information you provide in the Description field will appear as a tooltip when you move your
mouse pointer over the file connection.


2. If needed, set the version and status in the Version and Status fields respectively. You can also manage the version
and status of a Repository item in the Project Settings dialog box. For more information, see Upgrading the version of
project items on page 578 and Status management on page 580 respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File positional node to hold your
newly created file connection. Note that you cannot select a folder if you are editing an existing connection, but you can
drag and drop it to a new folder whenever you want.
4. Click Next when completed with the general properties.

Defining the file path, format and marker positions

Procedure
1. Specify the full path of the source file in the File field, or click the Browse... button to search for the file.

Note: The Universal Naming Convention (UNC) path notation is not supported. If your source file is on a LAN host,
you can first map the network folder into a local drive.

2. Select the Encoding type and the OS Format the file was created in. This information is used to prefill subsequent step
fields. If the list doesn't include the appropriate format, ignore the OS format.


The file is loaded and the File Viewer area shows a file preview and allows you to place your position markers.
3. Click the file preview and set the markers against the ruler to define the file column properties. The orange arrow helps
you refine the position.


The Field Separator and Marker Position fields are automatically filled with a series of figures separated by commas.
The figures in the Field Separator are the number of characters between the separators, which represent the lengths
of the columns of the loaded file. The asterisk symbol means all remaining characters on the row, starting from the
preceding marker position. You can change the figures to specify the column lengths precisely.
The Marker Position field shows the exact position of each marker on the ruler, in units of characters. You can change
the figures to specify the positions precisely.
To move a marker, press its arrow and drag it to the new position. To remove a marker, press its arrow and drag it towards the ruler until an icon appears.
4. Click Next to continue.
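As a minimal, hypothetical illustration, consider a fixed-width file in which the first column occupies 4 characters, the second one 10 characters, and the last column takes all remaining characters on the row:

0001Smith     Paris
0002Doe       Nantes

Placing markers after the 4th and the 14th characters would give a Field Separator value of 4,10,* and a Marker Position value of 4,14, producing three schema columns.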

Defining the parsing parameters of your positional file

About this task


On this view, you define the file parsing parameters so that the file schema can be properly retrieved.
At this stage, the preview shows the file columns based on the markers' positions.


Procedure
1. Set the Field and Row separators in the File Settings area.
• If needed, change the figures in the Field Separator field to specify the column lengths precisely.
• If the row separator of your file is not the standard EOL (end of line), select Custom String from the Row Separator
list and specify the character string in the Corresponding Character field.
2. If your file has any header rows to be excluded from the data content, select the Header check box in the Rows To Skip
area and define the number of rows to be ignored in the corresponding field. Also, if you know that the file contains
footer information, select the Footer check box and set the number of rows to be ignored.
3. The Limit of Rows area allows you to restrict the extent of the file being parsed. If needed, select the Limit check box and set or select the desired number of rows.
4. If the file contains column labels, select the Set heading row as column names check box to transform the first parsed
row to labels for schema columns. Note that the number of header rows to be skipped is then incremented by 1.
5. Click Refresh Preview on the Preview panel for the settings to take effect and view the result on the viewer.
6. Click Next to proceed to the next view to check and customize the generated file schema.

Checking and customizing the schema of your positional file

About this task


Step 4 shows the end schema generated. Note that any character which could be misinterpreted by the program is replaced
by neutral characters. Underscores replace asterisks, for example.


Procedure
1. Rename the schema (by default, metadata) and edit the schema columns as needed.

Warning: Avoid using any Java reserved keyword as a schema column name.

Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. To generate the Positional File schema again, click the Guess button. Note, however, that any edits to the schema might be lost after "guessing" the file-based schema.
3. When done, click Finish to close the wizard.


Centralizing File Regex metadata


Regex file schemas are used for files, such as log files, that are parsed using regular expressions. If you often need to connect to such a file, you may want to centralize its connection and schema information in the Repository for easy reuse.
The New RegEx File wizard gathers both file connection and schema definitions in a four-step procedure.

Note:
This procedure requires some advanced knowledge on regular expression syntax.

To create a File Regex connection from scratch, expand the Metadata node in the Repository tree view, right-click File Regex
and select Create file regex from the contextual menu to open the file metadata setup wizard.

To centralize a file connection and its schema you have defined in a Job, click the icon in the Basic settings view of the
relevant component with its Property Type set to Built-in to open the file metadata setup wizard.
Then define the general properties and file schema in the wizard.
The new schema is displayed under the relevant File regex node in the Repository tree view. You can drop the defined
metadata from the Repository onto the design workspace as a new component or onto an existing component to reuse the
metadata. For further information about how to use the centralized metadata in a Job, see Using centralized metadata in a
Job on page 513 and Setting a repository schema in a Job on page 76.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file regex to open the file
metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and select
Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema from the
contextual menu.

Defining the general properties of the File Regex connection

Procedure
1. In the file metadata setup wizard, fill in the Name field, which is mandatory, and the Purpose and Description fields if
you choose to do so. The information you provide in the Description field will appear as a tooltip when you move your
mouse pointer over the file connection.


2. If needed, set the version and status in the Version and Status fields respectively. You can also manage the version and
status of a repository item in the Project Settings dialog box. For more information, see Upgrading the version of project
items on page 578 and Status management on page 580 respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File regex node to hold your newly
created file connection. Note that you cannot select a folder if you are editing an existing connection, but you can drag
and drop it to a new folder whenever you want.
4. Click Next when completed with the general properties.

Defining the path and format of your Regex file

Procedure
1. Specify the full path of the source file in the File field, or click the Browse... button to search for the file.

Note: The Universal Naming Convention (UNC) path notation is not supported. If your source file is on a LAN host,
you can first map the network folder into a local drive.

2. Select the Encoding type and the OS Format the file was created in. This information is used to prefill subsequent step
fields. If the list doesn't include the appropriate format, ignore the OS format.


The file viewer gives an instant picture of the loaded file.


3. Click Next to define the schema structure.

Defining the parsing parameters of your Regex file

About this task


On this view, you define the file parsing parameters so that the file schema can be properly retrieved.

Procedure
1. Set the Field and Row separators in the File Settings area.
• If needed, change the figures in the Field Separator field to specify the column lengths precisely.
• If the row separator of your file is not the standard EOL, select Custom String from the Row Separator list and
specify the character string in the Corresponding Character field.
2. In the Regular Expression settings panel, enter the regular expression to be used to delimit the file.

Warning:
Make sure to enclose the Regex code in single or double quotes, as appropriate.


3. If your file has any header rows to be excluded from the data content, select the Header check box in the Rows To Skip
area and define the number of rows to be ignored in the corresponding field. Also, if you know that the file contains
footer information, select the Footer check box and set the number of rows to be ignored.
4. The Limit of Rows area allows you to restrict the extent of the file being parsed. If needed, select the Limit check box and set or select the desired number of rows.
5. If the file contains column labels, select the Set heading row as column names check box to transform the first parsed
row to labels for schema columns. Note that the number of header rows to be skipped is then incremented by 1.
6. Then click Refresh preview to take the changes into account. The button changes to Stop until the preview is refreshed.

7. Click Next to proceed to the next view where you can check and customize the generated Regex File schema.
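For illustration, assume a hypothetical log file in which each row looks like the following:

2022-06-27 10:15:32 INFO Connection opened

A regular expression such as the one below, entered in double quotes in the Regular Expression settings panel, would split each row into four columns (date, time, level and message):

"^(\S+)\s+(\S+)\s+(\S+)\s+(.*)$"

Adjust the expression to the actual structure of your own file.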

Checking and customizing the schema of your Regex file

Procedure
1. Rename the schema (by default, metadata) and edit the schema columns as needed.

Warning: Avoid using any Java reserved keyword as a schema column name.

Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. To retrieve or update the Regex File schema, click Guess. Note, however, that any edits to the schema might be lost after guessing the file-based schema.
3. When done, click Finish to close the wizard.

Centralizing XML file metadata


If you often need to connect to an XML file, you may want to use the New Xml File wizard to centralize your connection to
the file and the schema retrieved from it in your Repository for easy reuse.
Depending on the option you select, the wizard helps you create either an input or an output file connection. In a Job,
the tFileInputXML and tExtractXMLField components use the input connection created to read XML files, whereas
tAdvancedFileOutputXML uses the output schema created to either write an XML file, or to update an existing XML file.


For further information about reading an XML file, see Setting up XML metadata for an input file on page 369.
For further information about writing an XML file, see Setting up XML metadata for an output file on page 379.
To create an XML file connection from scratch, expand the Metadata node in the Repository tree view, right-click File XML
and select Create file XML from the contextual menu to open the file metadata setup wizard.

To centralize a file connection and its schema you have defined in a Job, click the icon in the Basic settings view of the
relevant component with its Property Type set to Built-in to open the file metadata setup wizard.
Then define the general properties and file schema in the wizard.

Setting up XML metadata for an input file


This section describes how to define a file connection and upload an XML schema for an input file.

Defining the general properties of the File XML connection

About this task


In this step, the general metadata properties such as the Name, Purpose and Description are set.

Procedure
1. In the file metadata setup wizard, fill in the Name field, which is mandatory, and the Purpose and Description fields if
you choose to do so. The information you provide in the Description field will appear as a tooltip when you move your
mouse pointer over the file connection.

Note:
When you enter the general properties of the metadata to be created, you need to define the type of connection as
either input or output. It is therefore advisable to enter information that will help you distinguish between your
input and output schemas.


2. If needed, set the version and status in the Version and Status fields respectively. You can also manage the version
and status of a Repository item in the Project Settings dialog box. For more information, see Upgrading the version of
project items on page 578 and Status management on page 580 respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File XML node to hold your newly
created file connection. Note that you cannot select a folder if you are editing an existing connection, but you can drag
and drop it to a new folder whenever you want.
4. Click Next to select the type of metadata.

Setting the type of metadata (input)

About this task


In this step, the type of metadata is set as either input or output. For this procedure, the metadata of interest is input.

Procedure
1. In the dialog box, select Input XML.


2. Click Next to upload the input file.

Uploading an XML file

About this task


This procedure describes how to upload an XML file to obtain the XML tree structure.
The example input XML file used to demonstrate this step contains some contact information, and its structure is as follows:

<contactInfo>
<contact>
<id>1</id>
<firstName>Michael</firstName>
<lastName>Jackson</lastName>
<company>Talend</company>
<city>Paris</city>
<phone>2323</phone>
</contact>
<contact>
<id>2</id>
<firstName>Elisa</firstName>
<lastName>Black</lastName>
<company>Talend</company>
<city>Paris</city>
<phone>4499</phone>
</contact>
...
</contactInfo>

To upload an XML file, do the following:

Procedure
1. Click Browse... and browse your directory to the XML file to be uploaded. Alternatively, enter the access path to the file.
The Schema Viewer area displays a preview of the XML structure. You can expand and visualize every level of the file's
XML tree structure.


2. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
3. In the Limit field, enter the number of columns on which the XPath query is to be executed, or 0 if you want to run it
against all of the columns.
4. Click Next to define the schema parameters.

Uploading an XSD file

About this task


This procedure describes how to upload an XSD file to obtain the XML tree structure.


An XSD file is used to define the schema of XML files. The structure and element data types of the example XML file above
can be described using the following XSD, which is used as the example XSD input in this section.

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">


<xs:element name="contactInfo">
<xs:complexType>
<xs:sequence>
<xs:element maxOccurs="unbounded" ref="contact"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="contact">
<xs:complexType>
<xs:sequence>
<xs:element ref="id"/>
<xs:element ref="firstName"/>
<xs:element ref="lastName"/>
<xs:element ref="company"/>
<xs:element ref="city"/>
<xs:element ref="phone"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="id" type="xs:integer"/>
<xs:element name="firstName" type="xs:NCName"/>
<xs:element name="lastName" type="xs:NCName"/>
<xs:element name="company" type="xs:NCName"/>
<xs:element name="city" type="xs:NCName"/>
<xs:element name="phone" type="xs:integer"/>
</xs:schema>

For more information on XML Schema, see http://www.w3.org/XML/Schema.

Note:
When loading an XSD file,
• the data will be saved in the Repository, and therefore the metadata will not be affected by the deletion or
displacement of the file.
• you can choose an element as the root of your XML tree.

To load an XSD file, do the following:

Procedure
1. Click Browse... and browse your directory to the XSD file to be uploaded. Alternatively, enter the access path to the file.
2. In the dialog box that appears, select an element from the Root list as the root of your XML tree, and click OK.

The Schema Viewer area displays a preview of the XML structure. You can expand and visualize every level of the file's
XML tree structure.


3. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
4. In the Limit field, enter the number of columns on which the XPath query is to be executed, or 0 if you want to run it
against all of the columns.
5. Click Next to define the schema parameters.

Defining the schema

About this task


In this step, the schema parameters are set.


The schema definition window is composed of four views:

• Source Schema: tree view of the XML file.
• Target Schema: extraction and iteration information.
• Preview: preview of the target schema, together with the input data of the selected columns displayed in the defined order. Note that the preview functionality is not available if you loaded an XSD file.
• File Viewer: preview of the raw data.

First define an XPath loop and the maximum number of times the loop can run. To do so:


Procedure
1. Populate the XPath loop expression field with the absolute XPath expression for the node to be iterated upon. There
are two ways to do this, either:
• enter the absolute XPath expression for the node to be iterated upon (Enter the full expression or press Ctrl+Space
to use the autocompletion list),
• drop a node from the tree view under Source schema onto the Absolute XPath expression field.
An orange arrow links the node to the corresponding expression.

Note: The XPath loop expression field is mandatory.

2. In the Loop limit field, specify the maximum number of times the selected node can be iterated, or -1 if you want to run
it against all of the rows.
3. Define the fields to be extracted by dragging the node(s) of interest from the Source Schema tree into the Relative or absolute XPath expression fields.

Note: You can select several nodes to drop on the table by pressing Ctrl or Shift and clicking the nodes of interest. The arrows linking the nodes selected in the Source Schema to the Fields to extract table are blue; the other ones are gray.

4. If needed, you can add as many columns to be extracted as necessary, delete columns or change the column order using
the toolbar:

Add or delete a column using the [+] and x buttons.

Change the order of the columns using the arrow buttons.
5. In the Column name fields, enter labels for the columns to be displayed in the schema Preview area.
6. Click Refresh Preview to display a preview of the target schema. The fields are consequently displayed in the schema
according to the defined order.


Note: The preview functionality is not available if you loaded an XSD file.

7. Click Next to check and edit the end schema.
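For example, with the contact file shown earlier, the XPath loop expression would typically be "/contactInfo/contact", with a Loop limit of -1 to iterate over every contact, and the fields to extract would be defined with relative XPath expressions such as:

id
firstName
lastName
company
city
phone

Each relative expression is resolved against the node selected as the XPath loop expression, so the resulting schema contains one column per child element of contact.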

Finalizing the end schema

About this task


The schema generated displays the columns selected from the XML file and allows you to further define the schema.


Procedure
1. If needed, rename the metadata in the Name field (metadata, by default), add a Comment, and make further
modifications, for example:
• Redefine the columns by editing the relevant fields.

Add or delete a column using the [+] and x buttons.

Change the order of the columns using the arrow buttons.

Warning: Avoid using any Java reserved keyword as a schema column name.

Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.


• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. If the XML file which the schema is based on has been changed, click the Guess button to generate the schema again.
Note that if you have customized the schema, the Guess feature does not retain these changes.
3. Click Finish. The new file connection, along with its schema, appears under the File XML node in the Repository tree
view.

Results
Now you can drag and drop the file connection or any schema of it from the Repository tree view onto the design workspace
as a new tFileInputXML or tExtractXMLField component or onto an existing component to reuse the metadata. For further
information about how to use the centralized metadata in a Job, see Using centralized metadata in a Job on page 513 and
Setting a repository schema in a Job on page 76.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file xml to open the file
metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and select
Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema from the
contextual menu.

Setting up XML metadata for an output file


This section describes how to define a file connection and upload an XML schema for an output file.

Defining the general properties of the File XML connection for an output file

About this task


In this step, the general metadata properties such as the Name, Purpose and Description are set.

Procedure
1. In the file metadata setup wizard, fill in the Name field, which is mandatory, and the Purpose and Description fields if
you choose to do so. The information you provide in the Description field will appear as a tooltip when you move your
mouse pointer over the file connection.

Note:
When you enter the general properties of the metadata to be created, you need to define the type of connection as
either input or output. It is therefore advisable to enter information that will help you distinguish between your
input and output schemas.


2. If needed, set the version and status in the Version and Status fields respectively. You can also manage the version and
status of a repository item in the Project Settings dialog box. For more information, see Upgrading the version of project
items on page 578 and Status management on page 580 respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File XML node to hold your newly
created file connection. Note that you cannot select a folder if you are editing an existing connection, but you can drag
and drop it to a new folder whenever you want.
4. Click Next to select the type of metadata.

Setting the type of metadata (output)

About this task


In this step, the type of metadata is set as either input or output. For this procedure, the metadata of interest is output.

Procedure
1. From the dialog box, select Output XML.


2. Click Next to define the output file, either from an XML or XSD file or from scratch.

Defining the output file structure using an existing XML file

About this task


In this step, you will choose whether to create your file manually or from an existing XML or XSD file. If you choose the Create manually option, you will have to configure your schema, source and target columns yourself at step 4 of the wizard. The file will be created in a Job using an XML output component such as tAdvancedFileOutputXML.
To create the output XML structure from an XML file, do the following:

Procedure
1. Select the Create from a file option.
2. Click the Browse... button next to the XML or XSD File field, browse to the access path to the XML file the structure of
which is to be applied to the output file, and double-click the file.
The File Viewer area displays a preview of the XML structure, and the File Content area displays a maximum of the first
50 rows of the file.


3. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
4. In the Limit field, enter the number of columns on which the XPath query is to be executed, or enter 0 if you want it to
be run against all of the columns.
5. In the Output File field, in the Output File Path zone, browse to or enter the path to the output file. If the file does not
exist as yet, it will be created during the execution of a Job using a tAdvancedFileOutputXML component. If the file
already exists, it will be overwritten.
6. Click Next to define the schema.

Defining the output file structure using an XSD file

About this task


This procedure describes how to define the output XML file structure from an XSD file.


Note:
When loading an XSD file,
• the data will be saved in the Repository, and therefore the metadata will not be affected by the deletion or
displacement of the file.
• you can choose an element as the root of your XML tree.

To create the output XML structure from an XSD file, do the following:

Procedure
1. Select the Create from a file option.
2. Click the Browse... button next to the XML or XSD File field, browse to the access path to the XSD file the structure of
which is to be applied to the output file, and double-click the file.
3. In the dialog box that appears, select an element from the Root list as the root of your XML tree, and click OK.
The File Viewer area displays a preview of the XML structure, and the File Content area displays a maximum of the first
50 rows of the file.


4. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
5. In the Limit field, enter the number of columns on which the XPath query is to be executed, or enter 0 if you want it to
be run against all of the columns.
6. In the Output File field, in the Output File Path zone, browse to or enter the path to the output file. If the file does not
exist as yet, it will be created during the execution of a Job using a tAdvancedFileOutputXML component. If the file
already exists, it will be overwritten.
7. Click Next to define the schema.

Defining the schema of your output file

About this task


Upon completion of the previous operations, the columns in the Linker Source area are automatically mapped to the corresponding ones in the Linker Target area, as indicated by blue arrow links.


In this step, you need to define the output schema. The following list describes how:

To create a schema from scratch, or edit the source schema columns to pass to the output schema:
In the Linker Source area, click the Schema Management button to open the schema editor.

To specify the maximum number of columns to be displayed in the schema list:
In the Field Limit field of the Linker Source area, specify the maximum number of columns you want to display if the number of columns of the source file exceeds the limit defined in the Studio preferences and any columns you want to pass to the output schema are not shown in the schema list. Then click the refresh button.

To define a loop element:
In the Linker Target area, right-click the element of interest and select Set As Loop Element from the contextual menu.

Note: Defining an element to run a loop on is mandatory.

To define a group element:
In the Linker Target area, right-click the element of interest and select Set As Group Element from the contextual menu.

Note: You can set a parent element of the loop element as a group element, on the condition that the parent element is not the root of the XML tree.

To create a child element for an element:
In the Linker Target area,
• Right-click the element of interest and select Add Sub-element from the contextual menu, enter a name for the sub-element in the dialog box that appears, and click OK, or
• Select the element of interest, click the [+] button at the bottom, select Create as sub-element in the dialog box that appears, and click OK. Then, enter a name for the sub-element in the next dialog box and click OK.

To create an attribute for an element:
In the Linker Target area,
• Right-click the element of interest and select Add Attribute from the contextual menu, enter a name for the attribute in the dialog box that appears, and click OK, or
• Select the element of interest, click the [+] button at the bottom, select Create as attribute in the dialog box that appears, and click OK. Then, enter a name for the attribute in the next dialog box and click OK.

To create a name space for an element:
In the Linker Target area,
• Right-click the element of interest and select Add Name Space from the contextual menu, enter a name for the name space in the dialog box that appears, and click OK, or
• Select the element of interest, click the [+] button at the bottom, select Create as name space in the dialog box that appears, and click OK. Then, enter a name for the name space in the next dialog box and click OK.

To delete one or more elements/attributes/name spaces:
In the Linker Target area,
• Right-click the element(s)/attribute(s)/name space(s) of interest and select Delete from the contextual menu, or
• Select the element(s)/attribute(s)/name space(s) of interest and click the x button at the bottom, or
• Select the element(s)/attribute(s)/name space(s) of interest and press the Delete key.

Note: Deleting an element will also delete its children, if any.

To adjust the order of one or more elements:
In the Linker Target area, select the element(s) of interest and use the arrow buttons.

To set a static value for an element/attribute/name space:
In the Linker Target area, right-click the element/attribute/name space of interest and select Set A Fix Value from the contextual menu.

Note:
• The value you set will replace any value retrieved for the corresponding column from the incoming data flow in your Job.
• You can set a static value for a child element of the loop element only, on the condition that the element does not have its own children and does not have a source-target mapping on it.

To create a source-target mapping:
Select the column of interest in the Linker Source area, drop it onto the node of interest in the Linker Target area, select Create as sub-element of target node, Create as attribute of target node, or Add linker to target node according to your need in the dialog box that appears, and click OK. If you choose an option that is not permitted for the target node, you will see a warning message and your operation will fail.

To remove a source-target mapping:
In the Linker Target area, right-click the node of interest and select Disconnect Linker from the contextual menu.

To create an XML tree from another XML or XSD file:
Right-click any schema item in the Linker Target area and select Import XML Tree from the contextual menu to load another XML or XSD file. Then, you need to create the source-target mappings manually and define the output schema all over again.

Note:
You can select and drop several fields at a time, using the Ctrl + Shift technique to make multiple selections, therefore making mapping faster. You can also make multiple selections for right-click operations.


Procedure
1. In the Linker Target area, right-click the element you want to run a loop on and select Set As Loop Element from the
contextual menu.

2. Define other output file properties as needed, and then click Next to view and customize the end schema.
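For example, reusing the contact structure shown earlier for the input file, a typical output XML tree would use contact as the loop element so that one contact element is written for each incoming row:

<contactInfo>
  <contact>
    <id/>
    <firstName/>
    <lastName/>
    <company/>
    <city/>
    <phone/>
  </contact>
</contactInfo>

In this hypothetical layout, contact is set as the loop element and each of its child elements is mapped to a column of the incoming flow in the Linker Source area.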

Finalizing the end schema of your output file

About this task


Step 5 of the wizard displays the end schema generated and allows you to further define the schema.


Procedure
1. If needed, rename the metadata in the Name field (metadata, by default), add a Comment, and make further
modifications, for example:
• Redefine the columns by editing the relevant fields.
• Add or delete a column using the [+] and x buttons.

• Change the order of the columns using the arrow buttons.
2. If the XML file which the schema is based on has been changed, click the Guess button to generate the schema again.
Note that if you have customized the schema, the Guess feature does not retain these changes.
3. Click Finish. The new file connection, along with its schema, is displayed under the relevant File XML metadata node in
the Repository tree view.

Results
Now you can drag and drop the file connection or any schema of it from the Repository tree view onto the design workspace
as a new tAdvancedFileOutputXML component or onto an existing component to reuse the metadata.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file xml to open the file
metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and select
Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema from the
contextual menu.


Centralizing File Excel metadata


If you often need to read data from and/or write data to a certain Excel spreadsheet file, you may want to centralize the
connection to the file, along with its data structure, in the Repository for easy reuse. This will save you much effort because
you will not have to define the metadata details manually in the relevant components each time you use the file.
You can centralize an Excel file connection either from an existing Excel file, or from Excel file property settings defined in a
Job.
To centralize a File Excel connection and its schema from an Excel file, expand Metadata in the Repository tree view, right-
click File Excel and select Create file Excel from the contextual menu to open the file metadata setup wizard.

To centralize a file connection and its schema you have already defined in a Job, click the icon in the Basic settings view
of the relevant component, with its Property Type set to Built-in, to open the file metadata setup wizard.
Then complete these tasks step by step following the wizard:
• Define the general information that will identify the file connection. See Defining the general properties of the File
Excel connection on page 389.
• Load the file of interest. See Loading the file on page 390.
• Parse the file to retrieve the file schema. See Parsing the file on page 391.
• Finalize the file schema. See Finalizing the end schema of your Excel file on page 392.
Now you can drag and drop the file connection or the schema of it from the Repository tree view onto the design workspace
as a new component or onto an existing component to reuse the metadata. For further information about how to use the
centralized metadata in a Job, see Using centralized metadata in a Job on page 513 and Setting a repository schema in a
Job on page 76.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file Excel to open the file
metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and select
Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema from the
contextual menu.

Defining the general properties of the File Excel connection

Procedure
1. In the file metadata setup wizard, fill in the Name field, which is mandatory, and the Purpose and Description fields if
needed. The information you provide in the Description field will appear as a tooltip when you move your mouse poin
ter over the file connection.


2. If needed, set the version and status in the Version and Status fields respectively. You can also manage the version and
status of a repository item in the Project Settings dialog box. For more information, see Upgrading the version of project
items on page 578 and Status management on page 580 respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File Excel node to hold your newly
created file connection.
4. Click Next to proceed with file settings.

Loading the file

Procedure
1. Specify the full path of the source file in the File field, or click the Browse... button to browse to the file.

Note: The Universal Naming Convention (UNC) path notation is not supported. If your source file is on a LAN host,
you can first map the network folder into a local drive.

Skip this step if you are saving an Excel file connection defined in a component because the file path is already filled in
the File field.


2. If the uploaded file is an Excel 2007 file, make sure that the Read excel2007 file format(xlsx) check box is selected.
3. By default, user mode is selected. If the uploaded xlsx file is extremely large, select Less memory consumed for large
excel(Event mode) from the Generation mode list to prevent out-of-memory errors.
4. In the File viewer and sheets setting area, view the file content and select the sheet or sheets of interest.
• From the Please select sheet drop-down list, select the sheet you want to view. The preview table displays the
content of the selected sheet.
By default the file preview table displays the first sheet of the file.
• From the Set sheets parameters list, select the check box next to the sheet or sheets you want to upload.
If you select more than one sheet, the result schema will be the combination of the structures of all the selected
sheets.
5. Click Next to continue.

Parsing the file

About this task


In this step of the wizard, you can define the various settings of your file so that the file schema can be properly retrieved.


Procedure
1. Specify the encoding, the advanced separator for numbers, and the header or footer rows to be skipped, according to your Excel file.

2. If needed, fill the First column and Last column fields with integers to specify exactly which columns are to be read in the file. For example, if you want to skip the first column because it may not contain proper data to be processed, fill the First column field with 2 to set the second column of the file as the first column of the schema.
To retrieve the schema of an Excel file you do not need to parse all the rows of the file, especially when you have
uploaded a large file. To limit the number of rows to parse, select the Limit check box in the Limit Of Rows area and set
or select the desired number of rows.

3. If your Excel file has a header row, select the Set heading row as column names check box to take into account the
heading names. Click Refresh to view the result of all the previous changes in the preview table.

4. Then click Next to continue.

Finalizing the end schema of your Excel file

About this task


The last step of the wizard shows the end schema generated and allows you to customize the schema according to your
needs.
Note that any character which could be misinterpreted by the program is replaced by neutral characters. For example,
asterisks are replaced with underscores.


Procedure
1. If needed, rename the schema (by default, metadata) and leave a comment.
Customize the schema if needed: add, remove or move schema columns, export the schema to an XML file, or replace the schema by importing a schema definition XML file using the toolbar.

Warning: Avoid using any Java reserved keyword as a schema column name.

Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. If the Excel file which the schema is based on has been changed, click the Guess button to generate the schema again.
Note that if you have customized the schema, the Guess feature does not retain these changes.
3. Click Finish. The new schema is displayed under the relevant File Excel connection node in the Repository tree view.


Centralizing File LDIF metadata


About this task
LDIF (LDAP Data Interchange Format) files are directory files in which entries are described by attributes. If you often need to read certain LDIF files, you may want to centralize the connections to these LDIF-type files and their attribute descriptions in the Repository for easy reuse. This way you will not have to define the metadata details manually in the relevant components each time you use the files.
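As a brief, hypothetical illustration, an LDIF file describes each directory entry as a block of attribute: value lines:

dn: cn=John Smith,ou=people,dc=example,dc=com
cn: John Smith
mail: [email protected]
telephoneNumber: 2323

The attributes you select in the wizard (cn, mail, and so on) become the columns of the centralized schema.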
You can centralize an LDIF file connection either from an existing LDIF file, or from the LDIF file property settings defined in
a Job.
To centralize an LDIF connection and its schema from an LDIF file, expand Metadata in the Repository tree view, right-click
File ldif and select Create file ldif from the contextual menu to open the file metadata setup wizard.

To centralize a file connection and its schema you have already defined in a Job, click the icon in the Basic settings view
of the relevant component, with its Property Type set to Built-in, to open the file metadata setup wizard.
Then complete these steps following the wizard:

Procedure
1. Fill in the general information in the relevant fields to identify the LDIF file metadata, including Name, Purpose and
Description.
The Name field is required, and the information you provide in the Description field will appear as a tooltip when you
move your mouse pointer over the file connection.

2. If needed, set the version and status in the Version and Status fields respectively. You can also manage the version and
status of a repository item in the Project Settings dialog box. For more information, see Upgrading the version of project
items on page 578 and Status management on page 580 respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File ldif node to hold your newly
created file connection.


Click Next to proceed with file settings.


4. Specify the full path of the source file in the File field, or click the Browse... button to browse to the file.

Note: The Universal Naming Convention (UNC) path notation is not supported. If your source file is on a LAN host,
you can first map the network folder into a local drive.

Skip this step if you are saving an LDIF file connection defined in a component because the file path is already filled in
the File field.

5. Check the first 50 rows of the file in the File Viewer area and click Next to continue.
6. From the list of attributes of the loaded file, select the attributes you want to include in the file schema, and click Refresh Preview to preview the selected attributes.
Then click Next to proceed with schema finalization.


7. If needed, customize the generated schema:


• Rename the schema (by default, metadata) and leave a comment.
• Add, remove or move schema columns, export the schema to an XML file, or replace the schema by importing a schema definition XML file using the tool bar.


Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
8. If the LDIF file on which the schema is based has been changed, click the Guess button to generate the schema again.
Note that if you have customized the schema, the Guess feature does not retain these changes.
9. Click Finish. The new schema is displayed under the relevant Ldif file connection node in the Repository tree view.

Results
Now you can drag and drop the file connection or the schema of it from the Repository tree view onto the design workspace
as a new component or onto an existing component to reuse the metadata. For further information about how to use the
centralized metadata in a Job, see Using centralized metadata in a Job on page 513 and Setting a repository schema in a
Job on page 76.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file ldif to open the file
metadata setup wizard.


To add a new schema to an existing file connection, right-click the connection from the Repository tree view and select
Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema from the
contextual menu.

Centralizing JSON file metadata


If you often need to use a JSON file, you may want to use the New Json File wizard to centralize the file connection, XPath
query statements, and data structure in the Repository for easy reuse.
Depending on the option you select, the wizard helps you create either an input or an output file connection. In a Job,
the tFileInputJSON and tExtractJSONFields components use the input schema created to read JSON files/fields, whereas
tWriteJSONField uses the output schema created to write a JSON field, which can be saved in a file by tFileOutputJSON or
extracted by tExtractJSONFields.
For information about setting up input JSON file metadata, see Setting up JSON metadata for an input file on page 398.
For information about setting up output JSON metadata, see Setting up JSON metadata for an output file on page 406.
In the Repository view, expand the Metadata node, right click File JSON, and select Create JSON Schema from the contextual
menu to open the New Json File wizard.

Setting up JSON metadata for an input file


This section describes how to define a file connection and upload a JSON schema for an input file.
Now you can drag and drop the file connection or the schema of it from the Repository tree view onto the design workspace
as a new tFileInputJSON or tExtractJSONFields component or onto an existing component to reuse the metadata. For further
information about how to use the centralized metadata in a Job, see Using centralized metadata in a Job on page 513 and
Setting a repository schema in a Job on page 76.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit JSON to open the file
metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and select
Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema from the
contextual menu.

Defining the general properties of the File JSON connection

Procedure
1. In the wizard, fill in the general information in the relevant fields to identify the JSON file metadata, including Name,
Purpose and Description.
The Name field is required, and the information you provide in the Description field will appear as a tooltip when you
move your mouse pointer over the file connection.

Note:
In this step, it is advisable to enter information that will help you distinguish between your input and output
connections, which will be defined in the next step.


2. If needed, set the version and status in the Version and Status fields respectively.
You can also manage the version and status of a repository item in the Project Settings dialog box. For more
information, see Upgrading the version of project items on page 578 and Status management on page 580
respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File Json node to hold your newly
created file connection.
4. Click Next to select the type of metadata.

Setting the type of metadata and loading the input file

Procedure
1. In the dialog box, select Input Json and click Next to proceed to the next step of the wizard to load the input file.


2. From the Read By list box, select the type of query to read the source JSON file.
• JsonPath: read the JSON data based on a JsonPath query.
This is the default and recommended query type: it performs better and avoids the issues that you may encounter when reading JSON data based on an XPath query.
• Xpath: read the JSON data based on an XPath query.
3. Click Browse... and browse to the JSON file to be uploaded. Alternatively, enter the full path to the file or the URL that links to the JSON file.
In this example, the input JSON file has the following content:

{"movieCollection": [
{
"type": "Action Movie",
"name": "Brave Heart",
"details": {
"release": "1995",
"rating": "5",
"starring": "Mel Gibson"
}
},
{
"type": "Action Movie",
"name": "Edge of Darkness",
"details": {
"release": "2010",
"rating": "5",
"starring": "Mel Gibson"
}
}
]}

The Schema Viewer area displays a preview of the JSON structure. You can expand and visualize every level of the file's
JSON tree structure.


4. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
5. In the Limit field, enter the number of levels of the JSON hierarchy to which you want to limit the JsonPath or XPath query, or 0 for no limit.
Setting this parameter to a value less than 5 can help prevent the wizard from hanging in case of a large JSON file.
6. Click Next to define the schema parameters.

Defining the schema of your JSON file

About this task


In this step, you will set the schema parameters.


The schema definition window is composed of four views:

• Source Schema: tree view of the JSON file.
• Target Schema: extraction and iteration information.
• Preview: preview of the target schema, together with the input data of the selected columns displayed in the defined order.
• File Viewer: preview of the JSON file's data.

Procedure
1. Populate the Path loop expression field with the absolute JsonPath or XPath expression, depending on the type of query you have selected, for the node to be iterated upon (see the example after this procedure). There are two ways to do this, either:
• enter the absolute JsonPath or XPath expression for the node to be iterated upon (enter the full expression or press
Ctrl+Space to use the autocompletion list),
• drag the loop element node from the tree view under Source schema into the Absolute path expression field of the
Path loop expression table.
An orange arrow links the node to the corresponding expression.

Note: The Path loop expression definition is mandatory.

2. In the Loop limit field, specify the maximum number of times the selected node can be iterated.
3. Define the fields to be extracted by dragging the nodes from the Source Schema tree into the Relative or absolute path
expression fields of the Fields to extract table.

Note: You can select several nodes to drop onto the table by pressing Ctrl or Shift and clicking the nodes of interest.

4. If needed, you can add as many columns to be extracted as necessary, delete columns or change the column order using
the toolbar:
• Add or delete a column using the [+] and x buttons.

• Change the order of the columns using the up and down arrow buttons.


5. If you want your file schema to have different column names than those retrieved from the input file, enter new names
in the corresponding Column name fields.
6. Click Refresh Preview to preview the target schema. The fields are consequently displayed in the schema according to
the defined order.

7. Click Next to finalize the schema.
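As an illustration of the Path loop expression and Fields to extract settings described in this procedure, and assuming the sample movie collection file loaded earlier with the default JsonPath query type, the mapping could look like the following (a sketch, not the only valid configuration):

   Path loop expression:  "$.movieCollection[*].details"
   Fields to extract:     "release", "rating", "starring"  (relative expressions)

With Xpath selected as the query type instead, the equivalent loop expression would be "/movieCollection/details", with the same relative field names. Each iteration of the loop then produces one row containing the release, rating, and starring values of a movie.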

Finalizing the schema of your JSON file

About this task


The last step of the wizard shows the end schema generated and allows you to customize the schema according to your
needs.


Procedure
1. If needed, rename the schema (by default, metadata) and leave a comment.
Customize the schema if needed: add, remove or move schema columns, export the schema to an XML file, or replace the schema by importing a schema definition XML file using the tool bar.

Warning: Avoid using any Java reserved keyword as a schema column name.

Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. If the JSON file on which the schema is based has changed, click the Guess button to generate the schema again.
Note that if you have customized the schema, the Guess feature does not retain these changes.


3. Click Finish. The new file connection, along with its schema, is displayed under the relevant File Json metadata node in
the Repository tree view.

Setting up JSON metadata for an output file


This section describes how to define JSON metadata for an output file.
Now you can drag and drop the file connection or the schema of it from the Repository tree view onto the design workspace
as a new tWriteJSONField component or onto an existing component to reuse the metadata. For further information about
how to use the centralized metadata in a Job, see Using centralized metadata in a Job on page 513 and Setting a repository
schema in a Job on page 76.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit JSON to open the file
metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and select
Retrieve Schema from the contextual menu.

Warning: If you are working on a Git managed project while the Manual lock option is selected in Talend Administration Center, be sure to manually lock your connection in the Repository tree view before retrieving or updating table schemas
for it. Otherwise the connection is read-only and the Finish button of the wizard is not operable. For more information
about locking and unlocking a project item and on different lock types, see Working collaboratively on project items on
page 26.

To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema from the
contextual menu.

Defining general properties of the File JSON connection for an output file

Procedure
1. In the wizard, fill in the general information in the relevant fields to identify the JSON file metadata, including Name,
Purpose and Description.
The Name field is required, and the information you provide in the Description field will appear as a tooltip when you
move your mouse pointer over the file connection.

Note:
In this step, it is advisable to enter information that will help you distinguish between your input and output
connections, which will be defined in the next step.


2. If needed, set the version and status in the Version and Status fields respectively.
You can also manage the version and status of a repository item in the Project Settings dialog box. For more
information, see Upgrading the version of project items on page 578 and Status management on page 580
respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File Json node to hold your newly
created file connection.
4. Click Next to set the type of metadata.

Setting the type of metadata and loading the template JSON file

About this task


In this step, the type of schema is set as either input or output. For this procedure, the schema of interest is output.

Procedure
1. From the dialog box, select Output JSON and click Next to proceed to the next step of the wizard.


2. Choose whether to create the output metadata manually or from an existing JSON file as a template.
If you choose the Create manually option you will have to configure the schema and link the source and target columns
yourself. The output JSON file/field is created via a Job using a JSON output component such as tWriteJSONField.
In this example, we will create the output metadata by loading an existing JSON file. Therefore, select the Create from a
file option.
3. Click the Browse... button next to the JSON File field, browse to the access path to the JSON file the structure of which
is to be applied to the output JSON file/field, and double-click the file. Alternatively, enter the full path to the file or the
URL which links to the template JSON file.
The File Viewer area displays a preview of the JSON structure, and the File Content area displays a maximum of the first
50 rows of the file.


4. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
5. In the Limit field, enter the number of levels of the JSON hierarchy to which you want to limit the JsonPath or XPath query, or 0 for no limit.
Setting this parameter to a value less than 5 can help prevent the wizard from hanging in case of a large JSON file.
6. Optionally, specify an output file path.
7. Click Next to define the schema.

Defining the JSON schema of your output file

About this task


Upon completion of the previous operations, the columns in the Linker Source area are automatically mapped to the corresponding ones in the Linker Target area, as indicated by blue arrow links.


In this step, you need to define the output schema. The following list describes how to perform each operation:

• Define a loop element: in the Linker Target area, right-click the element of interest and select Set As Loop Element from the contextual menu.

  Note: Defining an element to run a loop on is mandatory.

• Define a group element: in the Linker Target area, right-click the element of interest and select Set As Group Element from the contextual menu.

  Note: You can set a parent element of the loop element as a group element, on the condition that the parent element is not the root of the JSON tree.

• Create a child element for an element: in the Linker Target area, either right-click the element of interest, select Add Sub-element from the contextual menu, enter a name for the sub-element in the dialog box that appears, and click OK; or select the element of interest, click the [+] button at the bottom, select Create as sub-element in the dialog box that appears, and click OK, then enter a name for the sub-element in the next dialog box and click OK.

• Create an attribute for an element: in the Linker Target area, either right-click the element of interest, select Add Attribute from the contextual menu, enter a name for the attribute in the dialog box that appears, and click OK; or select the element of interest, click the [+] button at the bottom, select Create as attribute in the dialog box that appears, and click OK, then enter a name for the attribute in the next dialog box and click OK.

• Create a name space for an element: in the Linker Target area, either right-click the element of interest, select Add Name Space from the contextual menu, enter a name for the name space in the dialog box that appears, and click OK; or select the element of interest, click the [+] button at the bottom, select Create as name space in the dialog box that appears, and click OK, then enter a name for the name space in the next dialog box and click OK.

• Delete one or more elements/attributes/name spaces: in the Linker Target area, right-click the element(s)/attribute(s)/name space(s) of interest and select Delete from the contextual menu; or select them and click the x button at the bottom; or select them and press the Delete key.

  Note: Deleting an element also deletes its children, if any.

• Adjust the order of one or more elements: in the Linker Target area, select the element(s) of interest and click the up and down arrow buttons.

• Set a static value for an element/attribute/name space: in the Linker Target area, right-click the element/attribute/name space of interest and select Set A Fix Value from the contextual menu.

  Note: The value you set will replace any value retrieved for the corresponding column from the incoming data flow in your Job. You can set a static value for a child element of the loop element only, on the condition that the element does not have its own children and does not have a source-target mapping on it.

• Create a source-target mapping: select the column of interest in the Linker Source area, drop it onto the node of interest in the Linker Target area, select Create as sub-element of target node, Create as attribute of target node, or Add linker to target node according to your need in the dialog box that appears, and click OK. If you choose an option that is not permitted for the target node, you will see a warning message and your operation will fail.

• Remove a source-target mapping: in the Linker Target area, right-click the node of interest and select Disconnect Linker from the contextual menu.

• Create a JSON tree from another JSON file: right-click any schema item in the Linker Target area and select Import JSON Tree from the contextual menu to load another JSON file. Then, you need to create the source-target mappings manually and define the output schema all over again.

Note:
You can select and drop several fields at a time, using the Ctrl + Shift technique to make multiple selections, thereby making mapping faster. You can also make multiple selections for right-click operations.

Procedure
1. In the Linker Target area, right-click the element you want to set as the loop element and select Set As Loop Element
from the contextual menu.
In this example, define a loop to run on the details element.


2. Customize the mappings if needed.


3. Click Next to finalize the schema.

Finalizing the end JSON schema of your output file

About this task


The last step of the wizard shows the end schema generated and allows you to customize the schema according to your
needs.


Procedure
1. If needed, rename the schema (by default, metadata) and leave a comment.
Customize the schema if needed: add, remove or move schema columns, export the schema to an XML file, or replace the schema by importing a schema definition XML file using the tool bar.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. If the JSON file on which the schema is based has changed, click the Guess button to generate the schema again.
Note that if you have customized the schema, the Guess feature does not retain these changes.
3. Click Finish. The new file connection, along with its schema, is displayed under the relevant File Json metadata node in
the Repository tree view.


Centralizing LDAP connection metadata


If you often need to access an LDAP directory, you may want to centralize your LDAP server connection in the Repository tree view for easy reuse.
You can create an LDAP connection either from an accessible LDAP directory, or by saving the LDAP settings defined in a Job.
To create an LDAP connection from an accessible LDAP directory, expand the Metadata node in the Repository tree view,
right-click the LDAP tree node, and select Create LDAP schema from the contextual menu to open the Create new LDAP
schema wizard.

To centralize an LDAP connection and its schema you have already defined in a Job, click the icon in the Basic settings
view of the relevant component, with its Property Type set to Built-In, to open the Create new LDAP schema wizard.
Unlike the DB connection wizard, the LDAP wizard gathers both LDAP server connection and schema definition in a five-step
procedure.
Now you can drag and drop the file connection or any schema of it from the Repository tree view onto the design workspace
as a new component or onto an existing component to reuse the metadata.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit LDAP schema to open the
file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and select
Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema from the
contextual menu.

Defining the general properties of the LDAP connection

Procedure
1. Fill in the general information in the relevant fields to identify the LDAP connection to be created, including Name,
Purpose and Description.
The Name field is required, and the information you provide in the Description field will appear as a tooltip when you
move your mouse pointer over the LDAP connection.
2. If needed, set the version and status in the Version and Status fields respectively. You can also manage the version
and status of a Repository item in the Project Settings dialog box. For more information, see Upgrading the version of
project items on page 578 and Status management on page 580 respectively.
3. If needed, click the Select button next to the Path field to select a folder under the LDAP node to hold your newly
created LDAP connection.
4. Click Next to define your LDAP server connection details.

Defining the server connection

Procedure
1. Fill in the connection details.


• Host: LDAP server host name or IP address.
• Port: listening port of the LDAP directory.
• Encryption method: LDAP (no encryption is used), LDAPS (secured LDAP), or TLS (a certificate is used).
2. Then click Check Network Parameter to verify the connection and activate the Next button.
3. Click Next to continue.

Configuring LDAP access parameters

Procedure
1. In this view, set the authentication and data access mode.


• Authentication method: Simple authentication requires the Authentication Parameters fields to be filled in; Anonymous authentication does not require authentication parameters.
• Authentication Parameters: Bind DN or User is the login expected by the LDAP authentication method; Bind password is the expected password; Save password remembers the login details.
• Get Base DN from Root DSE / Base DN: path to the user's authorized tree leaf. The Fetch Base DNs button retrieves the DN automatically from the root DSE.
• Alias Dereferencing: Always (the default) always dereferences aliases; Never never dereferences aliases, which can improve search performance if you are sure that no aliases need to be dereferenced; Searching dereferences aliases only after name resolution; Finding dereferences aliases only during name resolution.
• Referral Handling: redirection of user requests. Ignore does not handle request redirections; Follow handles request redirections.
• Limit: maximum number of records to be read.

2. Click Check authentication to verify your access rights.


3. Click Fetch Base DNs to retrieve the DN and click the Next button to continue.
4. If any third-party libraries required for setting up an LDAP connection are found missing, an external module
installation wizard appears. Install the required libraries as guided by the wizard.
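For illustration, a typical set of values for the connection and access parameters described above might look like the following for a hypothetical company directory (all values are placeholders, not defaults):

   Host:                   ldap.example.com
   Port:                   389 (636 is commonly used for LDAPS)
   Encryption method:      LDAP
   Authentication method:  Simple authentication
   Bind DN or User:        cn=admin,dc=example,dc=com
   Base DN:                ou=people,dc=example,dc=com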

Defining the schema of your LDAP directory

Procedure
1. Select the attributes to be included in the schema structure.
Add a filter if you want selected data only.

2. Click Refresh Preview to display the selected column and a sample of the data.
3. Click Next to continue.

Finalizing the end schema of your LDAP directory

About this task


The last step shows the LDAP schema generated and allows you to further customize the end schema.


Procedure
1. If needed, rename the metadata in the Name field (metadata, by default), add a Comment, and make further
modifications, for example:
• Redefine the columns by editing the relevant fields.
• Add or delete a column using the [+] and x buttons.
• Change the order of the columns using the up and down arrow buttons.

Warning: Avoid using any Java reserved keyword as a schema column name.

Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. If the LDAP directory on which the schema is based has changed, use the Guess button to generate the schema again.
Note that if you customized the schema, your changes will not be retained after the Guess operation.
3. Click Finish. The new schema is displayed under the relevant LDAP connection node in the Repository tree view.

Centralizing Azure Storage metadata


About this task
You can use the Azure Storage metadata wizard provided by Talend Studio to quickly set up a connection to Azure Storage and retrieve the schemas of the containers, queues, and tables you are interested in.

Procedure
1. In the Repository tree view, expand the Metadata node, right-click the Azure Storage tree node, and select Create Azure
Storage from the contextual menu to open the Azure Storage wizard.

2. In the Azure Storage Connection Settings dialog box, specify (or update if needed) the values for the properties listed in
the following table.


Property Description

Name Enter the name for the connection to be created.

Account Name Enter the name of the storage account you need to access. A storage account name can be found in
the Manage Access Keys dashboard of the Microsoft Azure Storage system to be used.

Account Key Enter the key associated with the storage account you need to access. Two keys are available for
each account and by default, either of them can be used for this access.

Protocol Select the protocol for this connection to be created.

Use Azure Shared Access Signature: Select this check box to use a shared access signature to access the storage resources without the need for the account key. In the Azure Shared Access Signature field displayed, enter your shared access signature between double quotation marks. For more information, see Using Shared Access Signatures (SAS).

3. Click Test connection to verify the configuration.


A connection successful dialog box pops up if the connection information provided is correct. Then click OK to
close the dialog box. The Next button will be available to use.
4. Click Next and in the Add a new container schema in current connection dialog box displayed, select the container(s) whose schema you want to retrieve.


5. Click Next and in the Add a new queue schema in current connection dialog box displayed, select the queue(s) whose schema you want to retrieve.

6. Click Next and in the Add a new table schema in current connection dialog box displayed, select the table(s) whose schema you want to retrieve.


7. Click Finish to complete the procedure.


The newly created Azure Storage connection is displayed under the Azure Storage node in the Repository tree view, along with the schemas of the selected containers, queues, and tables.
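For reference, the account name and key entered in this wizard are the same pieces of information that appear in a standard Azure Storage connection string, which looks roughly like the following (placeholder values):

   DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=<your-account-key>;EndpointSuffix=core.windows.net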

You can now add an Azure Storage component onto the design workspace by dragging and dropping the Azure Storage connection created or any container/queue/table retrieved from the Repository view to reuse the connection and/or schema information. For more information about dropping component metadata in the design workspace, see Using centralized metadata in a Job on page 513. For more information about the usage of the Azure Storage components, see the related documentation for the Azure Storage components.
To modify the Azure Storage connection metadata created, right-click the connection node in the Repository tree view
and select Edit Azure Storage from the contextual menu to open the metadata setup wizard.
To edit the schema of a retrieved container/queue/table, right-click the container/queue/table node in the Repository tree view and select Edit Schema from the contextual menu to open the update schema wizard.

Centralizing Data Stewardship metadata


Talend Studio enables you to centralize the details of one or more Talend Data Stewardship connections under the Metadata
folder in the Repository tree view. You can use any of these established connections to connect to Talend Data Stewardship.

Procedure
1. In the Repository tree view, expand Metadata and right-click Data Stewardship.
2. Select Create Data Stewardship from the contextual menu.
The Data Stewardship dialog box opens.

3. In the Name field, enter the name for the connection to be created.
4. In the URL field, enter the address to access Talend Data Stewardship suffixed with /data-stewardship/, for
example http://<server_address>:19999/data-stewardship/.
If you are working with Talend Cloud Data Stewardship, use the URL for the corresponding data center suffixed with /data-stewardship/ to access the application. For example, https://tds.us.cloud.talend.com/data-stewardship/ for the AWS US data center.
For the URLs of available data centers, see Talend Cloud regions and URLs.
5. In the Username field, enter the username used to access Talend Data Stewardship.
The value must be a valid login (an email address) of a Data Stewardship user who has been assigned the Campaign
Owner role in Talend Administration Center.
6. In the Password field, enter the authentication information used to access Talend Data Stewardship.
If you are working with Talend Cloud Data Stewardship and if:


• SSO is enabled, enter an access token in the field.


• SSO is not enabled, enter either an access token or your password in the field.
7. Click Test connection to verify the connection you have created.
When the successful connection message dialog box displays, click OK.
8. Click Finish to save your changes and close the dialog box.
The newly created connection is listed under Metadata > Data Stewardship in the Repository tree view. To modify the
connection, right-click it and select Edit Data Stewardship or double-click it to open the wizard.

Centralizing Google Drive metadata


Talend Studio enables you to centralize the details of your Google Drive connection under the Metadata folder in the
Repository tree view. You can then use the established connection to connect to your Google Drive when using the Google
Drive components.

Procedure
1. In the Repository tree view, expand the Metadata node, right-click the Google Drive tree node, and select New
GoogleDrive Connection from the contextual menu to open the New Google Drive Connection wizard.
2. Specify the values for the properties listed in the following table according to the OAuth method you are using.

Property Description

Name The name for the Google Drive connection to be created.

Application Name The application name required by Google Drive to get access to its
APIs.

OAuth Method Select an OAuth method used to access Google Drive from the drop-
down list.
• Access Token (deprecated): uses an access token to access
Google Drive.
• Installed Application (Id & Secret): uses the client ID and client
secret created through Google API Console to access Google
Drive. For more information about this method, see Google
Identity Platform > Installed applications .
• Installed Application (JSON): uses the client secret JSON file
that is created through Google API Console and contains the
client ID, client secret, and other OAuth 2.0 parameters to access
Google Drive.
• Service Account: uses a service account JSON file created
through Google API Console to access Google Drive. For more
information about this method, see Google Identity Platform >
Service accounts.

Access Token The access token generated through Google Developers OAuth 2.0
Playground.
This property is available only when Access Token is selected
from the OAuth Method drop-down list.

Client ID and Client Secret The client ID and client secret.


These two properties are available only when Installed
Application (Id & Secret) is selected from the
OAuth Method drop-down list.

Client Secret JSON The path to the client secret JSON file.
This property is available only when Installed
Application (JSON) is selected from the OAuth Method
drop-down list.

Service Account JSON The path to the service account JSON file.
This property is available only when Service Account is
selected from the OAuth Method drop-down list.

DataStore Path The path to the credential file that stores the refresh token.
This property is available only when Installed
Application (Id & Secret) or Installed
Application (JSON) is selected from the OAuth Method
drop-down list.

Use Proxy Select this check box when you are working behind a proxy. With this
check box selected, you need to specify the value for the following
parameters:
• Host: The IP address of the HTTP proxy server.
• Port: The port number of the HTTP proxy server.

Use SSL Select this check box if an SSL connection is used to access Google
Drive. With this check box selected, you need to specify the value for
the following parameters:
• Algorithm: The name of the SSL cryptography algorithm.
• Keystore File: The path to the certificate TrustStore file that
contains the list of certificates the client trusts.
• Password: The password used to check the integrity of the
TrustStore data.

3. Click Test connection to verify the configuration.


If you are using the OAuth method Access Token (deprecated), Installed Application (Id & Secret), or Installed Application (JSON), a window will pop up in your web browser, asking you to choose your account and allow access to your Google Drive. After the authentication in the web browser, a connection successful dialog box pops up in Talend Studio.
4. Click OK to close the connection successful dialog box and then click Finish.
The newly created Google Drive connection is displayed under the Google Drive node in the Repository tree view.
You can now add a Google Drive component onto the design workspace by dragging and dropping the new Google
Drive connection node to reuse the connection information. For more information about dropping component metadata
in the design workspace, see Using centralized metadata in a Job on page 513. For more information about the usage
of the Google Drive components, see the related documentation for the Google Drive components.
To modify the Google Drive connection metadata created, right-click the connection node in the Repository tree view
and select Edit GoogleDrive Connection from the contextual menu to open the metadata setup wizard.
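If you use the Installed Application (JSON) OAuth method, the client secret JSON file downloaded from the Google API Console typically has a structure similar to the following (all values below are placeholders):

   {
     "installed": {
       "client_id": "1234567890-abcdef.apps.googleusercontent.com",
       "project_id": "my-project",
       "auth_uri": "https://accounts.google.com/o/oauth2/auth",
       "token_uri": "https://oauth2.googleapis.com/token",
       "client_secret": "<your-client-secret>",
       "redirect_uris": ["http://localhost"]
     }
   }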

Centralizing Marketo metadata


About this task
You can use the Marketo metadata wizard provided by Talend Studio to quickly set up a connection to Marketo and retrieve the schemas of the custom objects you are interested in using the REST API.

Procedure
1. In the Repository tree view, expand the Metadata node, right-click the Marketo tree node, and select Create Marketo
from the contextual menu to open the Marketo wizard.


2. In the Marketo REST Connection Settings dialog box, specify (or update if needed) the values for the properties listed in
the following table.


Property Description

Connection name Enter the name for the connection to be created.

Endpoint address Enter the API Endpoint URL of the Marketo Web Service. The API Endpoint URL can be found on the
Marketo Admin > Web Services panel.

Client access ID Enter the client Id for the access to the Marketo Web Service.

Secret key Enter the client secret for the access to the Marketo Web Service.

Timeout Enter the timeout value (in milliseconds) for the connection to the Marketo Web Service before
terminating the attempt.

Max reconnection attempts Enter the maximum number of reconnect attempts to the Marketo Web Service before giving up.

Attempt interval time Enter the time period (in milliseconds) between subsequent reconnection attempts.

3. Click Test connection to verify the configuration.


A connection successful dialog box pops up if the connection information provided is correct. Then click OK to
close the dialog box. The Next button will be available to use.
4. Click Next to go to the next step to select the custom objects you are interested in.


5. Select the custom objects whose schema you want to retrieve, and then click Finish.
The newly created Marketo connection is displayed under the Marketo node in the Repository tree view, along with the schemas of the selected custom objects.

You can now add a Marketo component onto the design workspace by dragging and dropping the Marketo connection
created or any custom object retrieved from the Repository view to reuse the connection and/or schema information.
For more information about dropping component metadata in the design workspace, see Using centralized metadata in a Job on page 513. For more information about the usage of the Marketo components, see the related documentation for the Marketo components.
To modify the Marketo connection metadata created, right-click the connection node in the Repository tree view and
select Edit Marketo from the contextual menu to open the metadata setup wizard.
To edit the schema of a retrieved custom object, right-click the custom object node in the Repository tree view and select Edit Schema from the contextual menu to open the update schema wizard.
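As a reference for the Endpoint address field, a Marketo REST API endpoint typically has the following form, where the instance identifier is specific to your Marketo account (the value below is a placeholder):

   https://123-ABC-456.mktorest.com/rest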

Centralizing Salesforce metadata


About this task


You can use the Salesforce metadata wizard provided by Talend Studio to quickly set up a connection to a Salesforce system so that you can reuse your Salesforce metadata across Jobs.

Procedure
1. In the Repository tree view, expand the Metadata node, right-click the Salesforce tree node, and select Create
Salesforce from the contextual menu to open the Salesforce wizard.

2. Enter a name for your connection in the Name field, select Basic or OAuth from the Connection type list, and provide
the connection details according to the connection type you selected.


• With the Basic option selected, you need to specify the following details:
• User Id: the ID of the user in Salesforce.
• Password: the password associated with the user ID.
• Security Key: the security token.
• With the OAuth option selected, you need to specify the following details:
• Client Id and Client Secret: the OAuth consumer key and consumer secret, which are available in the OAuth
Settings area of the Connected App that you have created at Salesforce.com.
• Callback Host and Callback Port: the OAuth authentication callback URL. This URL (both host and port)
is defined during the creation of a Connected App and will be shown in the OAuth Settings area of the
Connected App.
• Token File: the path to the token file that stores the refresh token used to get the access token without
authorization.
3. If needed, click Advanced... to open the Salesforce Advanced Connection Settings dialog box, do the following and then
click OK:
• enter the Salesforce Webservice URL required to connect to the Salesforce system.
• select the Bulk Connection check box if you need to use bulk data processing function.
• select the Use or save the connection session check box and in the Session directory field displayed, specify the
path to the connection session file to be saved or used.
This session file can be shared by different Jobs to retrieve a connection session as long as the correct user ID is
provided by the component. This way, you do not need to connect to the server to retrieve the session.
When an expired session is detected, if the correct connection information (the user ID, password, and security
key) is provided, the component will connect to the server to retrieve the new session information and update the
connection session file.
This check box is available only when Basic is selected from the Connection type drop-down list.
• select the Need compression check box to activate SOAP message compression, which can result in increased
performance levels.


• select the Trace HTTP message check box to output the HTTP interactions on the console.
This option is available if the Bulk Connection check box is selected.
• select the Use HTTP Chunked check box to use the HTTP chunked data transfer mechanism.
This option is not available if the Bulk Connection check box is selected.
• enter the ID of the real user in the Client Id field to differentiate between those who use the same account and
password to access the Salesforce website.
• fill the Timeout field with the Salesforce connection timeout value, in milliseconds.
• If needed, select the Use Proxy check box to set the SOCKS type proxy and enter the corresponding setting details.
Note that you can also set the HTTP type proxy via Window > Preferences > General > Network Connections.

4. Click Test connection to verify the connection settings, and when the connection check success message appears, click
OK for confirmation. Then click Next to go to the next step to select the modules you want to retrieve the schema of.
5. Select the check boxes for the modules of interest and click Finish to retrieve the schemas of the selected modules.
You can type in filter text to narrow down your selection.


The newly created Salesforce connection is displayed under the Salesforce node in the Repository tree view, along with
the schemas of the selected modules.

Results
You can now drag and drop the Salesforce connection or any schema of it from the Repository onto the design workspace, and from the dialog box that opens choose a Salesforce component to use in your Job. You can also drop the Salesforce connection or a schema of it onto an existing component to reuse the connection or metadata details in the component. For more information about dropping component metadata in the design workspace, see Using centralized metadata in a Job on page 513.
To modify the Salesforce metadata entry, right-click it from the Repository tree view, and select Edit Salesforce to open the
file metadata setup wizard.
To edit an existing Salesforce schema, right-click the schema from the Repository tree view and select Edit Schema from the
contextual menu.
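As a reference for the Salesforce Webservice URL advanced setting, Salesforce SOAP endpoints generally take the following form, where the API version segment depends on the version your organization uses (the value below is only an illustration):

   https://login.salesforce.com/services/Soap/u/57.0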

Centralizing Snowflake metadata


About this task
You can use the Snowflake metadata wizard provided by Talend Studio to quickly set up a connection to Snowflake and retrieve the schemas of the tables you are interested in.

Note: The Snowflake metadata wizard does not currently support Snowflake views.

Procedure
1. In the Repository tree view, expand the Metadata node, right-click the Snowflake tree node, and select Create
Snowflake from the contextual menu to open the Snowflake wizard.

2. In the Snowflake Connection Settings dialog box, specify the values for the properties listed in the following table.


Property Description

Name Enter the name for the connection to be created.

Account Enter the account name that has been assigned to you by Snowflake.

User Id Enter your login name that has been defined in Snowflake using the LOGIN_NAME parameter of
Snowflake. For details, ask the administrator of your Snowflake system.

Password Enter the password associated with the user ID.

Warehouse Enter the name of the Snowflake warehouse to be used. This name is case-sensitive and is normally
upper case in Snowflake.

Schema Enter the name of the database schema to be used. This name is case-sensitive and is normally
upper case in Snowflake.

Database Enter the name of the Snowflake database to be used. This name is case-sensitive and is normally
upper case in Snowflake.

3. Click Advanced... and in the Snowflake Advanced Connection Settings dialog box displayed, specify or update the values
for the advanced properties listed in the following table and click OK to close the dialog box.


Property Description

Login Timeout Specify how long to wait for a response when connecting to Snowflake before returning an error.

Tracing Select the log level for the Snowflake JDBC driver. If enabled, a standard Java log is generated.

Role Enter the default access control role to use to initiate the Snowflake session.
This role must already exist and must have been granted to the user ID you are using to connect to Snowflake. If this field is left empty, the PUBLIC role is automatically granted. For further information about the Snowflake access control model, see the Snowflake documentation at Understanding the Access Control Model.

4. Click Test connection to verify the configuration.


A connection successful dialog box pops up if the connection information provided is correct. Then click OK to
close the dialog box. The Next button will be available to use.
5. Click Next to go to the next step to select the tables you are interested in.


6. Select the tables whose schema you want to retrieve, and then click Finish.
The newly created Snowflake connection is displayed under the Snowflake node in the Repository tree view, along with the schemas of the selected tables.

You can now add a Snowflake component onto the design workspace by dragging and dropping the Snowflake
connection created or any table retrieved from the Repository view to reuse the connection and/or schema information.


For more information about dropping component metadata in the design workspace, see Using centralized metadata
in a Job on page 513. For more information about the usage of the Snowflake components, see the related
documentation for the Snowflake components.
To modify the Snowflake connection metadata created, right-click the connection node in the Repository tree view and
select Edit Snowflake from the contextual menu to open the metadata setup wizard.
To edit the schema of a retrieved table, right-click the table node in the Repository tree view and select Edit Schema from the contextual menu to open the update schema wizard.
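For reference, the Account, User Id, Warehouse, Database, and Schema values gathered by this wizard correspond to the parameters of a standard Snowflake JDBC connection, which looks roughly like the following (placeholder values):

   jdbc:snowflake://myaccount.snowflakecomputing.com/?user=MYUSER&warehouse=MYWH&db=MYDB&schema=PUBLIC&role=MYROLE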

Setting up a generic schema


Talend Studio allows you to create a generic schema to use in your Jobs if none of the specific metadata wizards matches
your need or if you do not have any source file to take the schema from.
You can create a generic schema:
• from scratch. For details, see Setting up a generic schema from scratch on page 437,
• from a schema definition XML file. For details, see Setting up a generic schema from an XML file on page 440, and
• from the schema defined in a component. For details, see Saving a component schema as a generic schema on page
442.
To use a generic schema on a component, use either of the following methods:
• Select Repository from the Schema drop-down list in the component Basic settings view.
Click the [...] button to open the Repository Content dialog box, select the generic schema under the Generic schemas
node and click OK.
• Select the metadata node of the generic schema from the Repository tree view and drop it onto the component.

Setting up a generic schema from scratch

About this task


To create a generic schema from scratch, proceed as follows:

Procedure
1. Right-click Generic schemas under the Metadata node in the Repository tree view, and select Create generic schema.

2. In the schema creation wizard that appears, fill in the generic schema properties such as schema Name and Description.
The Status field is a customized field. For more information about how to define the field, see Status settings on page
586.
Click Next to continue.


3. Give a name to the schema or use the default one (metadata) and add a comment if needed. Customize the schema
structure in the Schema panel according to your needs.
The tool bar allows you to add, remove or move columns in your schema.


Warning: Avoid using any Java reserved keyword as a schema column name.

Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
4. Click Finish to complete the generic schema creation. The created schema is displayed under the relevant Generic
schemas node.


Setting up a generic schema from an XML file

About this task

Warning:
The source XML file from which you can create a generic schema must be an export of a schema from the Studio or an XML file with the same tree structure, not any other kind of XML.

To create a generic schema from a source XML file, proceed as follows:

Procedure
1. Right-click Generic schemas in the Repository tree view, and select Create generic schema from xml.

2. In the dialog box that appears, choose the source XML file from which the schema is taken and click Open.
3. In the schema creation wizard that appears, define the schema Name or use the default one (metadata) and give a
Comment if any.
The schema structure from the source file is displayed in the Schema panel. You can customize the columns in the
schema as needed.
The tool bar allows you to add, remove or move columns in your schema.


Warning: Avoid using any Java reserved keyword as a schema column name.

Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
4. Click Finish to complete the generic schema creation. The created schema is displayed under the relevant Generic
schemas node.


Saving a component schema as a generic schema

About this task


You can create a generic schema by saving the schema defined in a component. To do so, follow the steps below:

Procedure
1. Open the Basic settings view of the component that has the schema you want to create a generic schema from, and
click the [...] button next to Edit schema to open the Schema dialog box.

2. Click the floppy disc icon to open the Select folder dialog box.

3. Select a folder if needed, and click OK to close the dialog box and open the Save as generic schema creation wizard.


4. Fill in the Name field (required) and the other fields if needed, and click Finish to save the schema. Then close the
Schema dialog box opened from the component Basic settings view.
The schema is saved in the selected folder under the Generic schemas node in the Repository tree view.

Centralizing MDM metadata


Talend Studio enables you to centralize the details of one or more MDM connections under the Metadata folder in the
Repository tree view. You can then use any of these established connections to connect to the MDM server.
This feature is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more
information, see Installing features using the Feature Manager.

Note: You can also set up an MDM connection the same way by clicking the icon in the Basic settings view of the
tMDMInput and tMDMOutput components.

According to the option you select, the wizard helps you create an input XML, an output XML, or a receive XML schema. Later, in a Talend Job, the tMDMInput component uses the defined input schema to read master data stored in XML documents; tMDMOutput uses the defined output schema to write master data in an XML document on the MDM server or to update existing XML documents; and tMDMReceive uses the defined XML schema to receive an MDM record in XML from MDM triggers and processes.

Setting up the connection

About this task


To establish an MDM connection, complete the following:


Procedure
1. In the Repository tree view, expand Metadata and right-click Talend MDM.
2. Select Create MDM Connection from the contextual menu.
The connection wizard is displayed.

3. Fill in the connection properties such as Name, Purpose and Description. The Status field is a customized field that can
be defined. For more information, see Status settings on page 586.
4. Click Next to proceed to the next step.


5. From the Version list, select the version of the MDM server to which you want to connect.

Note:
The default value in the Server URL field varies depending on what you selected in the Version list.

6. Fill in the connection details including the authentication information to the MDM server and then click Check to check
the connection you have created.
A dialog box pops up to show that your connection is successful. Click OK to close it.
If needed, you can click Export as context to export this Talend MDM connection details to a new context group in the
Repository or reuse variables of an existing context group to set up your metadata connection. For more information,
see Exporting metadata as context and reusing context parameters to set up a connection on page 500.
7. Click Next to proceed to the next step.


8. From the Data-Model list, select the data model against which the master data is validated.
9. From the Data-Container list, select the data container that holds the master data you want to access.
10. Click Finish to validate your changes and close the dialog box.
The newly created connection is listed under Talend MDM under the Metadata folder in the Repository tree view.

Results
You now need to retrieve the XML schema of the business entities linked to this MDM connection.

Defining MDM schema


Defining Input MDM schema

This section describes how to define and download an input MDM XML schema. To define and download an output MDM
XML schema, see Defining output MDM schema on page 452.
Retrieve entity values for an MDM connection

About this task


To set the values to be fetched from one or more entities linked to a specific MDM connection, complete the following:

Procedure
1. In the Repository tree view, expand Metadata and right-click the MDM connection for which you want to retrieve the
entity values, and select Retrieve Entity from the contextual menu.


2. In the MDM Model dialog box, select the Input MDM option in order to download an input XML schema and then click
Next to proceed to the following step.


3. From the Entities field, select the business entity (XML schema) from which you want to retrieve values.
The name is displayed automatically in the Name field.


Note: You are free to enter any text in this field, although you would likely put the name of the entity from which
you are retrieving the schema.

4. Click Next to proceed to the next step.


The schema of the entity you selected is automatically displayed in the Source Schema panel. Here, you can set the
parameters to be taken into account for the XML schema definition.


The schema dialog box is divided into four different panels as follows:

Panel           Description

Source Schema   Tree view of the uploaded entity.
Target schema   Extraction and iteration information.
Preview         Target schema preview.
File viewer     Raw data viewer.

5. In the Xpath loop expression area, enter the absolute XPath expression leading to the XML structure node on which to
apply the iteration.
Or, drop the node from the source schema to the target schema Xpath field.
This link is orange in color.

Note: The Xpath loop expression field is compulsory.

6. If required, define a Loop limit to restrict the iteration to a number of nodes.


Example
In the capture above, we use Features as the element to loop on because it is repeated within the Product entity as
follows:

<Product>
  <Id>1</Id>
  <Name>Cup</Name>
  <Description/>
  <Features>
    <Feature>Color red</Feature>
    <Feature>Size maxi</Feature>
  </Features>
  ...
</Product>
<Product>
  <Id>2</Id>
  <Name>Cup</Name>
  <Description/>
  <Features>
    <Feature>Color blue</Feature>
    <Feature>Thermos</Feature>
  </Features>
  ...
</Product>

By doing so, the tMDMInput component that uses this MDM connection will create a new row for every item with a different feature.
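To make the notion of the loop element and the relative field expressions more concrete, below is a minimal plain-Java sketch, not code generated by Talend Studio: the file name products.xml, the class name, and the exact XPath expressions are assumptions used only for illustration. The loop expression selects the nodes to iterate on, and each relative expression is then evaluated against the current node of the iteration.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class XpathLoopSketch {
    public static void main(String[] args) throws Exception {
        // products.xml is a hypothetical file holding <Product> records like the ones above
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("products.xml"));
        XPath xpath = XPathFactory.newInstance().newXPath();

        // Loop expression: one iteration per repeated node (here, the Feature children of Features)
        NodeList loopNodes = (NodeList) xpath.evaluate(
                "//Product/Features/Feature", doc, XPathConstants.NODESET);
        for (int i = 0; i < loopNodes.getLength(); i++) {
            Node feature = loopNodes.item(i);
            // Relative expressions, evaluated against the current loop node
            String id = xpath.evaluate("../../Id", feature);
            String name = xpath.evaluate("../../Name", feature);
            String value = xpath.evaluate(".", feature);
            // Each iteration would correspond to one output row, for example: 1 | Cup | Color red
            System.out.println(id + " | " + name + " | " + value);
        }
    }
}

In the wizard itself you do not write any such code: the loop expression goes into the Xpath loop expression field, and the relative expressions go into the Relative or absolute XPath expression column described in the next step.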
7. To define the fields to extract, drop the relevant node from the source schema to the Relative or absolute XPath
expression field.


Tip: Use the [+] button to add rows to the table and select as many fields to extract as necessary. Press the Ctrl or
the Shift keys for multiple selection of grouped or separate nodes and drop them to the table.

8. If required, enter a name for each of the retrieved columns in the Column name field.

Tip: You can prioritize the order of the fields to extract by selecting the field and using the up and down arrows. The
link of the selected field is blue, and all other links are grey.

9. Click Finish to validate your modifications and close the dialog box.

Results
The newly created schema is listed under the corresponding MDM connection in the Repository tree view.

Modifying the created schema

Procedure
1. In the Repository tree view, expand Metadata and Talend MDM and then browse to the schema you want to modify.
2. Right-click the schema name and select Edit Entity from the contextual menu.
A dialog box is displayed.


3. Modify the schema as needed.


You can change the name of the schema according to your needs, and you can also customize the schema structure in the schema panel. The tool bar allows you to add, remove or move columns in your schema.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
Below are the commonly used Talend data types:
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a data
file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has a data
type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column names
appearing in the header. For more information, see Dynamic schema on page 82.
• Document: a data type that allows processing an entire XML document without regard to its content.
4. Click Finish to close the dialog box.

Results
The MDM input connection (tMDMInput) is now ready to be dropped in any of your Jobs.

Defining output MDM schema

About this task


This section describes how to define and download an output MDM XML schema. To define and download an input MDM
XML schema, see Setting up the connection on page 443.
To set the values to be written in one or more entities linked to a specific MDM connection, complete the following:

Procedure
1. In the Repository tree view, expand Metadata and right-click the MDM connection for which you want to write the
entity values, and select Retrieve Entity from the contextual menu.
2. In the MDM Model dialog box, select the Output MDM option in order to define an output XML schema and then click
Next to proceed to the following step.


3. From the Entities field, select the business entity (XML schema) in which you want to write values.


The name is displayed automatically in the Name field.

Note: You are free to enter any text in this field, although you would likely put the name of the entity from which
you are retrieving the schema.

4. Click Next to proceed to the next step.


An identical schema of the entity you selected is automatically created in the Linker Target panel, and columns are automatically mapped from the source to the target panel. The wizard automatically defines the item Id as the looping element. You can always choose to loop on another element. Here, you can set the parameters to be taken into account for the XML schema definition.


5. Click Schema Management to display a dialog box.


6. Make the necessary modifications to define the XML schema you want to write in the selected entity.

Warning: Your Linker Source schema must correspond to the Linker Target schema, that is to say, it must define the elements in which you want to write values.

7. Click OK to close the dialog box.


The defined schema is displayed under Schema list.


8. In the Linker Target panel, right-click the element you want to define as a loop element and select Set as loop element.
This will restrict the iteration to one or more nodes.
By doing so, the tMDMOutput component that uses this MDM connection will create a new row for every item with a different feature.


Tip: You can prioritize the order of the fields to write by selecting the field and using the up and down arrows.

9. Click Finish to validate your modifications and close the dialog box.

Results
The newly created schema is listed under the corresponding MDM connection in the Repository tree view. You can modify
the created schema according to your needs and drop the connection as a tMDMOutput in any of your Jobs.
For more information on how to modify the schema, see Modifying the created schema on page 451.

Defining Receive MDM schema

Before you begin


This section describes how to define a receive MDM XML schema based on the MDM connection.
To set the XML schema you want to receive in accordance with a specific MDM connection, complete the following:

Procedure
1. In the Repository tree view, expand Metadata and right-click the MDM connection for which you want to retrieve the
entity values, and select Retrieve Entity from the contextual menu.
2. In the MDM Model dialog box, select the Receive MDM option in order to define a receive XML schema and then click
Next to proceed to the following step.


3. From the Entities field, select the business entity (XML schema) according to which you want to receive the XML
schema.
The name displays automatically in the Name field.


Note: You can enter any text in this field, although you would likely put the name of the entity according to which
you want to receive the XML schema.


4. Click Next to proceed to the next step.


The schema of the entity you selected displays in the Source Schema panel. Here, you can set the parameters to be taken
into account for the XML schema definition.


The schema dialog box is divided into four different panels as follows:

Panel           Description

Source Schema   Tree view of the uploaded entity.
Target schema   Extraction and iteration information.
Preview         Target schema preview.
File viewer     Raw data viewer.

5. In the Xpath loop expression area, enter the absolute XPath expression leading to the XML structure node on which to
apply the iteration. Or, drop the node from the source schema to the target schema Xpath field. This link is orange in
color.

Note: The Xpath loop expression field is compulsory.

6. If required, define a Loop limit to restrict the iteration to one or more nodes.


Example

In the above capture, we use Features as the element to loop on because it is repeated within the Product entity as follows:

<Product>
  <Id>1</Id>
  <Name>Cup</Name>
  <Description/>
  <Features>
    <Feature>Color red</Feature>
    <Feature>Size maxi</Feature>
  </Features>
  ...
</Product>
<Product>
  <Id>2</Id>
  <Name>Cup</Name>
  <Description/>
  <Features>
    <Feature>Color blue</Feature>
    <Feature>Thermos</Feature>
  </Features>
  ...
</Product>

By doing so, the tMDMReceive component that uses this MDM connection will create a new row for every item with a different feature.
7. To define the fields to receive, drop the relevant node from the source schema to the Relative or absolute XPath
expression field.


Tip: Use the plus sign to add rows to the table and select as many fields to extract as necessary. Press the Ctrl or the
Shift keys for multiple selection of grouped or separate nodes and drop them to the table.

8. If required, enter a name for each of the received columns in the Column name field.

Tip: You can prioritize the order of the fields you want to receive by selecting the field and using the up and down
arrows. The link of the selected field is blue, and all other links are grey.

9. Click Finish to validate your modifications and close the dialog box.

Results
The newly created schema is listed under the corresponding MDM connection in the Repository tree view. You can modify
the created schema according to your needs and drop the connection as a tMDMReceive in any of your Jobs.
For more information on how to modify the schema, see Modifying the created schema on page 451.

Managing a survivorship rule package


If you have subscribed to the Data Quality features, you have access to the survivorship package item. A survivorship rule package contains a complete survivor validation flow composed of various user-defined validation rules. This survivor validation flow allows you to select the best-of-breed data from groups of duplicate data and, using the selected data, to create a single representation of each group of duplicates.
An established survivorship rule package is stored in the folder of the same name in Metadata > Rules Management > Survivorship Rules in the Repository tree view and is composed of the items representing each step of a validation flow, the rule package itself, and the whole validation flow. The following figure presents an example of the survivorship rule package in the Repository.


Note: The Survivorship Rules item node has no child items until you have generated the corresponding survivorship rule package. You need to use the tRuleSurvivorship component to define each rule of interest, generate the corresponding rule package into the Repository, and execute the established survivor validation flow. For further information about this component, see tRuleSurvivorship on Deduplication.

Once a survivorship rule package with its validation rules is generated under the Survivorship Rules item, you can perform different operations to manage them:
• Lock items: for further information about how to lock or unlock an item, see Lock principle on page 27.
• Detect dependencies: for further information about the dependencies, see Updating impacted Jobs manually on page 174.
• Import or export items: for further information, see Importing/exporting items and building Jobs on page 157.
• Edit items: for further information, see Viewing or editing a survivorship rule item on page 461.

Viewing or editing a survivorship rule item


The items of a rule package that you can view or edit include the package itself, each of the validation steps, and the whole survivor validation flow.

The validation step item

About this task


One validation step contains a group of validation rules that you are able to view or edit. To do this, proceed as follows:


Procedure
1. In the Repository tree view, expand Metadata > Rules Management > Survivorship Rules to show the survivorship rule package folder that you have generated.
2. Select the rule package folder of interest and expand it. Then the contents of this package folder are listed under it.
3. Right-click the validation step item that you need to view or edit. Then in the contextual menu, select Edit rule.

The selected step item is opened in the workspace of your Studio.

In this example, the validation rule of this step is labelled 5_MostCommonZip, belonging to the rule group whose
identifier in the established survivor validation flow is 5_MostCommonZipGroup, and it examines the data from the
zip column. The when clause indicates the condition used to examine the data and the then clause indicates the
target columns from which the best-of-breed data is selected.
4. Type in the necessary modifications and press Ctrl+S to save it.

Note:
The edit feature is intended for viewing items as well as minor modifications such as, in this example, the change
of the matching regex from "\\d{5}" to "\\d{6}". If you have to rewrite the clauses, or remove or add some
clauses, we recommend using tRuleSurvivorship to define and organize the rules of interest and then regenerate the
new rule package into the Repository in order to avoid manual efforts and risky errors.
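For readers unfamiliar with the notation, the regex change mentioned in this note simply widens the match from five digits to six. In plain Java terms (an illustration only, not part of any Talend rule file):

// "\\d{5}" matches a string of exactly five digits, "\\d{6}" a string of exactly six
boolean fiveDigits = "75014".matches("\\d{5}");   // true
boolean sixDigits  = "750142".matches("\\d{5}");  // false; "\\d{6}" would match it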

The rule package item

To view or edit a rule package item, proceed in the same way as described above for the validation step item: open the contextual menu of the rule package item, labelled drools x.x (where x.x indicates the version number of the package), and select Edit package to open it in the workspace of your Studio. In the following figure, the rule package item is drools 0.1.


Once opened, its contents read as presented in the figure below:

This package defines a Drools declarative model for the corresponding survivor validation flow, using the user-defined columns in the input schema of the tRuleSurvivorship component. For further information about the Drools declarative model, see the Drools Guvnor manual.

Note:
The edit feature is intended for viewing items as well as minor modifications. If you have to rewrite the whole contents,
or remove or add some contents, we recommend using tRuleSurvivorship to define and organize the rules of interest and
then regenerate the new rule package into the Repository in order to avoid manual efforts and risky errors.


The validation flow item

The survivor validation flow item presents the diagram of a survivor validation flow. To view or edit it, proceed in the same way as described above for the validation step item: open the contextual menu of the survivor flow item and select Edit flow to open it in the workspace of your Studio.

Once opened, the diagram of the validation flow in this example reads as presented in the figure below:


This diagram is a simple Drools flow. You can select each step to check the corresponding properties in the Properties view,
for example, the RuleFlowGroup property, which indicates the group identifier of the rules defined and executed at each
step.

Note:
If the Properties view does not display, select the menu Window > Show view > General > Properties to enable it.

On the left side is docked the tool panel, where you can select the tools of interest to modify the established diagram. Three flow components are available in this figure, but in the Drools Flow nodes view of the Preferences dialog box, you can select the corresponding check box to add a flow component or clear the corresponding check box to hide it. To apply the preference settings, you need to re-open the flow of interest.
For further information about the Preferences dialog box, see Setting Talend Studio preferences on page 600.


Note:
The edit feature is intended for viewing items as well as minor modifications. If you have to rearrange the flow or
change properties of a step, we recommend using tRuleSurvivorship to define and organize the rules of interest and then
regenerate the new rule package into the Repository in order to avoid manual efforts and risky errors.

For further information about a Drools flow and its editing tools, see the relative Drools manuals.

Centralizing Embedded Rules (Drools)


Talend Studio provides the basis for storing and managing business rules through the integration of the Drools Business
Rule Management System (BRMS). It supports the dynamic addition and removal of business rules which you can later
execute in defined Jobs.

Note:
Drools Guvnor, a web based business rules governance system, has been integrated in Talend Studio. With Drools Guvnor,
non-technical users can quickly and easily create and modify complex business logic directly, via the Guvnor interface.
For more information, see Talend Administration Center User Guide.

Through the Rules folder in the Metadata node of the Repository tree view, you can create your own personalized rules or
access a file that holds predefined rules. Then you can use the tRule component to apply the encoded rules in one or more
of your Job designs.

Defining the general properties of the embedded rule

Procedure
1. In the Repository tree view, expand the Metadata node and the Rules Management node.
2. Right-click the Embedded Rules folder.

3. In the contextual menu, select Create Rules to display the New Rule... wizard that will guide you through the steps of
creating or selecting the business rules you want to use.

Note:
The Embedded Rules files are either Drools or Excel files of .drl or .xls formats, respectively.

4. In the New Rule... wizard, fill in schema generic information, such as Name and Description and click Next to open a
new view on the wizard.
For further information, see Setting up a database connection on page 319.

Uploading or creating a file


Through the New Rule... wizard, you can either:
• create a rule file of Drools format in which you can store the newly created rules, or
• connect to an existing rule file of Drools or Excel format.


Warning: When you connect to an Excel file, make sure that all occurrences of project and Job names on top of the
file correspond to the project you launch the Studio on and to the Job you want to use the rules in.

Creating a rule file

Procedure
1. Select the Create option to create the rule file of Drools format.

2. In the Type of rule resource list, select the format of the file you want to create: New DRL (rule package).
3. Click Finish to validate the operation and close the wizard.
A rule editor opens in the design workspace, in which you must manually define the rules you want to use in simplified
Drools language.
The figure below shows an example of a defined rule.


Connecting to an existing rule file

Procedure
1. Select the Select option.
The DRL/XSL field displays.

2. In the Type of rule resource field, select New DRL (rule package) or New XLS (Excel) depending on the file format you
want to set the path to.
3. Click the Browse button next to the field to set the path to the rule file you want to use.
4. Click Finish to close the wizard and open in the Studio the rule file you set the connection to.

Note: If you want to modify any of the rules held in the rule files, do the following:
• For a Drools file, open the file in Talend Studio and modify the rules directly in the open file.
• For an Excel file, open the file locally and carry out necessary modifications. Then in the Repository tree view
and under Rules, right-click the file connection and select Update Xls file in the contextual menu.

Warning: If you modify a rule, you must close the Job using the rule and reopen it to take the new modifications into account.

Centralizing Web Service metadata


If you often need to visit a Web Service from your Talend Studio, you can save your Web Service connections in the Repository.
The Web Service schema wizard enables you to create either a simple schema (Simple WSDL) or an advanced schema
(Advanced WebService), according to your needs.

Note:
In step 1, you must enter the schema metadata before choosing whether to create a simple or an advanced schema in
step 2. It is therefore important to enter metadata information which will help you to differentiate between your different
schema types in the future.

To create a simple schema, see Setting up a simple schema on page 469.


To create an advanced schema, see Setting up an advanced schema on page 473.


Setting up a simple schema


This section describes how to define a simple Web Service schema (Simple WSDL).

Defining general properties of the simple Web Service schema

Procedure
1. In the Repository, expand the Metadata node.
2. Right-click Web Service and select Create WSDL schema from the context menu list.

3. Enter the generic schema information such as its Name and Description.

4. Click Next to select the schema type in step 2.


Selecting the type of schema (Simple)

About this task


In this step, you need to indicate whether you want to create a simple or an advanced schema. In this example, a simple
schema is created.

Procedure
1. In the dialog box, select the Simple WSDL option.

2. Click Next to continue.

Specifying the URI and method

About this task


This step involves the definition of the URI and other parameters required to obtain the desired values.


In the Web Service Parameter zone:

Procedure
1. Enter the URI which will transmit the desired values in the WSDL field, http://www.webservicex.net/country.asmx?wsdl in this example.
2. If necessary, select the Need authentication? check box and then enter your authentication information in the User and
Password fields.
3. If you use an http proxy, select the Use http proxy check box and enter the information required in the host, Port, user
and password fields.
4. Enter the Method name in the corresponding field, GetCountryByCountryCode in this example.
5. In the Value table, Add or Remove values as desired, using the corresponding buttons.
6. Click Refresh Preview to check that the parameters have been entered correctly.


In the Preview tab, the values to be transmitted by the Web Service method are displayed, based on the parameters entered.

Finalizing the end schema (Simple WSDL)

About this task


You can modify the schema name (metadata, by default) and modify the schema itself using the tool bar.


Procedure
1. Add or delete columns using the corresponding toolbar buttons.
2. Modify the order of the columns using the up and down arrow buttons.
3. Click Finish.
The new schema is added to the Repository under the Web Service node. You can now drop it onto the design
workspace as a tWebServiceInput component in your Job.

Setting up an advanced schema


This section describes how to define an Advanced WebService schema.
Next, you need to define the input and output schemas and schema-parameter mappings in the Input mapping and Output
mapping tabs.

Note:
Depending on the type of the output, you can choose to normalize or denormalize the results by clicking the Normalize
and Denormalize buttons.

Defining general properties of the advanced Web Service schema

Procedure
1. In the Repository view, expand the Metadata node.
2. Right-click Web Service and select Create WSDL schema from the context menu list.

3. Enter the generic schema information, such as its Name and Description.


4. Click Next to select the schema type in step 2.

Selecting the type of schema (Advanced)

About this task


In this step, you must indicate whether you want to create a Simple or an Advanced schema. In this example, an Advanced
schema is created.

Procedure
1. In the dialog box, select the Advanced WebService option.


2. Click Next to define the Web Service parameters.

Defining the port name and operation

Procedure
1. Type in the URI of the Web Service WSDL file manually by typing in the WSDL field, or click the Browse... button to
browse your directory if your WSDL is stored locally.

2. Click the Refresh button to retrieve the list of port names and operations available.


3. Select the port name to be used, in the Port Name zone, countrySoap12 in this example.
4. Select the operation to be carried out in the Operation zone.
In this example, select GetCountryByCountryCode(parameters):string to retrieve the country name for a
given country code.

Defining the input schemas and mappings

About this task


To define the input schema and mappings, do the following:

Procedure
1. Click the Input mapping tab to define the input schema and set the parameters required to execute the operation.
2. In the table to the right, select the parameters row and click the [+] button to open the ParameterTree dialog box.


3. Select the parameter you want to use and click OK to close the dialog box.
A new row appears showing the parameter you added, CountryCode in this example.
4. In the table to the left, click the Schema Management button to open the Schema dialog box.

5. Define the input schema.


In this example, the schema has only one column: CountryCode.
6. Click OK to validate this addition and close the dialog box.
7. Create mappings between schema columns and parameters.
In this example, drop the CountryCode column from the left table onto the parameters.CountryCode row to the
right.
A red line shows that the column is mapped.


Note:
If available, use the Auto Map button, located at the top of the tab, to carry out the mapping automatically.

Defining the output schemas and mappings

About this task


To define the output schema and mappings, proceed as follows:

Procedure
1. Click the Output mapping tab to define the output schema and set its parameters.
2. In the table to the left, select the parameter row and click the [+] button to add a parameter.
The ParameterTree dialog box opens.


3. Select the parameter and click OK to close the dialog box.


A new row appears showing the parameter you added, GetCountryByCountryCodeResult in this example.
4. In the table to the right, click [...] to open the Schema dialog box.
5. Define the output schema.
In this example, the schema has only one column: Result.
6. Click OK to validate your addition and close the dialog box.
7. Create output parameter-schema mappings.
In this example, drop the parameters.GetCountryByCountryCodeResult row from the table to the left onto the
Result column to the right.


8. Click Next to finalize the schema.

Finalizing the end schema (Advanced WebService)

About this task


In this step the wizard displays the output schema generated.


You can customize the metadata by changing or adding information in the Name and Comment fields and make further
modifications using the toolbar, for example:

Procedure
1. Add or delete columns using the corresponding toolbar buttons.
2. Change the column order by clicking the up and down arrows.
3. Click Finish to finalize your advanced schema.
The new schema is added to the Repository under the corresponding Web Service node. You can now drop it onto the
design workspace as a tWebService component in your Job.

Discovering Web services using the Web Service Explorer

About this task


In the Web Service wizard step described in Specifying the URI and method on page 470, you can seek the help of the Web
Service Explorer to discover the Web Service methods available for a defined WSDL.
The Web Service Explorer button is located next to the Refresh Preview button.


Procedure
1. Click Web Service Explorer to launch the help browser.
2. In the Web Service Explorer's toolbar (top-right), click the WSDL Page icon.

3. Click WSDL Main.


4. In the WSDL URL field, enter the URL of the Web Service WSDL you want to get the operation details of, and click Go.
Note that the field is case-sensitive.
In this example: http://www.webservicex.net/country.asmx?WSDL

5. Click the port name you want to use under Bindings. In this example: countrySoap.
6. Click the Name of the method under Operations to display the parameters required, GetCountryByCountryCode in
this example.


7. Click the parameter name (in this example: CountryCode) to get more information about the parameter.
8. Click Add to add a new parameter line. You can add as many lines as you want.
9. Enter a value, fr in this example, and then click Go to execute.

Results
The result displays in the Status area. If the number of parameters you entered exceeds the maximum number of parameters authorized, an error message displays in a pop-up.
Simply copy and paste the relevant information to help you fill in the fields of the standard WSDL wizard.

Note:
The WSDL URI is not passed on automatically to the Web Service wizard fields.

You can use the Source link in the Status area in case you need to debug your Web Service request or response.


The Web Service Explorer can also help you find your favorite registry through the UDDI page and WSIL page buttons of the tool bar.

Centralizing a Validation Rule


A validation rule is a basic or integrity rule that you can apply to metadata items to check the validity of your data. It can be a basic check for correct values or a referential integrity check, both applicable to database tables or individual columns, file metadata, or any relevant metadata item.
This feature is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more
information, see Installing features using the Feature Manager.
All your business and validation rules can now be centralized in the Repository metadata, which enables you to modify, activate, deactivate, and delete them according to your needs.
They can be defined either from the Validation Rules metadata entry or directly from the metadata schema or columns you want to check, and they are to be used in your Job designs at the component level. Data that does not pass the validation check can easily be retrieved through a reject link for further processing, if necessary.
To see how to use a validation rule in a Job design, see Data Integration Job Examples.

Defining the general properties of a validation rule

About this task


To create a validation rule, complete the following:

Procedure
1. In the Repository tree view, expand Metadata and right-click Validation Rules, and select Create validation rule from the
contextual menu.
Or


In the Repository tree view, expand Metadata and expand any metadata item you want to check, either directly right-
click the schema of the metadata item or right-click a column of that schema, and select Add validation rule... from the
contextual menu.
For more information about metadata compatible with validation rules, see Selecting the trigger and type of validation
on page 486.

The validation rule wizard displays.

2. Fill in the general information of the metadata such as Name, Purpose and Description. The Status field is a customized
field that can be defined. For more information, see Status settings on page 586.
3. Click Next to proceed to the next step.


Selecting the schema to validate

About this task


In this step, select the schema or the column(s) you want to check.

Procedure
1. In the tree view on the left of the window, select the metadata item you want to check.
2. In the panel on the right, select the column(s) on which you want to perform the validity check.

Note:
At least one column must be selected.

3. Click Next to proceed to the next step.

Selecting the trigger and type of validation

Procedure
1. In this step, you can select the action that will trigger the rule:
• On select,
• On insert,
• On update,
• On delete.


Note: Some of the rule trigger options can be disabled according to the type of metadata you checked. For example, if the metadata item is a file, the On update and On delete triggers are not applicable.

Please refer to the following table for a complete list of supported (enabled) options:

Metadata item       On select   On insert   On update   On delete

Database Table      Y           Y           Y           Y
Database View       Y           -           -           -
Database Synonym    Y           -           -           -
SAP                 Y           -           -           -
File Delimited      Y           Y           -           -
File Positional     Y           Y           -           -
File RegEx          Y           Y           -           -
File XML            Y           Y           -           -
File Excel          Y           Y           -           -
File LDIF           Y           Y           -           -
LDAP                Y           Y           Y           Y
Salesforce          Y           Y           Y           Y
Generic Schema      -           -           -           -
HL7                 Y           -           -           -
Talend MDM          Y           Y           Y           Y
WSDL                Y           -           -           -

Validation rules are not supported for any other metadata that does not display in the above list.
When you select the On select trigger, the validation rule should be applied to the input components of your Job designs; when you select the On insert, On update, or On delete triggers, the validation rule should be applied to output components.
You can also select the type of validation you want to perform:
• a referential integrity validation rule that will check your data against a reference data,
• a basic restriction validation rule that will check the validity of the values of the selected field(s) with basic criteria,
• a custom code validation rule allowing you to specify your own Java or SQL based criteria.
2. Choose to create a referential rule, a basic rule, or a custom rule.
Referential rule: to create a referential integrity check validation rule:


a) In the Trigger time settings area, select the option corresponding to the action that will trigger the validation.
As the On insert and On update options are selected here, data will be checked when an insert or update action is performed.
b) In the Rule type settings area, select the type of validation you want to apply between Reference, Basic Value and
Custom check. To check data by reference, select Reference Check.
c) Click Next.


d) In this step, select the database schema that will be used as reference.
e) Click Next.

f) In the Source Column list, select the column name you want to check and drag it to the Target column against
which you want to compare it.
g) Click Next to define how to handle rejected data.


Basic rule: To create a basic check validation rule:

a) In the Trigger time settings area, select the option corresponding to the action that will trigger the validation. As the On select option is selected here, the check is performed when data is read.
b) In the Rule type settings area, select the type of validation you want to apply between Reference, Basic Value and
Custom check. To make a basic check of data, select Basic Value Check.
c) Click Next to go to the next step.


d) Click the plus button at the bottom of the Conditions table to add as many conditions as required and select between And and Or to combine them. Here, you want to ignore empty Phone number fields, so you add two conditions: retrieve data that is not empty and data that is not null.
e) Click Next to define how to handle rejected data.
Custom rule: to create a custom validation rule:


a) In the Trigger time settings area, select the option corresponding to the action that will trigger the validation. As the On select option is selected here, the check is performed when data is read.
b) In the Rule type settings area, select the type of validation you want to apply between Reference, Basic Value and
Custom check. To make a custom check of data, select Custom Check.
c) Click Next.


d) In this step, type in your Java condition directly in the text box or click Expression Editor to open the Expression Builder that will help you create your Java condition. Use input_row.columnname, where columnname is the name of the column of your schema, to match the input column. In the previous capture, the data will be passed if the value of the idState column is greater than 0 and less than 51 (see the sketch after these steps). For more information about the Expression Builder, see Working with expressions on page 240.
e) Click Next to define how to handle rejected data.
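As a minimal sketch of such a condition (illustrative only: idState is the column used in the example above, and any other column of your schema can be referenced in the same way through input_row), the expression typed into the text box could read:

// passes rows whose idState value is greater than 0 and less than 51
input_row.idState > 0 && input_row.idState < 51

Several criteria can typically be combined in one expression with the usual Java boolean operators (&&, ||, !).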

Handling rejected data

About this task


In this step:

Procedure
1. Select Disallow the operation so that data that fails to pass the condition is not output.
2. Select Make rejected data available on REJECT link in job design to retrieve the rejected data in another output.
3. Click Finish to create the validation rule.
Once created the validation rule displays:
• on the Repository under the Metadata > Validation Rules node,
• under the Validation Rules node of the table you check:

Centralizing an FTP connection


If you need to connect to an FTP server regularly, you can centralize the connection information under the Metadata node in
the Repository view.
All of the connections created appear under the FTP server connection node, in the Repository view.
You can drop the connection metadata from the Repository onto the design workspace. A dialog box opens in which you can
choose the component to be used in your Job.
For further information about how to drop metadata onto the workspace, see Using centralized metadata in a Job on page
513.

Defining the general properties of the FTP connection

About this task


To create a connection to an FTP server, follow the steps below:

Procedure
1. Expand the Metadata node in the Repository tree view.

2. Right-click FTP and select Create FTP from the context menu.


The connection wizard opens:

3. Enter the generic schema information such as its Name and Description.

Note:
The status field is a customized field which can be defined in the Preferences dialog box (Window > Preferences). For
further information about setting preferences, see Setting Talend Studio preferences on page 600.

4. When you have finished, click Next to enter the FTP server connection information.

Connecting to an FTP server

About this task


In this step, define the connection information and parameters.

Procedure
1. Enter your Username and Password in the corresponding fields.


2. In the Host field, enter the name of your FTP server host.
3. Enter the Port number in the corresponding field.
4. Select the Encoding type from the list.
5. From the Connection Model list, select the connection model you want to use:
• Select Passive if you want the FTP server to choose the port connection to be used for data transfer.
• Select Active if you want to choose the port yourself.
6. In the Parameter area, select a setting for FTP server usage. For standard usage, there is no need to select an option.
• Select the SFTP Support check box to use the SSH security protocol to protect server communications.
An Authentication method appears. Select Public key or Password according to what you use.
• Select the FTPs Support check box to protect server communication with the SSL security protocol.
• Select the Use Socks Proxy check box if you want to use this option, then enter the proxy information (the host
name, port number, username and password).
7. Click Finish to close the wizard.

Centralizing UN/EDIFACT metadata


The UN/EDIFACT wizard helps you create a schema to be used for the tExtractEDIField component to read and extract data
from UN/EDIFACT message files.
This feature is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more
information, see Installing features using the Feature Manager.


Defining the general properties of the UN/EDIFACT schema

About this task


In this step, define the general properties of the schema metadata such as the Name, Purpose and Description.

Procedure
1. In the Repository tree view, right-click the UN/EDIFACT tree node, and select Create EDI from the pop-up menu.
2. Enter the general properties of the schema, such as its Name and Description. The Name field must be filled.

3. Click Next to set the UN/EDIFACT standard and release.

Setting the UN/EDIFACT standard and release

About this task


In this step, define the UN/EDIFACT standard and release version.

Procedure
1. To search your UN/EDIFACT standard quickly, enter the full or partial name of the UN/EDIFACT standard in the Name
Filter field, for example, enter inv for INVOIC.
2. In the UN/EDIFACT standards list, expand the standard node of your choice, and select the release version of the UN/
EDIFACT messages you want to read through this metadata.


3. Click Next to proceed with schema mapping.

Mapping the schema UN/EDIFACT

About this task


The schema mapping window opens:


Procedure
1. From the left-hand panel, select the EDIFACT message fields that you want to include in your schema, and drop them onto the Description of the Schema table in the right-hand Schema panel.
2. If needed, select any field in the Description of the Schema table, and move it up or down or rename it.
3. Click Next to finalize the schema definition.

Finalizing the end schema UN/EDIFACT

About this task


This step shows the final schema generated.


Procedure
1. Give a name to your metadata and enter a comment if you want.
2. Click Finish to finalize the creation of your UN/EDIFACT metadata.
The metadata created is added under the UN/EDIFACT node in the Repository tree.

Exporting metadata as context and reusing context parameters to set up a connection

If the Export as context option is available for a metadata connection, you can export the connection details to a new
context group in the Repository for reuse in other connections or across different Jobs, or reuse variables of an existing
context group to set up your metadata connection.

Exporting connection details as context variables

About this task


To export connection details as context variables in a new context group in the Repository, follow the steps below:

Procedure
1. Upon creating or editing a metadata connection in the wizard, click Export as context.


2. In the Create / Reuse a context group wizard that opens, select Create a new repository context and click Next.


3. Type in a name for the context group to be created, and add any general information such as a description if required.
The name of the Metadata entry is proposed by the wizard as the context group name, and the information you provide
in the Description field will appear as a tooltip when you move your mouse over the context group in the Repository.

4. Click Next to create and view the context group, or click Finish to complete context creation and return to the
connection wizard directly.


In this example, click Next.


5. Check the context group generation result.
To edit the context variables, go to the Contexts node of the Repository, right-click the newly created context group,
and select Edit context group to open the Create / Edit a context group wizard after the connection wizard is closed.
To edit the default context, or add new contexts, click the [+] button at the upper right corner of the wizard.
To add a new context variable, click the [+] button at the bottom of the wizard.
For more information on handling contexts and variables, see Using contexts and variables on page 105.

6. Click Finish to complete context creation and return to the connection wizard.


The relevant connection details fields in the wizard are set with the context variables.
To unset the connection details, click the Revert Context button.

Using variables of an existing context group to set up a connection

About this task


To use variables of an existing context group centrally stored in the Repository to set up a connection, follow the steps
below:

Procedure
1. When creating or editing a metadata connection in the wizard, click Export as context.


2. In the Create / Reuse a context group wizard that opens, select Reuse an existing repository context and click Next.


3. Select a context group from the list and click Next.

4. For each variable, select the corresponding field of the connection details, and then click Next to view and edit the
context variables, or click Finish to show the connection setup result directly.
In this example, click Next.


5. Edit the contexts and/or context variables if needed. If you make any changes, your centralized context group will be
updated automatically.
For more information on handling contexts and variables, see Using contexts and variables on page 105.

6. Click Finish to validate context reuse and return to the connection wizard.


The relevant connection details fields in the wizard are set with the context variables.
To unset the connection details, click the Revert Context button.

Importing metadata from a CSV file


Talend Studio enables you to migrate metadata from any application to your Studio metadata.
This feature is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more
information, see Installing features using the Feature Manager.


In the Metadata node of the Repository tree view, you can import metadata from a CSV file produced by an external application.
This option is available only for database connections (Db Connections) and delimited files (File delimited).

Importing database metadata

About this task


Before importing database connection metadata from a CSV file, make sure that your CSV file format is valid. The file
columns should be filled as follows:

Name;Purpose;Description;Version;Status;DbType;ConnectionString;Login;
Password;Server;Port;Database;DBSchema;Datasource;File;DBRoot;TableName;
OriginalTableName;Label;OriginalLabel;Comment;Default;Key;Length;
Nullable;Pattern;Precision;Talend Type;DBType.

Note:
• It is recommended to use either Talend Type or DBType, not both.
• TableName is the name displayed in Talend Studio; OriginalTableName is the original table name in the database (you can choose to fill in only OriginalTableName).
• Label is the column name used in Talend Studio; OriginalLabel is the column name in the table (you can choose to fill in only OriginalLabel).

To import database connection metadata from a defined CSV file, do the following:

Procedure
1. In the Repository tree view, expand the Metadata node and right-click Db connections.
2. In the contextual menu, select Import connections from CSV.

The Import connections from CSV dialog box displays.


3. Click Browse... and go to the CSV file that holds the metadata of the database connection.


4. Click Finish to close the dialog box.


The Show Logs dialog box displays to list imported and rejected metadata, if any.

5. Click OK to close the dialog box.


The imported metadata displays under the DB connections node in the Repository tree view.

Importing delimited file metadata

About this task


You can import the metadata of a delimited file from a predefined CSV file.


Before importing delimited file metadata from a CSV file, make sure that each line of your CSV file complies with the
following format:

Name*; Purpose; Description; Version(0.1 by default); Status(DEV|TEST|PROD); FilePath*;
FileFormat(UNIX|WINDOWS|MAC); Encoding; FieldSeparatorValue; RowSeparatorValue;
EscapeType; EscapeChar; TextEnclosure; FirstLineCaption(true|false); HeaderValue;
FooterValue; RemoveEmptyRow(true|false); LimitValue; TableName*; Label*; Comment;
DefaultValue; Key*(true|false); Length*; Nullable(true|false); Pattern; Precision;
TalendType

Note that:
• The fields with an asterisk (*) must not be left blank.
• Name is the file connection name that will be created under the File delimited node. You can create multiple file
connections by specifying different connection names.
• TableName is the name of the file schema, and Label is the column name in the schema.
• Escape sequences must be used to specify CSV metacharacters or control characters, such as ; or \n.
• The FirstLineCaption field must be set to true and the HeaderValue field must be filled properly if the delimited file
contains a header row and rows to be skipped.
The following example shows how to import the metadata of a delimited file named directors.csv from a predefined
CSV file named directors_metadata.csv.
Below is an abstract of the file directors.csv, which has two columns id and name:

id;name
1;Gregg Araki
2;P.J. Hogan
3;Alan Rudolph

The CSV file directors_metadata.csv contains two lines to describe the metadata of directors.csv:

directors;Centralize directors metadata;Metadata of directors.csv;0.1;DEV;
E:\Talend\Data\Input\directors.csv;WINDOWS;UTF-8; "\";\""; "\"\\n\""; Delimited;;;
true;1;;false;;directors_schema;id;;;false;1;true;;0;id_Integer

directors;Centralize directors metadata;Metadata of directors.csv;0.1;DEV;
E:\Talend\Data\Input\directors.csv;WINDOWS;UTF-8; "\";\""; "\"\\n\""; Delimited;;;
true;1;;false;;directors_schema;name;;;false;1;true;;0;id_String

To import delimited file connection metadata from the above-mentioned CSV file, do the following:

Procedure
1. In the Repository tree view, expand the Metadata node and right-click File delimited.
2. In the contextual menu, select Import connections from CSV.

The Import connections from CSV dialog box opens.


3. Click Browse... and browse to the CSV file that describes the metadata of the delimited file,
directors_metadata.csv in this example.


4. Click Finish to close the dialog box.


The Show Logs dialog box opens to list imported and rejected metadata, if any.

5. Click OK to close the dialog box.


A new file connection named directors is created under the File delimited node in the Repository tree view, with its
properties as defined in the CSV file.


Using centralized metadata in a Job


About this task
For recurrent use of files and database connections in various Jobs, we recommend that you store the connection and schema
metadata in the Repository tree view under the Metadata node. Different folders under the Metadata node will group the
established connections including those to databases, files and systems.
Different wizards will help you centralize connection and schema metadata in the Repository tree view.
Once the relevant metadata is stored under the Metadata node, you will be able to drop the corresponding components
directly onto the design workspace.

Procedure
1. In the Repository tree view of the Integration perspective, expand Metadata and the folder holding the connection you
want to use in your Job.
2. Drop the relevant connection or schema onto the design workspace.

A dialog box prompts you to select the component you want to use among those offered.


3. Select the component and then click OK. The selected component displays on the design workspace.

Results
Alternatively, according to the type of component (Input or Output) you want to use, perform one of the following
operations:
• Output: Press Ctrl on your keyboard while you are dropping the component onto the design workspace to directly
include it in the active Job.
• Input: Press Alt on your keyboard while you drop the component onto the design workspace to directly include it in the
active Job.
If you double-click the component, the Component view shows the selected connection details as well as the selected
schema information.

Note:
If you select the connection without selecting a schema, then the properties will be filled with the first encountered
schema.


Using routines
Managing routines
What are routines
A routine is a Java class with many functions. It is generally used to factorize code.
Talend Studio allows you to store frequently used parts of code or extract parts of existing functions and then call them via
routines. You can call a routine many times from within the same Job or from more than one of your Jobs. They therefore
optimize data processing and improve Job capacities. This factorization also makes it easier to resolve any problem which
may arise and allows you to update the code used in multiple Jobs quickly and easily.
If you want to enable your Job to call any function in a routine, you need to set up code dependencies on the Job. For more
information, see Setting up code dependencies on a Job on page 118 and Setting up code dependencies on a Joblet on page
155.
There are the following two types of routines, and all routines are stored under the Code node in the Repository tree view.
• System routines: the predefined routines that adopt the most common Java methods using the Talend syntax. They are
classified according to their usage.
• User routines: the routines that you create or adapt from system routines. There are two types of user routines:
• Inner routines: user routines that are bundled in custom routine jars. They are created under a specific custom
routine jar. For more information about custom routine jars, see Creating custom routine jars on page 516.
• Global user routines: user routines that are not bundled in custom routine jars. They are created under the
Code > Global Routines node. You can create folders to organize global user routines. You can also set up their
dependencies directly on Jobs and Joblets.

Note: By default, user routines migrated from any previous version of Talend Studio are all saved under the
Code > Global Routines node.

Accessing the system routines

Procedure
1. Click Code > Global Routines > system in the Repository tree view.


The system routines are stored under the system folder and are classified according to their usage. Each system routine
contains several functions.

Note: The system folder and system routines are read only. If you have subscribed to one of the Talend solutions
with the Profiling or MDM perspective, you will also have access to the system routines specific to Talend Data
Quality or MDM.

2. Double-click the system routine you are interested in, for example, Numeric.
The routine editor opens. All functions within a system routine are composed of some descriptive text, followed by the
corresponding Java code.
You can click the name of the function you are interested in, in the Outline view at the bottom left of the Studio, to jump
directly to its corresponding code. Alternatively, press Ctrl+O in the routine editor and then select the function from the
list displayed to locate it.

Customizing the system routines


If a system routine is not adapted to your specific needs, you can customize it by copying and pasting its content into a user
routine, then modifying the content accordingly.

About this task


To customize a system routine:

Procedure
1. Create a user routine by following the steps outlined in Creating user routines on page 517.
The routine editor opens in the workspace, where you can find a basic example of a function.
2. In the system routines folder, double-click the system routine you want to customize.
3. Select all or part of the code and copy it using Ctrl+C.
4. Click the tab to access your user routine and paste the code by pressing Ctrl+V.
5. Modify the code as required and press Ctrl+S to save it.
We advise you to use the descriptive text to detail the input and output parameters. This will make your routines easier
to maintain and reuse.

Managing user routines


Talend Studio allows you to create user routines and modify them to fill your specific needs.

Creating custom routine jars

Talend Studio allows you to create custom routine jars and set up custom routine jar dependencies on Jobs and Joblets.
The custom routine jar allows you to package multiple user routines with their dependencies into a single archive. By setting
up custom routine jar dependencies on Jobs and Joblets, the code dependencies for Jobs and Joblets become more explicit
and this can help reduce dependency conflicts.

About this task


To create a custom routine jar:

Procedure
1. In the Repository tree view, right-click Code > Custom Routine Jars and select Create Routine Jar from the contextual
menu.
The New Routine Jar dialog box is displayed.
2. Fill in the generic properties such as Name, Purpose (optional), and Description (optional).
The information you fill in the Description field will appear as a tooltip when your mouse pointer is moved over the
custom routine jar in the Repository tree view.
3. Click Finish to save the new custom routine jar.


The newly added custom routine jar is displayed under the Custom Routine Jars node.
Now you can create inner routines and add them into the new custom routine jar by right-clicking it and selecting
Create routine from the contextual menu. For more information, see Creating user routines on page 517.
You can add any existing global user routine into a custom routine jar by right-clicking the global user routine and
selecting Copy Routine to... from the contextual menu.

Note: A global user routine will be kept as a global one after being copied into a custom routine jar.

Creating user routines

You can create your own routines according to your particular factorization needs.

Procedure
1. If you need to create a global user routine, right-click Code > Global Routines and select Create routine from the
contextual menu.
You can create folders under the Global Routines node by right-clicking the node and selecting Create folder. The
folders can help you organize global user routines.
2. If you need to create an inner routine within a custom routine jar, right-click the custom routine jar under Code >
Custom Routine Jars and select Create routine from the contextual menu.
3. In the New routine dialog box displayed, fill in the generic properties for the new routine, such as Name, Purpose
(optional), and Description (optional).
The information you fill in the Description field will appear as a tooltip when your mouse pointer is moved over the
routine in the Repository tree view.
4. Click Finish.
The newly created routine is saved in the Repository tree view. The routine editor opens to reveal a model routine
which contains a function sample, by default, comprising descriptive text in blue, followed by the corresponding code.

Note: We advise you to add a detailed description for each function in the routine. The description should generally
include the input and output parameters you would expect to use, as well as the results returned along with an
example. This information tends to be useful for collaborative work and the maintenance of the routines.

5. Modify or replace the function sample with your own code and press Ctrl+S to save the changes. Otherwise, the routine
is saved automatically when you close the routine editor.

Note: You can copy all or part of the code for a system routine and use it in a user routine by using the Ctrl+C and
Ctrl+V commands, then adapt the code according to your needs.

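For illustration, here is what a simple global user routine could look like once the sample function has been replaced (the StringUtils class and its capitalize function are hypothetical examples, not generated by Talend Studio):

package routines;

public class StringUtils {

    /**
     * capitalize: returns the input string with its first letter in upper case.
     *
     * {talendTypes} String
     *
     * {Category} User Defined
     *
     * {param} string("talend") input: the string to capitalize.
     *
     * {example} capitalize("talend") # Talend
     */
    public static String capitalize(String input) {
        if (input == null || input.isEmpty()) {
            return input;
        }
        return input.substring(0, 1).toUpperCase() + input.substring(1);
    }
}
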
Editing user routines

You can modify user routines whenever you like.

About this task


To edit a user routine:

Procedure
1. Right-click the user routine in the Repository tree view and select Edit routine.
2. The routine editor opens in the workspace, where you can modify the routine.
3. Once you have adapted the routine to suit your needs, press Ctrl+S to save it.

Editing user routine libraries

You can edit the library of a user routine by importing external libraries (usually .jar files) for it.
You can import external libraries for an inner routine by editing the library of the custom routine jar in which the inner
routine is packaged.


These external library files will be listed, like modules, in the Modules view in your current Studio. For more information on
the Modules view, see the Talend Installation Guide.
The imported library will also be listed in the library file of your Studio.

About this task


To edit the library for a user routine or a custom routine jar, complete the following:

Procedure
1. If the library to be imported is not available on your machine, either download and install it using the Modules view or
download and store it in a local directory.
2. In the Repository tree view, right-click the user routine or the custom routine jar and then select Edit Routine Libraries
or Edit Routine Jar libraries.
The Import External Library dialog box displays.

3. Click New... to open the Module dialog box where you can import the external library.
4. If you have installed the library using the Modules view, specify it in either of the following two ways:
• select the Platform option and then select the library from the list, or
• select the Artifact repository (local m2/nexus) option, enter the search keyword in the Module Name field, click
Search Local to search in the local repository <TalendStudio>\configuration\.m2 or click Search Remote
to search in the remote artifact repository, then select the library from the list below. The search keyword can be
the partial or full name of the library.
The Search Remote button is available only when user libraries are set up in Talend Administration Center or
Talend Management Console.


5. If you have stored the library file in a local directory, select the Install a new module option, and click the [...] button to
browse to the library file.
If the MVN URI of the library exists in the file <TalendStudio>\configuration\MavenUriIndex.xml, it is
automatically filled in the Default MVN URI field.
If the MVN URI of the library is within the jar file, it is automatically detected and filled in the Custom MVN URI field if
it is different from the default MVN URI. Select the Custom MVN URI check box if you want to change the custom Maven
URI or use the custom Maven URI to install the library. If neither the default MVN URI nor the custom MVN URI exists,
the Default MVN URI field is filled with mvn:org.talend.libraries/<jarname>/6.0.0-SNAPSHOT/jar.
6. Click OK to confirm your changes.
The imported library file is listed in the Library File list in the Import External Library dialog box.

Note: You can delete any of the already imported routine files if you select the file in the Library File list and click
Remove.

7. To include the external libraries when building your Job as a standalone Job to be executed outside Talend Studio, select
the corresponding Required check box. Otherwise, you will get the "class not found" error when the Job is executed.
By default, the Required check box is selected for each imported library.
If you have imported Camel or CXF libraries and you need to build your Job as an OSGI bundle to be deployed on Talend
Runtime, it is recommended that you clear the Required check box to avoid issues caused by duplicate class paths, as
those libraries are already provided with Talend Runtime bundles.
8. Click Finish to close the dialog box.

Calling a routine function from a Job


You can call any function in any of the system and user routines from your Job components in order to run them at the same
time as your Job.
To access all routine functions, press Ctrl+Space in any of the fields in the Basic settings view of a component used in your
Job and select the one you want to use from the list displayed.


Alternatively, you can call any of these functions by indicating the relevant routine name and the function name, followed by
the expected settings, in any of the Basic settings fields in the following way:

<RoutineName>.<FunctionName>

Note: The syntax of routine call statements is case sensitive.

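For example, to convert the value of an incoming column to upper case, you could enter an expression such as the following in a Basic settings field (row1 and its name column are hypothetical):

StringHandling.UPCASE(row1.name)
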
Use case: Creating a file for the current date


This scenario describes a one-component Job that calls a system routine to create an empty file with the date and time of
creation in the file name.

Procedure
1. In the Palette, click File > Management, then drop a tFileTouch component onto the workspace.
This component allows you to create an empty file.
2. Double-click the component to open its Basic settings view in the Component tab.
3. In the File Name field, enter the path to the file to be created between double quotes, or click [...] and browse to an
existing file, and modify the file name if needed.
4. Add a plus symbol (+) in the File Name field.
5. Press Ctrl+Space to open a list of all of the routines, and in the auto-completion list which appears, select
TalendDate.getDate to use the Talend routine which allows you to obtain the current date.
6. Modify the format of the date provided by default, if required.

Warning: If you are working on Windows, the ":" characters in the time format are not allowed in file names and
must be removed or replaced.

7. Add another plus symbol (+) and then enter the file extension between double quotes.


8. Press F6 to run the Job.

Results
The tFileTouch component creates an empty file with the creation date and time in the file name as specified in the File
Name field.
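
For reference, the complete expression entered in the File Name field could look like the following, where the output directory and the date pattern are only illustrative:

"D:/output/data_" + TalendDate.getDate("CCYY-MM-DD hhmmss") + ".txt"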

Use case: Defining a variable accessible to multiple Jobs


This use case creates a variable that is accessible to Jobs. It also gives examples on how to access the variable.

Creating a routine and declaring a static variable

Procedure
1. In the Repository tree view, create a new routine MyRoutine.
2. Declare a static variable in the routine and set its value as null, for example:

private static String name = null;

3. Add the setter/getter methods for this variable and press Ctrl+S to save the changes.

public static synchronized void setValue(String message) {


name=message;
}

public static synchronized String getValue() {


return name;
}


Note: The complete Java code for this routine is as below:

package routines;
/*
* user specification: the function's comment should contain keys as follows: 1.
write about the function's comment.but
* it must be before the "{talendTypes}" key.
*
* 2. {talendTypes} 's value must be talend Type, it is required . its value
should be one of: String, char | Character,
* long | Long, int | Integer, boolean | Boolean, byte | Byte, Date, double |
Double, float | Float, Object, short |
* Short
*
* 3. {Category} define a category for the Function. it is required. its value is
user-defined .
*
* 4. {param} 's format is: {param} <type>[(<default value or closed list
values>)] <name>[ : <comment>]
*
* <type> 's value should be one of: string, int, list, double, object, boolean,
long, char, date. <name>'s value is the
* Function's parameter name. the {param} is optional. so if you the Function
without the parameters. the {param} don't
* added. you can have many parameters for the Function.
*
* 5. {example} gives a example for the Function. it is optional.
*/
public class MyRoutine {
private static String name = null;

/**
* helloExample: not return value, only print "hello" + message.
*
*
* {talendTypes} String
*
* {Category} User Defined
*
* {param} string("world") input: The string need to be printed.
*
* {example} helloExemple("world") # hello world !.
*/
public static void helloExample(String message) {
if (message == null) {
message = "World"; //$NON-NLS-1$
}
System.out.println("Hello " + message + " !"); //$NON-NLS-1$ //$NON-NLS-2$
}

public static synchronized void setValue(String message) {


name = message;
}

public static synchronized String getValue() {


return name;
}
}

Setting up the child Jobs

The example below shows how to share a value between different Jobs via the routine defined earlier.

Procedure
1. Create a Job named childJob1, and add two components by typing their names on the design workspace or dropping
them from the Palette to the design workspace:
• A tFixedFlowInput to generate an input data flow
• A tJavaRow to receive the data and in which the Job calls the setter method to give a new value to the variable


2. Double-click the tFixedFlowInput component to open its Basic settings view.


3. Click the [...] button next to Edit schema to open the [Schema] dialog box and define the schema of the input data. In
this example, the schema has only one column name of the string type.

4. In the Mode area, select the Use Single Table option, and define the corresponding value for the name column in the
Values table. In this example, the value is "Talend".

Note: The tJava component is calling the getter method and assigning the return value to a string variable, then
printing the variable value in the console.

5. Double-click the tJavaRow component to open its Basic settings view.


6. In the Code area, enter the following code to add the setter method.

MyRoutine.setValue(input_row.name);


7. Create a Job called childJob2 and then create a tJava component in this Job.
8. Double-click the tJava component to open its Basic settings view.
9. In the Code area, enter the following code to add the getter method.

String name=MyRoutine.getValue();
System.out.println(name);

Setting up the parent Job

Procedure
1. Create a Job named parentJob and add two tRunJob components.
2. Connect the first tRunJob component to the second tRunJob component with an OnSubjobOk link.
3. Double-click each tRunJob component to open its Basic settings view.
4. Fill in the Job field of tRunJob1 and tRunJob2 with childJob1 and childJob2 respectively to call the two child Jobs created.

This approach of defining a variable in a routine and sharing it across different Jobs does not work with the Use an
independent process to run subjob feature of the tRunJob component.

Executing the Jobs to call the routine

Procedure
Execute the parentJob. You can see the following results in the console:

Starting job parentJob at 19:52 21/06/2013.


[statistics] connecting to socket on port 3397
[statistics] connected
Talend
[statistics] disconnected
Job parentJob ended at 19:52 21/06/2013. [exit code=0]


System routines
DataOperation routine
The DataOperation routine contains functions which allow you to manipulate data.
You can access these functions by double-clicking the DataOperation node under the system routines folder in the
Repository tree view.

Function Description Syntax

CHAR Converts a numeric value into its ASCII character DataOperation.CHAR(int i)


string equivalent.

DTX Converts a decimal integer into its hexadecimal DataOperation.DTX(int i)


equivalent.

FIX Rounds a number of type Double to a number DataOperation.FIX(double d)


of type Long with the precision specified in the
PRECISION statement.

XTD Converts a hexadecimal string into its decimal DataOperation.XTD(String text)


equivalent.

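You can try these functions in a tJava component, for example (a minimal sketch; the comments state the results suggested by the descriptions above):

System.out.println(DataOperation.CHAR(65));   // the ASCII character whose code is 65, that is "A"
System.out.println(DataOperation.DTX(255));   // the hexadecimal equivalent of 255
System.out.println(DataOperation.XTD("FF"));  // the decimal equivalent of hexadecimal FF, that is 255
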
Mathematical routine
The Mathematical routine contains functions which perform mathematical operations.
You can access these functions by double-clicking the Mathematical node under the system routines folder in the Repository
tree view.

Function Description Syntax

ABS Returns the absolute (positive) numeric value Mathematical.ABS(double a)


of an expression that is of the double type. This
function returns a value of the double type.
Example:
• Mathematical.ABS(-3.14)
• Output: 3.14

ACOS Calculates the trigonometric arc-cosine of an Mathematical.ACOS(double a)


expression that is of the double type. This
function returns a value of the double type.
Example:
• Mathematical.ACOS(0.5)
• Output: 1.0471975511965979

ASIN Calculates the trigonometric arc-sine of an Mathematical.ASIN(double a)


expression that is of the double type. This
function returns a value of the double type.
Example:
• Mathematical.ASIN(0.5)
• Output: 0.5235987755982989

ATAN Calculates the trigonometric arctangent of Mathematical.ATAN(double a)


an expression that is of the double type. This
function returns a value of the double type.
Example:
• Mathematical.ATAN(0.5)
• Output: 0.4636476090008061


BITAND Performs a bitwise AND operation to two Mathematical.BITAND(int a, int b)


expressions that are of the int type. This
function returns a value of the int type.
Example:
• Mathematical.BITAND(3,6)
• Output: 2

BITNOT Performs a bitwise NOT operation to an Mathematical.BITNOT(int a)


expression that is of the int type. This function
returns a value of the int type.
Example:
• Mathematical.BITNOT(3)
• Output: -4

BITOR Performs a bitwise OR operation to two Mathematical.BITOR(int a, int b)


expressions that are of the int type. This
function returns a value of the int type.
Example:
• Mathematical.BITOR(3,6)
• Output: 7

BITXOR Performs a bitwise XOR operation to two Mathematical.BITXOR(int a, int b)


expressions that are of the int type. This
function returns a value of the int type.
Example:
• Mathematical.BITXOR(3,6)
• Output: 5

COS Calculates the trigonometric cosine of an Mathematical.COS(double a)


expression that is of the double type. This
function returns a value of the double type.
Example:
• Mathematical.COS(3.14)
• Output: -0.9999987317275395

COSH Calculates the hyperbolic cosine of an Mathematical.COSH(double a)


expression that is of the double type. This
function returns a value of the double type.
Example:
• Mathematical.COSH(3.14)
• Output: 11.573574828312076

DIV Calculates the whole part of the real division Mathematical.DIV(double a, double b)
of two expressions that are of the
double type. This function returns a value of the
int type.
Example:
• Mathematical.DIV(9.6,6.4)
• Output: 1

EXP Calculates the result of base e raised to the Mathematical.EXP(double a)


power designated by an expression that is of the
double type. This function returns a value of the
double type.
Example:
• Mathematical.EXP(2.5)
• Output: 12.182493960703473


INT Calculates the integer numeric value of an Mathematical.INT(string a)


expression that is of the string type. This
function returns a value of the int type.
Example:
• Mathematical.INT("100")
• Output: 100

FFIX Rounds the value of an expression that is of the Mathematical.FFIX(double a, int precision)
double type to a string with a fixed precision.
FFIX is provided for compatibility with existing
software. This function returns a value of the
string type.
Example:
• Mathematical.FFIX(3.1415926, 2)
• Output: 3.14

FFLT Rounds the value of an expression that is of the Mathematical.FFLT(double a)


double type to a string with a precision of 14.
This function returns a value of the string type.
Example:
• Mathematical.FFLT(3.14159265
35897932)
• Output: 3.14159265358979

LN Calculates the natural logarithm of an Mathematical.LN(double a)


expression that is of the double type. This
function returns a value of the double type.
Example:
• Mathematical.LN(2.71828)
• Output: 0.999999327347282

MOD Calculates the modulo (the remainder) of two Mathematical.MOD(double a, double b)


expressions that are of the double type. This
function returns a value of the String type.
Example:
• Mathematical.MOD(7,3)
• Output: 1.0

NEG Returns the arithmetic additive inverse of the Mathematical.NEG(double a)


value of an expression that is of the double type.
This function returns a value of the double type.
Example:
• Mathematical.NEG(3.14)
• Output: -3.14

NUM Returns true (1) if the value of an expression is a Mathematical.NUM(double a)


numeric data type; otherwise, returns false (0).
Example:
• Mathematical.NUM("3")
• Output: 1

REAL Converts a numeric expression into a real Mathematical.REAL(double a)


number without loss of accuracy. This function
returns a value of the double type.
Example:
• Mathematical.REAL("3.14")
• Output: 3.14


RND Generates a random number between zero and Mathematical.RND(double a)


the value of an expression that is of the double
type. This function returns a value of the double
type.
Example:
• Mathematical.RND(1.5)
• Output: 0.6509250767781953

SADD Adds two string numbers and returns the result Mathematical.SADD(String a,String b)
as a string number.
Example:
• Mathematical.SADD("3","7")
• Output: 10.0

SCMP Compares two string numbers and returns: Mathematical.SCMP(String a,String b)


• 1 if a is larger than b;
• 0 if a is equal to b;
• -1 if a is less than b.
Example:
• Mathematical.SCMP("3","7")
• Output: -1

SDIV Returns the quotient of the whole division of Mathematical.SDIV(int a, int b)


two integers. This function returns a value of the
int type.
Example:
• Mathematical.SDIV(7,3)
• Output: 2

SIN Returns the trigonometric sine of an expression. Mathematical.SIN(double a)


This function returns a value of the double type.
Example:
• Mathematical.SIN(3.14/4)
• Output: 0.706825181105366

SINH Returns the hyperbolic sine of an expression. Mathematical.SINH(double a)


This function returns a value of the double type.
Example:
• Mathematical.SINH(3.14/4)
• Output: 0.8681436194737884

SMUL Multiplies two string numbers. This function Mathematical.SMUL(String a,String b)


returns a value of the double type.
Example:
• Mathematical.SMUL("4","5")
• Output: 20.0

SQRT Calculates the square root of a number. This Mathematical.SQRT(double a)


function returns a value of the double type.
Example:
• Mathematical.SQRT(1.69)
• Output: 1.3

SSUB Subtracts one string number from another and Mathematical.SSUB(String a,String b)
returns the result as a string number.
Example:
• Mathematical.SSUB("5","3")
• Output: 2.0


TAN Returns the trigonometric tangent of an Mathematical.TAN(double a)


expression. This function returns a value of the
double type.
Example:
• Mathematical.TAN(3.14/4)
• Output: 0.9992039901050427

TANH Returns the hyperbolic tangent of an expression. Mathematical.TANH(double a)


This function returns a value of the double type.
Example:
• Mathematical.TANH(3.14/4)
• Output: 0.6555672165322445

Numeric routine
The Numeric routine contains several functions which allow you to return whole or decimal numbers in order to use them as
settings in one or more Job components.
You can access the numeric routine functions by double-clicking the Numeric node under the system routines folder in the
Repository tree view. The Numeric routine contains several functions, notably sequence, random and convertImpliedDecimalFormat.

Function Description Syntax

sequence Returns an incremental numeric ID. Numeric.sequence("Parameter name", start value, increment value)

resetSequence Creates a sequence if it doesn't exist and attributes a new start value. Numeric.resetSequence(Sequence Identifier, start value)

removeSequence Removes a sequence. Numeric.removeSequence(Sequence Identifier)

random Returns a random whole number between the maximum and minimum values. Numeric.random(minimum start value, maximum end value)

convertImpliedDecimalFormat Returns a decimal with the help of an implied decimal model. Numeric.convertImpliedDecimalFormat("Target Format", value to be converted)

The three routines sequence, resetSequence, and removeSequence are closely related.
• The sequence routine is used to create a sequence identifier, named s1 by default, in the Job. This sequence identifier
is global in the Job.
• The resetSequence routine can be used to initialize the value of the sequence identifier created by the sequence
routine.
• The removeSequence routine is used to remove the sequence identifier from the global variable list in the Job.

Note: When using the tRunJob component, the sequence identifier is shared by the parent Job and all child Jobs across
the whole JVM. You should use a unique identifier if you do not want it to be shared globally.

Creating a Sequence

The sequence routine allows you to create automatically incremented IDs, using a tJava component:

System.out.println(Numeric.sequence("s1",1,1));
System.out.println(Numeric.sequence("s1",1,1));

The routine generates and increments the ID automatically:

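In the same way, you could reset or remove a sequence with the companion routines (a minimal sketch; the identifier "s1" and the values are only illustrative):

System.out.println(Numeric.sequence("s1", 1, 1));
Numeric.resetSequence("s1", 100);                  // re-initialize the sequence with a new start value
System.out.println(Numeric.sequence("s1", 1, 1));  // the sequence resumes from the new start value
Numeric.removeSequence("s1");                      // remove the sequence identifier from the global variable list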

Converting an Implied Decimal

It is easy to use the convertImpliedDecimalFormat routine, along with a tJava component, for example:

System.out.println(Numeric.convertImpliedDecimalFormat("9V99","123"));

The routine automatically converts the value entered as a parameter according to the format of the implied decimal
provided:

Relational routine
The Relational routine contains several functions which allow you to check affirmations based on boolean values.
You can access these functions by double-clicking the Relational node under the system routines folder in the Repository
tree view.

Function Description Syntax

ISNULL Checks if the variable provided is a null value. Relational.ISNULL(variable)


It returns true if the value is NULL and false
if the value is not NULL.

NOT Returns the complement of the logical value of Relational.NOT(expression)


an expression.

isNull Checks if the variable provided is a null value. Relational.isNull(variable)


It returns 1 if the value is NULL and 0 if the
value is not NULL.

To check the Relational routine, you can use the ISNULL function, along with a tJava component, for example:

String str = null;


System.out.println(Relational.ISNULL(str));

In this example, the test result is displayed in the Run view:

SQLike routine
The SQLike routine contains functions which allow you to extract a part of a string.
You can access these functions by double-clicking the SQLike node under the system routines folder in the Repository tree
view.


Function Description Syntax

mid Extracts a substring from a string, from the position specified by beginIndex. The count parameter specifies the number of characters to be extracted. If count is not provided, this function extracts the substring from the position specified by beginIndex to the end. SQLike.mid(String instr, int beginIndex, [int count])
Example:
• SQLike.mid("abcdefg",3,3)
• Output: cde

mid_index Extracts a substring from a string. If count is positive, the left-most substring that contains count-1 delimiters is returned; if count is negative, the right-most substring that contains -count-1 delimiters is returned. SQLike.mid_index(String instr, String delimiter, int count)
Example 1:
• SQLike.mid_index("a,bc,d,e,f,g",",",3)
• Output: a,bc,d
Example 2:
• SQLike.mid_index("a,bc,d,e,f,g",",",-3)
• Output: e,f,g
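
You could test these functions in a tJava component, for example:

System.out.println(SQLike.mid("abcdefg", 3, 3));               // cde
System.out.println(SQLike.mid_index("a,bc,d,e,f,g", ",", 3));  // a,bc,d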

StringHandling routine
The StringHandling routine contains several functions which allow you to carry out various kinds of operations and tests on
alphanumeric expressions, based on Java methods.
You can access these functions by double-clicking the StringHandling node under the system routines folder in the
Repository tree view.

Function Description Syntax

ALPHA Checks whether the expression is StringHandling.ALPHA("string to be checked")


arranged in alphabetical order.
Returns the true or false
boolean accordingly.

IS_ALPHA Checks whether the expression StringHandling.IS_ALPHA("string to be checked")


contains alphabetical characters
only, or otherwise. Returns
the true or false boolean
accordingly.

CHANGE Replaces an element of a string StringHandling.CHANGE("string to be checked", "string to


with a defined replacement be replaced","replacement string")
element and returns the new
string.

COUNT Returns the number of times a StringHandling.COUNT("string to be checked", "substring


substring occurs within a string. to be counted")

DOWNCASE Converts all uppercase letters in StringHandling.DOWNCASE("string to be converted")


an expression into lowercase and
returns the new string.

UPCASE Converts all lowercase letters in an StringHandling.UPCASE("string to be converted")


expression into uppercase and re
turns the new string.

DQUOTE Encloses an expression in double StringHandling.DQUOTE("string to be enclosed in double


quotation marks. quotation marks")


EREPLACE Substitutes all substrings StringHandling.EREPLACE(oldStr, regex, replacement)


that match the given regular
expression in the given old string
with the given replacement and
returns a new string.

INDEX Returns the position of the first StringHandling.INDEX("string to be checked", "substring


character in a specified substring, specified")
within a whole string. If the
substring specified does not exist
in the whole string, the value -1 is
returned.

LEFT Specifies a substring which StringHandling.LEFT("string to be checked", number of ch


corresponds to the first n aracters)
characters in a string.

RIGHT Specifies a substring which StringHandling.RIGHT("string to be checked", number of


corresponds to the last n characters)
characters in a string.

LEN Calculates the length of a string. StringHandling.LEN("string to check")

SPACE Generates a string consisting of a StringHandling.SPACE(number of blank spaces to be ge


specified number of blank spaces. nerated)

SQUOTE Encloses an expression in single StringHandling.SQUOTE("string to be enclosed in single


quotation marks. quotation marks")

STR Generates a particular character the StringHandling.STR('character to be generated', number


number of times specified. of times)

TRIM Deletes the spaces and tabs before StringHandling.TRIM("string to be checked")


the first non-blank character in a
string and after the last non-blank
character, then returns the new st
ring.

BTRIM Deletes all the spaces and tabs StringHandling.BTRIM("string to be checked")


after the last non-blank character
in a string and returns the new
string.

FTRIM Deletes all the spaces and tabs StringHandling.FTRIM("string to be checked")


preceding the first non-blank
character in a string.

SUBSTR Returns a portion of a string. It StringHandling.SUBSTR(string, start, length)


counts all characters, including
• string: the character string you want to search.
blanks, starting at the beginning of
the string. • start: the position in the string where you want to start counting.
• length: the number of characters you want to return.

LTRIM Removes blanks or characters from StringHandling.LTRIM(string[, trim_set])


the beginning of a string.
• string: the string you want to change.
• trim_set: the characters you want to remove from the beginning of
the string. LTRIM will compare the trim_set to the string character-
by-character, starting with the left side of the string, and remove
characters until it fails to find a matching character in the trim_set. If t
his parameter is not specified, LTRIM will remove any blanks from the
beginning of the string.


RTRIM Removes blanks or characters from StringHandling.RTRIM(string[, trim_set])


the end of a string.
• string: the string you want to change.
• trim_set: the characters you want to remove from the ending of the
string. RTRIM will compare the trim_set to the string character-b
y-character, starting with the right side of the string, and remove
characters until it fails to find a matching character in the trim_set. If t
his parameter is not specified, RTRIM will remove any blanks from the
ending of the string.

LPAD Converts a string to a specified StringHandling.LPAD(first_string, length[, second_str


length by adding blanks or cha ing])
racters to the beginning of the
• first_string: the string you want to change.
string.
• length: the length you want the string to be after being padded.
• second_string: the characters you want to append to the left side of the
first_string.

RPAD Converts a string to a specified StringHandling.RPAD(first_string, length[, second_str


length by adding blanks or cha ing])
racters to the end of the string.
• first_string: the string you want to change.
• length: the length you want the string to be after being padded.
• second_string: the characters you want to append to the right side of
the first_string.

INSTR Returns the position of a character StringHandling.INSTR(string, search_value, start,


set in a string, counting from left occurrence)
to right and starting from 1.
• string: the string you want to search.
Note that it returns 0 if the search • search_value: the set of characters you want to search for.
is unsuccessful and NULL if the • start: the position in the string where you want to start the search. The
search value is NULL. default is 1, meaning it starts the search from the first character in the
string.
• occurrence: the occurrence you want to search for.
For example, StringHandling.INSTR("Talend Technology",
"e", 3, 2), it will start the search from the third character l and return 7,
the position of the second character e.

INSTR Returns the position of a byte set StringHandling.INSTR(byte[] string, byte[] search_value,
in a string, counting from left to start, occurrence)
right and starting from 1.
• string: the string you want to search.
Note that it returns 0 if the search • search_value: the set of characters you want to search for.
is unsuccessful and NULL if the • start: the position in the string where you want to start the search. The
search value is NULL. default is 1, meaning it starts the search from the first character in the
string.
• occurrence: the occurrence you want to search for.

TO_CHAR Converts numeric values to text StringHandling.TO_CHAR(numeric_value)


strings.

Storing a string in alphabetical order

It is easy to use the ALPHA routine along with a tJava component, to check whether a string is in alphabetical order:

System.out.println(StringHandling.ALPHA("abcdefg"));

The check returns a boolean value.


Checking whether a string is alphabetical

It is easy to use the IS_ALPHA routine along with a tJava component, to check whether the string is alphabetical:

System.out.println(StringHandling.IS_ALPHA("ab33cd"));

The check returns a boolean value.

Replacing an element in a string

It is easy to use the CHANGE routine along with a tJava component, to replace one element in a string with another:

System.out.println(StringHandling.CHANGE("hello world!", "world", "guy"));

The routine replaces the old element with the new element specified.

Checking the position of a specific character or substring, within a string

The INDEX routine is easy to use along with a tJava component, to check whether a string contains a specified character or
substring:

System.out.println(StringHandling.INDEX("hello world!", "hello"));


System.out.println(StringHandling.INDEX("hello world!", "world"));
System.out.println(StringHandling.INDEX("hello world!", "!"));
System.out.println(StringHandling.INDEX("hello world", "?"));

The routine returns a whole number which indicates the position of the first character specified, or indeed the first character
of the substring specified. Otherwise, -1 is returned if no occurrences are found.

Calculating the length of a string

The LEN routine is easy to use, along with a tJava component, to check the length of a string:

System.out.println(StringHandling.LEN("hello world!"));

The check returns a whole number which indicates the length of the string, including spaces and blank characters.

Deleting blank characters

The FTRIM routine is easy to use, along with a tJava component, to delete blank characters from the start of a string:

System.out.println(StringHandling.FTRIM(" Hello world !"));

The routine returns the string with the blank characters removed from the beginning.

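Padding a string

It is easy to use the LPAD and RPAD routines, along with a tJava component, to pad a string to a fixed length (a minimal sketch; the expected outputs assume the padding behavior described in the table above):

System.out.println(StringHandling.LPAD("42", 5, "0"));
System.out.println(StringHandling.RPAD("id", 5, "_"));

The first call should return "42" padded on the left to five characters, and the second "id" padded on the right to five characters.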

TalendDataGenerator routine
The TalendDataGenerator routine contains several functions which allow you to generate sets of test data. These functions
are based on fictitious lists of first names, second names, addresses, towns and States provided by Talend. They are generally
used when developing Jobs, using a tRowGenerator component, for example, to avoid using production or company data.
You can access these functions by double-clicking the TalendDataGenerator node under the system routines folder in the
Repository tree view.

Function Description Syntax

getFirstName returns a first name taken randomly TalendDataGenerator.getFirstName()


from a fictitious list.

getLastName returns a random surname from a TalendDataGenerator.getLastName()


fictitious list.

getUsStreet returns an address taken randomly TalendDataGenerator.getUsStreet()


from a list of common American street
names.

getUsCity returns the name of a town taken TalendDataGenerator.getUsCity()


randomly from a list of American
towns.

getUsState returns the name of a State taken TalendDataGenerator.getUsState()


randomly from a list of American
States.

getUsStateId returns an ID randomly taken from TalendDataGenerator.getUsStateId()


a list of IDs attributed to American
States.

Note: No entry parameter is required as Talend provides the list of fictitious data.

Generating fictitious data

It is easy to use the different functions to generate data randomly. Using a tJava component, you can, for example, create a
list of fictitious client data using functions such as getFirstName(), getLastName(), getUSCity():

System.out.println(TalendDataGenerator.getFirstName());
System.out.println(TalendDataGenerator.getLastName());
System.out.println(TalendDataGenerator.getUsCity());
System.out.println(TalendDataGenerator.getUsState());
System.out.println(TalendDataGenerator.getUsStateId());
System.out.println(TalendDataGenerator.getUsStreet());

The set of data taken randomly from the list of fictitious data is displayed in the Run view:


TalendDate routine
The TalendDate routine contains several functions which allow you to carry out different kinds of operations and checks
concerning the format of Date expressions.

Warning: These functions internally use the Java class SimpleDateFormat except when the pattern is set to yyyy-MM-
dd or yyyy-MM-dd HH:mm:ss, where there is no check for the format of the input date string in order to achieve better
performance. You need to make sure the input date string matches the set pattern precisely when using either of these
two patterns.

You can access these functions by double-clicking the TalendDate node under the system routines folder in the Repository
tree view.

Function Description Syntax

addDate Adds n days, n months, n hours, n minutes TalendDate.addDate("initial date string", ["date
or n seconds to a Java date and returns the format - eg.: yyyy/MM/dd"], integer n,"format of
new date. the part of the date to which n is to be added -
eg.:yyyy").
The Date format is: yyyy, MM, dd, HH,
mm, ss or SSS.

compareDate Compares all or part of two dates according TalendDate.compareDate(Date date1, Date date2,[String
to the format specified. Returns 0 if the pattern])
dates are identical, -1 if the first date is
earlier and 1 if the second date is earlier.
The pattern parameter specifies the date
format, for example, "yyyy-mm-dd".

diffDate Returns the difference between two dates TalendDate.diffDate(Date date1, Date date2,[String
in terms of days, months or years according dateType],[boolean ignoreDST])
to the comparison parameter specified.
The dateType parameter specifies the
date part to be compared, for example,
"yyyy"; the ignoreDST parameter
specifies if DST needs to be taken into
consideration.

dateFormatConv Returns the difference between two dates TalendDate.diffDateFloor(Date date1(), Date date2,
ert by floor in terms of years, months, days, "format of the part of the date to be compared -
hours, minutes, seconds or milliseconds eg.:MM")
according to the comparison parameter
specified.

diffDateFloor Returns the difference between two dates TalendDate.diffDateFloor(Date date1(), Date date2,
by floor in terms of years, months, days, "format of the part of the date to be compared -
hours, minutes, seconds or milliseconds eg.:MM")
according to the comparison parameter
specified.


diffDateIgnore Returns the difference between two dates TalendDate.diffDateIgnoreDST(Date1(), Date2(),[String


DST in terms of days, months or years according dateType])
to the comparison parameter specified,
ignoring the DST.
The dateType parameter specifies the
format of the date part to be compared, for
example, "yyyy".

formatDate Returns a date string which corresponds to TalendDate.formatDate("date format - eg.: yyyy-MM-dd
the format specified. HH:mm:ss", Date() to be formatted)

formatDateInTimeZone Formats a date into the specified format date/time string under the given timezone. TalendDate.formatDateInTimeZone(String datePattern, Date date, String TimeZoneID)

formatDateInUTC Formats a date into the specified format date/time string under the UTC timezone. TalendDate.formatDateInUTC(String datePattern, Date date, String TimeZoneID)

formatDateLocale Changes a date into a date/hour string TalendDate.formatDateLocale("format target",


according to the format used in the target java.util.Date date, "language or country code")
country.

formatDatetime Formats date to MSSQL 2008 type TalendDate.formatDatetimeoffset(Date date)


offset datetimeoffset ISO 8601 string with local
time zone format string : yyyy-MM-dd
HH:mm:ss.SSSXXX (JDK7 support it).

getCurrentDate Returns the current date. No entry TalendDate.getCurrentDate()


parameter is required.

getDate Returns the current date and hour in the TalendDate.getDate("Format of the string - ex: CCYY-
format specified (optional). This string can MM-DD")
contain fixed character strings or variables
linked to the date. By default, the string is
returned in the format of DD/MM/CCYY.

getFirstDayOfM Changes the date of an event to the first TalendDate.getFirstDayMonth(Date)


onth day of the current month and returns the
new date.

getLastDayOfMo Changes the date of an event to the last day TalendDate.getLastDayMonth(Date)


nth of the current month and returns the new
date.

getPartOfDate Returns part of a date according to the TalendDate.getPartOfDate("String indicating the part
format specified. This string can contain of the date to be retrieved, "String in the format of
fixed character strings or variables linked to the date to be parsed")
the date.

getRandomDate Returns a random date, in the ISO format. TalendDate.getRandomDate("format date of the
character string", String minDate, String maxDate)

isDate Checks whether the date string corresponds to the format specified. Returns the boolean value true or false according to the outcome. TalendDate.isDate(Date() to be checked, String pattern, [boolean ignoreTimeZone])
where pattern specifies the format of the date to be checked, for example, yyyy-MM-dd HH:mm:ss.

isDateStrict Tests string value as a date with right TalendDate.isDateStrict(String stringDate, String
pattern using strict rules. This validation pattern)
uses Java 8 time tools. The range of time-
zone offsets is restricted to -18:00 to
18:00 inclusive. Returns a boolean value
indicating whether the stringDate is a date
string with a right pattern.


parseDate Changes a string into a Date. Returns a date TalendDate.parseDate("format date of the string to
in the specified format. be parsed", "string in the format of the date to be
parsed",["boolean about whether parsing is set to be
lenient, that is to say, accepting the heuristic match
with the format"])

parseDateInUTC changes a string into a Date in UTC. Returns TalendDate.parseDateInUTC("format date of the string
a date in the UTC format. to be parsed", "string in the format of the date to be
parsed", ["boolean about whether parsing is set to be
lenient, that is to say, accepting the heuristic match
with the format"])

parseDateLocale Parses a string according to a specified TalendDate.parseDateLocale("date format of the string


format and extracts the date. Returns the to be parsed", "String in the format of the date to
date according to the local format specified. be parsed", "code corresponding to the country or
language")

setDate Modifies part of a date according to the TalendDate.setDate(Date, whole n, "format of the part
part and value of the date specified and the of the date to be modified - eg.:yyyy")
format specified.

TO_CHAR Converts a date to a character string. TalendDate.TO_CHAR(date[,format])


• date: the date value you want to convert to a character string.
• format: the string which defines the format of the return value.

TO_DATE Converts a character string to a Date/Time TalendDate.TO_DATE(string[, format])


datatype.
• string: the string you want to convert to a Date/Time datatype.
• format: the format string that matches the part of the string
argument. If not specified, the string value must be in the date
format MM/dd/yyyy HH:mm:ss.SSS.
For example, TalendDate.TO_DATE("04/24/2017 13
:55:42.123") will return Mon Apr 24 13:55:42 CST 2017.

ADD_TO_DATE Adds a specified amount to one part of a TalendDate.ADD_TO_DATE(date, format, amount)


datetime value, and returns a date in the
• date: the date value you want to change.
same format as the date you pass to the
function. • format: the format string specifying the portion of the date value you
want to change.
• Valid format strings for year: Y, YY, YYY, and YYYY.
• Valid format strings for month: MONTH, MM, and MON.
• Valid format strings for day: D, DD, DDD, DAY, and DY.
• Valid format strings for hour: HH, HH12, and HH24.
• Valid format string for minute: MI.
• Valid format string for second : SS.
• Valid format string for millisecond: MS.
• amount: the integer value specifying the amount of years, months,
days, hours, and so on by which you want to change the date value.
For example,
if TalendDate.getCurrentDate() returns Mon Apr 24 14:26:03
CST 2017,
TalendDate.ADD_TO_DATE(TalendDate.getCurrentDate(),
"YY", 1) will return Tue Apr 24 14:26:03 CST 2018.


Warning:
Although "yyyy" and "YYYY" in the date format return the same year number in most cases , "YYYY" may not work as
expected when used:
• at the first week of a year if the year does not start by the first day of the week.
• at the last week of a year if the year does not end by the last day of the week.
For example, when calculating what date it is 3 days before January 2, 2016, the code below would return a wrong date:

System.out.println(TalendDate.formatDate("YYYY-MM-dd", TalendDate.addDate(TalendDat
e.TO_DATE("01/02/2016 08:10:30.123"), -3, "dd")));

while the following code would work as expected:

System.out.println(TalendDate.formatDate("yyyy-MM-dd", TalendDate.addDate(TalendDat
e.TO_DATE("01/02/2016 08:10:30.123"), -3, "dd")));

Therefore, you should typically use "yyyy", which represents calendar year.

Formatting a Date

The formatDate routine is easy to use, along with a tJava component:

System.out.println(TalendDate.formatDate("dd-MM-yyyy", new Date()));

The current date is initialized according to the pattern specified by the new Date() Java function and is displayed in the
Run view:

Checking a Date

It is easy to use the isDate routine, along with a tJava component to check if a date expression is in the format specified:

System.out.println(TalendDate.isDate("2010-02-09 00:00:00",
"yyyy-MM-dd HH:mm:ss"));

A boolean is returned in the Run view:

Comparing Dates

It is easy to use the compareDate routine, along with a tJava component to compare two dates, for example to check if the
current date is identical to, earlier than or later than a specific date, according to the format specified.

System.out.println(TalendDate.compareDate(new Date(),
    TalendDate.parseDate("yyyy-MM-dd", "2025/11/24")));

In this example the current date is initialized by the Java function new Date() and the value -1 is displayed in the Run
view to indicate that the current date is earlier than the second date.


Configuring a Date

It is easy to use the setDate routine, along with a tJava component to change the year of the current date, for example:

System.out.println(TalendDate.formatDate("yyyy/MM/dd HH:mm:ss",new Date()));


System.out.println(TalendDate.setDate(new Date(),2011,"yyyy"));

The current date, followed by the new date are displayed in the Run view:

Parsing a Date

It is easy to use the parseDate routine, along with a tJava component to change a date string from one format into another
date format, for example:

System.out.println(TalendDate.parseDate("yyyy-MM-dd HH:mm:ss",
"1979/10/20 19:00:59"));

The string is changed and returned in the date format:

Retrieving part of a Date

It is easy to use the getPartOfDate routine, along with a tJava component to retrieve part of a date, for example:

Date D=TalendDate.parseDate("dd-MM-yyyy HH:mm:ss", "13-10-2010 12:23:45");


System.out.println(D.toString());
System.out.println(TalendDate.getPartOfDate("DAY_OF_MONTH", D));
System.out.println(TalendDate.getPartOfDate("MONTH", D));
System.out.println(TalendDate.getPartOfDate("YEAR", D));
System.out.println(TalendDate.getPartOfDate("DAY_OF_YEAR", D));
System.out.println(TalendDate.getPartOfDate("DAY_OF_WEEK", D));

In this example, the day of month (DAY_OF_MONTH), the month (MONTH), the year (YEAR), the day number of the year
(DAY_OF_YEAR) and the day number of the week (DAY_OF_WEEK) are returned in the Run view. All the returned data are
numeric data types.

Note: In the Run view, the date string referring to the months (MONTH) starts with 0 and ends with 11: 0 corresponds to
January, 11 corresponds to December.


Formatting the Current Date

It is easy to use the getDate routine, along with a tJava component, to retrieve and format the current date according to a
specified format, for example:

System.out.println(TalendDate.getDate("CCYY-MM-DD"));

The current date is returned in the specified format (the format pattern is optional):

TalendString routine
The TalendString routine contains several functions which allow you to carry out various operations on alphanumerical
expressions.
You can access these functions by double-clicking the TalendString node under the system routines folder in the Repository
tree view.

Function: addEscapeChars
Description: Adds a specified character before each special character (that is, a character that is not a letter, a digit, _, or a space) in a string.
Syntax: TalendString.addEscapeChars("padding chars", 'escape char')

Function: replaceSpecialCharForXML
Description: Returns a string from which the special characters (for example: <, >, &...) have been replaced by equivalent XML characters.
Syntax: TalendString.replaceSpecialCharForXML("string containing the special characters - for example: Thelma & Louise")

Function: checkCDATAForXML
Description: Identifies strings starting with <![CDATA[ and ending with ]]> as pertaining to XML and returns them without modification. Transforms the strings not identified as XML into a form which is compatible with XML and returns them.
Syntax: TalendString.checkCDATAForXML("string to be parsed")

Function: talendTrim
Description: Parses the entry string and removes the filler characters from the start and end of the string according to the alignment value specified: -1 for the filler characters at the end of the string, 1 for those at the start of the string and 0 for both. Returns the trimmed string.
Syntax: TalendString.talendTrim("string to be parsed", "filler character to be removed", character position)

Function: removeAccents
Description: Removes accents from a string and returns the string without the accents.
Syntax: TalendString.removeAccents("String")

Function: getAsciiRandomString
Description: Generates a random string with a specific number of characters.
Syntax: TalendString.getAsciiRandomString(whole number indicating the length of the string)

Function: unionString
Description: Combines a variable number of strings with a specified string separator.
Syntax: TalendString.unionString(String separator, Object... objects)
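For the functions that are not illustrated in the following sections, a quick way to try them is again a tJava component. The lines below are a minimal sketch; the argument values are arbitrary examples and the random string differs on each run:

// addEscapeChars: escape every special character in the input string with a backslash
System.out.println(TalendString.addEscapeChars("Thelma & Louise!", '\\'));
// getAsciiRandomString: generate a random 8-character ASCII string
System.out.println(TalendString.getAsciiRandomString(8));
// unionString: combine several strings with a ";" separator
System.out.println(TalendString.unionString(";", "a", "b", "c"));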

Formatting an XML string

It is easy to run the replaceSpecialCharForXML routine along with a tJava component, to format a string for XML:

System.out.println(TalendString.replaceSpecialCharForXML("Thelma & Louise"));


In this example, the & character is replaced in order to make the string XML compatible:

Trimming a string

It is easy to use the talendTrim routine, along with a tJava component to remove the string padding characters from the
start and end of the string:

System.out.println(TalendString.talendTrim("**talend open studio****", '*', -1));
System.out.println(TalendString.talendTrim("**talend open studio****", '*', 1));
System.out.println(TalendString.talendTrim("**talend open studio****", '*', 0));

The star characters are removed from the end of the string first, then from the start, and finally from both ends:

Removing accents from a string

It is easy to use the removeAccents routine, along with a tJava component, to replace the accented characters, for
example:

System.out.println(TalendString.removeAccents("sâcrebleü!"));

The accented characters are replaced with non-accented characters:

TalendStringUtil routine
The TalendStringUtil routine contains only one function, DECODE, which allows you to search a port for a value you specify and return a corresponding result.
You can access the function by double-clicking the TalendStringUtil node under the system routines folder in the Repository
tree view.

Function: DECODE
Description: Searches a port for a value you specify. If the function finds the value, it returns a result value, which you define. You can build an unlimited number of searches within a DECODE function.
Syntax: TalendStringUtil.DECODE(value, defaultValue, search1, result1[, search2, result2]...)
• value: the value you want to search.
• defaultValue: the value you want to return if the search does not find a matching value. The default value can be set to null.
• search: the value for which you want to search. The search value must have the same datatype as the value argument.
• result: the value you want to return if the search finds a matching value.


Below is an example of how to use the DECODE function with a tJava component. You need to add a tJava component to a new Job, then enter the following code, which searches for the value 10, in the Code field on the Basic settings view of the tJava component.

TalendStringUtil<Integer,String> example = new TalendStringUtil<Integer,String>();
System.out.println(example.DECODE(10, "error", 5, "five", 10, "ten", 15, "fifteen", 20, "twenty"));

Note that you need to create a new object of the TalendStringUtil type, preferably using generic type parameters to constrain the input data, and then use that object to call the DECODE routine.
Press F6 to run the Job. It returns ten, which is the result defined for the value 10.


CommandLine
CommandLine is the equivalent of Talend Studio without the GUI. It can be used, for example, to execute Jobs in batch mode.
At any time, and in any mode, you can display the full Help by typing help. The Help content provides an exhaustive list of commands and their respective descriptions.
CommandLine has three operating modes:
• Standalone/Basic mode, see Standalone/Basic mode on page 545.
• Shell mode, see Shell mode on page 545.
• Script mode, see Script mode on page 545.
For a list of all the commands that can be used in the CommandLine, see CommandLine API on page 546.

CommandLine overview
The CommandLine offers the same basic functionalities as Talend Studio and it is used to generate and export the Jobs
developed with Talend Studio onto the Job servers, or export Services, Routes and data service Jobs onto Runtime servers.
To launch the CommandLine on Linux, you have to run the commandline-linux.sh file. To launch the CommandLine on
Windows, you have to run the commandline.bat file.

Note: Before shutting down or rebooting Windows, you should ensure that the JobServer and CommandLine services are not running.

The commandline file contains three different parts:


• the name of the Talend Studio executable corresponding to your OS, for example: ./Talend-Studio-linux-gtk-x86
• operating options, for example:
• nosplash, no interface is displayed.
• application org.talend.commandline.CommandLine, the application is launched in commandline
mode.
• consoleLog, the logs are displayed in the console.
• data commandline-workspace, specify the path and name of the commandline workspace.
• the operating mode, for example, shell or scriptFile /tmp/myscript.
If you want to modify the default settings, you can edit the file and adjust it according to your needs.
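For example, on Linux the assembled command inside the commandline file typically looks like the following single line (shown here with the shell operating mode):

./Talend-Studio-linux-gtk-x86 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace shell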
If you want to run your CommandLine in the background on Linux, you first need to disable the shell input. To do so:
1. Edit the commandline-linux.sh file.
2. Add the --disableShellInput option.
3. Save your file.
Then you can execute your CommandLine in the background. To do so:
1. Switch to the <CommandlinePath>.
2. Enter the command ./commandline-linux.sh &.
Once the commands have been executed, you can close the CommandLine window without stopping the service.
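As an illustration, the sequence below summarizes these steps from a terminal; the exact placement of the --disableShellInput option inside the script depends on your commandline file and should be checked against it:

cd <CommandlinePath>
# after adding --disableShellInput to the commandline script
./commandline-linux.sh &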

Operating modes
If your operating system provides a graphical shell, you just have to double-click the commandline.bat or commandline-linux.sh file (according to your OS) to run the CommandLine. If your operating system only provides a command-line interface, switch to the CommandLine path and enter the command ./commandline-linux.sh to run the CommandLine.
To switch between the different operating modes and execute any command, you can either edit the commandline file if you
are using a graphical shell or you can do it directly from the command-line if you are using a command-line shell.


Standalone/Basic mode

About this task


Standalone/basic is CommandLine's default mode. This mode allows you to execute a single command.
In Standalone mode, CommandLine switches off after executing all commands passed on through the list of arguments. For
example, on Linux you can display the Help content using the following arguments:

Procedure
1. Switch to the <CommandlinePath>.
2. Enter in the shell:

./Talend-Studio-linux-gtk-x86 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace help

Results
Once the commands have been executed, CommandLine exits.
From this mode, you can switch to the Shell and Script modes, detailed hereafter.

Shell mode

About this task


In Shell mode, you launch CommandLine once and can then enter its commands interactively, without having to restart CommandLine for each command.
To launch CommandLine in Shell mode on Linux:

Procedure
1. Switch to the CommandLine directory.
2. Enter the following command to switch to Shell mode:

./Talend-Studio-linux-gtk-x86 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace shell

3. Now, enter your CommandLine command, for example:


initRemote http://myAdminCenterURL.com -ul [email protected] -up mypassword
Make sure you have entered the correct user credentials. If the credentials are correct, they are saved automatically and reused during project logon. Otherwise, an error message is displayed.
4. When the output of the above command is returned by the CommandLine, enter the command:
listProject

Warning: To access the help that lists all the valid commands, you can start CommandLine in standalone mode and run the help command. The most complete help is provided by CommandLine in standalone mode, since standalone mode also gives access to the commands available in shell mode.

Script mode
In Script mode, CommandLine reads a script file containing a list of commands and executes them. To do so, use the following command in your commandline file:

./Talend-Studio-linux-gtk-x86 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace scriptFile /tmp/myscript

If needed, you can add the parameter --disableLocalMode to disable the local mode of the CommandLine. After that,
only the commands help and initRemote are allowed.
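For example, the option can be appended to the script-mode command; the placement shown below is an illustration:

./Talend-Studio-linux-gtk-x86 -nosplash -application org.talend.commandline.CommandLine -consoleLog -data commandline-workspace scriptFile /tmp/myscript --disableLocalMode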


Example of script file read by the CommandLine:

initRemote http://localhost:8888/org.talend.administrator -ul [email protected] -up passwd
logonProject -pn myProject
exportItems /tmp/myitems.zip

Updating your license using the CommandLine


About this task
Before you begin, make sure you have downloaded the license file to the local Nexus server.

Procedure
1. Launch your CommandLine. For more information on how to launch the CommandLine, see Operating modes on page
544.
2. Run the following command to connect to your repository:

initRemote {tac_url} -ul {TAC login username} -up {TAC login password}

3. Run the following command to update the license:

checkAndUpdate -tu {TAC login username} -tup {TAC login password}

The CommandLine updates the license, downloads and applies the available patch, if necessary, and automatically
restarts.

CommandLine API
This section reproduces the exhaustive help for all commands, which you get when you type the help command in your Talend CommandLine application.
If you do not use the latest version of the Talend CommandLine, some of the following commands might not be available, and we recommend that you refer to the help command of your current Talend CommandLine version.
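For example, to display the detailed help of a single command rather than the full list, pass the command name to help (buildJob is used here as an illustration):

help buildJob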


Commands

+-----------------------------------------------------------------------------------
---------------+
|Talend Commandline Plugin :
|
| * arguments can be surrounded by (") or (') characters
|
| * the semi-colon (;) character can be used to separate commands from each other
|
| * special characters (space \ " ' ;) can be escaped using the character (\)
|
+---------------------------------------------------------------------------------
-----------------+
|Usage:
|
| [addReference -ref <references> bridgeExcelExport -d <directory>|-filter <filter>
|
|buildItemSources <item-id> -im <main-job-only>|-iv <item-version> buildJob <jobName> -
af |
|<filename>|-ant|-az|-bin|-bt <build type>|-dd <path>|-em|-eo|-et|-ic|-ijs|-il|-its|-
jactc|-jall |
|<log4j level name>|-jc <context name>|-jstats|-jt <type>|-jv <version>|-maven|-od|-tc
buildRoute |
|<routeName> -af <filename>|-az|-bt <build type>|-dd <path>|-em|-jactc|-jc <context
|
|name>|-jstats|-jv <version>|-maven|-od cancelGroup changeMavenVersion <maven-version>
-if |
|<filterExpr>|-sj|-ss changeStatus <newStatusCode> -d|-if <filterExpr> changeVersion
<newVersion |
|(x.x | nextMajor | nextMinor)> -d|-flv|-if <filterExpr> checkAndUpdate -tu <user>|-tup
<password> |
|createJob <jobName> -o|-sf <file path> createProject -pa <author login>|-pap
<password>|-pd |
|<description>|-pl <language (java/perl)>|-pn <technical name> createTask <taskName> -
a|-asce|-b |
|<branch name>|-d <description>|-ese|-esn <execution server>|-jactc|-jc <job context>|-
jn <job |
|name>|-jv <job version>|-ouj <on unavailable jobserver>|-ousj <unknown state job>|-pn
<project |
|name>|-rjc deleteItems -if <filterExpr> deleteReference -ref <references>
deployJobToServer |
|<jobName> -es <name>|-jactc|-jall <level>|-jc <context name>|-jv <version>|-pd
<password>|-sp |
|<port>|-tp <port>|-un <username>|-useSSL executeAllJob -i <path>|-if <filterExpr>|-
jactc|-jc |
|<context name>|-jcp <key=value1> [<key=value2> ...]|-jrdd <path>|-jt <time (in sec)>
executeJob |
|<jobName> -i <path>|-jactc|-jc <context name>|-jcp <key=value1> [<key=value2> ...]|-
jrdd |
|<path>|-jt <time (in sec)>|-jv <version> executeJobOnServer <jobName> -es <name>|-
jactc|-jall |
|<level>|-jc <context name>|-jcp <key=value1> [<key=value2> ...]|-jrrd <jobResultDest
Dir>|-jsp |
|<port>|-jtp <port>|-jv <version>|-pd <password>|-ra <runas>|-un <username>|-useSSL
executeReport |
|-n <name>|-p <path>|-pc <context name> executeRoute <routeName> -i <path>|-jactc|-jc
<context |
|name>|-jcp <context param1> [<context param2> ...]|-jrdd <path>|-jt <time out>|-jv
<version1> |
|[<version2> ...] exportAllJob -dd <path>|-if <filterExpr>|-jactc|-jall <log4j level
name>|-jc |
|<context name> exportAsCWM <dbConnectionName> -dd <path> exportItems <destination
(dir|.zip)> |
|-d|-if <filterExpr> exportJob <jobName> -af <filename>|-ant|-dd <path>|-eo|-jactc|-
jall <log4j |
|level name>|-jc <context name>|-jstats|-jv <version>|-maven exportRoute <routeName> -
af |
|<filename>|-az|-bt <build type>|-dd <path>|-em|-jactc|-jc <context name>|-jstats|-jv
|
|<version>|-maven|-od exportService <serviceName> -af <filename>|-dd <path>|-maven|-sv
<version> |
|generateAuditReport <auditId> -fp <filePath>|-t <template> generateMigrationReport -dp
<dataPath |


|(dir)>|-from <from_version>|-ps <projects>|-rp <reportPath (dir)>|-to <to_version1>


|
|[<to_version2> ...] getCommandStatus <commandId> -ci <index> help <commandName>
helpFilter |
|importDatabaseMetadata <filePath> importDelimitedMetadata <filePath> importItems
<source |
|(dir|.zip)> -if <filterExpr>|-im|-o|-s|-sl initLocal initRemote <adminUrl> -ul
<login>|-up |
|<password> listCloudWorkspaces -n <name>|-p <password>|-r <url>|-u <username>
listCommand |
|-a|-q|-r listExecutionServer listItem -if <filterExpr>|-m listJob listProject -b
listRoute |
|listService logoffProject logonProject -br <branch>|-gt|-pn <technical name>|-ro|-ul
<login>|-up |
|<password> mCreateServer <serverName> -pd <password>|-u <url>|-un <username>
mDeployItem -on |
|<object name>|-ot <object type>|-ov <object version>|-sn <server name>|-sp <server
password> |
|mDeployJob <jobName1> [<jobName2> ...] -c <object name>|-ov <object version>|-sn
<server |
|name>|-sp <server password> mExportDataContainer -dn <data container name>|-path <zip
file |
|path>|-sn <server name>|-sp <server password> mImportDataContainer -d|-path <zip file
path>|-sn |
|<server name>|-sp <server password> mListServer mUnDeployItem -on <object name>|-ot
<object |
|type>|-sn <server name>|-sp <server password> mUpdateServer -sn <server name>|-sp
<server |
|password> migrationCheck -dp <dataPath (dir)> populateAndGenerateReport -dd <driver>|-
du |
|<user>|-fp <filePath>|-ju <jdbc_url>|-t <template>|-up <password> populateAudit -dd
<driver>|-du |
|<user>|-ju <jdbc_url>|-up <password> publishAction <actionName> -i|-na|-p <password>|-
r <url>|-u |
|<username>|-v <version>|-w <workspace> publishJob <jobName> -a <artifactId>|-g
<group>|-jactc|-jc |
|<contextName>|-p <password>|-pv <version>|-r <url>|-rt <repository-type>|-s|-t
<exportType>|-u |
|<username>|-v <version> publishRoute <routeName> -a <artifactId>|-g <group>|-p
<password>|-pv |
|<version>|-r <url>|-rt <repository-type>|-s|-u <username>|-v <version> publishService
|
|<serviceName> -a <artifactId>|-g <group>|-p <password>|-pv <version>|-r <url>|-rt
|
|<repository-type>|-s|-u <username>|-v <version> quit regenerateAllPoms -if
<filterExpr> |
|setUserComponentPath -c|-up <path> showVersion startGroup -o <infor> stopGroup]
|
+---------------------------------------------------------------------------------
-----------------+
| addReference Create project
reference |
| -ref (--references) references projectName/br
anches(tags)/bra|
| nchName(tagName),
'|' as |
| separator if add
many |
| projects
|
| bridgeExcelExport Massive export of
Job |
| metadata for Third
Party |
| -d (--directory) directory export target
directory |
| -filter (--filter) filter job filter
|
| buildItemSources item-id Build the source
code of one |
| job or route from
it's |
| internal id
|
| -im (--main-job-only) main-job-only Flag controls
whether to |


| build main job


only |
| -iv (--item-version) item-version Version of the
item needed |
| buildJob jobName buildJob in
commandline |
| -af (--archive-filename) filename archive filename
without |
| extension
|
| -ant (--add-ant-script) export job with
ant build |
| script(deprecated)
|
| -az (--export-as-zip) export
microservice as zip |
| -bin (--binaries) export binaries
when building |
| job with
maven(deprecated) |
| -bt (--build-type) build type select job build
type (Job by |
| default)
|
| argument:Job|M
icroservice |
| -dd (--destination-directory) path destination
directory |
| -em (--enable-prometheus-metrics-endpoint) enable Prometheus
metrics |
| endpoint
|
| -eo (--export-osgi) export job to osgi
system |
| -et (--execute-tests) execute tests when
building |
| job with maven
|
| -ic (--include-context) include context
when building |
| job with
maven(deprecated) |
| -ijs (--include-java-source) include java
source codes |
| when building job
with maven |
| -il (--include-libs) include libs when
building |
| job with
maven(deprecated) |
| -its (--include-test-source) include test
source codes |
| when building job
with |
| maven(deprecated)
|
| -jactc (--job-apply-context-to-children) apply context to
children |
| -jall (--job-add-log4j-level) log4j level name generate with the
log4j |
| levels
|
| -jc (--job-context) context name chooses a job
context |
| -jstats (--job-add-statistics-code) generate with the
statistics |
| instructions(d
eprecated) |
| -jt (--job-type) type export the type of
job |
| (PROCESS by
default) |
| -jv (--job-version) version chooses a job
version |
| -maven (--add-maven-script) export job with
maven build |


| script(deprecated)
|
| -od (--export-only-default-context) microservice only
export the |
| default context
|
| -tc (--add-test-containers) export job with
test |
| cases(deprecated)
|
| buildRoute routeName build routes
|
| -af (--archive-filename) filename archive filename
without |
| extension
|
| -az (--export-as-zip) export
microservice as zip |
| -bt (--build-type) build type select job build
type |
| (Runtime by
default) |
| argument:Runtime|
Microservice |
| -dd (--destination-directory) path destination
directory |
| -em (--enable-prometheus-metrics-endpoint) enable Prometheus
metrics |
| endpoint
|
| -jactc (--job-apply-context-to-children) apply context to
children |
| -jc (--job-context) context name chooses a job
context |
| -jstats (--job-add-statistics-code) generate with the
statistics |
| instructions
|
| -jv (--job-version) version chooses a job
version |
| -maven (--add-maven-script) export job with
maven build |
| script
|
| -od (--export-only-default-context) microservice only
export the |
| default context
|
| cancelGroup cancel command
group |
| changeMavenVersion maven-version Change Maven
version of jobs |
| -if (--item-filter) filterExpr item filter
expression |
| -sj (--subjobs) Include all sub
jobs |
| -ss (--snapshot) Use Snapshot
|
| changeStatus newStatusCode changes items
status |
| Note: If you use
the "-d" arg |
| with the item
filter |
| together, we will
do the |
| filter first and
then get the |
| dependences of the
filter |
| items, finally
change all of |
| them.
|
| -d (--dependencies) update the
dependencies |


| -if (--item-filter) filterExpr item filter


expression |
| changeVersion newVersion (x.x | nextMajor | nextMinor) changes items
version |
| Note: If you use
the "-d" arg |
| with the item
filter |
| together, we will
do the |
| filter first and
then get the |
| dependences of the
filter |
| items, finally
change all of |
| them.
|
| -d (--dependencies) update the
dependencies |
| -flv (--fix-latest-version) fixing latest
version |
| -if (--item-filter) filterExpr item filter
expression |
| checkAndUpdate update from
archiva server |
| which returnd by a
specific |
| TAC
|
| -tu (--tac-user-name) user name of a tac user
|
| -tup (--tac-user-password) password password of a tac
user |
| createJob jobName create job from
script file |
| -o (--over_write) to overwrite if
job existed |
| -sf (--script_file) file path job of script file
|
| createProject creates a project
|
| -pa (--project-author) author login project author
(email) |
| -pap (--project-author-password) password password of author
|
| -pd (--project-description) description project
description |
| -pl (--project-language) language (java/perl) project language
|
| -pn (--project-name) technical name project name
|
| createTask taskName
|
| -a (--active) active
|
| -asce (--add-statistics-code-enable) enabled statistics
code |
| -b (--project-branch) branch name project branch
|
| -d (--description) description project
description |
| -ese (--execStatisticsEnabled) statistic enable
|
| -esn (--excution-server) execution server execution server
|
| -jactc (--job-apply-context-to-children) apply context to
children |
| -jc (--job-context) job context job-context
|
| -jn (--job-name) job name job name
|
| -jv (--job-version) job version job-version
|
| -ouj (--on-unavailable-jobserver) on unavailable jobserver
|


| -ousj (--on-unknown-state-job) unknown state job provide the


unknown state job |
| -pn (--project-name) project name project name
|
| -rjc (--regenerate-job-on-change) regenerate job on
change |
| deleteItems delete items
|
| -if (--item-filter) filterExpr item filter
expression |
| deleteReference Delete project
reference |
| -ref (--references) references projectName, '|'
as separator |
| if add many
projects |
| deployJobToServer jobName Deploy job to
server |
| -es (--execution-server) name execution
[virtual] server |
| name
|
| -jactc (--job-apply-context-to-children) apply context to
children |
| -jall (--job-log4j-level) level log4j level
|
| -jc (--job-context) context name chooses a job
context |
| -jv (--job-version) version chooses a job
version |
| -pd (--jobserver-password) password jobserver password
|
| -sp (--statistics-port) port statistics port
|
| -tp (--trace-port) port trace port
|
| -un (--jobserver-username) username jobserver username
|
| -useSSL (--use-ssl-option) use ssl Protocol
or not |
| executeAllJob executes all
|
| -i (--interpreter) path perl/java
interpreter path |
| -if (--item-filter) filterExpr item filter
expression |
| -jactc (--job-apply-context-to-children) apply context to
children |
| -jc (--job-context) context name chooses a job
context |
| -jcp (--job-context-param) key=value [key=value ...] provides a job
context param |
| -jrdd (--job-result-destination-dir) path job execution
result |
| destination dir
|
| -jt (--job-timeout) time (in sec) timeout of
execution |
| executeJob jobName executes
|
| -i (--interpreter) path perl/java
interpreter path |
| -jactc (--job-apply-context-to-children) apply context to
children |
| -jc (--job-context) context name chooses a job
context |
| -jcp (--job-context-param) key=value [key=value ...] provides a job
context param |
| -jrdd (--job-result-destination-dir) path job execution
result |
| destination dir
|
| -jt (--job-timeout) time (in sec) timeout of
execution |
| -jv (--job-version) version chooses a job
version |


| executeJobOnServer jobName executes on server


|
| -es (--execution-server) name execution
[virtual] server |
| name
|
| -jactc (--job-apply-context-to-children) apply context to
children |
| -jall (--job-log4j-level) level log4j level
|
| -jc (--job-context) context name chooses a job
context |
| -jcp (--job-context-param) key=value [key=value ...] provides a job
context param |
| -jrrd (--job-result-destination-dir) jobResultDestDir job execution
result |
| destination dir
|
| -jsp (--job-statistics-port) port setup the
statistics port |
| -jtp (--job-trace-port) port setup the trace
port |
| -jv (--job-version) version chooses a job
version |
| -pd (--jobserver-password) password jobserver password
|
| -ra (--run-as-user) runas run as user
|
| -un (--jobserver-username) username jobserver username
|
| -useSSL (--use-ssl-option) use ssl Protocol
or not |
| executeReport Execute reports
|
| -n (--name) name Report names
|
| -p (--path) path Report file pathes
(relative |
| path, e.g. /
Project/TDQ_Data |
| Profiling/Reports/
File). |
| -pc (--report-context) context name chooses a report
context |
| executeRoute routeName executeRoute
|
| -i (--interpreter) path perl/java
interpreter path |
| -jactc (--job-apply-context-to-children) apply context to
children |
| -jc (--job-context) context name chooses a job
context |
| -jcp (--job-context-para) context param [context param ...] provides a job
context param |
| -jrdd (--job-result-destination-dir) path job execution
result |
| destination dir
|
| -jt (--job-timeout) time out timeout of
execution |
| -jv (--job-version) version [version ...] chooses a job
version |
| exportAllJob exports all
|
| -dd (--destination-directory) path destination
directory |
| -if (--item-filter) filterExpr item filter
expression |
| -jactc (--job-apply-context-to-children) apply context to
children |
| -jall (--job-add-log4j-level) log4j level name generate with the
log4j |
| levels
|
| -jc (--job-context) context name chooses a job
context |


| exportAsCWM dbConnectionName export a db


metadata as CWM |
| -dd (--destination-directory) path destination
directory |
| exportItems destination (dir|.zip) exports items
|
| -d (--dependencies) include
dependencies |
| -if (--item-filter) filterExpr item filter
expression |
| exportJob jobName exportJob in
commandline |
| (deprecated)
|
| -af (--archive-filename) filename archive filename
without |
| extension
|
| -ant (--add-ant-script) export job with
ant build |
| script
|
| -dd (--destination-directory) path destination
directory |
| -eo (--export-osgi) export job to osgi
system |
| -jactc (--job-apply-context-to-children) apply context to
children |
| -jall (--job-add-log4j-level) log4j level name generate with the
log4j |
| levels
|
| -jc (--job-context) context name chooses a job
context |
| -jstats (--job-add-statistics-code) generate with the
statistics |
| instructions
|
| -jv (--job-version) version chooses a job
version |
| -maven (--add-maven-script) export job with
maven build |
| script
|
| exportRoute routeName export routes
(deprecated) |
| -af (--archive-filename) filename archive filename
without |
| extension
|
| -az (--export-as-zip) export
microservice as zip |
| -bt (--build-type) build type select job build
type |
| (Runtime by
default) |
| argument:Runtime|
Microservice |
| -dd (--destination-directory) path destination
directory |
| -em (--enable-prometheus-metrics-endpoint) enable Prometheus
metrics |
| endpoint
|
| -jactc (--job-apply-context-to-children) apply context to
children |
| -jc (--job-context) context name chooses a job
context |
| -jstats (--job-add-statistics-code) generate with the
statistics |
| instructions
|
| -jv (--job-version) version chooses a job
version |
| -maven (--add-maven-script) export job with
maven build |


| script
|
| -od (--export-only-default-context) microservice only
export the |
| default context
|
| exportService serviceName export service
|
| -af (--archive-filename) filename archive filename
without |
| extension
|
| -dd (--destination-directory) path destination
directory |
| -maven (--add-maven-script) export service
with maven |
| build script
|
| -sv (--service-version) version chooses a service
version |
| generateAuditReport auditId Generate the audit
report |
| -fp (--filePath) filePath filePath
|
| -t (--template) template Template of Audit
|
| generateMigrationReport Generate migration
check |
| report for upgrade
safe |
| guarding
|
| -dp (--data-path) dataPath (dir) a valid dir to
load migration |
| check data
|
| -from (--from-version) from_version migration check
report source |
| product version
|
| -ps (--projects) projects migration check
report |
| projects
sepereated by | |
| -rp (--report-path) reportPath (dir) report path
|
| -to (--to-version) to_version [to_version ...] migration check
report target |
| product version
|
| getCommandStatus commandId gets command
status |
| -ci (--childIndex) index child index in
Group Command |
| (if other
commands, will be |
| ignore)
|
| help commandName shows help or
detailed help |
| helpFilter shows filter help
|
| importDatabaseMetadata filePath imports database
metadata |
| importDelimitedMetadata filePath imports delimited
metadata |
| importItems source (dir|.zip) imports items
|
| -if (--item-filter) filterExpr item filter
expression |
| -im (--implicit) import implicit
|
| -o (--overwrite) overwrite existing
items |
| -s (--status) import the status
|


| -sl (--statslogs) import stats &


logs params |
| initLocal init local
repository |
| initRemote adminUrl init remote
repository |
| -ul (--user-login) login user login
|
| -up (--user-password) password user password
|
| listCloudWorkspaces list all available
workspaces |
| for user (and
action) |
| (deprecated)
|
| -n (--name) name action name
|
| -p (--password) password cloud user
password |
| -r (--url) url inventory service
URL |
| -u (--username) username cloud user login
name |
| listCommand lists commands
|
| -a (--all) lists all commands
|
| -q (--queue) lists
queue(waitting) |
| commands
|
| -r (--run) lists running
commands |
| listExecutionServer lists [virtual]
execution |
| servers
|
| listItem lists items
|
| -if (--item-filter) filterExpr item filter
expression |
| -m (--metadata) show item's
metadata |
| listJob lists jobs
|
| listProject lists available
projects |
| -b (--branch) Show branch
projects |
| listRoute lists routes
|
| listService lists services
|
| logoffProject logs off
|
| logonProject logs on a project
|
| -br (--branch) branch branches/<bran
chName> , |
| tags/<tagName>
|
| -gt (--generate-templates) generate templates
|
| -pn (--project-name) technical name project name
|
| -ro (--readOnly) readOnly on
current project |
| -ul (--user-login) login user login
|
| -up (--user-password) password user password
|
| mCreateServer serverName <MDM
Command>:Create a server |
| definition.
|


| -pd (--password) password password used to


connect |
| server, this
argument is |
| optional, if it is
not |
| appointed, the
created server |
| definition is not
shared |
| -u (--url) url url of the server
|
| -un (--username) username user name used to
connect |
| server
|
| mDeployItem <MDM
Command>:Deploy an item |
| object to MDM
server |
| -on (--object-name) object name The Object name
which will be |
| deployed to server
|
| -ot (--object-type) object type Object
type.Optional |
| argument:Custo
mLayout|DataCont|
| ainer|DataModel|
Menu|Role|Reso|
| urce|ServiceCo
nfig|StoredProc||
| Trigger|View|P
rocess |
| -ov (--object-version) object version The Object of this
version |
| will be deployed
to server |
| -sn (--server-name) server name Name of the server
definition |
| -sp (--server-password) server password Optional
argument,it is |
| needed when using
not shared |
| connection
|
| mDeployJob jobName [jobName ...] <MDM
Command>:Deploy a job |
| object to MDM
server |
| -c (--context) object name The context script
of job |
| object
|
| -ov (--object-version) object version The Object of this
version |
| will be deployed
to server |
| -sn (--server-name) server name Name of the server
definition |
| -sp (--server-password) server password Optional
argument,it is |
| needed when using
not shared |
| connection
|
| mExportDataContainer <MDM
Command>:Export data |
| container content
from MDM |
| server
|
| -dn (--datacontainer-name) data container name Name of the data
container |
| -path (--zip-path) zip file path
|


| -sn (--server-name) server name Name of the server


definition |
| -sp (--server-password) server password Optional
argument,it is |
| needed when using
not shared |
| connection
|
| mImportDataContainer <MDM
Command>:Import data |
| container content
to MDM |
| server
|
| -d (--on-demand) Automatically
create data |
| container if it
doesn't exist |
| -path (--zip-path) zip file path
|
| -sn (--server-name) server name Name of the server
definition |
| -sp (--server-password) server password Optional
argument,it is |
| needed when using
not shared |
| connection
|
| mListServer <MDM Command>:List
all server |
| definitions in
current |
| project.
|
| mUnDeployItem <MDM
Command>:Undeploy an |
| item object from
MDM server |
| -on (--object-name) object name The Object name
which will be |
| deployed to server
|
| -ot (--object-type) object type Object
type.Optional |
| argument:Custo
mLayout|DataCont|
| ainer|DataModel|
Menu|Role|Reso|
| urce|ServiceCo
nfig|StoredProc||
| SyncPlan|Trigger|
Version|View||
| Process|Job
|
| -sn (--server-name) server name Name of the server
definition |
| -sp (--server-password) server password Optional
argument,it is |
| needed when using
not shared |
| connection
|
| mUpdateServer <MDM
Command>:Deploy all |
| modified object to
server |
| -sn (--server-name) server name Name of the server
definition |
| -sp (--server-password) server password Optional
argument,it is |
| needed when using
not shared |
| connection
|
| migrationCheck Migration check
for upgrade |


| safe guarding
|
| -dp (--data-path) dataPath (dir) a valid dir to
save migration |
| check data
|
| populateAndGenerateReport Populate and
generate audit |
| report
|
| -dd (--db-driver) driver the driver of
database |
| -du (--db-user) user the user of
database |
| -fp (--filePath) filePath filePath
|
| -ju (--jdbc-url) jdbc_url jdbc url for
database |
| -t (--template) template Template of Audit
|
| -up (--user-password) password user password
|
| populateAudit Populate data into
database |
| -dd (--db-driver) driver the driver of
database |
| -du (--db-user) user the user of
database |
| -ju (--jdbc-url) jdbc_url jdbc url for
database |
| -up (--user-password) password user password
|
| publishAction actionName publish action
into cloud |
| workspace
(deprecated) |
| -i (--image) publish action
with |
| screenshot
|
| -na (--not_accelerate) don't accelerate
the publish |
| -p (--password) password cloud user
password |
| -r (--url) url inventory service
URL |
| -u (--username) username cloud user login
name |
| -v (--version) version action publishing
version |
| -w (--workspace) workspace target workspace
|
| publishJob jobName publish job to
Artifact |
| Repository
|
| -a (--artifactId) artifactId published
artifactId |
| -g (--group) group chooses a job
group |
| -jactc (--job-apply-context-to-children) specify this
parameter if you |
| want to apply
context to |
| children
|
| -jc (--job-context) contextName specify which job
context you |
| want to use
through inputing |
| the job context
name |
| -p (--password) password password
|
| -pv (--publish-version) version chooses a publish
version |


| -r (--artifact-repository) url artifact


repository |
| -rt (--repository-type) repository-type nexus2/nexus3
(nexus3 by |
| default)
|
| -s (--snapshot) publish as
SNAPSHOT version |
| -t (--type) exportType set export type,
can be [osgi |
| | standalone (or
std)]; osgi |
| by default
|
| -u (--username) username username
|
| -v (--version) version chooses a job
version |
| publishRoute routeName publish route to
Artifact |
| Repository
|
| -a (--artifactId) artifactId published
artifactId |
| -g (--group) group chooses a route
group |
| -p (--password) password password
|
| -pv (--publish-version) version chooses a publish
version |
| -r (--artifact-repository) url artifact
repository |
| -rt (--repository-type) repository-type nexus2/nexus3
(nexus3 by |
| default)
|
| -s (--snapshot) publish as
SNAPSHOT version |
| -u (--username) username username
|
| -v (--version) version chooses a route
version |
| publishService serviceName publish service to
Artifact |
| Repository
|
| -a (--artifactId) artifactId published
artifactId |
| -g (--group) group chooses a service
group |
| -p (--password) password password
|
| -pv (--publish-version) version chooses a publish
version |
| -r (--artifact-repository) url artifact
repository |
| -rt (--repository-type) repository-type nexus2/nexus3
(nexus3 by |
| default)
|
| -s (--snapshot) publish as
SNAPSHOT version |
| -u (--username) username username
|
| -v (--version) version chooses a service
version |
| quit exit
|
| regenerateAllPoms Regenerate all
poms |
| -if (--item-filter) filterExpr item filter
expression |
| setUserComponentPath Set user component
path |
| -c (--clear) clear the user
component path |


| -up (--userComponentPath) path user component


path |
| showVersion shows product
version |
| startGroup start command
group |
| -o (--origin) infor add some origin
informations |
| stopGroup stop command group
|
+---------------------------------------------------------------------------------
-----------------+
| Copyright (C) 2006-2021 Talend - www.talend.com
|
+---------------------------------------------------------------------------------
-----------------+

CommandLine examples
Generating a Job created with a Job creation API using the CommandLine

About this task


Talend offers you the possibility to create a data integration process without any user interface. You can write a script
describing the properties of all the elements of your process: components, connections, contexts, etc. in a jobscript file and
generate the corresponding Job via the CommandLine.
To do so:

Procedure
1. Launch your CommandLine. For more information on how to launch the CommandLine, see Operating modes on page
544.
2. Connect to your repository with the initLocal or initRemote commands. Example:

initRemote http://localhost:8080/org.talend.administrator -ul [email protected] -up admin

The parameter values are given as examples and need to be replaced with your actual information (port, credentials).
For more information on how to use these commands, see the help provided in the CommandLine.
3. Connect to your project and branch/tag with the logonProject command. If you do not know the name of your
project or branch/tag, type in the listProject -b command first. Example:

logonProject -pn di_project -ul [email protected] -up admin -br branches/v1.0.0

The parameter values are given as examples and need to be replaced with your actual information (project/branch/tag
name, credentials). For more information on how to use this command, see the help provided in the CommandLine.
4. Type in the following command to generate a Job from your Job script:

createJob NameOfJob -sf path\yourJobscript.jobscript

The Job is created in your CommandLine workspace in the process folder: commandline-workspace\YourProjectName\process.
If you want to open this Job in Talend Studio, you will have to import it in the Talend Studio workspace first.
For more information on how to import items in Talend Studio, see Importing items on page 157.
The creation of Job scripts and the generation of the corresponding Job designs can also be done from Talend Studio, which provides a user-friendly Job script API Editor.
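If you prefer to run these steps non-interactively, the same sequence can be placed in a script file and executed in Script mode. The following is a minimal sketch in which all values (URL, credentials, project, branch and file path) are placeholders to replace with your own:

initRemote http://localhost:8080/org.talend.administrator -ul [email protected] -up admin
logonProject -pn di_project -ul [email protected] -up admin -br branches/v1.0.0
createJob NameOfJob -sf /tmp/yourJobscript.jobscript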

Executing a Job on a server with SSL enabled using the CommandLine

About this task


Talend offers you the possibility to execute a Job on a server via the CommandLine with SSL enabled. SSL allows you to encrypt data prior to transmission.


For more information on how to generate a Job via the CommandLine, see Generating a Job created with a Job creation API
using the CommandLine on page 561.
To launch a Job on a JobServer with SSL enabled:

Procedure
1. Launch your CommandLine. For more information on how to launch the CommandLine, see Operating modes on page
544.
2. Connect to your repository with the initLocal or initRemote commands. For more information on how to use these
commands, see the help provided in the CommandLine.
3. Connect to your project with the logonProject command. For more information on how to use this command, see
the help provided in the CommandLine.
4. Type in the following command to launch a Job (named jobName) on the server named myServer:

executeJobOnServer jobName --execution-server myServer --job-version 0.1 --job-context myJobContext -useSSL

You can enter either -useSSL or --use-ssl-option, as both options enable SSL.
You also have the possibility to enable SSL on your JobServer on the Studio side. For more information, see Running a
Job remotely with SSL enabled on page 205.

Building a Job using the CommandLine


Building a Job allows you to generate an archive of a specific Job along with all of the files required to execute the Job,
including the .bat and .sh as well as any context-parameter files or other related files. This archive can then be used to
deploy and execute the Job on a server without having to generate it first.
Talend allows you to build a Job via one of the following ways:
• using an external Continuous Integration tool (recommended)
• using the Build Job option in Talend Studio
• using Talend CommandLine

About this task


To build a Job using the CommandLine:

Procedure
1. Launch your CommandLine. For more information on how to launch the CommandLine, see Operating modes on page
544.
2. Connect to your repository with the initLocal or initRemote commands. Example:

initRemote http://localhost:8080/org.talend.administrator -ul [email protected] -up admin

The parameter values are given as examples and need to be replaced with your actual information (port, credentials).
For more information on how to use these commands, see the help provided in the CommandLine.
3. Connect to your project and branch/tag with the logonProject command. If you do not know the name of your
project or branch/tag, type in the listProject -b command first. Example:

logonProject -pn di_project -ul [email protected] -up admin -br branches/v1.0.0

The parameter values are given as examples and need to be replaced with your actual information (project/branch/tag
name, credentials). For more information on how to use this command, see the help provided in the CommandLine.
4. (Optional) If you are using user components, copy them in a folder on your CommandLine machine, then type in the
following command with the path to this folder:

setUserComponentPath -up C:/components

The parameter values are given as examples and need to be replaced with your actual information (folder path). For
more information on how to use this command, see the help provided in the CommandLine.


5. Type in the following command to build your Job archive in the folder of your choice:

buildJob MyJob -dd C:/tac/builds -af MyJob_0.1 -jc Default -jv 0.1

The parameter values are given as examples and need to be replaced with your actual information (Job name/context/
version, target archive directory, archive name). In this example, a Job named MyJob is built in the archive named
MyJob_0.1.zip, in the C:/tac/builds folder. The best practice is to put the archive file in the Job archive folder, whose path is defined in the Job Conductor node of the Configuration page.
For more information on how to use this command, see the help provided in the CommandLine.
You can build a Route in the same way using the buildRoute command.
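For example, a Route could be built with a command of the same shape (all values are placeholders):

buildRoute MyRoute -dd C:/tac/builds -af MyRoute_0.1 -jc Default -jv 0.1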


Appendices
Customizing Talend Studio and setting Studio preferences
Customizing project settings

About this task


Talend Studio enables you to customize the information and settings of the project in progress, including the Palette, Job
settings and Job version management, for example.
To customize project settings:

Procedure
1. Click on the Studio toolbar, or select File > Edit Project Properties from the menu bar.
The Project Settings dialog box opens.
2. In the tree diagram to the left of the dialog box, select the setting you wish to customize and then customize it, using
the options that appear to the right of the box.

Results
From the dialog box you can also export or import the full assemblage of settings that define a particular project:
• To export the settings, click on the Export button. The export will generate an XML file containing all of your project settings.
• To import settings, click on the Import button and select the XML file containing the parameters of the project which
you want to apply to the current project.

Analyzing projects

Talend Studio provides an experimental project analysis tool, which generates a report to list the items to check and the
problems to fix manually.

Note: The report may not list all issues exhaustively.

Procedure
1. Click File > Edit Project Properties from the menu bar to open the Project Settings dialog box.
2. Click Audit to open the corresponding view.
3. Click Generate analysis report.
If any problems are found, a CSV report file <timestamp>_<project-name>_Analysis_Report.csv will be
generated under the directory <Talend-Studio>\workspace\report\analysisReport_<timestamp>,
where <timestamp> designates when the report is generated and <project-name> designates the name of your
project.

The table below describes the information presented in the report file.

Column Description

Task name the name of the analysis task

Task description the description of the analysis task


Link to details the link to the task details if any

Severity the severity of the item

Item type the type of the item

Path to item the path to the item

Details the details of the analysis result

The table below lists the analysis tasks this tool performs and the potential issues the tasks can help detect.
Note that the components and the types of project items listed in the table are available depending on your license.

Task name Task description

ItemComponentMissingAnalysisTask Checks if any component, Joblet, Routelet is missing from any Standard Job, Big Data Job, Route,
and Test Case in your project.

ChildJobMissingAnalysisTask Checks if any subJob is missing from any Standard Job and Big Data Job in your project.

UnresolvedRoutineDependencyAnalysisTask Checks if any library is missing from any routine and bean which has dependencies on it.

UnresolvedComponentsDependenciesAnalysisTask Checks if any library is missing from any tLibraryLoad, cConfig and cMessagingEndpoint components in your project.

CustomComponentsDepRiskAnalysisTask Checks if any custom component is used in any Standard Job, Big Data Job, Route, and Test Case
in your project, and adds an entry in the report each time a custom component is found.

Configuring build settings

You can configure whether to allow using recursive Jobs and Joblets when building Jobs.

Warning: This feature is kept only for compatibility reasons. Using recursive Jobs and Joblets is not supported any more
for new projects from Talend 8.0 onwards, as they are likely to cause performance issues when building Jobs.

Procedure
1. Click File > Edit Project properties from the menu bar, and in the Project Settings dialog box displayed, click Build on
the tree view to open the corresponding view.
2. Select the Allow recursive Jobs and Joblets check box if you want to use recursive Jobs and Joblets.
• For a newly created project, the Allow recursive Jobs and Joblets check box is cleared by default.
• For a project migrated from any previous version, the Allow recursive Jobs and Joblets check box is selected by
default for compatibility reasons.
3. Click Apply and Close to save your changes.

Setting the compiler compliance level

About this task


The compiler compliance level corresponds to the Java version used for Job code generation.
For more information on the compiler compliance levels compatibility, see Compatible Java environments.

Procedure
1. Click on the toolbar of the Studio main window, or click File > Edit Project Properties from the menu bar to open
the Project Settings dialog box.
2. Expand the Build node and click Java Version.


3. From the JDK Compiler compliance level list, select the compiler compliance level you want to use, and then click Apply
and Close.

Customizing shell command templates

About this task


Your Talend Studio provides a set of templates for the shell commands used to launch built Jobs. You can customize those
templates based on your needs.

Procedure
1. Click on the toolbar of the Studio main window, or click File > Edit Project Properties from the menu bar to open
the Project Settings dialog box.
2. Expand the Build > Shell Setting nodes and click Bat, Ps1, or Sh depending on the operating system on which your built
Jobs are going to run.

3. Edit the code in the corresponding view based on your needs and click Apply and Close to finish your customization.
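As an illustration of the kind of change you might make, the fragment below exports an environment variable at the top of the Sh template before the Job launch command. It is only a sketch and does not reproduce the actual template shipped with Talend Studio, so check how your own template launches the Job before relying on a variable such as JAVA_OPTS:

#!/bin/sh
# illustrative customization: define extra JVM options for the launched Job
export JAVA_OPTS="-Xms256M -Xmx1024M"
# ... rest of the generated template follows ...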


Customizing Maven build script templates

Your Talend Studio provides the following default templates for generating build scripts:
• Docker image build settings
• Maven script templates for standalone Job export
• A Maven script template for OSGI bundle export of Jobs
Based on the default, global build templates, you can create folder-level build scripts. Build scripts generated based on these
templates are executed when building Jobs.
Skipping folders when building and running Jobs
Talend Studio allows you to skip folders when building and running Jobs.
This is useful if your Job is saved under a very deep path, potentially also with long folder names, and you get the
java.io.IOException error when building and running the Job because of the path or command length limit of the
operating system.

Procedure
1. From the menu bar, click File > Edit Project properties to open the Project Settings dialog box.
2. Expand Build and click Maven to open the corresponding view.
3. Select the Skip folders check box.
4. Click Force full re-synchronize poms.
5. When done, click Apply and Close to close the dialog box.
6. Restart your Talend Studio.
Customizing the global build script templates

About this task


In the Project Settings dialog box, you can find and customize the default, global build script templates under the Build >
Maven > Default node. These script templates apply to all Jobs in the root folder and all sub-folders except those with their
own build script templates set up.
The following example shows how to customize the global POM script template for standalone Jobs:

Procedure
1. From the menu bar, click File > Edit Project properties to open the Project Settings dialog box.
2. Expand the Build > Maven > Default nodes, and then click the Standalone Job node to open the relevant view that
displays the content of the POM script template.


Note:
Depending on the license you are using, the project settings items in your Studio may differ from what is shown
above.

3. Modify the script code in the text panel and click Apply and Close to finish your customization.
Customizing the folder-level build script templates

About this task


Based on the global build script templates, you can add and customize script templates for Jobs folder by folder under the
Build > Maven > Setup custom scripts by folder node. The build script templates added for a folder apply to all Jobs in that
folder and all its sub-folders except those with their own build script templates set up.
The following example shows how to add and customize the POM script template for building standalone Jobs from Jobs in
the CA_customers folder:

Procedure
1. From the menu bar, click File > Edit Project properties to open the Project Settings dialog box.
2. Expand the Build > Maven > Setup custom scripts by folder > Job Designs > CA_customers nodes, and then click
the Standalone Job node to open the relevant view, from which you can add script templates or delete all existing
templates.


Note:
Depending on the license you are using, the project settings items in your Studio may differ from what is shown
above.

3. Click the Create Maven files button to create script templates based on the global templates for standalone Jobs.
4. Select the script template you want to customize, pom.xml in this example, to display the script code in the code view.
Modify the script code in the text panel and click Apply and Close to finish your customization.


Once the build script templates are created for a folder, you can also go to the directory where the XML files are stored,
<studio_installation_directory>\workspace\<project_name>\process\CA_customers in this
example, and directly modify the XML file of the template you want to customize. Your changes will affect all Jobs in
the folder and in all sub-folders except those with their own script set up.

Warning:
If you are working in a remote project and if you modify an XML file directly, your changes will not be automatically
committed to the version control system. To make sure your changes are properly committed, we recommend that
you customize the script templates in Project Settings of your Talend Studio instead.

Customizing build script templates for use with CommandLine


If you want to customize a build script template for use with the CommandLine, which is the equivalent of Talend Studio without the GUI, go to the directory where the script file is stored, for example, <studio_installation_directory>\workspace\<project_name>\process\CA_customers. Then, customize the script file according to your needs.
The modified script file will be taken into account when a Job is built with the Maven option activated.

Note: There is no direct customization to the global build script templates for use with the CommandLine. As a
workaround, you can add template files in the root directory <studio_installation_directory>\workspace\
<project_name>\process\ for Jobs, and then modify the XML files. Note that these script templates will apply
to all Jobs in all folders except those with their own build script templates set up.

For more information about the CommandLine, see CommandLine on page 544.

Managing deployment versions of Jobs

About this task


Through the Project Settings dialog box, you can manage in a batch manner or individually the deployment version of each
Job item to be published to the artifact repository.

Procedure
1. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar to open
the Project Settings dialog box.
2. In the tree view of the dialog box, expand Build > Maven and select Deployment Versioning to open the corresponding
view.

3. In the Repository tree view, expand the node holding the items whose deployment versions you want to manage, and then select the check boxes of these items.
The selected items are displayed in the Items list to the right along with their current version in the Version column.
4. Make changes as required:


• To set a deployment version for all the items, enter the version in the Project version field, select the Apply project
version to items option, and click Apply version.
• Click Select all subJobs if you want to update all of the subJobs dependent on the selected items at the same time.
• To set the deployment version for one or more items individually, select the Change the version of each item
individually option, select the item or items in the table below the Options area, enter the deployment version in the New
version field, and click Apply.
• Select the Use job versions option if you want to use the latest Job versions as the deployment versions for the
selected items.
• If you want to publish a snapshot version of the item or items, select the Use snapshot check box before applying
the version setting.
5. Click Apply and Close to apply your changes and close the dialog box.
6. If any of the selected Jobs is opened in the design workspace, click OK in a new dialog box and then save your Job to
apply the new deployment version to it.
You can also set the deployment version of a Job in its Job view once it is opened in the design workspace. For more
information, see Customizing deployment of a Job on page 171.
The deployment version settings you make in the Project Settings dialog box will be reflected in the Deployment tab in
the Job view of the relevant item or items, and vice versa.

Palette Settings

About this task


You can customize the settings of the Palette display so that only the components used in the project are loaded. This will
allow you to launch the Studio more quickly.
To customize the Palette display settings:

Procedure
1. On the toolbar of the Studio's main window, click the Project Settings icon, or click File > Edit Project Properties on the menu bar, to open the Project Settings dialog box.


Note:
In the General view of the Project Settings dialog box, you can add a project description, if you did not do so when
creating the project.

2. In the tree view of the Project Settings dialog box, expand Designer and select Palette Settings.
The settings of the current Palette are displayed in the panel to the right of the dialog box.
3. Select one or several components, or even set(s) of components you want to remove from the current project's Palette.
4. Use the left arrow button to move the selection onto the panel on the left.
This will remove the selected components from the Palette.
5. To re-display hidden components, select them in the panel on the left and use the right arrow button to restore them to
the Palette.
6. Click Apply to validate your changes and Apply and Close to close the dialog box.

Results

Note:
To get back to the Palette default settings, click Restore Defaults.

For more information on the Palette, see Changing the Palette layout and settings on page 589.

Displaying special characters for schema columns

You may need to retrieve a table schema that contains columns written with special characters, such as Chinese, Japanese, or Korean characters. In this case, you need to enable Talend Studio to read special characters. By default, reading special characters is disabled.


About this task


To enable Talend Studio to read special characters:

Procedure
1. Click File > Edit Project properties from the menu bar, and in the Project Settings dialog box displayed, click General on
the tree view to open the corresponding view.
2. On the right panel, select the Allow specific characters (UTF8,...) for columns of schemas check box.
3. Click Apply and Close to save your change.

Configuring screenshot generation

When working on a Git managed project, you can enable or disable screenshot creation or update when creating or updating
Jobs, Joblets, Routes and Routelets.

Procedure
1. Click File > Edit Project properties from the menu bar.
The Project Settings dialog box displays.
2. Click General on the tree view to open the corresponding view.
3. On the right panel, select the Do not create or update screenshots for Jobs, Joblets, Routes and Routelets check box if
you want to disable screenshot creation or update when creating or updating Jobs, Joblets, Routes and Routelets.
This check box is available only when you are working on a Git managed project and it is cleared by default.
4. Click Apply and Close to save your change.
Later, if the Do not create or update screenshots for Jobs, Joblets, Routes and Routelets check box is selected, you will find in the Navigator view that no .screenshot file is generated when creating Jobs, Joblets, Routes and Routelets, and that the existing .screenshot files are not updated when updating Jobs, Joblets, Routes and Routelets.

Type mapping

In Talend Studio, the schema of a database component involves two data types: the Talend type and the database type. When retrieving a table schema, there is always a default Talend type mapped to the real database type; likewise, when linking a non-database component to a database output component, there is a default database type mapped to the Talend type in the database schema.
A Talend type is an intermediate Java type, mapped for each data type of different databases. These default data type
mappings are configured in an XML file. Each kind of database has a separate mapping configuration file. For example, the
file mapping_Mysql.xml maps MySQL data types to Talend types.
By editing the mapping file, you can modify any default data type mapping for your project rather than changing it manually every time, and add new type mappings to serve your needs.


Accessing mapping files and defining type mappings

Procedure
1. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar, to open the Project Settings dialog box.
2. In the tree view of the dialog box, expand General and select Metadata of Talend Type to open the Metadata of
TalendType view, which lists the mapping files for all the database types used in Talend Studio.

You can import, export, or delete any of the conversion files by clicking Import, Export or Remove respectively.
You can modify a conversion file according to your needs by double-clicking the file, or by selecting the file and clicking the Edit button, to open the Edit Mapping File dialog box and then modifying the XML code directly in that dialog box.
When you define a type mapping, you need to map both from Talend type to database type and from database type to
Talend type.
• The <dbTypes> element with its child <dbType> elements defines the supported database types. To add a new
database type in the mapping file, you need to add a <dbType> element under the <dbTypes> element. The
example below adds two database types BOOLEAN and YESNO.

<dbType type="BOOLEAN"/>
<dbType type="YESNO"/>

You can set the default pattern for each date type. This allows the date pattern for the date type columns to be set
automatically when retrieving or guessing schema from a table. The following example adds two database types
DATE and DATETIME.

<dbType type="DATE" defaultPattern="yyyy-mm-dd"/>


<dbType type="DATETIME" defaultPattern="yyyy-mm-dd hh:mm:ss.SSSSSS"/>

• The <talendToDbTypes> element with its child <talendType> elements defines a suggested database type
list and the default database type when setting a Talend type for a metadata column. To map a Talend type to one
or more database types, you need to add a talendType element under the <talendToDbTypes> element. The
example below maps the Talend type id_Boolean to two database types BOOLEAN and YESNO.

<talendType type="id_Boolean">
<dbType type="BOOLEAN"/>
<dbType type="YESNO"/>
</talendType>


• The <dbToTalendTypes> element with its child <dbType> elements defines a suggested Talend type list and
the default Talend type when retrieving schema from the database. To map a database type to one or more Talend
types, you need to add a dbType element under the <dbToTalendTypes> element. The example below maps
the database type YESNO to the Talend type id_Boolean.

<dbType type="YESNO">
<talendType type="id_Boolean"/>
</dbType>

Note: The default, defaultLength, defaultPrecision, ignoreLen, ignorePre, and preBeforelen attributes in Talend type mapping files are not taken into account. You do not need to add these attributes when defining new type mappings.
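To see how these elements fit together, here is a minimal skeleton, derived from the structure of the Access mapping file shown below, that maps a hypothetical YESNO database type to and from the Talend type id_Boolean. The product, id, and label attribute values are placeholders to adapt to your own database:

<?xml version="1.0"?>
<mapping>
    <dbms product="MYDB" id="mydb_id" label="Mapping MyDB" default="true">
        <dbTypes>
            <!-- declare the supported database type -->
            <dbType type="YESNO"/>
        </dbTypes>
        <language name="java">
            <talendToDbTypes>
                <!-- Talend type to database type -->
                <talendType type="id_Boolean">
                    <dbType type="YESNO" default="true"/>
                </talendType>
            </talendToDbTypes>
            <dbToTalendTypes>
                <!-- database type to Talend type -->
                <dbType type="YESNO">
                    <talendType type="id_Boolean" default="true"/>
                </dbType>
            </dbToTalendTypes>
        </language>
    </dbms>
</mapping>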


Below is the XML metadata mapping file for the Access database:

<?xml version="1.0"?>
<mapping>
<dbms product="ACCESS" id="access_id" label="Mapping Access" default="true">
<dbTypes>
<dbType type="BIT" ignoreLen="true" ignorePre="true"/>
<dbType type="BOOLEAN" ignoreLen="true" ignorePre="true"/>
<dbType type="COUNTER"/>
<dbType type="DATE" ignoreLen="true" ignorePre="true"/>
<dbType type="DOUBLE" ignoreLen="true" ignorePre="true"/>
<dbType type="DECIMAL" ignoreLen="true" ignorePre="true"/>
<dbType type="FLOAT" ignoreLen="true" ignorePre="true"/>
<dbType type="INTEGER" ignoreLen="true" ignorePre="true"/>
<dbType type="NUMERIC" ignoreLen="true" ignorePre="true"/>
<dbType type="REAL" ignoreLen="true" ignorePre="true"/>
<dbType type="SMALLINT" ignoreLen="true" ignorePre="true"/>
<dbType type="TINYINT" ignoreLen="true" ignorePre="true"/>
<dbType type="TIME" ignoreLen="true" ignorePre="true"/>
<dbType type="TIMESTAMP" ignoreLen="true" ignorePre="true"/>
<dbType type="VARCHAR" default="true" defaultLength="200"
ignorePre="true"/>
<dbType type="DATETIME" ignoreLen="true" ignorePre="true"/>
<dbType type="MEMO" ignoreLen="true" ignorePre="true"/>
<dbType type="YESNO" ignoreLen="true" ignorePre="true"/>
</dbTypes>

<language name="java">
<talendToDbTypes>
<!-- Adviced mappings -->
<talendType type="id_List"/>
<talendType type="id_Boolean">
<dbType type="YESNO" default="true"/>
<dbType type="BOOLEAN"/>
</talendType>
<talendType type="id_Byte">
<dbType type="TINYINT" default="true"/>
<dbType type="SMALLINT"/>
<dbType type="INTEGER"/>
</talendType>
<talendType type="id_byte[]"> </talendType>
<talendType type="id_Character">
<dbType type="VARCHAR" default="true"/>
</talendType>
<talendType type="id_Date">
<dbType type="DATE" default="true"/>
<dbType type="TIMESTAMP"/>
<dbType type="TIME"/>
<dbType type="DATETIME"/>
</talendType>
<talendType type="id_BigDecimal">
<dbType type="NUMERIC" default="true"/>
<dbType type="DOUBLE"/>
<dbType type="FLOAT"/>
<dbType type="DECIMAL"/>
<dbType type="REAL"/>
</talendType>
<talendType type="id_Double">
<dbType type="DOUBLE" default="true"/>
<dbType type="NUMERIC"/>
<dbType type="FLOAT"/>
<dbType type="DECIMAL"/>
<dbType type="REAL"/>
</talendType>
<talendType type="id_Float">
<dbType type="FLOAT" default="true"/>
<dbType type="DOUBLE"/>
<dbType type="NUMERIC"/>
<dbType type="DECIMAL"/>
<dbType type="REAL"/>
</talendType>
<talendType type="id_Integer">
<dbType type="INTEGER" default="true"/>
<dbType type="SMALLINT"/>
<dbType type="TINYINT"/>


<dbType type="COUNTER"/>
</talendType>
<talendType type="id_Long">
<dbType type="INTEGER" default="true"/>
<dbType type="SMALLINT"/>
<dbType type="TINYINT"/>
<dbType type="COUNTER"/>
</talendType>
<talendType type="id_Object"> </talendType>
<talendType type="id_Short">
<dbType type="SMALLINT" default="true"/>
<dbType type="INTEGER"/>
<dbType type="TINYINT"/>
<dbType type="COUNTER"/>
</talendType>
<talendType type="id_String">
<dbType type="VARCHAR" default="true"/>
<dbType type="MEMO"/>
</talendType>
</talendToDbTypes>
<dbToTalendTypes>
<dbType type="BIT">
<talendType type="id_Boolean" default="true"/>
</dbType>
<dbType type="BOOLEAN">
<talendType type="id_Boolean" default="true"/>
</dbType>
<dbType type="COUNTER">
<talendType type="id_Integer" default="true"/>
</dbType>
<dbType type="DATE">
<talendType type="id_Date" default="true"/>
</dbType>
<dbType type="DECIMAL">
<talendType type="id_Double"/>
<talendType type="id_BigDecimal" default="true"/>
<talendType type="id_Float"/>
</dbType>
<dbType type="DOUBLE">
<talendType type="id_Double" default="true"/>
<talendType type="id_BigDecimal"/>
<talendType type="id_Float"/>
</dbType>
<dbType type="FLOAT">
<talendType type="id_Float" default="true"/>
<talendType type="id_BigDecimal"/>
<talendType type="id_Double"/>
</dbType>
<dbType type="INTEGER">
<talendType type="id_Integer" default="true"/>
<talendType type="id_Short"/>
<talendType type="id_Long"/>
<talendType type="id_Byte"/>
</dbType>
<dbType type="NUMERIC">
<talendType type="id_Float"/>
<talendType type="id_BigDecimal" default="true"/>
<talendType type="id_Double"/>
</dbType>
<dbType type="REAL">
<talendType type="id_Float" default="true"/>
<talendType type="id_BigDecimal"/>
<talendType type="id_Double"/>
</dbType>
<dbType type="SMALLINT">
<talendType type="id_Short" default="true"/>
<talendType type="id_Integer"/>
<talendType type="id_Long"/>
<talendType type="id_Byte"/>
</dbType>
<dbType type="TINYINT">
<talendType type="id_Byte" default="true"/>
<talendType type="id_Integer"/>
<talendType type="id_Short"/>
<talendType type="id_Long"/>
</dbType>


<dbType type="TIME">
<talendType type="id_Date" default="true"/>
</dbType>
<dbType type="TIMESTAMP">
<talendType type="id_Date" default="true"/>
</dbType>
<dbType type="VARCHAR">
<talendType type="id_String" default="true"/>
</dbType>
<dbType type="DATETIME">
<talendType type="id_Date" default="true"/>
</dbType>
<dbType type="MEMO">
<talendType type="id_String" default="true"/>
</dbType>
<dbType type="YESNO">
<talendType type="id_Boolean" default="true"/>
</dbType>
</dbToTalendTypes>
</language>
</dbms>
</mapping>

Supported Talend types


Talend supports the following types.

Note: You cannot define custom types in Talend Studio.

Talend Type Java Type

id_Boolean java.lang.Boolean

id_Byte java.lang.Byte

id_byte[] byte[]

id_Character java.lang.Character

id_Date java.util.Date

id_Double java.lang.Double

id_Float java.lang.Float

id_BigDecimal java.math.BigDecimal

id_Integer java.lang.Integer

id_Long java.lang.Long

id_Object java.lang.Object

id_Short java.lang.Short

id_String java.lang.String

id_List java.util.List

Version management
Upgrading the version of project items
Talend Studio allows you to upgrade the version of each item in the Repository tree view.


Procedure
1. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar, to open the Project Settings dialog box.
2. Click General > Version Management > Upgrade to open the corresponding view.

3. In the Repository tree view, expand the node holding the items whose versions you want to manage, and then select the check boxes of these items.
The selected items display in the Items list to the right along with their current version in the Version column and the
new version set in the New Version column.
4. Make changes as required:
• In the Options area, select the Change all items to a fixed version option to change the version of the selected
items to the same fixed version.
• Click Revert if you want to undo the changes.
• Click Select all dependencies if you want to update all of the items dependent on the selected items at the same
time.
• Click Select all subJobs if you want to update all of the subJobs dependent on the selected items at the same time.
• To increment each version of the items, select the Update the version of each item option and change them
manually.
• Select the Fix tRunjob versions if Latest check box if you want the father Job of the current version to keep using the child Job(s) of the current version in the tRunJob to be versioned, regardless of how their versions are updated. For example, suppose both the father and child Jobs are to be updated from the current version 1.0 to 1.1. Once this check box is selected, the father Job 1.0 will continue to use the child Job 1.0 rather than the latest version, 1.1, when the update is done.

Warning: To use this check box, the father Job must be using the latest version of the child Job(s) as the current version in the tRunjob to be versioned, that is, the Latest option must be selected from the drop-down version list in the Component view of the child Job(s).

5. Click Apply and Close to apply your changes and close the dialog box.
Removing old versions of project items
Talend Studio allows you to remove old versions of each item in the Repository tree view.

Procedure
1. Click File > Edit Project properties from the menu bar.
The Project Settings dialog box displays.
2. In the tree view of the dialog box, click General > Version Management > Cleanup to open the corresponding view.


3. In the Repository tree view, expand the nodes if needed, and select the check boxes for the corresponding items whose
old versions you want to remove.
4. Select either of the following two options according to your needs:
• Remove all old versions and keep only the latest one: to remove all old versions and keep only the latest one for
selected items.
• Remove all old versions lower than: to remove all old versions lower than a specific version for selected items.
With this option selected, you need to enter a specific version number, for example, 0.2 or 1.0, in the text field
next to it. Note that the latest version of any item will never be removed even if it is lower than the specified
version.

Warning: There is no dependency check when removing old versions of Jobs, Joblets, Routes, and Routelets. We recommend that you perform the removal and validate the cleanup on a branch.

5. Click Remove.
The old versions of selected items are removed. A dialog box pops up, which displays the number of removed items and
a link to the cleanup report. The report includes the type, the name, and the version of each removed item.

Status management

About this task


You can also manage the status of each item in the Repository tree view through General > Status Management of the
Project Settings dialog box.
To do so:

Procedure
1. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar, to open the Project Settings dialog box.
2. In the tree view of the dialog box, expand General and select Status Management to open the corresponding view.

3. In the Repository tree view, expand the node holding the items whose status you want to manage, and then select the check boxes of these items.
The selected items display in the Items list to the right along with their current status in the Status column and the new
status set in the New Status column.
4. In the Options area, select the Change all technical items to a fixed status check box to change the status of the
selected items to the same fixed status.
5. Click Revert if you want to undo the changes.
6. To change the status of each item individually, select the Update the Status of each item check box and change the statuses manually.
7. Click Apply and Close to apply your changes and close the dialog box.


Results

Note:
For further information about Job status, see Status settings on page 586.

Job Settings

About this task


You can automatically use the Implicit Context Load and Stats and Logs settings you defined in the Project Settings dialog box of the current project when you create a new Job.
To do so:

Procedure
1. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar, to open the Project Settings dialog box.
2. In the tree view of the dialog box, click the Job Settings node to open the corresponding view.
3. Select the Use project settings when create a new job check boxes of the Implicit Context Load and Stats and Logs
areas.

4. Click Apply to validate your changes and then Apply and Close to close the dialog box.

Enabling runtime lineage for Jobs

Talend Studio allows you to enable runtime lineage for Standard Jobs, which can be leveraged in a future release by the analysis capability of Talend Data Catalog for runtime metadata, for example, queries with variables, schemas with dynamic columns, and so on.
When executing a Standard Job for which runtime lineage is enabled, the information needed by Talend Data Catalog, for
example, the Job name, the component name, the schema, the query, etc., will be written into a JSON file.

Note: To fully use this feature, you must install Talend Data Catalog.

For more information on Talend Data Catalog, see Talend Data Catalog User Guide.

About this task


To enable runtime lineage for Standard Jobs, complete the following:


Procedure
1. Go to the installation directory of your Talend Studio.
2. Add the -Druntime.lineage=true attribute in the corresponding .ini file according to your operating system to
enable the runtime lineage feature in Talend Studio.
3. Save the file and start your Talend Studio.
4. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar, to open the Project Settings dialog box.
5. In the tree view of the dialog box, expand the Job Settings node and then click Runtime lineage to display the
corresponding view.

6. Enable runtime lineage for Standard Jobs via either of the following two ways:
• To enable runtime lineage for all Standard Jobs, select the Use runtime lineage for all Jobs check box.
• To enable runtime lineage for specific Standard Jobs, select the check boxes corresponding to the Jobs in the Use
runtime lineage for selected Jobs area.
7. In the Output path field, specify the path where you want to save the JSON files used by Talend Data Catalog.
Later, each time you execute a Standard Job for which runtime lineage is enabled, a JSON file will be saved at a path of the format <output_path>/<project>/<jobname>/<version>/runtime_log_<timestamp>.json, where
• <output_path> is the path specified in the Output path field,
• <project> is the name of the project,
• <jobname> is the name of the Job,
• <version> is the version of the Job, and
• <timestamp> is the timestamp when the JSON file is generated.
You can also set the output path by adding a JVM parameter -Druntime.lineage.outputpath=<output_path>
for the Job via one of the following ways:
• add the JVM parameter for a specific Job in the Run > Advanced settings view. For more information, see Setting
advanced execution settings on page 201.
• add the JVM parameter globally for all Jobs in the Preferences dialog box. For more information, see Debug and Job
execution preferences (Talend > Run/Debug) on page 611.
• add the JVM parameter in the shell script used for building Jobs in the Project Settings dialog box. For more
information, see Customizing shell command templates on page 566.


Note: The output path must be specified for saving the JSON files. If the output path value is specified in multiple
places, one of them will take effect according to the following precedence: 1) the value of the JVM parameter for
specific Job, 2) the value of the Output path field, 3) the value of the JVM parameter for all Jobs, 4) the value of the
JVM parameter in the shell script.

8. Click Apply and Close to apply your changes and close the dialog box.

Activating and configuring Log4j

Talend Studio includes the Apache logging utility Log4j to provide logging at runtime. You can enable or disable Log4j
loggers in components and customize the Log4j configuration globally for the project.

Procedure
1. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar, to open the Project Settings dialog box.
2. In the tree view of the dialog box, click the Log4j node to open the Log4j view.

3. Select the Activate log4j in components check box to activate the Log4j feature.
By default, the Log4j feature is activated and Log4j 2 is enabled when a project is created.
When a project is imported from Talend 7.2 or an earlier version, which only supports Log4j 1, the deprecated Log4j 1 is
selected from the Log4j version drop-down list by default. All POM files for the project will be synchronized when you
change the Log4j version.
4. If needed, change the global Log4j configuration by modifying the XML instructions in the Log4j template area.
For example, to configure the root logger for Log4j 2 to output all debug or higher messages, go to the Loggers
section and set the value of the level attribute of the root node to debug.
For more information about Log4j 2 configuration, see http://logging.apache.org/log4j/2.x/manual/configuration.html#XML.
For more information about Log4j 1 configuration, see https://cwiki.apache.org/confluence/display/LOGGINGLOG4J/Log4jXmlFormat.
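For illustration, after such a change the Loggers section of the Log4j 2 template might look like the following. The appender name CONSOLE is only a hypothetical example; keep the AppenderRef entries already present in your own template:

<Loggers>
    <!-- root logger set to output debug and higher messages -->
    <Root level="debug">
        <!-- keep the appender references defined in your existing template -->
        <AppenderRef ref="CONSOLE"/>
    </Root>
</Loggers>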

Stats & Logs

When executing a Job, Talend Studio allows you to collect key runtime execution information, statistics, logs and
volumetrics, for tracking and analysis through the tStatCatcher Statistics option or through using a log component.
Examples include monitoring how often a Job is executed, how long it takes to run, how many records are processed, whether it completes successfully, and why it fails.
The statistics, logs and volumetrics data can be written out to the Java console, delimited files and database tables, or any
combination of the three. If the data is stored in delimited files or database tables, it is written to three separate files or
tables based on the configuration as described in the following procedure.

Procedure
1. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar, to open the Project Settings dialog box.
2. In the tree view of the dialog box, expand the Job Settings node and click Stats & Logs to open the corresponding
view.

Note that the Stats & Logs preferences can be set for all Jobs in a project via Project Settings if the preferences for Stats
& Logs will not change depending upon the context of execution. It can also be set separately for individual Jobs via
the Stats & Logs view on the Job tab. For more information, see Automating the use of statistics & logs on page 137.
3. Specify the type of the information you want to capture by selecting the Use statistics (tStatCatcher), Use logs (tL
ogCatcher), and/or Use volumetrics (tFlowMeterCatcher) check boxes where relevant.
• When the Use statistics (tStatCatcher) check box is selected, the key information such as the Job start time, end
time, duration, and success or failure status is captured. If the Catch components statistics (tStatCatcher Statis
tics) check box is selected, or if the tStatCatcher Statistics check box on the Advanced settings view of a specific
component is selected, the key information for the component is also captured.
• When the Use logs (tLogCatcher) check box is selected, messages are harvested from Java Exception, the tDie
component, and/or the tWarn component according to the state of the Catch runtime errors, Catch user errors, and
Catch user warnings check boxes.


• When the Use volumetrics (tFlowMeterCatcher) check box is selected, the row counts from all tFlowMeter components in a Job are captured.
4. Select the On Console check box if you want to display data on the console.
5. Select the On Files check box if you want to store data in delimited files.
The relevant fields are enabled or disabled according to your settings. Specify the name of the delimited files in the
Stats File Name, Log File Name, and/or Meter file name fields according to the type of the information you select.
6. Select the On Database check box if you want to store data in database tables.
The relevant fields are enabled or disabled according to your settings. Specify the value of the database connection
parameters in the corresponding fields, and the name of the database tables in the Stats Table, Logs Table, and/or
Meter Table fields according to the type of the information you select.

Tip: The stats, logs, and/or meter tables are created automatically by Talend if they do not exist. You can also create
those tables using tCreateTable based on the schema of tStatCatcher, tLogCatcher, and tFlowMeterCatcher, with
customized column length to avoid data truncation issues.

Warning: Stats and logs are not supported with the generic JDBC component for Teradata. To store statistics and log data in a Teradata database, select Teradata from the Db Type drop-down list and provide the related parameters for connecting to the Teradata database as prompted.

Context settings

You can define default context parameters you want to use in your Jobs.

Procedure
1. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar, to open the Project Settings dialog box.
2. In the tree view of the dialog box, expand the Job Settings node and then select the Implicit Context Load check box to
display the configuration parameters of the Implicit tContextLoad feature.

3. Select the From File or From Database check boxes according to where you want to store your contexts.
4. For files, fill in the file path in the From File field and the field separator in the Field Separator field.
5. For databases, select the Built-in or Repository mode in the Property Type list and fill in the next fields.
6. Fill in the Table Name and Query Condition fields.
7. Select the type of system message you want to have (warning, error, or info) in case a variable is loaded but is not in the
context or vice versa.
8. Click Apply and Close to apply your changes and close the dialog box.


Applying Project Settings

About this task


From the Project Settings dialog box, you can choose to which Job in the Repository tree view you want to apply the Implicit
Context Load and Stats and Logs settings.
To do so:

Procedure
1. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar, to open the Project Settings dialog box.
2. In the tree view of the dialog box, expand the Job Settings node and then click Use Project Settings to display the options for using Implicit Context Load and Stats and Logs in the Jobs.

3. In the Implicit Context Load Settings area, select the check boxes corresponding to the Jobs in which you want to use
the implicit context load option.
4. In the Stats Logs Settings area, select the check boxes corresponding to the Jobs in which you want to use the stats and
logs option.
5. Click Apply and Close to apply your changes and close the dialog box.

Status settings

About this task


In the Project Settings dialog box, you can also define the Status.
To do so:

Procedure
1. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar, to open the Project Settings dialog box.


2. In the tree view of the dialog box, click the Status node to define the main properties of your Repository tree view
elements.
The main properties of a repository item gather information such as the Name, Purpose, Description, Author, Version and Status of the selected item. Most properties are free text fields, but the Status field is a drop-down list.

3. Click the New... button to display a dialog box and populate the Status list with the most relevant values, according to your needs. Note that the Code cannot be more than 3 characters long and the Label is required.

Talend distinguishes between two status types: Technical status and Documentation status.
The Technical status list displays classification codes for elements that are to be run on stations, such as Jobs, metadata or routines.
The Documentation status list helps classify the repository elements that can be used to document processes (documentation).
4. Once you have completed the status setting, click OK to save it.
The Status list will offer the status levels you defined here when you define the main properties of your Job designs.
5. In the Project Settings dialog box, click Apply to validate your changes and then Apply and Close to close the dialog
box.

Security settings

About this task


You can hide or show your passwords in your documentation, metadata, contexts, and so on, when they are stored in the Repository tree view.


To hide your password:

Procedure
1. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar, to open the Project Settings dialog box.
2. In the tree view of the dialog box, click the Security node to open the corresponding view.
3. Select the Hide passwords check box to hide your password.

Note:
If you select the Hide passwords check box, your password will be hidden for all your documentation, contexts, and so on, as well as for your component properties when you select Repository in the Property Type field of the component Basic settings view. However, if you select Built-in, the password will not be hidden.

4. In the Project Settings dialog box, click Apply to validate your changes and then Apply and Close to close the dialog
box.

Custom component

A new framework is available to build custom components and must be used from this version onwards. Refer to Developing a component using Component Kit on Talend Help Center (https://help.talend.com).
When your Studio is connected to Git, the Custom component node displays in the tree view to the left of the Project Settings dialog box, allowing you to share the external user or custom components installed in your Studio with the other users of the same project.

Procedure
1. On the toolbar of the Studio main window, click the Project Settings icon, or click File > Edit Project Properties from the menu bar, to open the Project Settings dialog box.
2. In the tree view of the dialog box, click the Custom component node to open the corresponding view on the right of the
dialog box. If you have already installed your user or custom component in your studio, these components display on
the left part of the Custom component view.


The custom components can be installed from the Preferences dialog box or imported from Talend Exchange.
For further information about how to install a user component from the Preferences dialog box, see How to define the user component folder (Talend > Components) on page 604.
For further information about how to import external custom components, see Downloading/uploading Talend Community components on page 90.
One example is provided on Talend Help Center, which describes how to download a custom component from Talend Exchange and install it.
For more information, see How to install and update a custom component on Talend Help Center (https://help.talend.com).
3. Click the custom or user component(s) of your interest to activate the corresponding button, and click this activated button to move the selected component(s) into the Shared Component view.
To revoke this sharing, select the component(s) you want to stop sharing in the Shared Component view and click the corresponding button to move the selected component(s) back into the Custom Components view.
4. Click Apply to validate this move.
5. Click Apply and Close to close the dialog box.

Customizing the workspace


When using Talend Studio to design a data integration Job, you can customize the Palette layout and setting according to
your needs. You can as well change the position of any of the panels that exist in the Studio to meet your requirements.

Note:
All the panels, tabs, and views described in this documentation are specific to Talend Studio. Some views listed in the Show View dialog box are Eclipse specific and are not covered in this documentation. For information on such views, check the Eclipse online documentation at http://www.eclipse.org/documentation/.

Changing the Palette layout and settings

The Palette contains all basic technical components and shapes as well as branches for Job design and business modeling in
the design workspace. These components and shapes as well as branches are grouped in families and sub-families.
Talend Studio enables you to change the layout and position of your Palette according to your requirements. The sections below explain all the management options you can carry out on the Palette.
Showing, hiding the Palette and changing its position
By default, the Palette might be hidden on the right hand side of your design workspace.


If you want the Palette to show permanently, click the left arrow, at the upper right corner of the design workspace, to make
it visible at all times.
You can also move around the Palette outside the design workspace within the Integration perspective. To enable the
standalone Palette view, select from the menu Window > Show View... > General > Palette.
If you want to set the Palette apart in a panel, right-click the Palette head bar and select Detached from the contextual
menu. The Palette opens in a separate view that you can move around wherever you like within the perspective.
Displaying/hiding component families
You can display/hide component families according to your needs, in case of visibility problems for example. To do so, right-click the Palette and select Display folder to display component families and Hide folder to display components without their families.

Note:
This display/hide option can be very useful when you are in the Favorite view of the Palette. In this view, you usually have a limited number of components; if you display them without their families, they appear in an alphabetical list, which facilitates their usage. For more information about the Palette favorite, see Setting the Palette favorite on page 591.

Maintaining a component family open


If you often use one or many component families, you can add a pin on their names to stop them from collapsing when you
select components from other families.

To add a pin, click the pin icon on the top right-hand corner of the family name.
Filtering the Palette
You can select the components to be shown or hidden on your Palette. You can also add to the Palette the components that
you developed yourself.
For more information about filtering the Palette, see Palette Settings on page 571.


For more information about adding components to the Palette, either from Talend or from your own development, see
Downloading/uploading Talend Community components on page 90 and/or How to define the user component folder
(Talend > Components) on page 604.
Setting the Palette favorite

About this task


The Palette offers search and favorite possibilities that in turn facilitate its usage.
You can add/remove components to/from the Palette favorite view in order to have a quick access to all the components
that you mostly use.
To do so:

Procedure
1. From the Palette, right-click the component you want to add to Palette favorite and select Add To Favorite.

2. Do the same for all the components you want to add to the Palette favorite then click the Favorite button in the
upper right corner of the Palette to display the Palette favorite.


Only the components added to the favorite are displayed.


To delete a component from the Palette favorite, right-click the component you want to remove from the favorite and
select Remove From Favorite.

To restore the Palette standard view, click the Standard button in the upper right corner of the Palette.
Changing components layout in the Palette
You can change the layout of the component list in the Palette to display them in columns or in lists, as icons only or as
icons with short description.
You can also enlarge the component icons for better readability of the component list.
To do so, right-click any component family in the Palette and select the desired option in the contextual menu or click
Settings to open the Palette Settings window and fine-tune the layout.

Changing panels positions

All panels in the open Studio can be moved around according to your needs.


Procedure
• Click the head border of a panel or click a tab, hold down the mouse button and drag the panel to the target
destination, and then release to change the panel position.

Click the minimize/maximize icons to minimize or maximize the corresponding panel.
For more information on how to display or hide a panel/view, see Displaying Job configuration tabs/views on page
593.
• Click the close icon to close a tab/view. To reopen a view, click Window > Show View > Talend, then click the name
of the panel you want to add to your current view.
• If the Palette does not show or if you want to set it apart in a panel, go to Window > Show view... > General > Palette.
The Palette opens in a separate view that you can move around wherever you like within the perspective.

Displaying Job configuration tabs/views

The configuration tabs are located in the lower half of the design workspace of the Integration perspective. Each tab opens
a view that displays detailed information about the selected element in the design workspace.


The Component, Run Job and Contexts views gather all information relative to the graphical elements selected in the design
workspace or the actual execution of the open Job.

Note:
By default, when you launch Talend Studio for the first time, the Problems tab will not be displayed until the first Job is
created. After that, Problems tab will be displayed in the tab system automatically.

The Modules and Scheduler (deprecated) tabs are located in the same tab system as the Component, Logs and Run Job tabs. Both views are independent from the active or inactive Jobs open on the design workspace.
Some of the configuration tabs are hidden by default, such as the Error Log, Navigator, Job Hierarchy, Problems, Modules and Scheduler (deprecated) tabs. You can show hidden tabs in this tab system and directly open the corresponding view if you select Window > Show view and then, in the open dialog box, expand the corresponding node and select the element you want to display.

Filtering entries listed in the Repository tree view


Talend Studio provides the possibility to choose what nodes, Jobs or items you want to list in the Repository tree view.
You can filter the Repository tree view by Job name, Job status, the user who created the Job/items, or simply by selecting/clearing the check box next to the node/item you want to display/hide in the view. You can also set several filters simultaneously.

Filtering by Job name

About this task


To filter Jobs listed in the Repository tree view by Job name, complete the following:

Procedure
1.
In the Studio, click the icon in the upper right corner of the Repository tree view and select Filter Setting...
from the contextual menu.
The Repository Filter Setting dialog box displays.


2. Select the Filter By Name check box.


The corresponding field becomes available.

3. Follow the rules set below the field when writing the patterns you want to use to filter the Jobs.
In this example, we want to list in the tree view all Jobs that start with tMap or test.
4. In the Repository Filter dialog box, click OK to validate your changes and close the dialog box.
Only the Jobs that correspond to the filter you set are displayed in the tree view, those that start with tMap and test
in this example.


Results

Note:

You can switch back to the default tree view, which lists all nodes, Jobs and items, by simply clicking the icon again. This will cause the green plus sign appended on the icon to turn into a red minus sign.

Filtering by user

About this task


To filter entries in the Repository tree view by the user who created the Jobs/items, complete the following:

Procedure
1.
In the Studio, click the icon in the upper right corner of the Repository tree view and select Filter Setting...
from the contextual menu.
The Repository Filter dialog box displays.


2. Clear the All Users check box.


The corresponding fields in the table that follows become available.

This table lists the authentication information of all the users who have logged in to Talend Studio and created a Job or
an item.
3. Clear the check box next to a user if you want to hide all the Jobs/items created by him/her in the Repository tree view.
4. Click OK to validate your changes and close the dialog box.
All Jobs/items created by the specified user will disappear from the tree view.


Results

Note:

You can switch back to the default tree view, which lists all nodes, Jobs and items, by simply clicking the icon again. This will cause the green plus sign appended on the icon to turn into a red minus sign.

Filtering by Job status

About this task


To filter Jobs in the Repository tree view by the job status, complete the following:

Procedure
1.
In the Studio, click the icon in the upper right corner of the Repository tree view and select Filter Setting...
from the contextual menu.
The Repository Filter dialog box displays.

2. In the Filter By Status area, clear the check boxes next to the status type if you want to hide all the Jobs that have the
selected status.
3. Click OK to validate your changes and close the dialog box.
All Jobs that have the specified status will disappear from the tree view.


Results

Note:

You can switch back to the default tree view, which lists all nodes, Jobs and items, by simply clicking the icon again. This will cause the green plus sign appended on the icon to turn into a red minus sign.

Choosing what repository nodes to display

About this task


To filter repository nodes, complete the following:

Procedure
1.
In the Integration perspective of the Studio, click the icon in the upper right corner of the Repository tree
view and select Filter Setting... from the contextual menu.
The Repository Filter dialog box displays.

2. Select the check boxes next to the nodes you want to display in the Repository tree view.


Consider, for example, that you want to show in the tree view all the Jobs listed under the Job Designs node, three of
the folders listed under the SQL Templates node and one of the metadata items listed under the Metadata node.
3. Click OK to validate your changes and close the dialog box.
Only the nodes/folders for which you selected the corresponding check boxes are displayed in the tree view.

Results

Note:
If you do not want to show all the Jobs listed under the Job Designs node, you can filter the Jobs using the Filter By Name
check box. For more information on filtering Jobs, see Filtering by Job name on page 594.

Setting Talend Studio preferences


You can define various properties for all the perspectives of Talend Studio according to your needs and preferences.
Numerous settings you define can be stored in the Preferences and thus become your default values for all new Jobs you create.


The following sections describe specific settings that you can set as preference.
First, click the Window menu of Talend Studio, then select Preferences.

Java Interpreter path (Talend)

About this task


The Java Interpreter path is set based on the location of the Java file on your computer (for example C:\Program Files
\Java\jre1.8.0_51\bin\java.exe).

To customize your Java Interpreter path:

Procedure
1. If needed, click the Talend node in the tree view of the Preferences dialog box.
2. Enter a path in the Java interpreter field if the default directory does not display the right path.

Results
On the same view, you can also change the preview limit and the path to the temporary files or the OS language.

Designer preferences (Talend > Appearance)

About this task


You can set component and Job design preferences to let your settings be permanent in the Studio.

Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend > Appearance nodes.
3. Click Designer to display the corresponding view.
On this view, you can define the way component names and hints will be displayed.


4. Select the relevant check boxes to customize your use of the Talend Studio design workspace.

Artifact repository connection preferences (Talend > Artifact Repository)

About this task


You can configure how long your Talend Studio keeps connection with the Artifact repository, from which your Talend Studio
retrieves updates and custom libraries and to which you can publish your artifacts.

Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend node and click Artifact Repository to display the relevant view.


3. In the Timeout for artifact repository connection (ms) field, specify the time in milliseconds you want your Talend
Studio to wait for an interaction with the Artifact repository server before cutting the connection, 0 for an infinite
timeout.
4. Click Apply to apply your changes; click Apply and Close to validate the settings and close the Preferences dialog box.

Artifact repository for libraries preferences

About this task


You can configure whether to share libraries to the local libraries repository at Talend Studio startup. You can also set
preferences for Talend Studio to check for updates of custom libraries on the Artifact repository server.

Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend and Artifact Repository nodes in succession and then click Libraries to display the relevant view.
3. If needed, select the Share libraries to artifact repository at startup check box to share libraries to the local libraries
repository at Talend Studio startup.
Note that if you want to share libraries to the local libraries repository, you need to first configure the repository in your
management system, Talend Cloud Management Console (for Cloud) or Talend Administration Center (for on-premises).
For more information, see Configuring the artifact repository from Talend Cloud Management Console or Setting up the
user library location.
By default, the Share libraries to artifact repository at startup check box is cleared to improve the startup performance
of Talend Studio. You can share libraries after starting Talend Studio by clicking on the Modules view. For more
information, see Installing external modules.
4. In the Jars check frequency field, specify how often you want your Talend Studio to check for updates according to your
needs:
• -1 if you don't want your Talend Studio to check for updates at all.
• 0 if you want your Talend Studio to check for updates at any action that needs a Jar or Jars, for example when a Job
is built or executed from the Studio.
• the number of days between two checks.
5. Click Apply to apply your changes; click Apply and Close to validate the settings and close the Preferences dialog box.


How to define the user component folder (Talend > Components)

A new framework is available to build custom components and must be used from this version onwards. Refer to Developing a component using Talend Component Kit on Talend Help Center (https://help.talend.com). This documentation still applies to components downloaded from Talend Exchange.
You can download and install custom components for use in the Integration perspective of Talend Studio.
For further information about downloading and installing components from Talend Exchange, see How to install and update a custom component on Talend Help Center (https://help.talend.com).
The following procedure applies only to the external components.
The user component folder is the folder that contains the components you created and/or the ones you downloaded from
Talend Exchange. To define it, proceed as follows:

Procedure
1. In the tree view of the Preferences dialog box, expand the Talend node and select Components.

2. Enter the User component folder path or browse to the folder that contains the custom components to be added to the
Palette of the Studio.
In order to be imported to the Palette of the Studio, the custom components have to be in separate folders located at
the root of the component folder you have defined.

Note: If some pieces of code need to be reused by the javajet files of your custom components, you need to create
a folder templates under the user component folder and put your reusable code there. For example, if there is a
reusable code file Log4jFileUtil.javajet under the folder templates, you can reuse it in other javajet files
by adding <%@ include file="../templates/Log4jFileUtil.javajet"%>.

3. Click Apply to validate the preferences. You can also click Apply and Close to validate the preferences and close the
dialog box.
The Studio restarts and the external components are added to the Palette.


Results
This configuration is stored in the metadata of the workspace. If the workspace of Talend Studio changes, you have to reset
this configuration again.

How to change specific component settings (Talend > Components)

About this task


You can modify some specific component settings such as the default mapping link display.
The following procedure applies to the external components and to the components included in the Studio.
To modify those specific components settings, proceed as follows:

Procedure
1. In the tree view of the Preferences dialog box, expand the Talend node and select Components.

2. Set your preferences as needed.


• In the Row limit field, set the number of the data rows you want to see in the data viewer. For further information
about the data viewer, see Viewing in-process data on page 132.
• From the Default mapping links display as list, select the mapping link type you want to use in the tMap.
• Under tRunJob, select the check box if you do not want the corresponding Job to open upon double clicking a
tRunJob component.

Note: You will still be able to open the corresponding Job by right clicking the tRunJob component and
selecting Open tRunJob Component.

• Under Joblet, select the check box if you do not want the corresponding Job to open upon double clicking a Joblet
component.

Note: You will still be able to open the corresponding Job by right clicking the Joblet component and selecting
Open Joblet Component.


• Under Component Assist, select the Enable Component Creation Assistant check box if you want to be able to add
a component by typing its name in the design workspace. For more information, see Adding components to the Job
on page 49.
3. Click Apply to validate the set preferences. You can also click Apply and Close to validate the set preferences and close
the dialog box.

Results
This configuration is stored in the metadata of the workspace. If the workspace of Talend Studio changes, you have to reset
this configuration again.

Documentation preferences (Talend > Documentation)

About this task


You can include the source code in the generated documentation.

Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend node and click Documentation to display the documentation preferences.

3. Customize the documentation preferences according to your needs:


• Select the Source code to HTML generation check box to include the source code in the HTML documentation that
you will generate.
• Select the Automatic update of corresponding documentation of job/joblet check box to automatically update the
Job and Joblet documentation.
• In the User Doc Logo field, specify an image file if you want your documentation to include your own logo.
• In the Company Name field, enter your company name to show on your documentation, if needed.
• Select the Use CSS file as a template when export to HTML check box to activate the CSS File field if you need to
use a CSS file to customize the exported HTML files.

Results
For more information on documentation, see Generating HTML documentation on page 186 and Documentation tab on page
85.

Exchange preferences (Talend > Exchange)

Before you begin

Warning: Make sure that the -Dtalend.disable.internet parameter is not present in the Studio .ini file or is set
to false.

About this task


You can set preferences related to your connection with Talend Exchange, which is part of the Talend Community, in
Talend Studio. To do so:


Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend node and click Exchange to display the Exchange view.

3. Set the Exchange preferences according to your needs:


• If you are not yet connected to the Talend Community, click Sign In to go to the Connect to Talend Community
page to sign in using your Talend Community credentials or create a Talend Community account and then sign
in.
If you are already connected to the Talend Community, your account is displayed and the Sign In button becomes
Sign Out. To get disconnected from the Talend Community, click Sign Out.
• If you are not yet connected to the Talend Community and you do not want to be prompted to connect to the
Talend Community when launching the Studio, select the Don't ask me to connect to TalendForge at login check
box.
• By default, while you are connected to the Talend Community, whenever an update to an installed community
extension is available, a dialog box appears to notify you about it. If you often check for community extension
updates and you do not want that dialog box to appear again, clear the Notify me when updated extensions are
available check box.

Metadata Bridge preferences (Talend > Import/Export)

You can set preferences for the Talend Metadata Bridge to make it work the way you want.
For more information on using the Talend Metadata Bridge to import/export metadata, see Getting Started with the Talend
Metadata Bridge on Talend Help Center (https://help.talend.com).
This feature is not shipped with your Talend Studio by default. You need to install it using the Feature Manager. For more
information, see Installing features using the Feature Manager.

Procedure
1. From the menu bar, click Window > Preferences to display the Preferences dialog box.
2. Expand the Talend and Import/Export nodes in succession and then click Metadata Bridge to display the relevant view.


3. Set the preferences according to your use of the Talend Metadata Bridge:
• In the Location area, select the Embedded option to use the MIMB tool embedded in the Talend Metadata Bridge.
This is the default option.
To use the MIMB tool you have installed locally, select Local Directory and specify the installation directory of the
MIMB tool.
• In the Temp folder field, specify the directory to hold the temporary files generated during metadata import/
export executions, if you do not want to use the default directory.
• In the Log folder field, specify the directory to hold the log files generated during metadata import/export
executions, if you do not want to use the default directory.
• Select the Show detailed logs check box to generate detailed log files during metadata import/export executions.
4. Click Apply to apply your changes; click Apply and Close to validate the settings and close the Preferences dialog box.


Language preferences (Talend > Internationalization)

About this task


You can set language preferences in Talend Studio. To do so:

Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend node and click Internationalization to display the relevant view.


3. From the Local Language list, select the language you want to use for the graphical interface of Talend Studio.
4. Click Apply and then Apply and Close to validate your change and close the Preferences dialog box.
5. Restart the Studio to display the graphical interface in the selected language.

Palette preferences (Talend > Palette Settings)

Talend Studio allows you to configure the maximum number of components that can be displayed in the Recently Used list
on the Palette.

Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend node and click Palette Settings to display the corresponding view.
3. In the Recently Used list size field, enter the maximum number of components that can be displayed in the Recently
Used list on the Palette.
4. Click Apply and Close to save your change.
The change will take effect after you restart Talend Studio.

Performance preferences (Talend > Performance)

You can set performance preferences according to your use of Talend Studio. To do so, proceed as follows:

Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend node and click Performance to display the relevant view.


Note: When working on a remote connection, the names of the two fields Default connection timeout (seconds) and
Default read timeout (seconds) are displayed as Connection timeout with Administration Center (seconds) and Read
timeout from Administration Center (seconds) respectively.

Note: You can improve your performance when you deactivate automatic refresh.

3. Set the performance preferences according to your use of Talend Studio:


• Select the Deactivate auto detect/update after a modification in the repository check box to deactivate the
automatic detection and update of the repository.
• Select the Check the property fields when generating code check box to activate the audit of the property fields
of the component. When a property field is not correctly filled in, the component is surrounded by a red border on the
design workspace.

Note: You can optimize performance by disabling the verification of component property fields, that is, by clearing
the Check the property fields when generating code check box.

• Select the Generate code when opening the job check box to generate code when you open a Job.
• Select the Check only the last version when updating jobs or joblets check box to only check the latest version
when you update a Job or a Joblet.
• Select the Propagate contexts added in repository context groups check box to allow propagating contexts newly
added in repository context groups to Jobs.
With this option enabled, each time you open a Job that uses a repository context group, you will see a dialog box
asking you whether you want to perform a context propagation if any context has been added in the context group
but not synchronized to the Job yet.
This option is disabled by default.


• Select the Propagate add/delete variable changes in repository contexts check box to allow propagating variable
changes in the Repository Contexts.
• Select the Activate the timeout for database connection check box to establish database connection time out. Then
set this time out in the Connection timeout (seconds) field.
• Select the Add all user routines to job dependencies, when creating a new job check box to add all user routines to
Job dependencies upon the creation of new Jobs.
• When working in a Git-managed project, select the Automatic refresh of locks check box to allow the Studio to
automatically retrieve the lock status of all items contained in the project upon each action made in the Studio.
If you find communications with the Talend Administration Center slow or if the project contains a large number of
locked items, you can clear this check box to improve Studio performance.
• When working on a local project, set a timeout value in the Default connection timeout (seconds) field to define
how long your Talend Studio retries connecting to the Artifact Repository server in case of a connectivity issue.
Enter 0 to disable connection timeout and allow infinite retries.
• When working on a local project, set a timeout value in the Default read timeout (seconds) field to define how
long your Talend Studio should wait for a response from the Artifact Repository server before throwing a timeout
exception. A smaller value helps improve Studio performance, while a larger value prevents frequent connection
timeout exceptions. The default value is 60 (seconds), and 0 means no read timeout.
• When working on a remote connection, set a timeout value in the Connection timeout with Administration Center
(seconds) field to define how long your Talend Studio retries connecting to the Artifact Repository server and the
Administration Center in case of a connectivity issue. Enter 0 to disable connection timeout and allow infinite
retries.
• When working on a remote connection, set a timeout value in the Read timeout from Administration Center
(seconds) field to define how long your Talend Studio should wait for a response from the Artifact Repository
server and the Administration Center before throwing a timeout exception. A smaller value helps improve Studio
performance, while a larger value prevents frequent connection timeout exceptions. The default value is 60
(seconds), and 0 means no read timeout.
• In the Code Format timeout (seconds) field, specify the number of seconds after which your Talend Studio stops
formatting the source code upon code generation, for example when you switch from the Designer view
to the Code view or when you build a Job. The value must be an integer greater than 0. Setting a small timeout
value helps prevent performance issues at the price of lower readability of the source code, especially for a large,
complex Job.
• In the HBase/MapR-DB scan limit (for retrieving schema) field, specify the number of columns to be displayed for
all the HBase/MapR-DB connection metadata.

Project reference preferences (Talend > Repository)

About this task


You can set your preferences for project references in Talend Studio. To do so:

Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend node and click Repository to display the relevant view.

3. Select the Merge the reference project check box to show referenced projects as part of the Job Designs folder.
4. Click Apply and then Apply and Close to close the Preferences dialog box.

Results
For more detailed information about working with reference projects, see Working with referenced projects on page 190.

Debug and Job execution preferences (Talend > Run/Debug)

About this task


You can set your preferences for debug and Job executions in Talend Studio. To do so:


Procedure
1. From the menu bar, click Window > Preferences to display the Preferences dialog box.
2. Expand the Talend node and click Run/Debug to display the relevant view.

3. Set the parameters according to your needs.


• In the Talend client configuration area, you can define the execution options to be used by default:

Stats port range: Specify a range for the ports used for generating statistics, in particular if the ports defined by
default are used by other applications.

Trace port range: Specify a range for the ports used for generating traces, in particular if the ports defined by
default are used by other applications.

Save before run: Select this check box to save your Job automatically before its execution.

Clear before run: Select this check box to delete the results of a previous execution before re-executing the Job.

Exec time: Select this check box to show the Job execution duration.

Statistics: Select this check box to show the statistics measurement of data flow during Job execution.

Traces: Select this check box to show data processing during Job execution.

Pause time: Enter the time you want to set before each data line in the traces table.

• In the Job Run VM arguments list, you can define the parameters of your JVM according to your needs. The
default parameters -Xms256M and -Xmx1024M correspond respectively to the minimal and maximal memory
capacities reserved for your Job executions.
If you want to use some JVM parameters for only a specific Job execution, for example if you want to display the
execution result for this specific Job in Japanese, you need to open this Job's Run view and then, in the Run view,
configure the advanced execution settings to define the corresponding parameters.
For further information about the advanced execution settings of a specific Job, see Setting advanced execution settings
on page 201.
For more information about possible parameters, check the site http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html.
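For instance, a Job that needs more memory and must print its console output in Japanese might be run with VM arguments along the following lines. These values are illustrative only and should be adapted to your environment; each argument typically goes on its own row of the Job Run VM arguments list.

-Xms512M
-Xmx2048M
-Dfile.encoding=UTF-8
-Duser.language=ja
-Duser.country=JP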


Configuring remote execution (Talend > Run/Debug)

Talend Studio allows you to deploy and execute your Jobs on a remote JobServer, whether you work on a local project or
on a remote one; when working on a remote project, you must be connected to Talend Administration Center.

Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend and the Run/Debug nodes in succession and then click Remote.

3. Optional: To enable monitoring the JVM resource usage during Job execution on a remote JobServer, do the following:
a) Select the Enable remote monitoring check box.
b) In Remote JMX port field, enter a listening port number that is free in your system.
4. Optional: To define a specific Unix OS user allowed to start the Job execution on a remote JobServer, enter the user
name in the Run as (Set up user for Unix) field.
If left blank, any of the existing Operating System users can start the Job execution.

Tip: By default, the user name must start with a lower-case letter from a to z, followed by a combination of lower-
case letters (a to z) and numbers (from 0 to 9). To allow using characters other than those letters and numbers,
you need to modify the regular expression ^[a-z][-a-z0-9]*\$ in the value of the
org.talend.remote.jobserver.server.TalendJobServer.RUN_AS_USER_VALIDATION_REGEXP parameter in the file
{Job_Server_Installation_Folder}\agent\conf\TalendJobServer.properties. For example:
• To define a user name pattern that should include a dot, like firstname.lastname, modify the regular
expression to ^[a-z][-a-z0-9]*.[a-z][-a-z0-9]*\$.
• To allow using one or more underscores (_) in the user name, modify the regular expression to
^[a-z][-a-z_0-9]*\$.
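As an illustration only (the exact value and escaping depend on the pattern you need and on your installation), the corresponding line in TalendJobServer.properties might then look like this:

org.talend.remote.jobserver.server.TalendJobServer.RUN_AS_USER_VALIDATION_REGEXP=^[a-z][-a-z_0-9]*\$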

5. If the network conditions are poor, enter an appropriate value in the Max size per package to transfer field to ensure that
the Job packages received on the JobServer are complete.
By default, the maximum package size is 1048576 bytes (1 MB).
6. To remotely execute your Jobs while you are working on a local project, configure the remote JobServer details.


Note: When working on a remote project, you need to connect the Studio with Talend Administration Center so that
the JobServer settings are automatically retrieved.
If the Studio is disconnected from the Talend Administration Center, you cannot run the Job remotely, because the
JobServer settings cannot be retrieved and you cannot configure them manually.

a) In the Remote Jobs Servers area, click the [+] button to add a new line in the table.
b) Fill in all the fields for the JobServer: Name, Host name (or IP address), Standard port, Username, Password, and File
transfer Port.
The Username and Password fields are not required if no users are configured in the configuration file
conf/users.csv of the JobServer.
7. Click Apply and then Apply and Close to validate the changes.

Schema preferences (Talend > Specific Settings)

About this task


You can define the default data length and type of the schema fields of your components.

Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend node, and click Specific Settings > Default Type and Length to display the data length and type of
your schema.

3. Set the parameters according to your needs:


• In the Default Settings for Fields with Null Values area, fill in the data type and the field length to apply to the null
fields.
• In the Default Settings for All Fields area, fill in the data type and the field length to apply to all fields of the
schema.
• In the Default Length for Data Type area, fill in the field length for each type of data.


SQL Builder preferences (Talend > Specific Settings)

About this task


You can set your preferences for the SQL Builder. To do so:

Procedure
1. From the menu bar, click Window > Preferences to open the Preferences dialog box.
2. Expand the Talend and Specific Settings nodes in succession and then click Sql Builder to display the relevant view.

3. Customize the SQL Builder preferences according to your needs:


• Select the add quotes, when you generated sql statement check box to precede and follow column and table
names with inverted commas in your SQL queries.
• In the AS400 SQL generation area, select the Standard SQL Statement or System SQL Statement check boxes to
use standard or system SQL statements respectively when you use an AS/400 database.
• Clear the Enable check queries in the database components (disable to avoid warnings for specific queries) check
box to deactivate the verification of queries in all database components.

SSL settings preferences (Talend > SSL)

About this task


You can set SSL preferences to configure your Talend Studio for secure communications with remote servers.

Procedure
1. From the menu bar, click Window > Preferences to display the Preferences dialog box.
2. Expand the Talend node and then click SSL to display the relevant view.
3. Define the Keystore Configuration for the local certificate to be sent to the remote host:
a) Click Browse next to the Path field and browse to the keystore file that stores your local credentials.
b) In the Password field, enter the keystore password.
c) From the Keystore Type list, select the type of keystore to use.
4. Define the Truststore Configuration for verification of the remote host's certificate:
a) Click Browse next to the Path field and browse to the truststore file.
b) In the Password field, enter the truststore password.
c) From the Keystore Type list, select the type of keystore to use.
5. Click Apply to apply your changes; click Apply and Close to validate the settings and close the Preferences dialog box.
6. Restart your Talend Studio for the configurations to take effect.

What to do next
You can reuse this SSL connection configuration when you define connections in Metadata and in component configurations
in Jobs.

Configuring update repositories

Talend Studio allows you to configure the update repositories for its feature packages and updates.


Procedure
1. From the menu bar, click Window > Preferences to display the Preferences dialog box.
Alternatively, you can configure the update repositories in the Preferences dialog box opened from the Connections
window before logging into a project. For more information, see Launching Talend Studio.
2. On the tree view in the Preferences dialog box, expand the Talend node and then click Update settings to display the
relevant view.
3. In the Base URL and Update URL fields, enter the URL of the repository for Talend Studio feature packages and updates
respectively.
Note that if you have installed the 8.0 R2022-05 Studio monthly update or a later one provided by Talend, this step is
needed only when you want to use the local update settings.
Pay attention to the following before configuring the values for both fields:
• By default, the value of the Base URL field is set to the Talend official site, and the Update URL field is empty to
avoid conflicts with the settings in Talend Administration Center.
• The Base URL field cannot be empty. Talend Studio feature packages are required when installing Talend Studio
features. For more information, see Managing features in Talend Studio on page 7.
• If the Update URL field is empty, Talend Studio will not automatically detect and notify you about available
updates.
For more information about how to update Talend Studio, see Updating Talend Studio.
For more information about the update URL for each Talend Studio monthly update, see Data Fabric Release Notes.
• You can click Restore Defaults to fill both fields with the Talend official sites.
• You can create proxy repositories or host the official Talend repositories and fill both fields with proxy or host
URLs. For more information, see Setting up update repositories for Talend Studio and Continuous Integration.
4. Click Apply and Close to save your changes and close the dialog box.

Usage Data Collector preferences (Talend > Usage Data Collector)

By allowing Talend Studio to collect your Studio usage statistics, you help Talend better understand how users are using
its products, thus enabling Talend to improve product quality and performance and serve users better.
By default, Talend Studio automatically collects your Studio usage data and sends this data on a regular basis to servers
hosted by Talend. You can view the usage data collection and upload information and customize the Usage Data Collector
preferences according to your needs.

Note: Be assured that only the Studio usage statistics data will be collected and none of your private information will be
collected and transmitted to Talend.

Procedure
1. From the menu bar, click Window > Preferences to display the Preferences dialog box.
2. Expand the Talend node and click Usage Data Collector to display the Usage Data Collector view.
3. Read the message about the Usage Data Collector, and, if you do not want the Usage Data Collector to collect and
upload your Studio usage information, clear the Enable capture check box.
4. To have a preview of the usage data captured by the Usage Data Collector, expand the Usage Data Collector node and
click Preview.


5. To customize the usage data upload interval and view the date of the last upload, click Uploading under the Usage Data
Collector node.
The collected product usage data is sent to Talend servers every 5 days by default. To change the data upload interval,
enter a new integer value in days in the Upload Period field and click Apply to save your changes.
The read-only Last Upload field displays the date and time the usage data was last sent to Talend servers.

Setting an authentication-enabled HTTPS proxy for an Azure storage connection

This task sets up an HTTPS proxy with user authentication enabled for an existing Azure storage connection.

About this task


This task enables Talend Studio to connect to Azure storage through an HTTPS proxy server with user authentication enabled. It
assumes that the address of the HTTPS proxy server and the username and password set on the proxy server are available.


Procedure
1. In the Repository pane, expand the Metadata > Azure Storage node.
The Azure storage connection appears under Azure Storage.
2. Right-click the Azure storage connection and select Azure Storage Connection from the context menu.
The Azure Storage dialog box appears.
3. Click Proxy Setting in the Azure Storage dialog box.
The Preferences (Filtered) dialog box appears.
4. Select Enable Basic User Authentication Header and then select General > Network Connections.
The Network Connections pane appears in the right part of the dialog box.
5. Select the HTTPS row in Proxy entries table and then click Edit....
The Edit Proxy Entry dialog box appears.
6. Provide the following settings in the corresponding fields:
• Host: the IP address of the HTTPS proxy server.
• Port: the port number used.
• Require Authentication: select this option to enter a username and password.
• User: the username for authentication.
• Password: the password for authentication.
7. Click OK and then Apply and Close to validate the settings.

Using SQL templates


What is ELT
Extract, Load and Transform (ELT) is a data manipulation process in database usage, especially in data warehousing.
Unlike the traditional ETL (Extract, Transform, Load) mode, in ELT the data is extracted, loaded into the database and
then transformed where it sits in the database, prior to use. The data is migrated in bulk according to the data set, and the
transformation process occurs after the data has been loaded into the target DBMS in its raw format. This way, less stress
is placed on the network and higher throughput is achieved.
However, the ELT mode is certainly not optimal for all situations, for example,
• As SQL is less powerful than Java, the scope of available data transformations is limited.
• ELT requires users that have high proficiency in SQL tuning and DBMS tuning.
• Using ELT with Talend Studio, you cannot pass or reject one single row of data as you can do in ETL. For more
information about row rejection, see Row connection on page 97.
The SQL templates are designed to facilitate ELT, taking these advantages and disadvantages into account.

Introducing Talend SQL templates


SQL is a standardized query language used to access and manage information in databases. Its scope includes data query
and update, schema creation and modification, and data access control. Talend Studio provides a range of SQL templates
to simplify the most common tasks. It also comprises a SQL editor which allows you to customize or design your own SQL
templates to meet less common requirements.
These SQL templates are used with the components from the Talend ELT component family, including tSQLTemplate,
tSQLTemplateFilterColumns, tSQLTemplateCommit, tSQLTemplateFilterRows, tSQLTemplateRollback, tSQLTemplateAggregate
and tSQLTemplateMerge. These components execute the selected SQL statements. Using the UNION, EXCEPT and
INTERSECT operators, you can modify data directly on the DBMS without using the system memory.
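As a purely illustrative sketch (the table and column names below are hypothetical), an ELT-style statement of this kind runs entirely inside the DBMS, combining two staging tables without bringing any rows into the Job's memory:

INSERT INTO customers_all (id, name, country)
SELECT id, name, country FROM customers_eu
UNION
SELECT id, name, country FROM customers_us;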
Moreover, with the help of these SQL templates, you can optimize the efficiency of your database management system by
storing and retrieving your data according to the structural requirements.
Talend Studio provides the following types of SQL templates under the SQL templates node in the Repository tree view:
• System SQL templates: They are classified according to the type of database for which they are tailored.
• User-defined SQL templates: these are templates which you have created or adapted from existing templates.
More detailed information about the SQL templates is presented in the below sections.


Note:
As most of the SQL templates are tailored for specific databases, if you change the database in your system, you will have
to switch to, or develop, new templates for the new database.

Managing Talend SQL templates


Via the SQL Templates folder in the Repository tree view, Talend Studio enables you to use system or user-defined SQL
templates in the Jobs you create in the Studio with the ELT components.
The sections below show you how to manage these two types of SQL templates.

Types of system SQL templates

This section gives detailed information about the different types of pre-defined SQL templates.
Although the statements of each group of templates vary from database to database, according to the operations they are
intended to accomplish, the templates are grouped by type in each folder.
The following list provides these types and their related information.

Name: Aggregate
Function: Realizes aggregation (sum, average, count, etc.) over a set of data.
Associated components: tSQLTemplateAggregate
Required component parameters: Database name, Source table name, Target table name

Name: Commit
Function: Sends a Commit instruction to the RDBMS.
Associated components: tSQLTemplate, tSQLTemplateAggregate, tSQLTemplateCommit, tSQLTemplateFilterColumns, tSQLTemplateFilterRows, tSQLTemplateMerge, tSQLTemplateRollback
Required component parameters: none

Name: Rollback
Function: Sends a Rollback instruction to the RDBMS.
Associated components: tSQLTemplate, tSQLTemplateAggregate, tSQLTemplateCommit, tSQLTemplateFilterColumns, tSQLTemplateFilterRows, tSQLTemplateMerge, tSQLTemplateRollback
Required component parameters: none

Name: DropSourceTable
Function: Removes a source table.
Associated components: tSQLTemplate, tSQLTemplateAggregate, tSQLTemplateFilterColumns, tSQLTemplateFilterRows
Required component parameters: Table name (when used with tSQLTemplate), Source table name

Name: DropTargetTable
Function: Removes a target table.
Associated components: tSQLTemplateAggregate, tSQLTemplateFilterColumns, tSQLTemplateFilterRows
Required component parameters: Target table name

Name: FilterColumns
Function: Selects and extracts a set of data from given columns in the RDBMS.
Associated components: tSQLTemplateAggregate, tSQLTemplateFilterColumns, tSQLTemplateFilterRows
Required component parameters: Target table name (and schema), Source table name (and schema)

Name: FilterRow
Function: Selects and extracts a set of data from given rows in the RDBMS.
Associated components: tSQLTemplateAggregate, tSQLTemplateFilterColumns, tSQLTemplateFilterRows
Required component parameters: Target table name (and schema), Source table name (and schema), Conditions

Name: MergeInsert
Function: Inserts records from the source table into the target table.
Associated components: tSQLTemplateMerge, tSQLTemplateCommit
Required component parameters: Target table name (and schema), Source table name (and schema), Conditions

Name: MergeUpdate
Function: Updates the target table with records from the source table.
Associated components: tSQLTemplateMerge, tSQLTemplateCommit
Required component parameters: Target table name (and schema), Source table name (and schema), Conditions


Accessing a system SQL template

To access a system SQL template, expand the SQL Templates node in the Repository tree view.

Each folder contains a system sub-folder containing pre-defined SQL statements, as well as a UserDefined folder in which
you can store SQL statements that you have created or customized.
Each system folder contains several types of SQL templates, each designed to accomplish a dedicated task.
Apart from the Generic folder, the SQL templates are grouped into different folders according to the type of database for
which they are to be used. The templates in the Generic folder are standard, for use in any database. You can use these as a
basis from which you can develop more specific SQL templates than those defined in Talend Studio.

Note:
The system folders and their content are read only.

From the Repository tree view, proceed as follows to open an SQL template:

Procedure
1. In the Repository tree view, expand SQL Templates and browse to the template you want to open.
2. Double-click the class that you want to open, for example, Aggregate in the Generic folder.
The Aggregate template view displays in the workspace.


Results
You can read the predefined Aggregate statements in the template view. The parameters, such as TABLE_NAME_TARGET
and operation, are to be defined when you design related Jobs. The parameters can then be easily set in the associated
components, as mentioned in the previous section.
Every time you click or open an SQL template, its corresponding property view displays at the bottom of the Studio. Click the
Aggregate template, for example, to view its properties as presented below:

For further information regarding the different types of SQL templates, see Types of system SQL templates on page 619.

Creating user-defined SQL templates

As the transformations you need to accomplish in ELT may exceed the scope of what the given SQL templates can achieve,
Talend Studio allows you to develop your own SQL templates according to some writing rules. These SQL templates are
stored in the UserDefined folders, grouped according to the database type for which they will be used.
For more information on the SQL template writing rules, see SQL statements on page 626.
To create a user-defined SQL template:

Procedure
1. In the Repository tree view, expand SQL Templates and then the category you want to create the SQL template in.


2. Right-click UserDefined and select Create SQLTemplate to open the New SQLTemplate wizard.

3. Enter the information required to create the template and click Finish to close the wizard.
The name of the newly created template appears under UserDefined in the Repository tree view. Also, an SQL template
editor opens on the design workspace, where you can enter the code for the newly created template.
For further information about how to create a user-defined SQL template and how to use it in a Job, see the section
about iterating on DB tables and deleting their content using a user-defined SQL template at MySQL.
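As a purely illustrative sketch of what such a template body might contain (the </TABLE_NAME/> syntax used here is described under SQL template writing rules below, and the TABLE_NAME parameter is assumed to be defined in the associated component):

#sql sentence
DELETE FROM </TABLE_NAME/>;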

A use case of system SQL templates

As there are many common, standardized SQL statements, Talend Studio allows you to benefit from various system SQL
templates.
This section presents you with a use case that takes you through the steps of using MySQL system templates in a Job that:
• opens a connection to a MySQL database.
• collects data grouped by specific value(s) from a database table and writes aggregated data in a target database table.


• deletes the source table where the aggregated data comes from.
• reads the target database table and lists the Job execution result.
To connect to the database and aggregate the database table columns:
Configuring a connection to a MySQL database

Procedure
1. Drop the following components from the Palette onto the design workspace: tMysqlConnection, tSQLTemplateAg
gregate, tSQLTemplateCommit, tMysqlInput, and tLogRow.
2. Link tMysqlConnection to tSQLTemplateAggregate using a Trigger > On Subjob Ok connection.
3. Do the same to link tSQLTemplateAggregate to tSQLTemplateCommit and link tSQLTemplateCommit to tMysqlInput.
4. Link tMysqlInput to tLogRow using a Row > Main connection.

5. Double-click tMysqlConnection to open its Basic settings view.


6. In the Basic settings view, set the database connection details manually.


7. Double-click tSQLTemplateCommit to open its Basic settings view.


8. On the Database Type list, select the relevant database type, and from the Component List, select the relevant database
connection component if more than one connection is used.
Grouping data, writing aggregated data and dropping the source table

Procedure
1. Double-click tSQLTemplateAggregate to open its Basic settings view.

2. On the Database Type list, select the relevant database type, and from the Component List, select the relevant database
connection component if more than one connection is used.
3. Enter the names for the database, source table, and target table in the corresponding fields and define the data
structure in the source and target tables.
The source table schema consists of three columns: First_Name, Last_Name and Country. The target table schema
consists of two columns: country and total. In this example, we want to group citizens by their nationalities
and count the number of citizens in each country. To do that, we define the Operations and Group by parameters accordingly.
4. In the Operations table, click the [+] button to add one or more lines, and then click the Output column cell and select
the output column that will hold the counted data from the drop-down list.
5. Click the Function cell and select the operation to be carried on from the drop-down list.
6. In the Group by table, click the [+] button to add one or more lines, and then click the Output column cell and select the
output column that will hold the aggregated data from the drop-down list.
7. Click the SQL Template tab to open the corresponding view.


8. Click the [+] button twice under the SQL Template List table to add two SQL templates.
9. Click on the first SQL template row and select the MySQLAggregate template from the drop-down list. This template
generates the code to aggregate data according to the configuration in the Basic settings view.
10. Do the same to select the MySQLDropSourceTable template for the second SQL template row. This template generates
the code to delete the source table where the data to be aggregated comes from.

Note:
To add new SQL templates to an ELT component for execution, you can simply drop the templates of your choice
either onto the component in the design workspace, or onto the component's SQL Template List table.

Note:
The templates set up in the SQL Template List table have priority over the parameters set in the Basic settings view
and are executed in a top-down order. So in this use case, if you select MySQLDropSourceTable for the first template
row and MySQLAggregate for the second template row, the source table will be deleted prior to aggregation,
meaning that nothing will be aggregated.

Reading the target database and listing the Job execution result

Procedure
1. Double-click tMysqlInput to open its Basic settings view.


2. Select the Use an existing connection check box to use the database connection that you have defined on the
tMysqlConnection component.
3. To define the schema, select Repository and then click the [...] button to choose the database table whose schema is
used. In this example, the target table holding the aggregated data is selected.
4. In the Table Name field, type in the name of the table you want to query. In this example, the table is the one holding
the aggregated data.
5. In the Query area, enter the query statement to select the columns to be displayed (see the example query after this procedure).
6. Save your Job and press F6 to execute it.
The source table is deleted.

A two-column table citizencount is created in the database. It groups citizens according to their nationalities and
gives their total count in each country.
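For step 5 above, a query along the following lines could be used. This is only an illustration, based on this example's target table citizencount and its country and total columns; in the component's Query field the statement is typically enclosed in double quotation marks.

SELECT country, total FROM citizencount;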

SQL template writing rules


SQL statements
An SQL statement can be any valid SQL statement that the related JDBC driver is able to execute. The SQL template code is a
group of SQL statements. The basic rules for writing an SQL statement in the SQL template editor are:
• An SQL statement must end with ;.
• An SQL statement can span several lines. In this case, no line should end with ; except the last one.

Comment lines
A comment line starts with # or --. Any line that starts with # or -- is ignored during code generation.

Note:
This rule also applies to lines in the middle of an SQL statement and to lines within the <%...%> syntax.
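The following short template body illustrates these rules: the first statement spans two lines and only its last line ends with a semicolon, while the lines starting with # or -- are ignored during code generation (the table names are illustrative only).

# create the working table
CREATE TABLE temp_customers
(id INT, name VARCHAR(50));
-- remove the previous working table
DROP TABLE old_customers;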

The <%...%> syntax


This syntax can span lines. The following list points out what you can do with this syntax and what you should pay attention
to.
• You can define new variables, use Java logical code like if, for and while, and also get parameter values.
For example, if you want to get the FILE_Name parameter, use the code as follows:

<%
String filename = __FILE_NAME__;
%>


• This syntax cannot be used within an SQL statement. In other words, it should be used between two separated SQL
statements.
For example, the syntax in the following code is valid.

#sql sentence
DROP TABLE temp_0;
<%
#loop
for(int i=1; i<10; i++){
%>
#sql sentence
DROP TABLE temp_<%=i %>;
<%
}
%>
#sql sentence
DROP TABLE temp_10;

In this example, the syntax is used between two separate SQL statements: DROP TABLE temp_0; and DROP TABLE
temp_<%=i%>;.
The SQL statements are intended to remove several tables, beginning with temp_0. The code between <% and %> generates
a sequence of numbers in a loop to identify the tables to be removed, and closes the loop after the number generation.
• Within this syntax, the <%=...%> or </.../> syntax should not be used.
<%=...%> and </.../> are also syntaxes intended for the SQL templates. The sections below describe them.

Note:
Parameters that the SQL templates can access with this syntax are simple. They are often used for connection purpose
and can be easily defined in components, such as TABLE_NAME, DB_VERSION, SCHEMA_TYPE, etc.

The <%=...%> syntax


This syntax cannot span lines and is used within SQL statements. The following list points out what you can do with this syntax
and what you should pay attention to.
• This syntax can be used to generate any variable value, and also the value of any existing parameter.
• No space char is allowed after <%=.
• Inside this syntax, the <%...%> or </.../> syntax should not be used.
The statement written in the below example is a valid one.

#sql sentence
DROP TABLE temp_<%=__TABLE_NAME__ %>;

The code is used to remove the table defined through an associated component.
For more information about what components are associated with the SQL templates, see What is a Job design? on page 47.
For more information on the <%...%> syntax, see The <%...%> syntax on page 626.
For more information on the </.../> syntax, see the following section.

Note:
Parameters that the SQL templates can access with this syntax are simple. They are often used for connection purpose
and can be easily defined in components, such as TABLE_NAME, DB_VERSION, SCHEMA_TYPE, etc.

The "</.../>" syntax


This syntax cannot span lines. The following list points out what you can do with this syntax and what you should pay
attention to.
• It can be used to generate the value of any existing parameter. The generated value should not be enclosed by
quotation marks.


• No space char is allowed after </ or before />.


• Inside this syntax, the <%...%> or <%=...%> syntax should not be used.
The statement written in the below example is a valid one.

#sql sentence
DROP TABLE temp_</TABLE_NAME/>;

The statement identifies the TABLE_NAME parameter and then removes the corresponding table.
For more information on the <%...%> syntax, see The <%...%> syntax on page 626.
For more information on the <%=...%> syntax, see The <%=...%> syntax on page 627.
The following sections present more specific code used to access more complicated parameters.

Note:
Parameters that the SQL templates can access with this syntax are simple. They are often used for connection purpose
and can be easily defined in components, such as TABLE_NAME, DB_VERSION, SCHEMA_TYPE, etc.

Code to access the component schema elements


Component schema elements are presented on a schema column name list (delimited by a dot "."). These elements are
created and defined in components by users.
The below code composes an example to access some elements included in a component schema. In the following example,
the ELT_METADATA_SHEMA variable name is used to get the component schema.

<%
String query = "select ";
SCHEMA(__ELT_METADATA_SHEMA__);
for (int i=0; i < __ELT_METADATA_SHEMA__.length ; i++) {
query += (__ELT_METADATA_SHEMA__[i].name + ",");
}
query += " from " + __TABLE_NAME__;
%>
<%=query %>;

In this example, and according to what you want to do, the __ELT_METADATA_SHEMA__[i].name code can be replaced
by __ELT_METADATA_SHEMA__[i].dbType, __ELT_METADATA_SHEMA__[i].isKey, __ELT_METADATA_SHEMA__[i].length
or __ELT_METADATA_SHEMA__[i].nullable to access the other fields of the schema column.
The extract statement is SCHEMA(__ELT_METADATA_SHEMA__);. In this statement, ELT_METADATA_SHEMA is the
variable name representing the schema parameter to be extracted. The variable name used in the code is just an example.
You can change it to another variable name to represent the schema parameter you already defined.

Warning:
Make sure that the name you give to the schema parameter does not conflict with any
name of other parameters.

For more information on component schema, see Basic Settings tab on page 74.

Code to access the component matrix properties


The component matrix properties are created and changed by users according to various data transformation purposes.
These properties are defined by tabular parameters, for example, the operation parameters or groupby parameters that
users can define through the tSQLTemplateAggregate component.
To access these tabular parameters, which are by nature more flexible and complicated, two approaches are available:
• The </.../> approach:
</.../> is one of the syntaxes used by the SQL templates. This approach often requires hard coding for every parameter to be
extracted.


For example, a new parameter is created by user and is given the name NEW_PROPERTY. If you want to access it by using
</NEW_PROPERTY/>, the below code is needed.

else if (paramName.equals("NEW_PROPERTY")) {
List<Map<String, String>> newPropertyTableValue = (List<Map<String, String>>)
ElementParameterParser.getObjectValue(node, "__NEW_PROPERTY__");
for (int ii = 0; ii <newPropertyTableValue.size(); ii++) {
Map<String, String> newPropertyMap =newPropertyTableValue.get(ii);
realValue += ...;//append generated codes
......
}
}

• The EXTRACT(__GROUPBY__); approach:


The below code shows the second way to access the tabular parameter (GROUPBY).

<%
String query = "insert into " + __TABLE_NAME__ + "(id, name, date_birth) select
sum(id), name, date_birth from cust_teradata group by";
EXTRACT(__GROUPBY__);
for (int i=0; i < __GROUPBY_LENGTH__ ; i++) {
query += (__GROUPBY_INPUT_COLUMN__[i] + " ");
}
%>
<%=query %>;

When coding the statements, respect the following rules:

• The extract statement must use EXTRACT(__GROUPBY__);. Uppercase must be used and no space character is allowed. This
statement should be used between <% and %>.
• Use __GROUPBY_LENGTH__, in which the parameter name is followed by _LENGTH, to get the number of lines of the
tabular GROUPBY parameters you define in the Groupby area on a Component view. It can be used between <% and %>
or between <%= and %>.
• Use code like __GROUPBY_INPUT_COLUMN__[i] to extract the parameter values. This can be used between <% and
%> or between <%= and %>.
• To access the parameters correctly, do not use the same name prefix for several parameters. For example, in
the component, avoid defining two parameters named PARAMETER_NAME and PARAMETER_NAME_2, as the
same prefix in the names causes erroneous code generation.

Talend Studio shortcuts

Keyboard shortcuts available in Talend Studio


The following keyboard shortcuts are available in Talend Studio.

In the list below, the context in which each shortcut applies is given in parentheses.

• F1: Show context-sensitive help information in the Help tab view. (Global application)
• F2: Show the Component settings view of the selected component. (Global application)
• F4: Show the Run Job view. (Global application)
• F6: Run the current Job, or show the Run Job view if no Job is open. (Global application)
• Ctrl + F2: Show the Module view. (Global application)
• Ctrl + F3: Show the Problems view. (Global application)
• Ctrl + H: Show the Designer view of the current Job. (Global application)
• Ctrl + G: Show the Code view of the current Job. (Global application)
• Ctrl + Shift + F3: Synchronize components javajet code. (Global application)
• Ctrl + Shift + R: Open the Open Resource dialog. (Global application)
• Ctrl + Shift + J: Open the Find a Job dialog. (Global application, on Windows)
• F7: Switch to Debug mode. (From the Run Job view)
• F8: Kill the current Job. (From the Run Job view)
• F5: Refresh the Repository view. (From the Repository view)
• Ctrl + R: Restore the initial Repository view. (From the Repository view)
• F5: Refresh the Modules install status. (From the Modules view)
• F5: Open the New Context Parameter dialog to create a context variable. (From the Components view)
• Ctrl + L: Execute SQL queries. (Talend commands, on Windows)
• Ctrl + Space bar: Access global and user-defined variables; they can be error messages or line numbers, for example, depending on the component selected. (From any component field in Job or Component views)

Drag and drop shortcuts available in Talend Studio


You can drag and drop database connections and tables from the Metadata node of the Repository tree view to the design
workspace.
The following drag and drop shortcuts are available in Talend Studio:

• Drag and drop a connection: Opens the Components dialog to select the component you want to create.
• Drag and drop a table or a connection + Ctrl (on MacOS: + fn): Creates the output component (for example, tMysqlOutput). This shortcut is not available in all wizards.
• Drag and drop a table or a connection + Ctrl + Shift (on MacOS: + Ctrl): Creates the input component (for example, tMysqlInput). This shortcut is not available in all wizards.
